CN107274449B - Space positioning system and method for object by optical photo - Google Patents
- Publication number
- CN107274449B CN107274449B CN201710364382.XA CN201710364382A CN107274449B CN 107274449 B CN107274449 B CN 107274449B CN 201710364382 A CN201710364382 A CN 201710364382A CN 107274449 B CN107274449 B CN 107274449B
- Authority
- CN
- China
- Prior art keywords
- axis
- cameras
- plane
- camera
- coordinates
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Active
Classifications
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/60—Analysis of geometric attributes
- G06T7/70—Determining position or orientation of objects or cameras
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/10—Image acquisition modality
- G06T2207/10016—Video; Image sequence
Landscapes
- Engineering & Computer Science (AREA)
- Physics & Mathematics (AREA)
- Computer Vision & Pattern Recognition (AREA)
- General Physics & Mathematics (AREA)
- Theoretical Computer Science (AREA)
- Geometry (AREA)
- Length Measuring Devices By Optical Means (AREA)
- Image Processing (AREA)
Abstract
The invention discloses a method for spatially positioning an object using optical photographs. The method comprises: establishing a spatial rectangular coordinate system; arranging cameras on the x-axis, y-axis and z-axis planes; determining the pixels corresponding to the object's mass point from changes in gray value; selecting, for each of the three cameras, the vertical plane in the spatial rectangular coordinate system that contains the mass point; converting the coordinates of the mass point from the spatial rectangular coordinate system to a plane coordinate system via the lines in which the three vertical planes pairwise intersect; obtaining the x, y and z coordinates of the mass point; and thereby spatially positioning the object to obtain its actual position. The invention uses MATLAB image-processing functions to convert each photograph into a matrix and output the gray value of every pixel. When the color of the object differs clearly from the scene, the pixels belonging to the object are easily identified from the change in gray value, giving the object's position in the photograph and hence its actual position in space.
Description
[ technical field ]
The invention belongs to the technical field of optical positioning, and particularly relates to a system and a method for positioning an object in space by an optical photo.
[ background of the invention ]
Most current monitoring systems use two-dimensional plane monitoring, with the video signal from each camera displayed independently on a monitor. This approach has many disadvantages: it is unintuitive and lacks integration and interactivity. In places such as airports, production bases and other large facilities, different locations within a building are often very similar; when the backgrounds of two monitored places look alike, the location being viewed is difficult to identify immediately. Because each feed is displayed independently, a two-dimensional monitoring system lacks interactivity: video can only be output directly to a monitor, there is no editing function and no room for functional expansion, and the sheer number of imaging devices prevents rapid positioning. These problems indicate that monitoring systems should develop toward intelligence, strong interactivity, three-dimensionality and functional expandability.
Therefore, the video monitoring currently used for real-time surveillance suffers from unclear pictures and cannot accurately recover the precise position of the monitored object from the picture, which hinders timely handling by the user.
[ summary of the invention ]
The technical problem addressed by the present invention is to provide a system and a method for spatially positioning an object using optical photographs, which can directly calculate the specific position of the object and help the user handle it in time.
The invention adopts the following technical scheme:
a space positioning method of an object by an optical photo comprises the steps of establishing a space rectangular coordinate system, arranging cameras on an x-axis plane, a y-axis plane and a z-axis plane respectively, determining mass points of pixels corresponding to the object according to gray value changes of the object, selecting vertical planes of the three cameras corresponding to the mass points in the photo in the space rectangular coordinate system respectively, converting coordinates of the mass points from the space rectangular coordinate system to a plane coordinate system through lines intersected by every two of the three vertical planes, obtaining coordinates of the mass points in the x-axis, the y-axis and the z-axis, and carrying out space positioning on the object to obtain an actual position.
Further, the spatial rectangular coordinate system uses B as origin, BC as x-axis, BA as y-axis and B′B as z-axis, and a monitoring space ABCD-A′B′C′D′ of cuboid shape is established. Two cameras determine the two vertical lines MQ and M′Q′ corresponding to the horizontal coordinate of the mass point in their photographs; these two vertical lines correspond to the vertical planes PQMN and P′Q′M′N′ in space, and the coordinates of point S are obtained from the intersection line SS′ of the two vertical planes.
Further, the coordinates of the S point are specifically:
wherein θ is the included angle between the mass point's vertical plane and the vertical plane through the center of the camera's field of view, a is the side length of BA, and b is the side length of BC.
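The equation image for the S-point coordinates did not survive extraction. Under the geometry given in the detailed description (each optical axis centred on a face of the cuboid and passing through the midpoint M of BC or M′ of AB), a consistent closed form, writing θ1 and θ2 for the angles measured by cameras P1 and P2 (our notation, not the original equation), would be:

```latex
x = \frac{\tfrac{b}{2} + \tfrac{a}{2}\tan\theta_1 - b\,\tan\theta_1\tan\theta_2}{1 - \tan\theta_1\tan\theta_2},
\qquad
y = \frac{\tfrac{a}{2} + \tfrac{b}{2}\tan\theta_2 - a\,\tan\theta_1\tan\theta_2}{1 - \tan\theta_1\tan\theta_2}
```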
Further, with the photograph placed in the transverse field of view of each camera, the included angle θ between the mass point's vertical plane and the vertical plane through the center of the field of view is obtained as:
wherein m is the distance from the position of the mass point in the photograph to its left boundary, n is the width of the photograph, and φ is the apex angle of the camera's transverse field of view.
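The formula image is likewise missing here. From the equivalent-triangle construction of FIG. 5 (an isosceles triangle of apex angle φ whose base is the photo of width n), the relation is presumably:

```latex
\tan\theta = \frac{2m - n}{n}\,\tan\frac{\varphi}{2}
```

since the offset of the mass point from the photo centre is m − n/2 and the half-width n/2 subtends the half-angle φ/2.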
Further, the mass point lies on the intersection line SS′, which is perpendicular to the horizontal plane.
Furthermore, the mass point is the intersection of the camera's optical axis with the monitoring space, and the optical axis is perpendicular to a boundary surface of the monitoring space.
A system for the above spatial positioning method comprises a plurality of monitoring cameras and a computing unit that uses MATLAB image processing. The monitoring cameras are arranged around the monitoring space and connected to the computing unit, which obtains the spatial position of the object by function calculation from the object's position in each camera's monitoring image.
Further, there are at least three monitoring cameras.
Compared with the prior art, the invention has at least the following beneficial effects:
the invention relates to a method for positioning an object space by an optical photo, which comprises the steps of establishing a rectangular spatial coordinate system, arranging cameras on an x-axis plane, a y-axis plane and a z-axis plane respectively, processing the photo into a matrix by adopting an MATLAB image processing function, outputting a gray value of each pixel in the photo, and easily determining the pixels corresponding to the pixels by the change of the gray value when the color of the object is obviously different from the scene, so that the position of the image of the object in the photo is obtained, and the actual position of the object can be obtained.
Further, the x and y coordinates of the object point can be calculated from the x and y coordinates of another point (point S) lying on the same vertical line as the object point, and the spatial coordinates of point S are easily calculated from the intersection line of two vertical planes.
Further, to see how a vertical line in the picture corresponds to a vertical plane in space, the simplest approach is to place the picture directly into space, positioned so that it exactly fills the camera's field of view. The angle θ1 in FIG. 5 is then equivalent to the angle between the vertical plane containing the object and the vertical plane through the center of the camera's field of view, shown in FIG. 4; in this way an angle that cannot be read directly from the camera can be calculated easily.
Further, the orientation of each camera is fixed in space. To simplify calculation, the invention makes the camera's optical axis perpendicular to, and centered on, one face of the cuboid monitoring space and keeps the photograph horizontal, so that the vertical plane through the center of the camera's field of view is perpendicular to one face of the monitoring space.
The invention also discloses a system for spatially positioning an object using optical photographs, comprising monitoring cameras and a computing unit that uses MATLAB image processing. The computing unit obtains the spatial position of the object by function calculation from the object's position in each camera's monitoring image. Each camera yields, in real time, a spatial relation between the object and that camera; these relations are interrelated rather than independent, and at least two of them must be combined to obtain the object's spatial position accurately. A computing unit capable of extracting information from the pictures is therefore required. MATLAB is well suited to processing image information: the object's position in a picture can be judged from changes in gray level, and, by combining the information from several pictures, the object's spatial coordinates can be calculated through the functional relation between that information and spatial position.
The technical solution of the present invention is further described in detail by the accompanying drawings and embodiments.
[ description of the drawings ]
FIG. 1 is a schematic view of the present invention in a plane perpendicular to the lens;
FIG. 2 is a three-dimensional schematic of a monitoring model of the present invention;
FIG. 3 is a schematic plan view of a monitoring model of the present invention;
FIG. 4 is a view of a camera head of the present invention;
fig. 5 is an equivalent view of the field of view of the camera of the present invention.
[ detailed description ]
The invention provides a system for spatially positioning an object using optical photographs. The monitoring part uses ordinary cameras installed at specific positions around the monitored space. The computing part uses MATLAB image-processing functions; the monitoring cameras are arranged around the monitoring space and connected to the computing unit, and the spatial position of the object is obtained by function calculation from the object's position in the monitoring images. Plane image information is thus converted into spatial position coordinates for output, making the monitoring result more intuitive.
The method comprises the following specific steps:
1. determining the position of a point
Treating the object as a particle in three-dimensional space, three spatial coordinates must be obtained to determine its location. Using a Cartesian coordinate system, let the object's spatial position be (x, y, z); there are then three unknowns, which require at least three mutually independent equations, and hence at least three cameras, to obtain the object's spatial position accurately.
2. Imaging features of camera
Referring to fig. 1, the camera images by the principle of the convex lens. Convex-lens imaging has the property that a ray passing through the optical center of the lens does not change direction and travels in a straight line to the image plane. From this property the following can be concluded:
a mass point in space forms its image at a single point, and that point is uniquely determined;
the image of a plane in space that passes through the optical center is a straight line.
3. Determining the position of a particle by imaging
Because coordinate equations written for each camera independently are too complex and error-prone, solving directly for the three coordinates is difficult. The method therefore uses the second imaging property above to turn the spatial problem into a plane problem, as follows:
referring to fig. 2, a photograph is an enlarged photograph imaged by a convex lens, an object corresponding to a vertical line on the photograph, that is, a vertical line of a phase plane, is a vertical plane, a black point of a monitored space (a cuboid ABCD-a ' B ' C ' D ') perpendicular to an optical axis of a camera P1 is a mass point, B is an origin, BC is an x-axis, BA is a y-axis, and B ' B is a z-axis, and a rectangular spatial coordinate system is established, and is uniformly used in a subsequent calculation process.
A picture taken by the camera at another position P2 is treated the same way: the horizontal coordinate of the mass point in the picture corresponds to a vertical line, which corresponds to the vertical plane PQMN in space, while the mass point's position (x, y, z) lies on the vertical plane P′Q′M′N′.
Plane PQMN and plane P′Q′M′N′ intersect in the line SS′; the mass point lies on this line, and SS′ is perpendicular to the horizontal plane. The x and y coordinates of point S therefore coincide with those of the mass point.
The solving process of the S point coordinate is as follows:
referring to fig. 3, in the plane ABCD, the cameras are at points P1 and P2. Given an AB side length of a, a BC side length of b, M and M' are at the midpoint of BC and AB, respectively.
Let the coordinates of point S be (x, y); the geometry yields the following equations:
the two formulas (1) and (2) can be arranged:
referring to FIG. 4, a transverse view of the camera is taken, where the triangle is an isosceles triangle and θ is shown in the figure1It is the included angle that needs to be found,the top angle of the lateral view of the camera.
The triangle shown in FIG. 5 is obtained by replacing the objects in the field of view with the photograph, in which the mass point lies at distance m from the left boundary and the photograph has width n. Here n and φ are known constants obtainable from the size of the photograph.
The following relationship is derived from the geometrical relationship:
the same can be obtained
where m′ is the distance of the mass point from the left boundary in the picture taken by camera P2.
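The images of (5) and (6) are also missing; from the FIG. 5 construction they are presumably:

```latex
\tan\theta_1 = \frac{2m - n}{n}\,\tan\frac{\varphi}{2} \;\;(5)
\qquad
\tan\theta_2 = \frac{2m' - n}{n}\,\tan\frac{\varphi}{2} \;\;(6)
```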
Substituting expressions (5) and (6) into expressions (3) and (4) gives the values of x and y.
If a camera P3 is added to the ABCD plane, the above process is repeated using the relationship between P2 and P3 to obtain the z-axis coordinate: installing cameras on the ABB′A′ face and the ABCD face yields the relation between x and z, and the expression for z is:
where c is the length of AA ', and n, m and m' are data of another set of photographs, different from the values at the time of calculating the x, y coordinates
When the color of the object differs clearly from that of the scene, the pixels corresponding to the object are easily determined from the change in gray value. This gives the object's position in each picture, i.e. the values of m and m′ in formulas (5) and (6); substituting these into (7) and (8), and substituting the m and m′ measured by the other set of cameras into (9), yields the actual position of the object.
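The x-y step above can be sketched numerically. This is an illustrative Python translation of the reconstructed formulas (the patent's computing unit uses MATLAB; the function names, the angle reconstruction, and the assumed camera placement are ours, not the original implementation):

```python
import math

def bearing_tangent(m, n, phi):
    """tan(theta): bearing of the object's vertical plane relative to the
    camera's optical-axis plane, from the object's pixel offset.
    m: distance from the photo's left border, n: photo width,
    phi: apex angle of the camera's transverse field of view (radians)."""
    return (2.0 * m - n) / n * math.tan(phi / 2.0)

def locate_xy(m1, m2, n, phi, a, b):
    """Intersect the vertical planes seen by cameras P1 and P2.
    Assumed geometry (reconstructed from the description): P1 is centred
    on the face y = a with its axis through M = (b/2, 0); P2 is centred
    on the face x = b with its axis through M' = (0, a/2)."""
    t1 = bearing_tangent(m1, n, phi)
    t2 = bearing_tangent(m2, n, phi)
    denom = 1.0 - t1 * t2
    x = (b / 2.0 + (a / 2.0) * t1 - b * t1 * t2) / denom
    y = (a / 2.0 + (b / 2.0) * t2 - a * t1 * t2) / denom
    return x, y
```

For example, with a monitored floor of a = 4 by b = 6 units, a 1000-pixel-wide photo and a 90-degree transverse field of view, the two pixel positions of the object in the P1 and P2 photos suffice to recover its (x, y) floor coordinates.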
Claims (4)
1. A method for spatially positioning an object using optical photographs, characterized by: establishing a spatial rectangular coordinate system with B as origin, BC as x-axis, BA as y-axis and B′B as z-axis; establishing a monitoring space ABCD-A′B′C′D′ of cuboid shape; arranging cameras on the x-axis, y-axis and z-axis planes; determining, by two cameras, the two vertical lines MQ and M′Q′ corresponding to the horizontal coordinates of the mass point in the photographs, the two vertical lines corresponding respectively to the vertical planes PQMN and P′Q′M′N′ in space; obtaining the coordinates of point S from the intersection line SS′ of the two vertical planes; selecting, for each of the three cameras, the vertical plane in the spatial rectangular coordinate system corresponding to the mass point in its photograph; converting the coordinates of the mass point from the spatial rectangular coordinate system to a plane coordinate system via the lines in which the three vertical planes pairwise intersect; obtaining the coordinates of the mass point on the x, y and z axes; and spatially positioning the object to obtain its actual position;
the mass point lies on the intersection line SS′, which is perpendicular to the horizontal plane; the mass point is the intersection of the camera's optical axis with the monitoring space, the optical axis being perpendicular to a boundary surface of the monitoring space;
in the projection plane ABCD, the cameras are points P1 and P2, the cameras are points P1, the M points P2, and the M 'are coincident, the side length of AB is a, the side length of BC is b, and the M and M' are respectively at the midpoint between BC and AB, and the coordinates of the point S are specifically:
wherein a is the side length of BA, and b is the side length of BC;
the relation between the camera P2 and the camera P3 is utilized, the process is repeated to obtain the coordinate of the z axis, the cameras are arranged on the ABB 'A' surface and the ABCD surface by utilizing the relation between x and z to obtain the relation between x and z, and the expression of z is as follows:
where c is the length of AA′; n, m and m′ are data from another set of photographs, different from the values used when calculating the x and y coordinates; m is the distance of the mass point's position in the photograph from the photograph's left boundary; n is the width of the photograph; m′ is the distance of the mass point from the left boundary in the photograph taken by camera P3; and φ is the apex angle of the camera's transverse field of view.
2. The method of claim 1, wherein the photograph is placed in the transverse field of view of each camera, and the included angle θ between the mass point's vertical plane and the vertical plane through the center of the field of view is:
3. A system for spatially localizing an object using an optical photograph as defined in claim 1 or 2, comprising a plurality of monitoring cameras and a computing unit using MATLAB image processing, wherein the monitoring cameras are respectively disposed on the monitored space and connected to the computing unit, and the computing unit obtains the spatial position of the object by function calculation according to the position of the object in the monitored image of each of the monitoring cameras.
4. A photo-optical object spatial location system as claimed in claim 3 wherein said surveillance cameras include at least three.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201710364382.XA CN107274449B (en) | 2017-05-22 | 2017-05-22 | Space positioning system and method for object by optical photo |
Publications (2)
Publication Number | Publication Date |
---|---|
CN107274449A CN107274449A (en) | 2017-10-20 |
CN107274449B true CN107274449B (en) | 2020-11-13 |
Family
ID=60065238
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201710364382.XA Active CN107274449B (en) | 2017-05-22 | 2017-05-22 | Space positioning system and method for object by optical photo |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN107274449B (en) |
Families Citing this family (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN109949625B * | 2019-04-09 | 2021-10-15 | Yiwu Wanbo Creative Design Co., Ltd. | Training device capable of comparing differences between Chinese and Western aerobics |
CN110389352A (en) * | 2019-08-16 | 2019-10-29 | 国网内蒙古东部电力有限公司电力科学研究院 | Optical 3-dimensional motion capture method and system |
Citations (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN101777182A (en) * | 2010-01-28 | 2010-07-14 | 南京航空航天大学 | Video positioning method of coordinate cycling approximation type orthogonal camera system and system thereof |
CN105631859A (en) * | 2015-12-21 | 2016-06-01 | 中国兵器工业计算机应用技术研究所 | Three-degree of freedom bionic stereo vision system |
CN105979211A (en) * | 2016-06-07 | 2016-09-28 | 中国地质大学(武汉) | 3D coverage rate calculation method suitable for multi-view-point video monitoring system |
CN205718941U (en) * | 2016-05-05 | 2016-11-23 | 陕西科技大学 | A kind of optical ranging system |
Family Cites Families (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US8537219B2 (en) * | 2009-03-19 | 2013-09-17 | International Business Machines Corporation | Identifying spatial locations of events within video image data |
2017
- 2017-05-22 CN CN201710364382.XA patent/CN107274449B/en active Active
Non-Patent Citations (3)
Title |
---|
Calibration of a Microlens Array for a Plenoptic Camera; Chelsea M. Thomason et al.; American Institute of Aeronautics and Astronautics; 2014-12-31; pp. 1-18 * |
Indexing Method for Three-Dimensional Position Estimation; Iris Fermin et al.; ResearchGate; 2012-09-18; pp. 1957-1604 * |
Multi-camera non-rigid object detection and spatial positioning system; Xie Song; China Master's Theses Full-text Database, Information Science and Technology; 2016-03-15; vol. 2016, no. 03; chapters 1-6 * |
Also Published As
Publication number | Publication date |
---|---|
CN107274449A (en) | 2017-10-20 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN111473739B (en) | Video monitoring-based surrounding rock deformation real-time monitoring method for tunnel collapse area | |
CN111062873B (en) | Parallax image splicing and visualization method based on multiple pairs of binocular cameras | |
US8848035B2 (en) | Device for generating three dimensional surface models of moving objects | |
US11816829B1 (en) | Collaborative disparity decomposition | |
CN110677599B (en) | System and method for reconstructing 360-degree panoramic video image | |
CN111028155B (en) | Parallax image splicing method based on multiple pairs of binocular cameras | |
CN110300292B (en) | Projection distortion correction method, device, system and storage medium | |
Zhang et al. | A robust and rapid camera calibration method by one captured image | |
CN103226838A (en) | Real-time spatial positioning method for mobile monitoring target in geographical scene | |
CN112686877B (en) | Binocular camera-based three-dimensional house damage model construction and measurement method and system | |
CN111192321B (en) | Target three-dimensional positioning method and device | |
CN110827392B (en) | Monocular image three-dimensional reconstruction method, system and device | |
CN104318604A (en) | 3D image stitching method and apparatus | |
CN103852060A (en) | Visible light image distance measuring method based on monocular vision | |
CN105739106A (en) | Somatosensory multi-view point large-size light field real three-dimensional display device and method | |
CN107274449B (en) | Space positioning system and method for object by optical photo | |
Jiang et al. | An accurate and flexible technique for camera calibration | |
CN110807413B (en) | Target display method and related device | |
CN116152471A (en) | Factory safety production supervision method and system based on video stream and electronic equipment | |
JP2006215939A (en) | Free viewpoint image composition method and device | |
CN115880643A (en) | Social distance monitoring method and device based on target detection algorithm | |
CN109272445A (en) | Panoramic video joining method based on Sphere Measurement Model | |
CN108592789A (en) | A kind of steel construction factory pre-assembly method based on BIM and machine vision technique | |
CN114494427A (en) | Method, system and terminal for detecting illegal behavior of person standing under suspension arm | |
Shivaram et al. | A new technique for finding the optical center of cameras |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||