CN112465947A - Virtual space establishing method and system for image - Google Patents
- Publication number
- CN112465947A (application CN202011294053.0A)
- Authority
- CN
- China
- Prior art keywords
- image
- parallel
- line
- lens
- lines
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Granted
Classifications
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T17/00—Three dimensional [3D] modelling, e.g. data description of 3D objects
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V10/00—Arrangements for image or video recognition or understanding
- G06V10/40—Extraction of image or video features
- G06V10/46—Descriptors for shape, contour or point-related descriptors, e.g. scale invariant feature transform [SIFT] or bags of words [BoW]; Salient regional features
- G06V10/462—Salient features, e.g. scale invariant feature transforms [SIFT]
- G06V10/56—Extraction of image or video features relating to colour
Abstract
The invention relates to a method and a system for establishing a virtual space from an image. The lens axis of a shooting device is held parallel to a marker straight line at a known perpendicular distance in the real scene, and the real scene is photographed together with the parallel marker line. The parallel marker line is either an actual parallel line in the photographed scene or a laser parallel marker line projected into the scene by a laser head attached to the shooting device; alternatively, the same real scene is photographed through main and auxiliary lenses, and the inclined axis of the auxiliary lens, as it appears in the image taken by the main lens, serves as the parallel marker line. Taking as reference the actual perpendicular distance represented, at any depth in the image, by the parallel marker line and the image center line on which the lens axis lies, scenery that appears larger when near and smaller when far in the planar image is established in a three-dimensional virtual space by real-scene recovery. The method has the advantages that shooting is easy and the photographed image can be established in a virtual space.
Description
Technical Field
The present invention relates to an image space establishing method, and more particularly, to a virtual space establishing method and system for an image.
Background
The popularization of smart phones and other modern image-capturing devices has made taking images extremely easy, yet the development and use of image resources lag far behind image acquisition itself. For example, if the subject of an image could be reconstructed in a virtual space, ordinary, everyday images would acquire enormous application value.
Disclosure of Invention
The present invention is directed to overcoming the above-mentioned drawbacks of the prior art by providing a method for establishing a virtual space of an image.
To achieve the above object, in the virtual space establishing method of the invention the lens axis of the shooting device is held parallel to a marker straight line at a known perpendicular distance in the real scene, and the real scene and the parallel marker line are photographed together. The parallel marker line is either an actual parallel line in the photographed scene or a laser parallel marker line projected into the scene by a laser head attached to the shooting device. Alternatively, the shooting device is provided with a main lens and an auxiliary lens whose axes are parallel and separated by a known spacing; the two lenses photograph the real scene simultaneously, the center line (or a center-line reference object) on which the lens axis lies is identified and marked in the auxiliary image taken by the auxiliary lens, the inclined center line on which the auxiliary lens axis lies is then marked in the main image taken by the main lens according to that identification, and this inclined center line in the main image serves as the parallel marker line in the photographed scene. To identify and mark the center line, the pixel parameters along the center-line path in the image may be recorded, or at least two distinctive marker points on that path may be identified; the same two points are then found in the main image, and the straight line connecting them is the inclined center line on which the auxiliary lens axis lies, used as the parallel marker line in the photographed scene.
The center-line reference object comprises a distinctive marker point through which the center line passes in the image, together with a reference-object contour line parallel to the center line on a reference object outside it. A straight line drawn in the main image through the marker point and parallel to the contour line is the inclined center line on which the auxiliary lens axis lies, used as the parallel marker line in the photographed scene; alternatively, the transverse distance from each distinctive marker point to the center line is measured and recorded in the auxiliary image, and the inclined marker line is reconstructed in the main image from the ratio of each pair of point-to-center-line distances. The perpendicular distance is the distance between the two parallel lines when both stand perpendicular to the virtual ground plane of the photographed scene, each lying in a vertical plane perpendicular to that ground.
Taking as reference the actual perpendicular distance represented, at any depth in the image, by the parallel marker line and the image center line on which the lens axis lies, scenery that appears larger when near and smaller when far in the planar image is established in a three-dimensional virtual space by real-scene recovery. That is, the spacing between the center line and the inclined parallel marker line in the photographed image, which shrinks with distance, is recovered to the parallel, known, fixed spacing of the real scene; this serves as the basic data with which real-scene recovery is performed on all scenery in the whole image, establishing a real-scene-recovered three-dimensional image. The shooting device and the attached laser head are each equipped with a gyroscope, theodolite, compass, level and range finder for calibrating direction and level. In establishing the near-large, far-small scenery of the planar image in a three-dimensional virtual space by real-scene recovery, the reduction ratio is the same for all scenery at the same distance: the length, width and height of scenery at a given distance in the image, and any other dimension perpendicular to the lens axis, are reduced by the same ratio, and during recovery they are enlarged by the same ratio. The method has the advantages that shooting is easy and the photographed image can be established in a virtual space.
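The per-depth reference described above can be sketched in a few lines of Python. This is an illustrative sketch, not code from the patent: at any image row, the pixel gap between the center line and the inclined parallel marker line represents the same known real spacing, which fixes the local scale. All names and arguments are hypothetical.

```python
def scale_at(centerline_x: float, marker_x: float, real_spacing_m: float) -> float:
    """Metres-per-pixel at one image row.

    centerline_x: x-coordinate where the image center line (lens axis)
        crosses this row.
    marker_x: x-coordinate where the inclined parallel marker line
        crosses the same row.
    Their pixel gap always represents the same known real spacing,
    so it fixes the local scale at this depth.
    """
    gap_px = abs(marker_x - centerline_x)
    if gap_px == 0:
        raise ValueError("marker line and center line coincide at this row")
    return real_spacing_m / gap_px


def restore_length(measured_px: float, centerline_x: float,
                   marker_x: float, real_spacing_m: float) -> float:
    """Convert a pixel measurement at this row to real-world metres."""
    return measured_px * scale_at(centerline_x, marker_x, real_spacing_m)
```

Because the ratio is the same for every dimension perpendicular to the lens axis at a given depth, the same scale factor restores length, width and height alike.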
As an optimization, the lens of the shooting device is positioned between two parallel marker lines of known spacing, with the lens axis parallel to both, and an image is taken in which the two marker lines extend longitudinally in parallel from the near end to the far end. Alternatively, two left and right parallel lenses of the shooting device photograph the left, right and middle images of the same real scene, each individually and both jointly; the center line (or center-line reference object) on which each lens axis lies is identified and marked in the left and right images, the inclined center lines on which the left and right lens axes lie are marked in the jointly photographed middle image according to those identifications, and these two inclined center lines in the middle image serve as the two parallel marker lines in the photographed scene.
Based on the known spacing represented by the two parallel marker lines at any depth in the image, the near-large, far-small scenery of the planar image is established in a three-dimensional virtual space by recovery. The two parallel marker lines may be parallel solid lines in the real scene, or parallel laser lines cast onto the scene by the laser heads of the shooting device. In establishing the scenery in a three-dimensional virtual space by real-scene recovery, the reduction ratio is the same for all scenery at the same distance: the length, width and height of scenery at a given distance in the image, and any other dimension perpendicular to the lens axis, are reduced by the same ratio, and during recovery they are enlarged by the same ratio. The method has the advantages that shooting is easy and the photographed image can be established in a virtual space.
As an optimization, the restoration of longitudinal length proceeds as follows: the spacing of the near-end transverse line between the two parallel marker lines in the image is known, and the angle between one end of that near-end transverse line and the midpoint of the far-end transverse line is the same in the image as in the real scene; on the basis of measurement, the actual longitudinal depth from the near end to the far end is obtained by geometric calculation, and scenery at that depth in the image is established in the virtual space by restoration. In the geometric calculation, the short side of a right triangle is half of the known near-end transverse spacing of the two marker lines, the angle between the near-end transverse line and the line joining it to the midpoint of the far-end transverse line equals the angle measured in the image, and the long side of the right triangle is calculated with a trigonometric function.
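The right-triangle step above can be written out directly. A minimal sketch, assuming the angle is measured at the near-end corner between the transverse line and the diagonal to the far-end midpoint, so that tan(angle) = depth / (spacing / 2); the function name is illustrative.

```python
import math


def longitudinal_depth(real_spacing_m: float, angle_deg: float) -> float:
    """Depth from the near-end transverse line to the far-end transverse line.

    The right triangle's short side is half the known spacing between the
    two parallel marker lines; the given angle lies between the near-end
    transverse line and the diagonal to the far-end midpoint, so
    depth = (spacing / 2) * tan(angle).
    """
    return (real_spacing_m / 2.0) * math.tan(math.radians(angle_deg))
```

For example, with a 2 m marker-line spacing and a 45° measured angle, the recovered longitudinal depth is 1 m.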
The recovery of transverse length in real-scene recovery takes the ratio of the marker-line spacing in the image on each recovered longitudinal section to the near-end marker-line spacing, and inversely magnifies in the virtual space the transverse length of all scenery on that section. The recovery of vertical height likewise takes the ratio of the marker-line spacing on each recovered section to the near-end spacing and inversely magnifies in the virtual space the vertical height of all scenery on that section. The recovery of size uses the same ratio to inversely magnify and correct, in the virtual space, the three-dimensional dimensions of all scenery on each recovered longitudinal section.
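The inverse magnification on one longitudinal section can be sketched as follows. This is an illustrative rendering of the ratio rule, with hypothetical names: the shrink factor is the local marker-line spacing over the near-end spacing, and dividing by it (then applying the near-end pixel scale) restores real width and height.

```python
def restore_dimensions(width_px: float, height_px: float,
                       spacing_px: float, near_spacing_px: float,
                       real_spacing_m: float) -> tuple[float, float]:
    """Inversely magnify an object's image dimensions on one depth section.

    spacing_px: marker-line spacing in pixels on this longitudinal section.
    near_spacing_px: marker-line spacing in pixels at the near end.
    Both spacings represent real_spacing_m, so their ratio is the
    perspective shrink factor shared by all scenery on the section.
    """
    shrink = spacing_px / near_spacing_px        # < 1 for distant sections
    m_per_px_near = real_spacing_m / near_spacing_px
    real_width = width_px / shrink * m_per_px_near
    real_height = height_px / shrink * m_per_px_near
    return real_width, real_height
```

Because every dimension perpendicular to the lens axis on the same section shares the factor, the same division recovers length, width and height together.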
As an optimization, each picture has a center line that coincides with the center line of the camera and of the photographed road surface; each picture has a center point, the picture core, which also lies in the plane formed by the center line as base and the height data; when the camera lens axis is parallel to the ground, vertical line A and vertical line B are theoretically of equal height; and the far ends of the four sides (top, bottom, left and right) incline toward the picture center at angle R, which is constant within the same static image.
The recovery first determines relative virtual ground-plane data suited to the image's perspective, then follows the conversion between the actual size and the picture size of a segment on the center line and the near-end line, the near-large, far-small behaviour of equally spaced dimensions, and the extension of the inclination angle R into the distance. The plane is then raised as a whole, in the manner of rising water, or lowered as a descending horizontal plane; pixels at the same horizontal height are equal in height, distance and area, and three-dimensional data for every pixel are obtained.
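One standard way to realize a "relative virtual ground plane" on which same-height pixel rows share depth is flat-ground back-projection under a pinhole camera model. This is a sketch under that assumption (a level lens axis, as the text requires); the parameter names are hypothetical, not from the patent.

```python
def pixel_to_ground(u: float, v: float, horizon_v: float,
                    cam_height_m: float, focal_px: float,
                    cu: float) -> tuple[float, float]:
    """Back-project a ground-plane pixel to lateral/longitudinal (x, z).

    horizon_v: image row of the horizon (where the ground plane vanishes).
    cam_height_m: lens height above the ground.
    focal_px: focal length in pixels; cu: x of the image center line.
    Rows below the horizon map to depth z = f * h / (v - horizon_v),
    so every pixel on the same row gets the same depth, matching the
    equal-height, equal-distance rule in the text.
    """
    dv = v - horizon_v
    if dv <= 0:
        raise ValueError("pixel is on or above the horizon")
    z = focal_px * cam_height_m / dv            # longitudinal depth
    x = (u - cu) * z / focal_px                 # lateral offset
    return x, z
```

Lifting or lowering the whole plane then amounts to shifting `cam_height_m`, leaving the per-row equalities intact.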
We generally assume a computer recognizes only 0 and 1 and has no notion of space or plane; that is, no spatial element is added to its logical architecture. With our eyes we observe things near and far, remembering both an object's colour and its spatial position, and we naturally divide what is before us into independent objects by spatial position, objects that follow rules such as far-small, near-large and far-high, near-low.
The human eye is the first observer: light is projected onto the retina and then assembled and organized by the brain, and nearby things are correctly perceived as horizontal, flat or vertical. An image recorded by a camera after such observation is viewed by the eye from a second-person standpoint, and space is deformed in that image. The lens and the eye see the same thing, but in interpretation the eye is limited by its own cognition, while the computer can read only 0 and 1.
Once the image is recorded as a deformed picture, how can its data be restored and the objects in the space be separated one by one? This requires studying the invariant elements of space in the image and their conversion rules.
Taking as an example a picture of a straight standard road, taken standing on the road's center line with the camera perpendicular to the road, research shows the following: 1. Each picture has a center line, and the picture's center line coincides with the center line of the camera and of the actual road surface. 2. Each picture has a center point, called the picture core, as shown in fig. 1, which also lies in the plane formed by the center line as base and the height data; when the camera lens axis is parallel to the ground, vertical line A and vertical line B are theoretically of equal height. 3. The far ends of the four sides (top, bottom, left and right) incline toward the picture center at angle R, which is constant within the same static image. 4. Equally spaced lines in the image lie on the visual oblique lines on either side of the image center line, and the inclination angle R of those oblique lines is constant within the same static image. 5. The angle B formed at any point off the center line, between a near-end segment perpendicular to the center line taken as the base of a triangle and the far end of the center line, equals the actually measured angle; an angle taken from the far end in turn equals angle B after adding the inclination angle R.
Once the angle is obtained, the width of the standard road surface is determined, and the distance between the far-end line and the near-end line can be calculated with a trigonometric function. These data form a virtual data space; the measured angle and the calculated distance merely correct its errors. With the distance and position information of two places determined, the finished three-dimensional space data can be embedded in the picture, and the data of any other point are calculated according to the reduction and transformation rules of the image-space nested data model.
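The far-line/near-line distance calculation can be sketched as a direct trigonometric step. A minimal sketch under the same assumption as the text's right triangle (angle measured at a near-end road edge toward a transverse line's midpoint); the names are illustrative.

```python
import math


def line_depth(road_width_m: float, corner_angle_deg: float) -> float:
    """Depth of a transverse line from the near-end line, from the angle
    measured at a near-end road edge toward the line's midpoint:
    depth = (width / 2) * tan(angle)."""
    return (road_width_m / 2.0) * math.tan(math.radians(corner_angle_deg))


def line_separation(road_width_m: float, angle_near_deg: float,
                    angle_far_deg: float) -> float:
    """Distance between two transverse lines: the difference of their depths,
    fixing the distance and position information of the two places."""
    return line_depth(road_width_m, angle_far_deg) - line_depth(road_width_m, angle_near_deg)
```

With a 2 m road width, a line at 45° lies 1 m beyond the near-end line; the separation between that line and the near-end line itself (0°) is therefore 1 m.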
The above is the case of a standard road surface. For an irregular road surface, two specific attachments on the camera equipment are used at shooting time: two laser heads arranged at equal distances on either side of the lens center, emitting two laser lines parallel to the center line. The center line of the two lines scanned onto the ground coincides with the center line in the image data, and they can also be used to measure the visual oblique angle R. Because the spacing of the two laser lines is invariant, the image is in effect furnished with a suitable ruler, which adapts the measurement to most conditions and corrects errors in a moving image at any time.
There is also a dual-lens space-building method analogous to human eyes, in which the two lens analyses can work independently or correct each other. After the photographs from the two lenses are restored and overlapped, a three-dimensional space is constructed over the spacing between the two center lines, and the center line of that spacing is the center line of the synthesized picture. Once the model is built, each pixel point receives what amounts to double data positioning, which can also be characterized as triangulation. The same work can be accomplished by moving a single lens through a known distance. The method positions accurately, corrects errors at any time, can reach centimeter-level accuracy or better, and measures the data directly rather than indirectly.
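The double positioning of each pixel point is the classic two-view triangulation. As a sketch (the standard stereo formula, not code from the patent): for two parallel lenses separated by a known baseline, a point matched in both images at a horizontal pixel offset (disparity) lies at depth z = f·B/d.

```python
def stereo_depth(focal_px: float, baseline_m: float, disparity_px: float) -> float:
    """Depth of a point seen by two parallel lenses.

    focal_px: focal length in pixels (assumed equal for both lenses).
    baseline_m: known spacing between the two lens axes.
    disparity_px: difference in the point's horizontal image position
        between the two photographs.  Depth z = f * B / d.
    """
    if disparity_px <= 0:
        raise ValueError("matched point must have positive disparity")
    return focal_px * baseline_m / disparity_px
```

The same relation covers the single-moved-lens variant: the travel distance plays the role of the baseline.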
All of these methods first determine distance on a section of the picture, build a surface from that section and then obtain height; combined with the picture-space transformation rules, a virtual space model can be built and nested onto the picture, from which the required data are extracted by software analysis.
Detailed solution for three-dimensional virtual space modeling: 1. Any segment perpendicular to the picture center line at the near end is the near-end A line. 2. The distance between points D and H on the near-end line, through which the two laser lines pass, is the spacing between the centers of the two laser heads, which is known data. 3. The conversion between picture length and actual spatial length on the near-end A line is determined from the drawn length of segment DH. 4. Because the actual spacing of the laser lines is constant, the segments of any other line perpendicular to the center line through which the laser lines pass represent equal actual distances, even though their lengths on the picture differ. 5. The length of any other segment perpendicular to the center line crossed by the laser lines can be measured using the laser-spacing segment on it as the base length; segments DH and D1H1, for example, represent equal actual spatial distances, the two laser lines being equidistant and lying on the visual oblique lines, and the distance can also be calculated from the ratio of the drawn lengths of the near-end segment DH to the far segment D1H1. 6. The distance between near-end line A and far-end line B can be calculated from the trigonometric data of the right triangle PDO in the picture, the angle PDO being the same as the angle of the actual spatial relative position. 7. Angle n.H1.H is the varying tilt angle of the picture. 8. Combining these data and change rules, a plane change data model is established; the other three surfaces are established likewise, completing a three-dimensional virtual space model. The virtual deformed space can then be restored to the actual rectangular space.
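Steps 3, 5 and 6 above can be combined into one sketch. This is an illustrative rendering with hypothetical names and under the assumption that angle PDO is measured between the near-end line and the diagonal to the far line's midpoint (consistent with the right-triangle construction elsewhere in the text).

```python
import math


def build_row_model(dh_px: float, d1h1_px: float,
                    laser_spacing_m: float,
                    angle_pdo_deg: float) -> tuple[float, float, float]:
    """Per-row data for the plane change data model.

    dh_px:   drawn length of the near-end laser-spacing segment DH.
    d1h1_px: drawn length of a farther laser-spacing segment D1H1.
    Both segments represent laser_spacing_m in the real scene.
    Returns (near_scale, far_scale, depth_A_to_B):
      near_scale: metres per pixel on the near-end A line (step 3),
      far_scale:  metres per pixel on the far row (step 5),
      depth:      distance from line A to line B via triangle PDO (step 6).
    """
    near_scale = laser_spacing_m / dh_px
    far_scale = laser_spacing_m / d1h1_px
    depth = (laser_spacing_m / 2.0) * math.tan(math.radians(angle_pdo_deg))
    return near_scale, far_scale, depth
```

Repeating this for every row yields the plane change data model of step 8; the other three surfaces follow the same pattern.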
Applied to traffic navigation, the method enables individuals to construct their own three-dimensional traffic maps with ease.
This space-composition technique can be applied to real-time processing and post-hoc analysis of all image data, for example in public security, water management, fire fighting, city management, traffic and emergency response. It goes beyond the computer's existing planar cognition, makes data directly three-dimensional, and lets the computer directly distinguish each object's attributes, spatial position and other information. The technique permits automatic modeling: simply put, a virtual image-change space is constructed and nested into the photographed image, whereupon objects naturally acquire spatial position coordinates, every pixel is given a coordinate, and length, width, height, area and so on follow naturally, as a filling process. The camera equipment faithfully records everything it sees; the question is whether what was seen can be restored correctly. The only added cost is analysis software on top of the existing camera equipment. In this application, inertial calculation and laser multi-point scanning merely assist error correction and are not central.
The technique determines relative virtual ground-plane data suited to the image's perspective, then follows the conversion between the actual size and the picture size of a confirmed segment on the center line and the near-end line, the near-large, far-small behaviour of equally spaced dimensions, the extension of the inclination angle R into the distance, and so on. The plane is raised in the manner of rising water or lowered as a descending horizontal plane; pixels at the same horizontal height are equal in height, distance and area, and three-dimensional data for all pixels are obtained.
As an optimization, the two parallel marker lines are either two solid parallel marker lines of known spacing inherent in the image, or two laser parallel marker lines of known spacing formed in the photographed real scene by two laser lines, parallel to the lens axis, emitted during shooting by two laser heads symmetrically arranged on either side of the lens vertical line on the shooting device. The attached laser heads and the shooting device are each equipped with a gyroscope, a theodolite or compass, and a level for calibrating direction and level.
As an optimization, the shooting device that photographs the left, right and middle images has three parallel lenses arranged at equal spacing, the middle lens taking the middle image. The center line or center-line reference object is identified and marked as an image-recognition feature of the center line or reference object in the image. The recognition feature on the center line is the arrangement of characteristic image-point elements, such as point colour and brightness values, lying in sequence along the center line of the photographed image. The recognition feature of the center-line reference object is a scene contour with distinctive image features in the photographed image; the two parallel marker lines in the middle image are calculated and marked from the change of that contour, or of its transverse distance to the center line, across the left, right and middle images.
The system for realizing the virtual space establishing method comprises a single-lens shooting device, or a shooting device with an attached laser head whose axis is parallel to the lens axis at a known perpendicular distance; the lens axis of the shooting device is held parallel to a marker straight line at a known perpendicular distance in the real scene, and the real scene and the parallel marker line are photographed together, the parallel marker line being an actual parallel line in the photographed scene or a laser parallel marker line projected into the scene by the attached laser head. Alternatively, the shooting device is provided with main and auxiliary lenses whose axes are parallel and separated by a known spacing; the two lenses photograph the real scene simultaneously, the center line (or center-line reference object) on which the lens axis lies is identified and marked in the auxiliary image taken by the auxiliary lens, the inclined center line on which the auxiliary lens axis lies is marked in the main image according to that identification, and this inclined center line in the main image serves as the parallel marker line in the photographed scene. To identify and mark the center line, the pixel parameters along the center-line path in the image may be recorded, or at least two distinctive marker points on that path may be identified; the same two points are then found in the main image, and the straight line connecting them is the inclined center line on which the auxiliary lens axis lies, used as the parallel marker line in the photographed scene.
The center-line reference object comprises a distinctive marker point through which the center line passes in the image, together with a reference-object contour line parallel to the center line on a reference object outside it. A straight line drawn in the main image through the marker point and parallel to the contour line is the inclined center line on which the auxiliary lens axis lies, used as the parallel marker line in the photographed scene; alternatively, the transverse distance from each distinctive marker point to the center line is measured and recorded in the auxiliary image, and the inclined marker line is reconstructed in the main image from the ratio of each pair of point-to-center-line distances. The perpendicular distance is the distance between the two parallel lines when both stand perpendicular to the virtual ground plane of the photographed scene, each lying in a vertical plane perpendicular to that ground.
Taking as reference the actual perpendicular distance represented, at any depth in the image, by the parallel marker line and the image center line on which the lens axis lies, scenery that appears larger when near and smaller when far in the planar image is established in a three-dimensional virtual space by real-scene recovery. That is, the spacing between the center line and the inclined parallel marker line in the photographed image, which shrinks with distance, is recovered to the parallel, known, fixed spacing of the real scene; this serves as the basic data with which real-scene recovery is performed on all scenery in the whole image, establishing a real-scene-recovered three-dimensional image. The shooting device and the attached laser head are each equipped with a gyroscope, theodolite, compass, level and range finder for calibrating direction and level. In establishing the near-large, far-small scenery of the planar image in a three-dimensional virtual space by real-scene recovery, the reduction ratio is the same for all scenery at the same distance: the length, width and height of scenery at a given distance in the image, and any other dimension perpendicular to the lens axis, are reduced by the same ratio, and during recovery they are enlarged by the same ratio. The method has the advantages that shooting is easy and the photographed image can be established in a virtual space.
As an optimization, two symmetrically arranged laser heads are mounted on either side of the lens vertical line of the shooting device and emit two laser lines parallel to the lens axis, forming two laser parallel marker lines in the image; or the shooting device photographs a real-scene image of two parallel marker lines extending longitudinally in parallel from the near end to the far end. Alternatively, the shooting device is provided with two left and right parallel lenses that photograph the left, right and middle images of the same real scene, each individually and both jointly; the center lines (or center-line reference objects) on which the lens axes lie are identified and marked in the left and right images, the inclined center lines on which the left and right lens axes lie are marked in the jointly photographed middle image according to those identifications, and these two inclined center lines in the middle image serve as the two parallel marker lines in the photographed scene.
Based on the known spacing that the two parallel mark lines represent at any depth in the image, the scenery of varying apparent size in the planar image is built into a three-dimensional virtual space by recovery. As above, the reduction ratio is the same for all objects at the same distance — the length, width, height and any other dimension perpendicular to the lens axis shrink by the same ratio in the image — so recovery enlarges them by that same ratio. The method is easy to shoot with and allows the captured image to be built into a virtual space.
As an optimization, recovery of longitudinal length proceeds as follows: the transverse spacing of the two parallel mark lines at the near end of the image is known, and the angle between one end of the near-end transverse line and the midpoint of the far-end transverse line is the same in the image as in the real scene; measuring this angle, the actual longitudinal depth from near end to far end is obtained by geometry, and the scenery in the image is placed at that depth in the virtual space. The geometric calculation uses a right triangle: the short leg is half the known near-end spacing of the two parallel mark lines, the angle at the near end — between the near-end transverse line and the line joining its endpoint to the midpoint of the far-end transverse line — is taken from the image, and the long leg (the depth) is computed with the trigonometric functions of the right triangle.
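The right-triangle step above can be sketched as follows — a minimal illustration, not the patented implementation; the function name and units are assumed for the example:

```python
import math

def longitudinal_depth(near_spacing_m, angle_deg):
    """Depth from the near-end line to the far-end line.

    Right triangle: the short leg is half the known real spacing of
    the two parallel mark lines at the near end; angle_deg is the
    angle measured in the image between the near-end transverse line
    and the line joining a near-end endpoint to the midpoint of the
    far-end line.  tan(angle) = depth / (spacing / 2).
    """
    half = near_spacing_m / 2.0
    return half * math.tan(math.radians(angle_deg))
```

For example, with a 2 m mark-line spacing and a measured angle of 45 degrees, the recovered depth is 1 m.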
As an optimization, transverse length recovery uses, for each recovered longitudinal section, the ratio between the mark-line spacing measured in the image at that section and the mark-line spacing at the near end; the transverse length of every object on that section is enlarged by the inverse of this ratio in the virtual space. Vertical height recovery is the same: the image spacing of the mark lines at each recovered section is compared with the near-end spacing, and the vertical height of every object on that section is enlarged by the inverse ratio. Size recovery in live-scene recovery thus corrects the three-dimensional size of all objects on each recovered longitudinal section by inverse amplification according to this spacing ratio.
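The inverse-amplification rule can be illustrated with a short sketch (names and units assumed): because the mark lines have a known, constant real spacing, the spacing measured in pixels on any section gives that section's metres-per-pixel scale, which recovers both transverse lengths and vertical heights there.

```python
def real_length(image_len_px, mark_spacing_px, real_spacing_m):
    """Recover the real transverse length (or vertical height) of a
    feature lying on one longitudinal section.

    mark_spacing_px: pixel spacing of the two parallel mark lines
    measured on that section; real_spacing_m: their known real
    spacing.  The farther the section, the smaller mark_spacing_px,
    so the larger the amplification — the inverse-ratio rule.
    """
    metres_per_pixel = real_spacing_m / mark_spacing_px
    return image_len_px * metres_per_pixel
```

For example, a 50-pixel feature on a section where the 2 m mark lines appear 100 pixels apart is recovered as 1 m long.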
As an optimization: every picture has a center line, which coincides with the center line of the camera and of the actual road surface being shot; every picture has a center point, the picture core, which lies in the plane formed by the center line as base together with its height; when the camera lens center line is parallel to the ground, vertical line A and vertical line B are theoretically equal in height; and the far ends of the top, bottom, left and right edges incline toward the picture center at angle R, which is constant within the same static image.
The recovery first determines a relative virtual ground-plane datum suited to the image's perspective, then follows the conversion between the actual size and the picture size of a segment of the center line and the near-end line, applying the rule that equal real sizes shrink with distance and that the inclination angle R extends into the distance. The plane is then raised as a whole, like a rising water level (or lowered as a reverse horizontal plane); pixels at the same level height are equal in height, distance and area, and three-dimensional data emerge for every pixel.
In general we assume that a computer recognizes only 0 and 1, with no notion of space or plane — no spatial element in its logical architecture. We, by contrast, observe things near and far with our eyes, remember an object's color and also its spatial position, naturally separate the things in front of us into independent objects by their spatial positions, and objects of different apparent sizes follow rules such as far-is-small, near-is-large and far-is-high, near-is-low.
The human eye is the first observer: light projected onto the retina is assembled and organized by the brain, and nearby scenes are perceived as level, flat and vertical without difficulty. An image recorded by a camera is then viewed by the eye from a second-person standpoint, and in that image space is deformed. The lens and the eye see the same thing, but when a human interprets the result, the computer — limited by how we present it — can still only read 0 and 1.
The question is how, once the image has been recorded in deformed form, its data can be restored and the objects in the space separated one by one. This requires studying the invariant spatial elements in the image frame and their conversion rules.
Taking as an example a picture of a straight standard road, shot from a position on the road's center line with the camera perpendicular to it, research reveals the following points: 1. Every picture has a center line, and it coincides with the center line of the camera and of the actual road surface. 2. Every picture has a center point, called the picture core (see fig. 1), which also lies in the plane formed by the center line as base together with its height; when the camera lens center line is parallel to the ground, vertical line A and vertical line B are theoretically equal in height. 3. The far ends of the top, bottom, left and right edges incline toward the picture center at angle R, and this angle is constant within the same static image. 4. Equidistant lines in the image lie on the visual oblique lines on either side of the image center line, and the inclination angle R of these oblique lines is likewise constant within the same static image. 5. The included angle B, formed by taking any near-end segment perpendicular to the center line as the base of a triangle and connecting any point on it off the center line to the far end of the center line, is equal to the actually measured angle; the corresponding angle taken from the far end equals angle B plus the inclination angle R.
Once this angle is obtained and the width of the standard road surface is known, the distance between the far-end line and the near-end line can be calculated with trigonometric functions. These data form a virtual data space; the measured angle and the calculated distance serve only to correct its errors. With the distance and position information of two places determined, the completed three-dimensional spatial data can be embedded in the picture, and the data of any other point are calculated according to the reduction-transformation rule of the image-space nested data model.
The above covers the standard road surface. For irregular road surfaces, two specific fittings on the camera equipment are used at shooting time: two laser heads arranged equidistantly on either side of the lens center, emitting two laser lines parallel to the center line. The line midway between the two traces swept on the ground coincides with the center line in the image data and can also be used to measure the visual oblique angle R. Because the spacing of the two laser lines is fixed, the image is in effect equipped with a ruler of suitable scale, which adapts the measurement to most conditions and corrects errors in moving images at any time.
There is also a dual-lens method of space building, analogous to human eyes, in which the two lens analyses can work independently or correct each other. After the photos from the two lenses are restored and superimposed, a three-dimensional space is constructed over the spacing between the two center lines, and the line midway between them becomes the center line of the synthesized picture. Once the model is built, every pixel effectively has double data positioning, which can be summarized as triangulation. The same work can be done by moving a single lens through a known distance. The method positions accurately, corrects errors at any time, can reach centimeter-level precision or better, and measures the data directly rather than indirectly.
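The double positioning of each pixel can be sketched with the standard parallel-stereo relation — a simplified illustration under assumed names, not the text's exact procedure: with two lenses whose axes are parallel at a known spacing (the baseline), a point's depth follows from how far it shifts between the two images.

```python
def stereo_depth(baseline_m, focal_px, disparity_px):
    """Depth of a point seen by two parallel lenses.

    baseline_m: known spacing between the two lens axes;
    focal_px:   focal length expressed in pixels;
    disparity_px: horizontal shift of the point between the two
    images.  Similar triangles give depth = baseline * f / disparity.
    """
    return baseline_m * focal_px / disparity_px
```

For example, a 10 cm baseline, a 1000-pixel focal length and a 10-pixel disparity place the point 10 m away; moving one lens through a known distance and using the two exposures as the pair gives the same triangulation.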
These methods first determine the distance on a section of the picture, build a surface from that section, and then obtain heights; combined with the picture's spatial transformation rules, a virtual space model can be built, nested onto the picture, and the required data extracted by software analysis.
Three-dimensional virtual space modeling in detail: 1. Any near-end segment perpendicular to the picture's center line is the near-end A line. 2. On the near-end line, the distance between points D and H, through which the two laser lines pass, equals the spacing between the centers of the two laser heads and is known data. 3. The conversion between picture length and actual spatial length on the near-end A line is fixed by the drawn length of segment DH. 4. Because the actual spacing of the laser lines is constant, every other segment perpendicular to the center line through which the laser lines pass represents the same actual distance, even though its drawn length in the picture differs. 5. The length of any other such segment can therefore be measured using the laser-spacing portion on it as the base length; for example, segments DH and D1H1 represent equal actual spatial distances, the two laser lines being equidistant and lying on the visual oblique lines, and distance can also be calculated from the ratio of the drawn lengths of the near line DH and the far line D1H1. 6. The distance between near-end line A and far-end line B can be calculated from the right triangle PDO of the picture using trigonometric data, the angle PDO being the same as in the actual spatial relative position. 7. Angle n.H1.H is the varying tilt angle of the picture. 8. Combining these data and change rules, a plane-change data model is established; the other three faces are established in the same way, completing a three-dimensional virtual space model, and the deformed virtual space can be restored to an actual rectangular space.
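Steps 3, 5 and 6 above can be walked through numerically — the measurement values below are hypothetical, chosen only to show how the known laser spacing fixes the scale of each section and the depth of the far line:

```python
import math

# Hypothetical measurements taken from one picture:
LASER_SPACING_M = 0.50   # known real spacing of the two laser lines
DH_PX = 200.0            # drawn length of near segment DH
D1H1_PX = 50.0           # drawn length of a far segment D1H1
ANGLE_PDO_DEG = 80.0     # angle PDO measured in the picture

# Step 3: pixel-to-metre conversion on the near-end A line.
near_scale = LASER_SPACING_M / DH_PX        # metres per pixel

# Step 5: a far section's scale from its own drawn laser spacing.
far_scale = LASER_SPACING_M / D1H1_PX       # metres per pixel there

# Step 6: depth of the far line via right triangle PDO, where the
# short leg DO is half the real spacing and tan(PDO) = PO / DO.
depth = (LASER_SPACING_M / 2) * math.tan(math.radians(ANGLE_PDO_DEG))
```

Here the far section's scale is four times the near one (its drawn spacing is a quarter as long), which is exactly the inverse-amplification ratio used elsewhere in the text.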
The method is applied to traffic navigation, and can enable individuals to easily construct their own three-dimensional traffic maps.
This space-composition technology can be applied to real-time processing and after-the-fact analysis of all image data, for example in public security, water affairs, fire fighting, city management, traffic and emergency response. It goes beyond the computer's existing planar cognition, makes data directly three-dimensional, and lets the computer directly distinguish each object's attributes, spatial position and other information. The technology enables automatic modeling: simply put, a virtual image-change space is constructed and then nested into the captured image, whereupon objects naturally acquire spatial position coordinates — every pixel is given a coordinate, and length, width, height, area and so on exist naturally, a filling-in process. The camera equipment faithfully records all that it sees; the question is whether what was seen can be restored in the correct way. The only cost is analysis software added to the existing camera equipment. In this application, inertial calculation and laser multipoint scanning merely assist error correction and are not central.
The technology first determines a relative virtual ground-plane datum suited to the image's perspective, then follows the conversion between the actual size and picture size of a confirmed segment of the center line and the near-end line — equal real sizes shrinking with distance, the inclination angle R extending into the distance, and so on. The plane is raised as a whole like a rising water level, or lowered as a reverse horizontal plane; pixels at the same level height are equal in height, distance and area, and the three-dimensional data of every pixel are obtained.
As an optimization, the two parallel mark lines are either two solid parallel mark lines of known spacing inherent in the image, or two parallel laser mark lines of known spacing formed in the photographed scene by two laser lines, parallel to the lens axis, projected onto the ground during shooting by two laser heads symmetrically arranged on either side of the lens's vertical line on the shooting device. The two added laser heads and the shooting device each carry a gyroscope, a theodolite or compass, and a level for calibrating direction and levelness.
As an optimization, the shooting device that captures the left, right and middle images carries three parallel lenses arranged at equal spacing, the middle lens shooting the middle image. The center line or center-line reference object is identified and marked by means of an image-recognition feature in the image. The recognition feature on the center line is the ordered sequence of characteristic pixel values, such as point color and brightness, along the center line in the captured image. The recognition feature of a center-line reference object is a scene contour with prominent image features in the captured image; the two parallel mark lines in the middle image are then calculated and marked from the change of that contour, or of its transverse distance to the center line, across the left, right and middle images.
With this technical scheme, the method and system for establishing a virtual space from an image have the advantages of being easy to shoot with and of allowing the captured image to be built into a virtual space.
Drawings
FIG. 1 is a side-view analysis diagram of the relationship between the image and actual space for the image virtual-space establishing method and system of the present invention; FIG. 2 is a front-view analysis diagram of the image; FIG. 3 is a schematic diagram of the application of dual laser lines in the image; FIG. 4 is a schematic diagram of the distance-fixing principle for two eyes or two lenses; FIG. 5 is a schematic diagram of single-lens fixed-distance traverse analysis; FIG. 6 is a near-end A-line virtual-space modeling analysis diagram; FIG. 7 is an analysis diagram of the match, in the captured image, between the center line and the left and right laser lines of the laser heads on either side of the lens.
Reference numbers in fig. 1: camera 1, equidistant line C 2, central extension line 3, equidistant line B 4, perpendicular line A 5, perpendicular line B 6, circle center 7. In fig. 3: laser line A 8, laser line B 9. In fig. 4: first lens center line 10, distance line 11, second lens center line 12. In fig. 6: near-end line A 13 (also called the near-end transverse line), far-end line B 14 (also called the far-end transverse line), center line 15. In fig. 7: center line 15, left laser line 16, right laser line 17.
Detailed Description
The image virtual-space establishing method makes the lens axis of the shooting device parallel to a straight mark line whose vertical distance from the axis is known in the real scene, and shoots the scene together with the parallel mark line. The parallel mark line is either an actual parallel mark line present in the photographed scene, or a parallel laser mark line projected into the scene by a laser head attached to the shooting device. Alternatively, the shooting device carries a main lens and an auxiliary lens whose axes are parallel with known spacing; the two lenses shoot the scene simultaneously, the center line (or center-line reference object) of the auxiliary lens axis is identified and marked in the auxiliary image, the tilted center line of the auxiliary lens axis is then marked in the main image according to that identification, and this tilted center line serves as the parallel mark line in the captured scene. To identify and mark the center line, either the pixel parameters along the center line's path in the image are identified and marked, or at least two distinct mark points on that path are identified; once the same two mark points are found in the main image, the straight line joining them is the tilted center line of the auxiliary lens axis, used as the parallel mark line in the captured scene.
The center-line reference object includes distinct mark points through which the center line passes in the image, and a reference contour line parallel to the center line on a reference object off the center line. A straight line drawn in the main image through a distinct mark point and parallel to the reference contour line is the tilted center line of the auxiliary lens axis, used as the parallel mark line in the captured scene; alternatively, the transverse distance from each distinct mark point to the center line is measured and recorded in the auxiliary image, and the tilted mark line is reconstructed in the main image from the distance ratio of each pair of mark points to the center line. The vertical distance is the spacing between the two parallel lines when both are perpendicular to the virtual ground plane of the photographed scene, each lying in a vertical plane perpendicular to that virtual ground.
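Reconstructing the tilted center line from two identified mark points is simple line fitting; a minimal sketch (function name and coordinate convention assumed, points given as image (x, y) pairs):

```python
def tilted_centerline(p1, p2):
    """Return x(y) for the straight line through two distinct mark
    points identified in the main image; sampling it reproduces the
    tilted center line used as the parallel mark line.
    """
    (x1, y1), (x2, y2) = p1, p2
    slope = (x2 - x1) / (y2 - y1)   # assumes the points differ in y
    return lambda y: x1 + slope * (y - y1)
```

Any further mark points found on the same path can then be checked against this line to reject misidentifications.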
Taking as a reference the actual vertical distance represented, at any depth in the image, by the parallel mark line and the image center line on which the lens axis lies, the scenery of varying apparent size in the planar image is built into a three-dimensional virtual space by live-scene recovery. That is, the spacing between the center line of the captured image and the parallel mark line, which appears tilted and shrinking with distance, is restored to the parallel, known, fixed spacing of the real scene; this restoration serves as the basic datum for recovering every object in the whole image, yielding a live-scene-recovered three-dimensional image. The shooting device and its attached laser head each carry a gyroscope, a theodolite, a compass, a level and a range finder for calibrating direction and levelness. Building the scenery into a three-dimensional virtual space by live-scene recovery works because the reduction ratio is the same for all objects at the same distance: the length, width, height and any other dimension perpendicular to the lens axis of objects at the same distance are reduced in the image by the same ratio, so recovery enlarges them by that same ratio. The method has the advantage of being easy to shoot while allowing the captured image to be built into a virtual space.
The lens of the shooting device is positioned between two parallel mark lines of known spacing, with its axis parallel to both, and shoots the image of the two mark lines extending longitudinally in parallel from near end to far end. Alternatively, the two left-right parallel lenses of the shooting device respectively and jointly shoot left, right and middle images of the same scene; the center line (or center-line reference object) of each lens axis is identified and marked in the left and right images, the tilted center lines of the two lens axes are marked in the jointly captured middle image according to those marks, and these two tilted center lines serve as the two parallel mark lines in the captured scene.
Based on the known spacing that the two parallel mark lines represent at any depth in the image, the scenery of varying apparent size in the planar image is built into a three-dimensional virtual space by recovery. The two parallel mark lines can be parallel solid lines in the real scene, or parallel laser lines formed on the scene by the laser heads of the shooting device: either two solid parallel mark lines of known spacing inherent in the image, or two parallel laser mark lines of known spacing formed in the photographed scene by two laser lines, parallel to the lens axis, projected onto the ground during shooting by two laser heads symmetrically arranged on either side of the lens's vertical line. The two added laser heads and the shooting device each carry a gyroscope, a theodolite or compass, and a level for calibrating direction and levelness. The scenery is built into the three-dimensional virtual space by live-scene recovery because the reduction ratio is the same for all objects at the same distance — the length, width, height and any other dimension perpendicular to the lens axis shrink by the same ratio in the image — so recovery enlarges them by that same ratio.
Recovery of longitudinal length proceeds as follows: the transverse spacing of the two parallel mark lines at the near end of the image is known, and the angle between one end of the near-end transverse line and the midpoint of the far-end transverse line is the same in the image as in the real scene; measuring this angle, the actual longitudinal depth from near end to far end is obtained by geometry, and the scenery in the image is placed at that depth in the virtual space. The geometric calculation uses a right triangle: the short leg is half the known near-end spacing of the two parallel mark lines, the angle at the near end — between the near-end transverse line and the line joining its endpoint to the midpoint of the far-end transverse line — is taken from the image, and the long leg (the depth) is computed with the trigonometric functions of the right triangle.
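The right-triangle relation just described can be written out explicitly; with assumed symbols $W$ for the known near-end spacing of the two parallel mark lines, $\theta$ for the angle measured in the image, and $L$ for the longitudinal depth:

```latex
\tan\theta \;=\; \frac{L}{W/2}
\qquad\Longrightarrow\qquad
L \;=\; \frac{W}{2}\,\tan\theta .
```

The short leg $W/2$ is half the mark-line spacing because the angle is taken from one endpoint of the near-end line to the midpoint of the far-end line, which lies on the center line.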
Transverse length recovery uses, for each recovered longitudinal section, the ratio between the mark-line spacing measured in the image at that section and the mark-line spacing at the near end; the transverse length of every object on that section is enlarged by the inverse of this ratio in the virtual space. Vertical height recovery is the same: the image spacing of the mark lines at each recovered section is compared with the near-end spacing, and the vertical height of every object on that section is enlarged by the inverse ratio. Size recovery in live-scene recovery thus corrects the three-dimensional size of all objects on each recovered longitudinal section by inverse amplification according to this spacing ratio.
The shooting device that captures the left, right and middle images carries three parallel lenses arranged at equal spacing, the middle lens shooting the middle image. The center line or center-line reference object is identified and marked by means of an image-recognition feature in the image. The recognition feature on the center line is the ordered sequence of characteristic pixel values, such as point color and brightness, along the center line in the captured image. The recognition feature of a center-line reference object is a scene contour with prominent image features in the captured image; the two parallel mark lines in the middle image are then calculated and marked from the change of that contour, or of its transverse distance to the center line, across the left, right and middle images.
Every picture has a center line, which coincides with the center line of the camera and of the actual road surface being shot; every picture has a center point, the picture core, which lies in the plane formed by the center line as base together with its height; when the camera lens center line is parallel to the ground, vertical line A and vertical line B are theoretically equal in height; and the far ends of the top, bottom, left and right edges incline toward the picture center at angle R, which is constant within the same static image.
The recovery first determines a relative virtual ground-plane datum suited to the image's perspective, then follows the conversion between the actual size and the picture size of a segment of the center line and the near-end line, applying the rule that equal real sizes shrink with distance and that the inclination angle R extends into the distance. The plane is then raised as a whole, like a rising water level (or lowered as a reverse horizontal plane); pixels at the same level height are equal in height, distance and area, and three-dimensional data emerge for every pixel.
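The per-pixel outcome of this "plane lifting" can be sketched as follows — a simplified illustration with assumed names and coordinate conventions, where each image row is treated as one longitudinal section whose scale comes from the mark-line spacing measured on that row:

```python
def pixel_to_3d(u_px, v_px, center_u_px, ground_v_px,
                mark_spacing_px, real_spacing_m, depth_m):
    """Place one image pixel into the recovered virtual space.

    mark_spacing_px: spacing of the two parallel mark lines measured
    on the pixel's row; real_spacing_m: their known real spacing;
    depth_m: the recovered longitudinal depth of that section.
    Lateral offset and height are the pixel's offsets from the center
    line (center_u_px) and from the virtual ground line (ground_v_px),
    enlarged by the section's metres-per-pixel scale.
    """
    metres_per_pixel = real_spacing_m / mark_spacing_px
    x = (u_px - center_u_px) * metres_per_pixel    # lateral position
    z = (ground_v_px - v_px) * metres_per_pixel    # height above plane
    return (x, depth_m, z)
```

Applied to every pixel of the image, this yields the "three-dimensional data of each pixel" that the text describes, with pixels on the same row sharing one scale.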
In general we assume that a computer recognizes only 0 and 1, with no notion of space or plane — no spatial element in its logical architecture. We, by contrast, observe things near and far with our eyes, remember an object's color and also its spatial position, naturally separate the things in front of us into independent objects by their spatial positions, and objects of different apparent sizes follow rules such as far-is-small, near-is-large and far-is-high, near-is-low.
The human eye is the first observer: light projected onto the retina is assembled and organized by the brain, and nearby scenes are perceived as level, flat and vertical without difficulty. An image recorded by a camera is then viewed by the eye from a second-person standpoint, and in that image space is deformed. The lens and the eye see the same thing, but when a human interprets the result, the computer — limited by how we present it — can still only read 0 and 1.
The question is how, once the image has been recorded in deformed form, its data can be restored and the objects in the space separated one by one. This requires studying the invariant spatial elements in the image frame and their conversion rules.
Taking as an example a picture of a straight standard road, shot from a position on the road's center line with the camera perpendicular to it, research reveals the following points: 1. Every picture has a center line, and it coincides with the center line of the camera and of the actual road surface. 2. Every picture has a center point, called the picture core (see fig. 1), which also lies in the plane formed by the center line as base together with its height; when the camera lens center line is parallel to the ground, vertical line A and vertical line B are theoretically equal in height. 3. The far ends of the top, bottom, left and right edges incline toward the picture center at angle R, and this angle is constant within the same static image. 4. Equidistant lines in the image lie on the visual oblique lines on either side of the image center line, and the inclination angle R of these oblique lines is likewise constant within the same static image. 5. The included angle B, formed by taking any near-end segment perpendicular to the center line as the base of a triangle and connecting any point on it off the center line to the far end of the center line, is equal to the actually measured angle; the corresponding angle taken from the far end equals angle B plus the inclination angle R.
Once the angle is obtained and the width of the standard road surface is known, the distance between the far-end line and the near-end line can be calculated with a trigonometric function. These data form a virtual data space; the measured angle and calculated distance merely correct its errors. With the distance and position information of two places determined, the completed three-dimensional space data can be embedded in the picture, and the data of any other point are calculated according to the reduction and transformation rules of the nested image-space data model.
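The right-triangle relation described above can be sketched as a short calculation. This is a minimal illustration, assuming the included angle is measured at the near-end line and the road width is known; the function name is hypothetical:

```python
import math

def longitudinal_distance(road_width_m, included_angle_deg):
    """Distance along the center line from the near-end line to a far point,
    from a right triangle whose short side is half the known road width and
    whose included angle is the angle measured at the near-end line."""
    half_width = road_width_m / 2.0
    return half_width * math.tan(math.radians(included_angle_deg))

# For a 6 m wide road and a measured 45-degree included angle,
# the far point lies 3 m down the center line.
print(longitudinal_distance(6.0, 45.0))
```

The measured angle thus converts a transverse distance known from the picture into a longitudinal distance, which is the basis of the virtual data space described above.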
The above concerns a standard road surface. For an irregular road surface, two specific attachments on the camera equipment must be used while shooting: two laser heads arranged equidistantly on either side of the lens center, emitting two laser lines parallel to the center line. The center line of the two lines scanned onto the ground coincides with the center line in the image data and can also be used to measure the visual oblique angle R. Because the spacing of the two laser lines is a fixed property of the setup, the image is effectively equipped with a suitable ruler, so the method adapts to measurement and application in most conditions and can correct errors in a moving image at any time.
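The "ruler" role of the fixed laser spacing can be illustrated with a minimal sketch (assumed interface): the real spacing of the laser heads is known, so the pixel separation of the two laser lines on any image row gives that row's scale.

```python
def meters_per_pixel(laser_spacing_m, laser_spacing_px):
    """Scale of one image row: known real spacing of the two laser lines
    divided by their measured pixel separation on that row."""
    return laser_spacing_m / laser_spacing_px

def real_length(segment_px, laser_spacing_m, laser_spacing_px):
    """Real length of any segment lying on the same row as the measured
    laser spacing."""
    return segment_px * meters_per_pixel(laser_spacing_m, laser_spacing_px)
```

For example, with laser heads 0.5 m apart appearing 100 px apart on a row, a 200 px segment on that row represents 1.0 m.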
There is also a dual-lens method of space construction, analogous to human eyes, in which the two lens analyses can work independently or correct each other. After the photographs obtained by the two lenses are restored and overlapped, a three-dimensional space is constructed from the spacing of the two center lines, and the line midway between them becomes the center line of the synthesized picture. Once the model is built, each pixel point effectively has double data positioning, which can be summarized as triangle positioning. The same work can be completed by moving a single lens through a fixed distance. The method positions accurately and corrects errors at any time, can reach centimeter-level accuracy or better, and measures the data directly rather than indirectly.
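The "triangle positioning" of the dual-lens method is not given a formula in the text; as a hedged stand-in, standard stereo triangulation computes depth from the known baseline between the two lens center lines and the pixel disparity of a point between the two images:

```python
def stereo_depth(focal_px, baseline_m, disparity_px):
    """Standard stereo triangulation: depth = focal length (in pixels)
    times baseline divided by disparity. This is a stand-in sketch for
    the patent's 'triangle positioning'; the patent's own construction
    may differ in detail."""
    if disparity_px <= 0:
        raise ValueError("disparity must be positive")
    return focal_px * baseline_m / disparity_px

# A point shifted 35 px between two lenses 0.1 m apart,
# with a 700 px focal length, lies 2 m away.
print(stereo_depth(700, 0.1, 35))
```

The same relation applies when a single lens is traversed through a fixed distance, the traverse distance playing the role of the baseline.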
These methods first determine distance on a section of the picture, build a surface from that section, then obtain height; combined with the picture-space transformation rules, a virtual space model can be built and nested on the picture so that the required data can be extracted by software analysis.
Detailed solution for three-dimensional virtual space modelling: 1. Any line segment perpendicular to the picture center line at the near end is the near-end A line. 2. The distance between points D and H, where the two laser lines cross the near-end line, equals the center-to-center spacing of the two laser heads and is known data. 3. The conversion between picture length and actual spatial length on the near-end A line is determined from the drawn length of segment DH. 4. Because the actual spacing of the laser lines is constant, the actual distances represented by the laser spacing on any other perpendicular to the center line are equal, even though the spacing lengths on the picture differ. 5. The length of any other segment perpendicular to the center line crossed by the laser lines can therefore be measured using the laser spacing on that segment as the basic length; for example, the actual spatial distances represented by segments DH and D1H1 are equal, since the two laser lines are equidistant and lie on the visual oblique lines; the distance may also be calculated from the ratio of the drawn lengths of the near line DH and the far line D1H1. 6. The distance between the near-end A line and the far-end B line can be calculated from the trigonometric data of the right triangle PDO in the picture, since angle PDO equals the corresponding angle of the actual spatial relative position. 7. Angle n.H1.H is the varying inclination angle of the picture. 8. Combining these data and change rules establishes a plane change data model; building the other three faces in the same way completes a three-dimensional virtual space model, and the virtual deformed space can be restored to the actual rectangular space.
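Steps 4 and 5 above can be sketched as follows (hypothetical function names; the only inputs are drawn pixel lengths and the known laser-head spacing):

```python
def restore_segment(segment_px, laser_spacing_px, laser_spacing_m):
    """Steps 4-5: real length of any segment perpendicular to the center
    line, using the laser spacing measured on that same perpendicular as
    the local basic length."""
    return segment_px * laser_spacing_m / laser_spacing_px

def reduction_ratio(near_dh_px, far_d1h1_px):
    """Step 5, alternative form: ratio of the drawn lengths of the near
    segment DH and the far segment D1H1, usable to rescale any far-row
    measurement to near-row scale."""
    return near_dh_px / far_d1h1_px
```

With the laser spacing 0.5 m appearing 100 px wide on some row, a 300 px segment on that row restores to 1.5 m; if DH draws at 100 px and D1H1 at 50 px, far-row measurements are rescaled by a factor of 2.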
Applied to traffic navigation, the method enables individuals to easily construct their own three-dimensional traffic maps.
This spatial composition technology can be applied to real-time processing and post-hoc analysis of all image data, for example in public security, water management, fire fighting, city management, traffic and emergency response. It goes beyond the computer's existing planar cognition, making the data directly three-dimensional so the computer can directly distinguish the attributes, spatial position and other information of each object. The technology permits automatic modelling: simply put, a virtual image-change space is constructed and nested into the shot image; objects then naturally obtain spatial position coordinates, every pixel is given a coordinate, and length, width, height, area and so on follow naturally, in what amounts to a filling process. The camera equipment faithfully records everything it sees; the question is only whether the recorded view can be restored in the correct way. The cost is merely adding analysis software to the existing camera equipment. In this application, inertial calculation and laser multipoint scanning only assist error correction and are not central.
The technology first determines relative virtual ground-plane data suited to the image's perspective, then follows the confirmed conversion between the actual size and picture size of a segment of the center line and near-end line, the near-to-far shrinking of equidistant sizes, the far extension of the R inclination angle, and so on. The plane is raised as a whole, like rising water, or lowered as an inverse horizontal plane; pixels at the same horizontal height are equal in height, distance and area, and thus the three-dimensional data of every pixel are obtained.
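The ground-plane assignment can be sketched as giving each pixel a 3D coordinate from its row's recovered depth and scale. This is a minimal illustration under assumed inputs (a per-row depth and scale already recovered by the methods above; names are hypothetical):

```python
def pixel_to_xyz(u_px, center_u_px, row_depth_m, row_scale_m_per_px,
                 plane_height_m=0.0):
    """Place a pixel on the virtual ground plane: lateral offset from the
    center line times the row's scale, the row's recovered depth, and the
    plane height (raised like rising water for higher planes)."""
    x = (u_px - center_u_px) * row_scale_m_per_px
    return (x, row_depth_m, plane_height_m)
```

For instance, a pixel 30 px right of the center line on a row recovered at 12 m depth with a 0.05 m/px scale maps to 1.5 m lateral offset at 12 m depth on the ground plane.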
The system implementing the virtual space establishing method for an image comprises a single-lens shooting device, or a shooting device fitted with a laser head whose axis is parallel to the lens axis at a known perpendicular axis spacing. The lens axis of the shooting device is kept parallel to a marking straight line of known perpendicular spacing in the real scene, and the real scene and parallel marking lines are photographed; the parallel marking lines are either actual parallel marking lines in the photographed real scene or laser parallel marking lines projected into the real scene by the laser head attached to the shooting device. Alternatively, the shooting device is provided with main and auxiliary lenses whose axes are parallel at a known spacing; the main and auxiliary lenses photograph the real scene simultaneously, the center line or center-line reference object on which the lens axis lies in the auxiliary image is identified and marked, the inclined center line on which the auxiliary lens axis lies is marked in the main image according to that identification, and this inclined center line in the main image serves as the parallel marking line in the photographed real scene. To identify and mark the center line, the pixel parameters along the center line's path in the image can be identified and marked, or at least two distinct marking points on that path can be identified and marked; the same two distinct marking points are then found in the main image, and the straight line connecting them is the inclined center line of the auxiliary lens axis, used as the parallel marking line in the photographed real scene.
When a center-line reference object is used, it comprises a distinct marking point through which the center line passes in the image and a reference-object contour line parallel to the center line on a reference object off the center line; a straight line through the distinct marking point and parallel to the contour line is drawn in the main image as the inclined center line of the auxiliary lens axis, serving as the parallel marking line in the photographed real scene. Alternatively, the transverse distance from each distinct marking point to the center line in the auxiliary image is measured and recorded, and the inclined marking line is reconstructed in the main image from the distance ratio of each pair of marking points to the center line, and so on. The perpendicular spacing is the distance between the two parallel lines when both are perpendicular to the virtual ground of the photographed scene, the two parallel lines lying on vertical planes perpendicular to that virtual ground.
Taking as reference the actual perpendicular spacing represented, at any distance in the image, by the parallel marking line and the image center line on which the lens axis lies, scenes that appear larger or smaller with distance in the planar image are established in a three-dimensional virtual space by real-scene recovery. That is, the spacing between the center line and the inclined parallel marking line in the shot image, which varies with distance, is recovered to the parallel, known fixed spacing of the real scene and used as basic data; real-scene recovery is then performed on all scenery in the whole image to establish the real-scene-recovered three-dimensional image. The shooting device and the attached laser head are each provided with a gyroscope, a theodolite, a compass, a level and a range finder for calibrating direction and level. Establishing scenes of distance-varying size in a three-dimensional virtual space by real-scene recovery means, for example, that the reduction ratios of all scenery at the same distance are identical: the length, width and height of scenery at one distance in the image, and any other dimension perpendicular to the lens axis, are reduced by the same ratio, and on recovery are enlarged by the same ratio. The method has the advantages of easy shooting and of establishing the shot image in the virtual space.
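The uniform-ratio recovery at a single distance can be sketched in a few lines (assumed interface; the per-distance scale is taken as already determined from the marking-line spacing at that distance):

```python
def recover_dimensions(dims_px, scale_m_per_px):
    """All scenery at one distance shrinks by one ratio, so length, width,
    height and any other dimension perpendicular to the lens axis at that
    distance recover by multiplying with the same per-distance scale."""
    return [d * scale_m_per_px for d in dims_px]
```

For example, with a scale of 0.01 m/px at some distance, pixel dimensions of 100, 50 and 200 recover to 1.0 m, 0.5 m and 2.0 m.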
As an optimization, two symmetrically distributed laser heads are arranged on either side of the lens's vertical line on the shooting device; the two laser heads emit two laser lines parallel to the lens axis, forming two laser parallel marking lines in the image. Alternatively, the shooting device photographs a real-scene image containing two parallel marking lines extending longitudinally in parallel from near end to far end. Or the shooting device is provided with left and right parallel lenses, which separately and jointly photograph left, right and middle images of the same real scene; the center lines or center-line reference objects of the lens axes in the left and right images are identified and marked, the inclined center lines of the left and right lens axes are marked in the jointly shot middle image according to those identifications, and the two inclined center lines so marked in the middle image serve as the two parallel marking lines in the photographed real scene.
Taking as reference the known spacing represented by the two parallel marking lines at any distance in the image, scenes of distance-varying size in the planar image are established in a three-dimensional virtual space by recovery. The two parallel marking lines are either two solid parallel marking lines of known spacing inherent in the image, or two laser parallel marking lines of known spacing formed in the image real scene by the two laser lines, parallel to the lens axis, that the two symmetrically distributed laser heads on either side of the lens's vertical line project onto the ground during shooting. The attached laser heads and the shooting device are each provided with a gyroscope, a theodolite or compass, and a level for calibrating direction and level. Establishing the distance-varying scenes in a three-dimensional virtual space by real-scene recovery means, for example, that the reduction ratios of all scenery at the same distance are identical: the length, width and height of scenery at one distance in the image, and any other dimension perpendicular to the lens axis, are reduced by the same ratio, and on recovery are enlarged by the same ratio.
Recovery of the longitudinal length proceeds as follows: the spacing between the near ends of the two parallel marking lines in the image, and the included angle between the near-end transverse line and the line joining its midpoint to the midpoint of the far-end transverse line, are the same as in the real scene; on this measured basis, the actual longitudinal depth from near end to far end is obtained by geometric calculation, and scenery at that depth in the image is established in the virtual space by recovery. The geometric calculation treats half the known near-end transverse spacing of the two parallel marking lines as the short side of a right triangle, takes the included angle between the near-end transverse line and the line to the midpoint of the far-end transverse line as equal to the angle in the image, and calculates the long side of the right triangle with a right-angle trigonometric function.
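The geometric calculation just described can be written directly (a sketch, assuming the included angle is measured at an endpoint of the near-end transverse line; the function name is hypothetical):

```python
import math

def longitudinal_depth(near_spacing_m, included_angle_deg):
    """Long side of the right triangle whose short side is half the known
    near-end spacing of the two parallel marking lines and whose included
    angle equals the measured image angle."""
    short_side = near_spacing_m / 2.0
    return short_side * math.tan(math.radians(included_angle_deg))
```

For marking lines 2 m apart and a measured 60-degree included angle, the near-to-far longitudinal depth is about 1.73 m.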
Transverse length recovery uses, on each recovered longitudinal section, the ratio between the image spacing of the two parallel marking lines on that section and their near-end image spacing, and inversely amplifies the transverse length of all scenery on that section in the virtual space. Vertical height recovery likewise uses the ratio between the marking-line spacing on each recovered section and the near-end spacing, and inversely amplifies the vertical height of all scenery on that section. Size recovery in real-scene recovery inversely amplifies and corrects, in the virtual space, the three-dimensional size of all scenery on each recovered longitudinal section by the same ratio of section spacing to near-end spacing.
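The inverse amplification on one longitudinal section can be sketched as follows (assumed interface; the near-end scale is taken as already known from the marking-line spacing there):

```python
def recover_at_depth(measure_px, spacing_at_depth_px, near_spacing_px,
                     near_scale_m_per_px):
    """Inverse amplification: a pixel measurement (transverse length or
    vertical height) on a recovered section is scaled up by the ratio of
    the near-end marking-line spacing to the spacing on that section,
    then converted with the near-end scale."""
    ratio = near_spacing_px / spacing_at_depth_px
    return measure_px * ratio * near_scale_m_per_px
```

For example, with a near-end spacing of 100 px, a section where the marking lines appear 50 px apart, and a near-end scale of 0.01 m/px, a 40 px measurement on that section recovers to 0.8 m.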
The shooting device for the left, right and middle images is provided with three parallel lenses distributed at equal intervals, the middle lens shooting the middle image. The center line or center-line reference object is identified and marked by an image identification feature in the image. The identification feature on the center line is the ordered sequence of characteristic image-point elements along the center line in the shot image, such as point color and brightness values. The identification feature of a center-line reference object is a scene contour with distinctive image features in the shot image; the two parallel marking lines in the middle image are calculated and marked from the change of that contour across the left, right and middle images, or from the change of its transverse distance to the center line across those images.
Every picture has a center line, coincident with the center line of the camera and of the actual road surface photographed; every picture has a center point, the frame center, also lying in the plane formed with the center line as base and the height; when the camera's lens center line is parallel to the ground, the height of vertical line A theoretically equals that of vertical line B; the far ends of the upper, lower, left and right sides incline toward the picture center at angle R, which is unchanged within the same static image.
Recovery first determines relative virtual ground-plane data suited to the image's perspective, then follows the conversion between the actual size and picture size of a segment of the center line and near-end line, the near-to-far shrinking of equidistant sizes, and the far extension of the R inclination angle; the plane is raised as a whole, like rising water, or lowered as an inverse horizontal plane, pixels at the same horizontal height being equal in height, distance and area; thus the three-dimensional data of every pixel emerge.
As shown in the figure, a computer has hitherto known only 0 and 1; it does not distinguish space from plane, and no spatial element is added to its logical architecture. When eyes observe objects far and near, they remember not only the objects' colors but also their spatial positions; objects before the eyes, possessing spatial positions, are naturally separated into independent objects, and objects of different sizes follow the perspective laws of appearing small when far and large when near, high when near and low when far.
The human eye is the first observer: light is projected onto the retina and then organized by the brain, so nearby scenes are perceived as level, flat and vertical without difficulty. When a camera records what the eye sees, the viewer examines the image from a second-person perspective, and space appears deformed in the image. A lens and an eye see the same thing, but when a computer interprets the image it can only read 0s and 1s, lacking the cognitive model the human eye brings to interpretation.
Once the image has been recorded as a deformed picture, how can the data be restored and the objects in the space be separated one by one? This requires studying the invariant elements of space in the image frame and their conversion rules.
Take as an example a picture shot while standing on the center line of a standard road, looking along the road at ninety degrees to the vertical, and analyze the straight standard road in fig. 2. The inventor finds the following points through research. 1. Every picture has a center line, which coincides with the center line of the camera and of the actual road surface. 2. Every picture has a center point, called the frame center, as shown in fig. 1, which also lies in the plane formed with the center line as base line and the height; when the camera's lens center line is parallel to the ground, the height of vertical line A theoretically equals the height of vertical line B. 3. The far ends of the upper, lower, left and right sides incline toward the center of the picture at an angle R, and this angle is unchanged within the same static image. 4. Equidistant lines in the image lie on the visual oblique lines on either side of the image center line, and the inclination angle R of those oblique lines is likewise unchanged within the same static image. 5. Taking any line segment perpendicular to the center line at the near end as the base of a triangle, the included angle B formed between that base, a point off the center line, and the far end of the center line equals the actually measured angle; the angle obtained from the base at the far end, plus the inclination angle R, again equals angle B, and fig. 2 contains an analysis diagram.
Once the angle is obtained and the width of the standard road surface is known, the distance between the far-end line and the near-end line can be calculated with a trigonometric function. These data form a virtual data space; the measured angle and calculated distance merely correct its errors. With the distance and position information of two places determined, the completed three-dimensional space data can be embedded in the picture, and the data of any other point are calculated according to the reduction and transformation rules of the nested image-space data model.
The above concerns a standard road surface. For an irregular road surface, two specific attachments on the camera equipment must be used while shooting: two laser heads arranged equidistantly on either side of the lens center, emitting two laser lines parallel to the center line. The center line of the two lines scanned onto the ground coincides with the center line in the image data and can also be used to measure the visual oblique angle R. Because the spacing of the two laser lines is a fixed property of the setup, the image is effectively equipped with a suitable ruler, so the method adapts to measurement and application in most conditions and can correct errors in a moving image at any time.
There is also a dual-lens method of space construction, analogous to human eyes, in which the two lens analyses can work independently or correct each other. After the photographs obtained by the two lenses are restored and overlapped, a three-dimensional space is constructed from the spacing of the two center lines, the line midway between them becomes the center line of the synthesized picture, and fig. 4 is an analysis diagram. Once the model is built, each pixel point effectively has double data positioning, which can be summarized as triangle positioning. The same work can be completed by traversing a single lens through a fixed distance, and fig. 5 is an analysis chart. The method positions accurately and corrects errors at any time, can reach centimeter-level accuracy or better, and measures the data directly rather than indirectly.
These methods first determine distance on a section of the picture, build a surface from that section, then obtain height; combined with the picture-space transformation rules, a virtual space model can be built and nested on the picture so that the required data can be extracted by software analysis.
The following are several sets of application analysis and actual images.
Three-dimensional virtual space modeling detailed solution:
1. Any line segment perpendicular to the picture center line at the near end is the near-end A line.
2. The distance between points D and H, where the two laser lines cross the near-end line, equals the center-to-center spacing of the two laser heads and is known data.
3. The conversion between picture length and actual spatial length on the near-end A line is determined from the drawn length of segment DH.
4. Because the actual spacing of the laser lines is constant, the actual distances represented by the laser spacing on any other perpendicular to the center line are equal, even though the spacing lengths on the picture differ.
5. The length of any other segment perpendicular to the center line crossed by the laser lines can therefore be measured using the laser spacing on that segment as the basic length; for example, the actual spatial distances represented by segments DH and D1H1 are equal, since the two laser lines are equidistant and lie on the visual oblique lines. The distance may also be calculated from the ratio of the drawn lengths of the near line DH and the far line D1H1.
6. The distance between the near-end A line and the far-end B line can be calculated from the trigonometric data of the right triangle PDO in the picture; angle PDO equals the corresponding angle of the actual spatial relative position, and FIG. 2 shows an analysis chart.
7. Angle n.H1.H is the varying inclination angle of the picture.
8. Combining these data and change rules establishes a plane change data model; building the other three faces in the same way completes a three-dimensional virtual space model, and the virtual deformed space can be restored to the actual rectangular space. Applied to traffic navigation, the method enables individuals to easily construct their own three-dimensional traffic maps.
This spatial composition technology can be applied to real-time processing and post-hoc analysis of all image data, for example in public security, water management, fire fighting, city management, traffic and emergency response. It goes beyond the computer's existing planar cognition, making the data directly three-dimensional so the computer can directly distinguish the attributes, spatial position and other information of each object. The technology permits automatic modelling: simply put, a virtual image-change space is constructed and nested into the shot image; objects in the picture then naturally obtain spatial position coordinates, every pixel is given a coordinate, and length, width, height, area and so on follow naturally, in what amounts to a filling process. The camera equipment faithfully records everything it sees; the question is only whether the recorded view can be restored in the correct way. The cost is merely adding analysis software to the existing camera equipment. In this application, inertial calculation and laser multipoint scanning only assist error correction and are not central.
The technology first determines relative virtual ground-plane data suited to the image's perspective, then follows the confirmed conversion between the actual size and picture size of a segment of the center line and near-end line, the near-to-far shrinking of equidistant sizes, the far extension of the R inclination angle, and so on. The plane is raised as a whole, like rising water, or lowered as an inverse horizontal plane; pixels at the same horizontal height are equal in height, distance and area, and thus the three-dimensional data of every pixel are obtained.
In summary, the method and system for establishing a virtual space of an image according to the present invention have the advantages of easy shooting and of establishing the shot image in a virtual space.
Claims (10)
1. A virtual space establishing method for an image, characterized in that the lens axis of a shooting device is kept parallel to a marking straight line of known perpendicular spacing in the real scene, and the real scene and parallel marking lines are photographed, the parallel marking lines being actual parallel marking lines in the photographed real scene or laser parallel marking lines projected into the real scene by a laser head attached to the shooting device; or the shooting device is provided with a main lens and an auxiliary lens whose axes are parallel at a known spacing, the main and auxiliary lenses photograph the real scene simultaneously, the center line or center-line reference object on which the lens axis lies in the auxiliary image shot by the auxiliary lens is identified and marked, the inclined center line on which the auxiliary lens axis lies is marked in the main image shot by the main lens according to that identification, and the inclined center line of the auxiliary lens axis in the main image serves as the parallel marking line in the photographed real scene;
taking as reference the actual perpendicular spacing represented, at any distance in the image, by the parallel marking line and the image center line on which the lens axis lies, scenes of distance-varying size in the planar image are established in a three-dimensional virtual space by real-scene recovery.
2. The virtual space establishing method according to claim 1, characterized in that the lens of the shooting device is positioned between two parallel marking lines of known spacing, with the lens axis parallel to the two parallel marking lines, and the real-scene image of the two parallel marking lines extending longitudinally in parallel from near end to far end is photographed; or the left and right parallel lenses of the shooting device separately and jointly photograph left, right and middle images of the same real scene, the center line or center-line reference object of the lens axis in the left and right images respectively shot by the left and right parallel lenses is identified and marked, the inclined center lines of the left and right lens axes are marked in the jointly shot middle image according to those identifications, and the two inclined center lines so marked in the middle image serve as the two parallel marking lines in the photographed real scene;
taking as reference the known spacing of the actual distance represented by the two parallel marking lines at any distance in the image, scenes of distance-varying size in the planar image are established in a three-dimensional virtual space by real-scene recovery.
3. The method according to claim 2, characterized in that real-scene recovery of the longitudinal length measures the near-end transverse spacing of the two parallel marking lines in the image and the included angle, equal to the real-scene angle, between the near-end transverse line and the line joining its midpoint to the midpoint of the far-end transverse line; on that measured basis the actual longitudinal depth from near end to far end is obtained by geometric calculation, and scenery at that depth in the image is established in the virtual space by real-scene recovery.
4. A virtual space establishing method according to any one of claims 2-3, characterized in that every frame has a center line coincident with the center line of the camera and of the actual road surface photographed; every picture has a center point, the frame center, also lying in the plane formed with the center line as base and the height; when the camera's lens center line is parallel to the ground, the height of vertical line A theoretically equals that of vertical line B; the far ends of the upper, lower, left and right sides incline toward the picture center at angle R, which is unchanged within the same static image;
the real scene recovery is that a relative virtual ground plane data which is suitable for image vision is determined, conversion of the actual size of a segment of a central line and a near-end line on a picture and the size of the picture is followed, the size of an equidistant line is far and near, and an R inclined angle extends far; the plane is lifted integrally in an upward water-submerged mode or is descended in a reverse horizontal plane, and pixels with the same horizontal height are equal in height, distance and area; three-dimensional data of each pixel comes out.
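A common flat-ground model gives one way to picture claim 4's equidistant-line behavior: assuming the lens center line is parallel to the ground (the claim's stated condition), each image row below the horizon maps to a unique ground distance. This is a hedged sketch; `ground_distance`, the camera height, and the pixel values are illustrative assumptions, not taken from the patent:

```python
def ground_distance(focal_px, cam_height_m, y_px, horizon_y_px):
    """Flat-ground distance of the ground point imaged at row y_px, assuming
    the lens center line is parallel to the ground: Z = f * h / (y - y_horizon)."""
    dy = y_px - horizon_y_px
    if dy <= 0:
        raise ValueError("row is at or above the horizon; not on the ground plane")
    return focal_px * cam_height_m / dy

# Rows below an assumed horizon at y = 400 px, camera 1.2 m above the ground:
rows = [600, 500, 450, 425]
dists = [ground_distance(1000.0, 1.2, y, 400) for y in rows]
# dists -> [6.0, 12.0, 24.0, 48.0] meters
```

Equal steps in ground distance land on progressively thinner bands of rows near the horizon, matching the claim's statement that equidistant lines shrink with distance in the image.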
5. The virtual space establishing method according to any one of claims 2-3, wherein the two parallel marking lines are either two physical parallel marking lines of known spacing inherent in the real scene of the image, or two laser parallel marking lines of known spacing formed on the ground of the photographed scene by two laser lines, parallel to the lens axis, emitted during shooting by two laser heads symmetrically arranged on either side of the vertical line of the lens on the photographing device.
6. A system for implementing the image virtual space establishing method according to claim 1, characterized by comprising a single-lens photographing device, or a photographing device fitted with a laser head whose axis is parallel to the lens axis at a known perpendicular distance, wherein the lens axis of the photographing device is parallel to a marking line at a known perpendicular distance in the real scene, and the real scene and the parallel marking line are photographed, the parallel marking line being either an actual parallel marking line in the photographed real scene or a laser parallel marking line projected onto the real scene by the laser head attached to the photographing device; or the photographing device is provided with main and auxiliary lenses whose axes are parallel and at a known spacing, the main and auxiliary lenses photograph the real scene simultaneously, the center line (or center-line reference object) on which the auxiliary lens axis lies is identified and marked in the auxiliary image taken by the auxiliary lens, the inclined center line on which the auxiliary lens axis lies is marked accordingly in the main image taken by the main lens, and that inclined center line serves as the parallel marking line in the photographed real scene;
taking as a reference the actual perpendicular distance represented at any distance in the image by the parallel marking line and the image center line on which the lens axis lies, scenes that appear large when near and small when far in the planar image are established in a three-dimensional virtual space by way of real-scene recovery.
7. The system according to claim 6, wherein two laser heads are symmetrically arranged on either side of the vertical line of the lens and emit two laser lines parallel to the lens axis, which form two laser parallel marking lines in the photographed scene, the photographing device capturing a real-scene image in which the two parallel marking lines extend longitudinally in parallel from the near end to the far end; or the photographing device is provided with left and right parallel lenses that capture left, right and joint middle images of the same real scene, the center line (or center-line reference object) on which each lens axis lies is identified and marked in the left and right images respectively, the inclined center lines of the left and right lens axes are marked accordingly in the jointly captured middle image, and the two inclined center lines so marked serve as the two parallel marking lines in the captured real scene;
taking as a reference the known actual spacing that the two parallel marking lines represent at any distance in the image, scenes that appear large when near and small when far in the planar image are established in a three-dimensional virtual space by way of real-scene recovery.
8. The system according to claim 7, wherein the real-scene recovery of longitudinal length is performed as follows: the real-scene lateral spacing between the near ends of the two parallel marking lines in the image is known, and the angle between the end of the near-end lateral line and the midpoint of the far-end lateral line is the same in the image as in the real scene; from these measurable quantities the actual longitudinal depth from the near end to the far end is obtained by geometric calculation, and the scene of that longitudinal depth in the image is established in the virtual space by real-scene recovery.
9. The system according to any one of claims 7-8, wherein each picture has a center line, coincident with the center line of the camera lens projected onto the actual road surface being photographed; each picture also has a central point, the picture core, which lies in the plane formed by the bottom-edge data and the height of the center line; when the center line of the camera lens is parallel to the ground, vertical line A and vertical line B are theoretically equal in height; the far ends of the top, bottom, left and right edges incline toward the center of the picture at angle R, and this angle is unchanged within the same static image;
the real-scene recovery consists in determining relative virtual ground-plane data consistent with the visual perspective of the image, following the conversion between the actual size of a center-line segment and the near-end line and their size in the picture, with equidistant lines shrinking from near to far and extending into the distance at the inclination angle R; the ground plane is raised as a whole, in the manner of a rising water level, or lowered as an inverted horizontal plane, and pixels at the same horizontal height are equal in height, distance and area; the three-dimensional data of every pixel is thereby obtained.
10. The system according to any one of claims 7-8, wherein the two parallel marking lines are either two physical parallel marking lines of known spacing inherent in the real scene of the image, or two laser parallel marking lines of known spacing formed on the ground of the photographed scene by two laser lines, parallel to the lens axis, emitted during shooting by two laser heads symmetrically arranged on either side of the vertical line of the lens on the photographing device.
Priority Applications (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202011294053.0A CN112465947B (en) | 2020-11-18 | 2020-11-18 | Method and system for establishing virtual space of image |
CN202410094899.1A CN117893689A (en) | 2020-11-18 | 2020-11-18 | Method and system for establishing virtual space of image |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202011294053.0A CN112465947B (en) | 2020-11-18 | 2020-11-18 | Method and system for establishing virtual space of image |
Related Child Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202410094899.1A Division CN117893689A (en) | 2020-11-18 | 2020-11-18 | Method and system for establishing virtual space of image |
Publications (2)
Publication Number | Publication Date |
---|---|
CN112465947A true CN112465947A (en) | 2021-03-09 |
CN112465947B CN112465947B (en) | 2024-04-23 |
Family
ID=74837291
Family Applications (2)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202410094899.1A Pending CN117893689A (en) | 2020-11-18 | 2020-11-18 | Method and system for establishing virtual space of image |
CN202011294053.0A Active CN112465947B (en) | 2020-11-18 | 2020-11-18 | Method and system for establishing virtual space of image |
Family Applications Before (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202410094899.1A Pending CN117893689A (en) | 2020-11-18 | 2020-11-18 | Method and system for establishing virtual space of image |
Country Status (1)
Country | Link |
---|---|
CN (2) | CN117893689A (en) |
Citations (8)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JP2007071570A (en) * | 2005-09-05 | 2007-03-22 | Mamoru Otsuki | Camera attitude grasping method, photogrammetric method using the same, and their program |
JP2009229182A (en) * | 2008-03-21 | 2009-10-08 | Acreeg Corp | Feature-on-image measurement method, display method, and measurement apparatus |
CN106546216A (en) * | 2016-11-01 | 2017-03-29 | Guangzhou Shiyuan Electronics Technology Co., Ltd. | Distance measurement method, device, camera and mobile terminal |
CN109269421A (en) * | 2018-09-14 | 2019-01-25 | Li Gang | Universal photographic measuring scale |
CN109682312A (en) * | 2018-12-13 | 2019-04-26 | Shanghai Integrated Circuit Research and Development Center Co., Ltd. | Method and device for measuring length based on a camera |
WO2019181284A1 (en) * | 2018-03-23 | 2019-09-26 | Sony Corporation | Information processing device, movement device, method, and program |
CN110555884A (en) * | 2018-05-31 | 2019-12-10 | Hisense Group Co., Ltd. | Calibration method and device of vehicle-mounted binocular camera and terminal |
WO2020006941A1 (en) * | 2018-07-03 | 2020-01-09 | Shanghai Yiwo Information Technology Co., Ltd. | Method for reconstructing three-dimensional space scene on basis of photography |
Non-Patent Citations (3)
Title |
---|
GANG LI et al.: "Matching Algorithm and Parallax Extraction Based on Binocular Stereo Vision", Smart Innovations in Communication and Computational Sciences, pages 347-355 *
WAN Yilong; BAI Lianfa; HAN Jing; ZHANG Yi: "Distance measurement method and implementation for salient targets in low-illumination binocular stereo vision", Infrared and Laser Engineering, vol. 44, no. 03, pages 1053-1060 *
LI Gang et al.: "Pose measurement and error analysis of moving targets based on infrared sensors", Laser Journal, vol. 41, no. 03, pages 86-90 *
Also Published As
Publication number | Publication date |
---|---|
CN112465947B (en) | 2024-04-23 |
CN117893689A (en) | 2024-04-16 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
WO2021004312A1 (en) | Intelligent vehicle trajectory measurement method based on binocular stereo vision system | |
TWI555379B (en) | An image calibrating, composing and depth rebuilding method of a panoramic fish-eye camera and a system thereof | |
JP4095491B2 (en) | Distance measuring device, distance measuring method, and distance measuring program | |
CN101853528B (en) | Hand-held three-dimensional surface information extraction method and extractor thereof | |
CN110956660B (en) | Positioning method, robot, and computer storage medium | |
CN107084680B (en) | A kind of target depth measurement method based on machine monocular vision | |
US7456842B2 (en) | Color edge based system and method for determination of 3D surface topology | |
US6768813B1 (en) | Photogrammetric image processing apparatus and method | |
CN108288294A (en) | A kind of outer ginseng scaling method of a 3D phases group of planes | |
CN107907048A (en) | A kind of binocular stereo vision method for three-dimensional measurement based on line-structured light scanning | |
JP5715735B2 (en) | Three-dimensional measurement method, apparatus and system, and image processing apparatus | |
CN106990776B (en) | Robot homing positioning method and system | |
EP1580523A1 (en) | Three-dimensional shape measuring method and its device | |
CN108629829B (en) | Three-dimensional modeling method and system of the one bulb curtain camera in conjunction with depth camera | |
CN109712232B (en) | Object surface contour three-dimensional imaging method based on light field | |
KR101759798B1 (en) | Method, device and system for generating an indoor two dimensional plan view image | |
CN107808398B (en) | Camera parameter calculation device, calculation method, program, and recording medium | |
CN111524195B (en) | Camera calibration method in positioning of cutting head of heading machine | |
CN107633532B (en) | Point cloud fusion method and system based on white light scanner | |
CN106767526A (en) | A kind of colored multi-thread 3-d laser measurement method based on the projection of laser MEMS galvanometers | |
CN112184793B (en) | Depth data processing method and device and readable storage medium | |
CN114359406A (en) | Calibration of auto-focusing binocular camera, 3D vision and depth point cloud calculation method | |
CN111932627B (en) | Marker drawing method and system | |
CN109191533A (en) | Tower crane high-altitude construction method based on assembled architecture | |
CN105513074B (en) | A kind of scaling method of shuttlecock robot camera and vehicle body to world coordinate system |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||