CN105989354A - Positioning method and system - Google Patents
- Publication number
- CN105989354A CN105989354A CN201510271989.4A CN201510271989A CN105989354A CN 105989354 A CN105989354 A CN 105989354A CN 201510271989 A CN201510271989 A CN 201510271989A CN 105989354 A CN105989354 A CN 105989354A
- Authority
- CN
- China
- Prior art keywords
- target
- coordinate system
- angle
- incidence
- relation
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Classifications
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V20/00—Scenes; Scene-specific elements
- G06V20/40—Scenes; Scene-specific elements in video content
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V10/00—Arrangements for image or video recognition or understanding
- G06V10/20—Image preprocessing
- G06V10/25—Determination of region of interest [ROI] or a volume of interest [VOI]
Landscapes
- Engineering & Computer Science (AREA)
- Physics & Mathematics (AREA)
- General Physics & Mathematics (AREA)
- Multimedia (AREA)
- Theoretical Computer Science (AREA)
- Image Processing (AREA)
- Studio Devices (AREA)
- Length Measuring Devices By Optical Means (AREA)
Abstract
Embodiments of the invention provide a positioning method and system. The positioning method comprises the following steps: acquiring video image information of a target to be positioned by using a video acquisition device, wherein the video acquisition device comprises a combination of a plurality of fixedly mounted ultra-wide-angle lenses, and the combination is used for acquiring image information of any target within the range of the video acquisition device without moving the device; obtaining three-dimensional coordinates of the target to be positioned according to its video image information; and positioning the target according to the obtained three-dimensional coordinates. Because the algorithm used by the positioning method and system is simple, the processing load of the positioning system can be reduced; furthermore, because the algorithm is simple, the obtained positioning result has a small error and is more accurate.
Description
Technical field
The present invention relates to the technical field of visual processing, and in particular to a positioning method and system.
Background technology
With the development of computers and image processing technology, visual processing is being applied ever more widely in all walks of life, for example in the fields of video surveillance, intelligent robotics, and medical instruments. Visual processing enables a computer to perceive environmental information through one or more images. This capability not only allows a machine to perceive the geometric information of objects in the environment (including shape, position, attitude, motion, and so on), but also to describe, store, transmit, identify, and understand them. Through such processing, people can obtain the information they need. Among these applications, obtaining the location information of an object through visual processing is one of the more commonly used object-positioning approaches.
However, because the field of view of a common vision system is limited, the acquisition of panoramic three-dimensional spatial information can only be achieved by adjusting the viewing angle with a rotating pan-tilt head, or by multi-point shooting from a mobile platform, before positioning can be performed on the acquired information. The algorithms of this positioning approach are complex, so the system processing load is heavy; moreover, the complex algorithms increase the error of the computed result and ultimately lead to positioning mistakes.
Summary of the invention
Embodiments of the present invention provide a positioning method and system, so as to solve the problems of existing positioning methods: large positioning-result error caused by complex algorithms, and heavy system processing load.
To solve the above problems, an embodiment of the invention discloses a positioning method, comprising: acquiring video image information of a target to be positioned by using a video acquisition device, wherein the video acquisition device comprises a combination of a plurality of fixedly mounted ultra-wide-angle lenses, and the combination is used for acquiring, without moving the video acquisition device, image information of any target within the range in which the video acquisition device is located; obtaining three-dimensional coordinates of the target to be positioned according to its video image information; and positioning the target to be positioned according to the obtained three-dimensional coordinates.
To solve the above problems, an embodiment of the invention further discloses a positioning system, comprising: an acquisition module, configured to acquire video image information of a target to be positioned by using a video acquisition device, wherein the video acquisition device comprises a combination of a plurality of fixedly mounted ultra-wide-angle lenses used for acquiring, without moving the video acquisition device, image information of any target within the range in which the device is located; a three-dimensional coordinate determination module, configured to obtain the three-dimensional coordinates of the target to be positioned according to its video image information; and a target positioning module, configured to position the target to be positioned according to the obtained three-dimensional coordinates.
In the positioning method and system provided by the embodiments of the invention, because the video acquisition device comprises a combination of ultra-wide-angle lenses, blind-spot-free panoramic observation is achieved without moving the device. Consequently, when acquiring the video image information of a given target to be positioned, the video acquisition device does not need to be moved. The positioning scheme provided by the embodiments therefore effectively avoids the problem of existing schemes, which must adjust the viewing angle with a rotating pan-tilt head or shoot from multiple points on a mobile platform before panoramic three-dimensional spatial information can be acquired; and since the problems of the existing schemes are absent, the series of further problems they bring about are absent as well. With the video acquisition device stationary, the three-dimensional coordinates of the target to be positioned can be computed directly from its video image information, without considering factors such as pan-tilt rotation and viewing-angle adjustment. Compared with existing positioning schemes, the algorithm is simple, so the processing load of the positioning system can be reduced; and because the algorithm is simple, the obtained positioning result has a small error and is more accurate.
Accompanying drawing explanation
In order to illustrate the technical solutions of the embodiments of the present invention or of the prior art more clearly, the accompanying drawings required in the description of the embodiments or the prior art are briefly introduced below. Obviously, the drawings described below show only some embodiments of the present invention; those of ordinary skill in the art can obtain other drawings from these drawings without creative effort.
Fig. 1 is a flow chart of the steps of a positioning method according to embodiment one of the present invention;
Fig. 2 is a flow chart of the steps of a positioning method according to embodiment two of the present invention;
Fig. 3 is a diagram of the spherical field-of-view effect of the video acquisition device in embodiment two shown in Fig. 2;
Fig. 4 is the mathematical model of the fisheye lens in embodiment two shown in Fig. 2;
Fig. 5 is a simplified structural diagram of two adjacent fisheye lenses mounted on the video acquisition device in embodiment two shown in Fig. 2;
Fig. 6 is a structural block diagram of a positioning system according to embodiment three of the present invention;
Fig. 7 is a structural block diagram of a positioning system according to embodiment four of the present invention.
Detailed description of the invention
To make the objectives, technical solutions, and advantages of the embodiments of the present invention clearer, the technical solutions in the embodiments of the present invention are described clearly and completely below in conjunction with the accompanying drawings. Obviously, the described embodiments are only a part of the embodiments of the present invention, not all of them. All other embodiments obtained by those of ordinary skill in the art on the basis of the embodiments of the present invention without creative effort fall within the scope of protection of the present invention.
Embodiment one
Referring to Fig. 1, a flow chart of the steps of a positioning method according to embodiment one of the present invention is shown. The positioning method of this embodiment comprises the following steps:
Step S102: acquire video image information of the target to be positioned by using a video acquisition device.
The video acquisition device comprises a combination of a plurality of fixedly mounted ultra-wide-angle lenses; without moving the video acquisition device, the combination acquires image information of any target within the range in which the device is located.
Because the video acquisition device comprises a plurality of fixedly mounted ultra-wide-angle lenses, it can achieve blind-spot-free panoramic observation; the image information of any target within the range in which the device is located can therefore be acquired without moving the device.
An ultra-wide-angle lens is the general term for the lenses of wider angular field of view among wide-angle lenses; in the art it generally refers to lenses of 80-110 degrees. However, lenses whose angle exceeds 110 degrees are also considered ultra-wide-angle lenses by those skilled in the art, namely "fisheye lenses"; a fisheye lens can be regarded as a special case of the ultra-wide-angle lens. A fisheye lens is a lens with a focal length of 16 mm or shorter and a viewing angle close or equal to 180 degrees. It is an extreme wide-angle lens, "fisheye lens" being its common name: to reach the maximum viewing angle, the front element of such a photographic lens has a very short diameter and bulges parabolically toward the front of the lens, quite similar to the eye of a fish, whence the name. As a special lens among ultra-wide-angle lenses, a fisheye lens typically reaches a viewing angle of 220 or 230 degrees, striving to reach or exceed the range the human eye can see.
A combination of ultra-wide-angle lenses is the whole formed by combining a plurality of ultra-wide-angle lenses according to a certain combination rule.
In the embodiment of the present invention, neither the number nor the positions of the ultra-wide-angle lenses fixedly mounted in the video acquisition device are specially limited. In specific implementation they can be configured by those skilled in the art according to actual demand, provided it is ensured that the video acquisition device can achieve blind-spot-free panoramic observation. For example: four ultra-wide-angle lenses with a lens angle of 180 degrees may be arranged, five lenses of 110 degrees may be arranged, or six lenses of 100 degrees may be arranged.
Step S104: obtain the three-dimensional coordinates of the target to be positioned according to its video image information.
After the video image information of the target to be positioned is determined, the three-dimensional coordinates of the target in the world coordinate system, i.e. its X, Y, Z coordinates, can be derived backwards through the imaging principle of the ultra-wide-angle lens; the actual position of the target can thus be determined.
Step S106: position the target to be positioned according to the obtained three-dimensional coordinates.
In the positioning method provided by this embodiment, because the video acquisition device comprises a combination of a plurality of ultra-wide-angle lenses, blind-spot-free panoramic observation is achieved without moving the device; hence, when acquiring the video image information of a given target to be positioned, the device does not need to be moved. The method thus effectively avoids the problem of existing positioning methods, which must adjust the viewing angle with a rotating pan-tilt head or shoot from multiple points on a mobile platform before panoramic three-dimensional spatial information can be acquired; and since the problems of the existing methods are absent, the series of further problems they bring about are absent as well.
Embodiment two
Referring to Fig. 2, a flow chart of the steps of a positioning method according to embodiment two of the present invention is shown. The positioning method of this embodiment comprises the following steps:
Step S202: the positioning system acquires video image information of the target to be positioned by using a video acquisition device.
The positioning system may be a system comprising devices with data acquisition and processing functions, such as computers and servers. After obtaining the video image information of the target to be positioned acquired by the video acquisition device, the positioning system can determine the coordinate values of the target in the world coordinate system through the data model of the ultra-wide-angle lens and a preset series of formulas.
The video acquisition device comprises a combination of a plurality of fixedly mounted ultra-wide-angle lenses; without moving the device, the combination acquires image information of any target within the range in which the device is located. In this embodiment, the positioning method is illustrated by taking as an example a combination of four fixedly mounted fisheye lenses.
The four fisheye lenses of this embodiment are placed horizontally, spaced 90 degrees apart on lens brackets, and the wide angle of each fisheye lens is 185 degrees. Each two adjacent fisheye lenses achieve complete overlap of the information within a 90° × 90° field of view, and the four adjacent fisheye lenses achieve overlap of the information within the complete 360° × 360° observation space; it can thereby be ensured that the video information at any position in space is imaged in at least two fisheye lenses, giving a spherical field-of-view effect, shown concretely in Fig. 3. The four fisheye lenses are mounted on four lens brackets, and the four brackets form a cross. In Fig. 3, the arc-shaped area filled with hexagons is the field of view of the fisheye lens mounted on the rightward horizontal bracket; the arc-shaped area filled with cross-hatched right angles is the field of view of the fisheye lens mounted on the leftward horizontal bracket; the arc-shaped area filled with horizontal lines is the field of view of the fisheye lens mounted straight up; and the arc-shaped area filled with vertical lines is the field of view of the fisheye lens mounted straight down.
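The claim that the video information at any position in space is imaged in at least two of the four 185° lenses can be checked numerically. The sketch below is an illustration, not part of the patent: it models the four optical axes as the +Y, −Y, +Z, −Z directions of the cross mount and counts, for randomly sampled viewing directions, how many lenses contain that direction within their 92.5° half-angle.

```python
import math
import random

HALF_ANGLE = math.radians(185.0 / 2.0)  # each fisheye sees 92.5 deg off-axis
AXES = [(0, 1, 0), (0, -1, 0), (0, 0, 1), (0, 0, -1)]  # cross-mounted lenses

def lenses_seeing(direction):
    """Number of lenses whose field-of-view cone contains the unit vector."""
    cos_limit = math.cos(HALF_ANGLE)
    return sum(1 for a in AXES
               if sum(d * c for d, c in zip(direction, a)) >= cos_limit)

def random_unit_vector(rng):
    """Uniform direction on the sphere via normalized Gaussian samples."""
    while True:
        v = [rng.gauss(0, 1) for _ in range(3)]
        n = math.sqrt(sum(c * c for c in v))
        if n > 1e-9:
            return [c / n for c in v]

rng = random.Random(0)
counts = [lenses_seeing(random_unit_vector(rng)) for _ in range(20000)]
# Every sampled direction falls in at least two cones, confirming the overlap.
print(min(counts), max(counts))
```

Since 92.5° exceeds a quarter-sphere by 2.5°, each opposing pair of lenses covers slightly more than the whole sphere between them, which is why the minimum count never drops below two.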
As shown in Fig. 4, the mathematical model of the panoramic fisheye lens consists of four coordinate systems; from top to bottom these are the world coordinate system (X, Y, Z), the camera coordinate system (XC, YC, ZC), the image coordinate system (u, v), and the imaging-chip coordinate system (x, y). The world coordinate system, also called the true or real-world coordinate system, is the absolute coordinate system of the objective world; a 3D scene is generally represented in this coordinate system. The camera coordinate system is a coordinate system specified with the camera at its center, taking the optical axis of the camera as the ZC axis. The image coordinate system is the coordinate system used for the digital image inside the computer; the imaging-chip coordinate system is the photographic coordinate system formed inside the camera. The image coordinate system and the imaging-chip coordinate system lie in the same plane.
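As a sketch of how the frames chain together (the patent only names the four coordinate systems; the pose below, and the use of a rotation R and translation t, are assumptions for illustration): a world point is first expressed in the camera frame, whose ZC axis is the optical axis, and the incidence angle ω and azimuth θ then follow directly from the camera-frame coordinates.

```python
import math

def world_to_camera(p_world, R, t):
    """Xc = R * Xw + t, computed row by row (R is a 3x3 rotation matrix)."""
    return [sum(R[i][j] * p_world[j] for j in range(3)) + t[i]
            for i in range(3)]

def angles_in_camera_frame(p_cam):
    """Incidence angle omega off the ZC optical axis, and azimuth about it."""
    x, y, z = p_cam
    omega = math.atan2(math.hypot(x, y), z)  # angle between ray and ZC axis
    theta = math.atan2(y, x)                 # azimuth around the ZC axis
    return omega, theta

# Illustrative identity pose: camera frame coincides with the world frame.
R = [[1, 0, 0], [0, 1, 0], [0, 0, 1]]
t = [0.0, 0.0, 0.0]
omega, theta = angles_in_camera_frame(world_to_camera([1.0, 1.0, 1.0], R, t))
```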
In the model shown in Fig. 4, P''' is the ideal image point, in the image coordinate system, of an arbitrary point P in space (which may be regarded as the target to be positioned, or as one point to be imaged on that target), and PD is the actual image point after lens distortion is taken into account. OO' is the optical axis, h is the vertical distance from point P to the fisheye lens surface, ω is the incidence angle of P relative to the optical axis, and θ is the azimuth of P in the world coordinate system. r is the radial distance from the image point to the optical center O'(u0, v0); φ is the azimuth of the image point in the image coordinate system, i.e. the azimuth of the target to be positioned in the image coordinate system; and dx and dy are respectively the physical sizes of one pixel in the x-axis and y-axis directions of the imaging-chip coordinate system.
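Under the equidistant-projection rule invoked later in step S210, the radial distance grows linearly with the incidence angle, r = f·ω, and the azimuth is preserved; together with the pixel sizes dx, dy and the optical center (u0, v0), this maps a ray to pixel coordinates and back. A sketch under those assumptions (the focal length and pixel sizes below are invented values, not from the patent):

```python
import math

F = 1.8e-3              # focal length in metres (illustrative)
DX = DY = 6e-6          # physical pixel sizes dx, dy (illustrative)
U0, V0 = 320.0, 240.0   # optical center O'(u0, v0) in pixels

def project(omega, phi):
    """Equidistant fisheye projection: r = F * omega, azimuth preserved."""
    r = F * omega                                  # radial distance on chip
    x, y = r * math.cos(phi), r * math.sin(phi)    # imaging-chip coordinates
    return U0 + x / DX, V0 + y / DY                # image (pixel) coordinates

def back_project(u, v):
    """Invert the mapping: pixel -> incidence angle omega and azimuth phi."""
    x, y = (u - U0) * DX, (v - V0) * DY
    return math.hypot(x, y) / F, math.atan2(y, x)
```

The inverse mapping is what step S204 relies on: the incidence angle carried by the video image information is exactly what a pixel position yields under this model.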
When photographing the target P to be positioned, the fisheye lens projects the image corresponding to the target onto the corresponding point PD in the image coordinate system, and the incidence angle of point P relative to the lens optical axis can be known. Accordingly, the positioning system can obtain these items of video image information through each fisheye lens.
In the embodiment of the present invention, the positioning system can determine, from the video image information of the target to be positioned, the imaging position of the target in the image coordinate system: both the angle with the v axis of the image coordinate system and the coordinates in the image coordinate system can be determined. From the imaging position of the target in the image coordinate system, the three-dimensional coordinates of the target can be obtained; the specific implementation is shown in the sequence of steps below. The three-dimensional coordinates are the position of the target in the world coordinate system; those skilled in the art will clearly understand that once the three-dimensional coordinates of the target in the world coordinate system, i.e. its X, Y, Z coordinates, are determined, the actual position of the target is determined.
Step S204: the positioning system obtains, from the video image information of the target to be positioned, the incidence angle of the target relative to the optical axis of each fisheye lens.
The lens optical axis is the line through the optical center. From the video image information of the target to be positioned, the positioning system can determine which specific ultra-wide-angle lenses in the video acquisition device obtained the video image information; moreover, the video image information of the target carries the incidence angle of the target relative to the optical axis of each fisheye lens that obtained the video image information, so the positioning system directly obtains each incidence angle ω of the target.
Step S206: the positioning system determines the relation between each incidence angle and each coordinate axis of the world coordinate system.
Preferably, the relations between at least two incidence angles and the coordinate axes of the world coordinate system are determined from at least two incidence angles. For example, a given target to be positioned may be photographed simultaneously by at least two fisheye lenses of the video acquisition device; in that case, at least two incidence angles may be selected, and the relations between those incidence angles and the coordinate axes of the world coordinate system determined.
Preferably, when determining the relation between each incidence angle and the coordinate axes of the world coordinate system, the plane onto which the target is projected is determined first, for example the YOZ plane or the XOZ plane; after the plane of projection is determined, the relations between the incidence angles and the coordinate axes are determined. Taking the coordinate system shown in Fig. 5 as an example: in the YOZ plane of Fig. 5, the Z axis corresponds to the Z axis of Fig. 4 and represents the vertical direction; the Y axis corresponds to the Y axis of Fig. 4 and represents the horizontal direction; the intersection of the Y axis and the Z axis is the origin O; and the X axis is the coordinate axis perpendicular to the YOZ plane, pointing inwards.
The relation between each incidence angle and the coordinate axes of the world coordinate system can be determined through the tangent, sine, cotangent, etc. of the incidence angle. Preferably, the relation is determined from the tangent of each incidence angle. For example: project the target to be positioned onto the YOZ plane, and determine, from the tangents of two incidence angles, the relations between the two incidence angles and the coordinate axes of the world coordinate system respectively.
Step S208: the positioning system determines the relation between the target to be positioned and the image coordinate system.
In determining this relation, the azimuth of the target with respect to the image coordinate system may be determined. Of course, the imaging coordinates of the target in the image coordinate system may also be determined, a trigonometric function value of the azimuth of the target with respect to the image coordinate system computed, and the azimuth then determined from that trigonometric function value.
A preferred way of determining the relation between the target and the image coordinate system is as follows: determine the imaging coordinates of the target in the image coordinate system; then, from those imaging coordinates, determine the azimuth of the target with respect to the image coordinate system.
Preferably, the azimuth is determined as follows: from the imaging coordinates of the target in the image coordinate system, determine the tangent of the azimuth of the target with respect to the image coordinate system; from the tangent, determine the azimuth.
It should be noted that the determination of the azimuth is not limited to the tangent of the azimuth of the target with respect to the image coordinate system; the azimuth may also be determined through its sine, cosine, cotangent, and so on.
For example: if the imaging coordinates of the target to be positioned in the image coordinate system are (u, v) and the coordinates of the origin of the image coordinate system are (u0, v0), then the azimuth φ of the target with respect to the image coordinate system can be determined through tan φ = (v − v0)/(u − u0); the azimuth φ can also be determined directly through φ = arctan((v − v0)/(u − u0)).
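The azimuth computation of step S208 is a one-liner in practice. The sketch below uses `atan2` rather than a bare arctangent so that the quadrant of (u − u0, v − v0) is preserved, a detail the tangent value alone loses:

```python
import math

def azimuth_in_image(u, v, u0, v0):
    """phi such that tan(phi) = (v - v0) / (u - u0), with correct quadrant."""
    return math.atan2(v - v0, u - u0)

# Imaging point (400, 300) with optical center (320, 240):
phi = azimuth_in_image(400.0, 300.0, 320.0, 240.0)  # tan(phi) = 60/80 = 0.75
```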
Step S210: the positioning system determines the azimuth of the target to be positioned in the world coordinate system.
A preferred way of determining this azimuth is: determine the azimuth of the target in the world coordinate system from the determined azimuth of the target with respect to the image coordinate system and from the equidistant-projection imaging rule of the fisheye lens. From the equidistant-projection imaging rule of the fisheye lens it can be deduced that the azimuth φ of the target with respect to the image coordinate system equals the azimuth θ of the target in the world coordinate system, that is, φ = θ.
Of course, the method is not limited to this; those skilled in the art may also determine the azimuth of the target in the world coordinate system in other suitable manners.
Step S212: the positioning system determines the three-dimensional coordinates of the target to be positioned according to the determined relations.
The determined relations include: the incidence angle of the target relative to each fisheye lens optical axis, the relation determined by the positioning system between each incidence angle and each coordinate axis of the world coordinate system, and the relation between the target and the image coordinate system. Moreover, as shown in steps S208 to S210 above, the azimuth of the target in the world coordinate system can be determined from the relation between the target and the image coordinate system.
A preferred way of determining the three-dimensional coordinates of the target from the determined relations is: determine the three-dimensional coordinates of the target from the tangents of the incidence angles of the target, the relations between the incidence angles and the coordinate axes of the world coordinate system, the tangent of the azimuth of the target in the world coordinate system, and the relation between that azimuth and at least two coordinate axes of the world coordinate system.
The above preferred manner is illustrated with reference to Fig. 5:
Fig. 5 shows the simplified structure of two mutually perpendicular fisheye lenses; it is the simplified structure obtained by projecting the target to be positioned onto the YOZ plane.
Suppose that the tangents of the incidence angles of the target determined by the above steps are tan ω1 and tan ω2 respectively, that the tangent of the azimuth of the target in the world coordinate system is tan θ, equal to tan φ, and that the relation between the azimuth θ and at least two coordinate axes of the world coordinate system is tan θ = Y/X. Further, as shown in Fig. 5, each incidence angle is related to the coordinate axes of the world coordinate system through the mounting positions of the two lenses, which yields two further equations in the coordinates. Then, on the premise that tan ω1, tan ω2, tan θ, LX-l and LZ-l are definite values, the specific values of X, Y and Z can be determined from the above three equations, and hence the three-dimensional coordinates of the target to be positioned.
Here LX-l and LZ-l respectively denote the distances from the two fisheye lenses shown in Fig. 5 to the center of the lens mounting frame.
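The three-equation solve of step S212 can be sketched numerically under an assumed geometry (the exact Fig. 5 equations are shown only in the drawing, so the lens placement below is an invented stand-in): lens 1 sits at (0, −L1, 0) with optical axis along +Y, lens 2 sits at (0, 0, −L2) with optical axis along +Z, the target is projected onto the YOZ plane so that tan ω1 = Z/(Y + L1) and tan ω2 = Y/(Z + L2), and the azimuth relation tan θ = Y/X supplies the third equation.

```python
import math

def solve_xyz(tan_w1, tan_w2, tan_theta, l1, l2):
    """Solve the assumed system
         tan(w1)    = Z / (Y + l1)
         tan(w2)    = Y / (Z + l2)
         tan(theta) = Y / X
       for X, Y, Z; the closed form follows by substituting the first
       equation into the second. Degenerate when tan(w1)*tan(w2) == 1,
       i.e. when the two sight lines are parallel in the YOZ plane."""
    y = tan_w2 * (l1 * tan_w1 + l2) / (1.0 - tan_w1 * tan_w2)
    z = (y + l1) * tan_w1
    x = y / tan_theta
    return x, y, z

# Forward-simulate a known point, then recover it from the angle tangents.
X, Y, Z, L1, L2 = 2.0, 3.0, 4.0, 0.1, 0.1
tw1 = Z / (Y + L1)          # tangent of incidence angle at lens 1
tw2 = Y / (Z + L2)          # tangent of incidence angle at lens 2
tth = Y / X                 # tangent of the world-frame azimuth
x, y, z = solve_xyz(tw1, tw2, tth, L1, L2)   # recovers (2.0, 3.0, 4.0)
```

The lens offsets L1 and L2 play the role of LX-l and LZ-l: without a nonzero baseline the two incidence-angle equations become scale-free and the system cannot fix the distance to the target.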
It should be noted that if the target to be positioned is projected onto another plane of the world coordinate system, for example the XOZ plane, the relation between the azimuth θ and the coordinate axes of the world coordinate system changes accordingly, and correspondingly the relations between the incidence angles and the coordinate axes of the world coordinate system change as well. These specific changes can be derived by reasonable inference by those skilled in the art on the basis of the content disclosed above, and are not described in detail in this embodiment.
It should be noted that the purpose of determining the tangent of the azimuth of the target in the world coordinate system, and the relation between that azimuth and at least two coordinate axes of the world coordinate system, is to determine the relation between two of the coordinates in the world coordinate system. Once the relation between the two coordinates is determined, the three-dimensional coordinates of the target can be determined in conjunction with two equations containing X, Y and Z. In specific implementation, the method is not limited to the cited manner of determining the tangent of the azimuth; the sine, cosine or cotangent of the azimuth may also be determined, because the relation between two coordinates in the world coordinate system can likewise be determined through these trigonometric relations. In specific implementation, the choice can be made by those skilled in the art according to actual demand.
Determine the tangent value of angle of incidence, angle of incidence and the world coordinate system of target to be positioned and world coordinate system
The relation of each coordinate axes merely to set up two, relation between X, Y, Z is described in world coordinate systems
Equation.During implementing, it may be determined that the angle of incidence of target to be positioned and world coordinate system
Cotangent value, determined the relation of each coordinate axes of angle of incidence and world coordinate system by cotangent value, then,
Construct two equations meeting regulation.The target to be positioned angle of incidence with world coordinate system can also be determined
Sine value, determines the relation of angle of incidence and each coordinate axes of world coordinate system, structure by sine value
Two equations meeting regulation.It is, of course, also possible to determine the angle of incidence of target to be positioned and world coordinate system
Cosine value, determined the relation of each coordinate axes of angle of incidence and world coordinate system, structure by cosine value
Make two equations meeting regulation.During implementing, can be by those skilled in the art according to reality
Border demand selects.
Step S214: the positioning system positions the target to be positioned according to the obtained three-dimensional coordinates of the target.
After the three-dimensional coordinates of the target are determined, the target can be positioned; for the specific realization, reference may be made to the related art, which this embodiment does not describe in detail here.
It should be noted that the embodiment of the present invention is described only by way of example, using a video acquisition device composed of fisheye lenses. In specific implementation, those skilled in the art may, according to actual requirements, select other suitable ultra-wide-angle lenses to form the video acquisition device, derive the three-dimensional coordinates of the corresponding target to be positioned through a suitable model and equations, and then position that target.
With the positioning method provided by the embodiment of the present invention, the video acquisition device includes a combination of four fisheye lenses, and this combination achieves blind-area-free panoramic observation without moving the video acquisition device. Because the video acquisition device achieves blind-area-free panoramic observation, the video image information of any target to be positioned can be obtained without moving the device. It can thus be seen that the positioning method provided by the embodiment of the present invention effectively avoids the problem in existing positioning systems of having to acquire panoramic three-dimensional observation information by rotating a pan-tilt head to adjust the viewing angle, or by multi-point shooting from a mobile platform. Since the problems present in existing positioning systems do not arise, the series of further problems they cause does not arise either.
The positioning method in the embodiment of the present invention is illustrated below with a concrete example, with reference to Fig. 5. In this example, the video acquisition device obtains the video image information of the target to be positioned through two adjacent fisheye lenses simultaneously, and the positioning system obtains the corresponding video image information. The specific steps are as follows:
S1: obtain the video image information of the target to be positioned.
The video image information is obtained by fisheye lenses 1 and 2, each shooting separately.
S2: from the video image information, obtain the incidence angle of the target to be positioned relative to the lens optical axis of fisheye lens 1, denoted the first incidence angle ω1, and the incidence angle relative to the lens optical axis of fisheye lens 2, denoted the second incidence angle ω2.
S3: determine the relations of ω1 and ω2 with each coordinate axis of the world coordinate system, respectively.
Fig. 5 shows a simplified structure of two perpendicular fisheye lenses; in this structure, the target to be positioned, i.e. object P, is projected into the YOZ plane of the corresponding simplified structure.
For the situation shown in Fig. 5, the relation between ω1 and each coordinate axis of the world coordinate system is:
It should be noted that this example uses only the tangent relation of the incidence angle to determine the relations of ω1 and ω2 with each coordinate axis of the world coordinate system. In specific implementation, other trigonometric relations of the incidence angle, such as the sine, cosine, or cotangent relation, may also be used to determine the relations of ω1 and ω2 with each coordinate axis of the world coordinate system.
Here, LX-l and LZ-l denote the distances from the two fisheye lenses shown in Fig. 5 to the center of the lens mounting frame, respectively.
It should be noted that if object P is projected into the XOZ or YOX plane, the simplified structure corresponding to the two perpendicular fisheye lenses changes accordingly. For the changed simplified structure, those skilled in the art can reasonably derive the corresponding relations based on the simplified structure of the two perpendicular fisheye lenses shown in Fig. 5.
S4: determine the relation between the target to be positioned, i.e. object P, and the image coordinate system.
The true imaging point P_D of the object on the imaging plane of the image coordinate system has coordinates (u, v), and the origin of the image coordinate system, i.e. the center point of the imaging surface, has coordinates (u0, v0). The tangent of the azimuth angle of the target to be positioned with the image coordinate system is then tan φ = (v - v0)/(u - u0). Since (u, v) and (u0, v0) are concrete values, the azimuth angle of the target to be positioned with the image coordinate system, i.e. the angle φ, can be obtained from this relation.
S5: determine from the angle φ the azimuth angle of the target to be positioned with the world coordinate system, i.e. the angle θ.
Those skilled in the art will readily understand that, based on the fisheye lens's equidistant imaging rule, the azimuth of the target to be positioned with the image coordinate system equals its azimuth with the world coordinate system; that is, the angle φ equals the angle θ.
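Steps S2, S4, and S5 can be sketched numerically as follows. This is a minimal sketch under the equidistant projection model r = f·ω; the focal constant f, the pixel values, and the function name are assumptions for illustration and are not taken from the patent.

```python
import math

def fisheye_angles(u, v, u0, v0, f):
    """From an equidistant-projection fisheye image point (u, v), recover
    the incidence angle omega (off the optical axis) and the azimuth phi.

    f is the lens focal constant in pixels, assumed known from calibration.
    Equidistant model: radial distance r = f * omega, so omega = r / f.
    Azimuth: tan(phi) = (v - v0) / (u - u0); under the equidistant imaging
    rule this image azimuth equals the world azimuth theta.
    """
    du, dv = u - u0, v - v0
    r = math.hypot(du, dv)       # radial distance from the image center
    omega = r / f                # incidence angle, in radians
    phi = math.atan2(dv, du)     # azimuth angle in the image plane
    return omega, phi

# Example: image center (320, 240), f = 300 px; a point 300 px to the
# right of the center lies 1 rad off the optical axis at azimuth 0.
print(fisheye_angles(620, 240, 320, 240, 300))
```

Because φ = θ under the equidistant rule, the azimuth returned here can be used directly in the world-coordinate equations of step S6.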
Further, by the fisheye lens mathematical model, the tangent of the angle θ is expressed in terms of the world coordinates: one expression applies when the target to be positioned, i.e. object P, is projected into the YOZ plane, and a different one when object P is projected into the XOZ plane. This concrete example takes the case in which object P is projected into the YOZ plane, and uses the corresponding expression when determining the three-dimensional coordinates of object P in the world coordinate system.
S6: from the relations determined in steps S3 to S5, the three-dimensional coordinates of object P in the world coordinate system can be determined.
That is, the relations obtained above are combined into a system of equations. Since ω1, ω2, and the angle θ are constants, solving this system of equations determines the three-dimensional coordinates of object P in the world coordinate system, i.e. P(X, Y, Z).
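The equation-solving of step S6 can be illustrated with a small numerical sketch. Since the patent's own equations appear only in Fig. 5, the geometry below is a hypothetical stand-in: lens 1 at the origin with its optical axis along +Z, lens 2 offset by a baseline along +Y with its optical axis along +Y, and object P restricted to the YOZ plane as in the figure's simplified structure.

```python
import math

def triangulate_yoz(omega1, omega2, baseline):
    """Recover (Y, Z) of a point P in the YOZ plane from the incidence
    angles measured by two perpendicular fisheye lenses.

    Hypothetical geometry (the patent's figure is not reproduced here):
      - lens 1 at the origin, optical axis along +Z: tan(w1) = Y / Z
      - lens 2 at (Y = baseline, Z = 0), optical axis along +Y:
        tan(w2) = Z / (Y - baseline)
    Solving the two tangent equations simultaneously yields Y and Z.
    """
    t1, t2 = math.tan(omega1), math.tan(omega2)
    z = baseline * t2 / (t1 * t2 - 1.0)  # from substituting Y = Z * t1
    y = z * t1
    return y, z

# Check against a known point P = (Y, Z) = (2, 3) with baseline 1:
w1 = math.atan2(2, 3)      # incidence angle at lens 1
w2 = math.atan2(3, 2 - 1)  # incidence angle at lens 2
print(triangulate_yoz(w1, w2, 1.0))  # recovers approximately (2.0, 3.0)
```

In the full method, the azimuth angle θ supplies the remaining constraint that lifts this planar solution to the three-dimensional coordinates P(X, Y, Z).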
S7: position object P according to its three-dimensional coordinates.
This concrete positioning-method example proposes a method for computing the three-dimensional information of an object under the fields of view of four fisheye lenses. Since four fisheye lenses can constitute a panoramic stereo spherical vision system, blind-area-free panoramic observation can be achieved. Specifically, with a video acquisition device composed of four mutually orthogonal fisheye lenses, it can be ensured that an object at any position in space is imaged in at least two fisheye lenses; moreover, if the object appears directly above or directly below the panoramic stereo spherical vision system formed by the four fisheye lenses, it is imaged in all four fisheye lenses simultaneously. The video acquisition device provided by this example can therefore obtain the three-dimensional information of any object in the surrounding space through at least two fisheye lenses simultaneously. The positioning method provided by this example can determine the three-dimensional coordinates of the target to be positioned in the world coordinate system by analyzing only the video image data captured by a single vision system in a single shot at a single moment. Compared with existing approaches, which must analyze video image data obtained through pan-tilt rotation and multi-point shooting, this positioning approach uses a simple algorithm. The simple algorithm not only lightens the processing load of the system but also reduces the resulting error caused by algorithmic complexity, making the positioning more accurate.
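The coverage claim above, that any spatial direction falls inside at least two of the four fields of view, can be checked numerically. The axis layout and the 185-degree field of view below are illustrative assumptions; the patent states only that the four fisheye lenses are placed horizontally, 90 degrees apart.

```python
import math

# Four horizontal optical axes spaced 90 degrees apart (assumed layout).
AXES = [(1, 0, 0), (-1, 0, 0), (0, 1, 0), (0, -1, 0)]

def lenses_seeing(direction, fov_deg=185.0):
    """Return the indices of lenses whose field of view covers the given
    unit viewing direction, assuming each fisheye covers a cone of
    fov_deg around its axis. The fov value is illustrative only.
    """
    half = math.radians(fov_deg) / 2.0
    seen = []
    for i, axis in enumerate(AXES):
        dot = sum(d * a for d, a in zip(direction, axis))
        if math.acos(max(-1.0, min(1.0, dot))) <= half:
            seen.append(i)
    return seen

# Straight up: 90 degrees off every horizontal axis, so all four see it.
print(lenses_seeing((0.0, 0.0, 1.0)))
```

Any direction makes an angle of at most 90 degrees with at least two of the four axes, so with a field of view slightly over 180 degrees every point is imaged by at least two lenses, matching the blind-area-free claim.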
Embodiment three
With reference to Fig. 6, a structural block diagram of a positioning system according to embodiment three of the present invention is shown.
The positioning system of the embodiment of the present invention includes: an acquisition module 602, configured to obtain the video image information of a target to be positioned using a video acquisition device, where the video acquisition device includes a combination of multiple fixedly mounted ultra-wide-angle lenses, and the combination of the multiple ultra-wide-angle lenses is used to obtain, without moving the video acquisition device, the image information of any target within the range of the video acquisition device; a three-dimensional coordinate determination module 604, configured to obtain the three-dimensional coordinates of the target to be positioned according to its video image information; and a target locating module 606, configured to position the target to be positioned according to the obtained three-dimensional coordinates.
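The three-module structure of Fig. 6 can be sketched as a simple pipeline. The class and method names below are illustrative placeholders, not from the patent; the internals stand in for the processing of embodiments one and two.

```python
from dataclasses import dataclass

@dataclass
class Frame:
    """Video image information for one synchronized capture (placeholder)."""
    pixels: tuple

class PositioningSystem:
    """Pipeline mirroring acquisition module 602, three-dimensional
    coordinate determination module 604, and target locating module 606.
    All bodies are illustrative stubs."""

    def acquire(self) -> Frame:
        # module 602: obtain video image information from the lens array
        return Frame(pixels=())

    def to_3d(self, frame: Frame):
        # module 604: would apply steps S2-S6 (incidence angles, azimuth,
        # and the resulting system of equations); stubbed here
        return (0.0, 0.0, 0.0)

    def locate(self, coords) -> str:
        # module 606: report the position from the three-dimensional coords
        return "target at X=%.1f Y=%.1f Z=%.1f" % coords

    def run(self) -> str:
        return self.locate(self.to_3d(self.acquire()))
```

The point of the decomposition is that module 604 contains all of the geometry, so a different lens model only changes that one stage.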
With the positioning system provided by the embodiment of the present invention, the video acquisition device includes a combination of multiple ultra-wide-angle lenses and can therefore achieve blind-area-free panoramic observation without being moved. Because the video acquisition device achieves blind-area-free panoramic observation, the video image information of any target to be positioned can be obtained without moving the device. It can thus be seen that the positioning system provided by the embodiment of the present invention effectively avoids the problem in existing positioning systems of having to acquire panoramic three-dimensional observation information by rotating a pan-tilt head to adjust the viewing angle, or by multi-point shooting from a mobile platform. Since the problems present in existing positioning systems do not arise, the series of further problems they cause does not arise either.
Embodiment four
With reference to Fig. 7, a structural block diagram of a positioning system according to embodiment four of the present invention is shown.
The embodiment of the present invention further optimizes the positioning system of embodiment three. The optimized positioning system includes: an acquisition module 702, configured to obtain the video image information of a target to be positioned using a video acquisition device, where the video acquisition device includes a combination of multiple fixedly mounted ultra-wide-angle lenses that obtains, without moving the video acquisition device, the image information of any target within the range of the video acquisition device; a three-dimensional coordinate determination module 704, configured to obtain the three-dimensional coordinates of the target to be positioned according to its video image information; and a target locating module 706, configured to position the target to be positioned according to the obtained three-dimensional coordinates.
Preferably, the combination of multiple fixedly mounted ultra-wide-angle lenses is a combination of four fixedly mounted fisheye lenses, where the four fisheye lenses are placed horizontally on a lens bracket, spaced 90 degrees apart.
Preferably, the three-dimensional coordinate determination module 704 includes: an incidence angle acquisition module 7042, configured to obtain, according to the video image information of the target to be positioned, the incidence angle of the target relative to each fisheye lens's optical axis; a relationship determination module 7044, configured to determine the relation of each incidence angle with each coordinate axis of the world coordinate system, and to determine the relation between the target to be positioned and the image coordinate system; and a coordinate determination module 7046, configured to determine the three-dimensional coordinates of the target to be positioned according to the determined relations.
Preferably, when the relationship determination module 7044 determines the relation of each incidence angle with each coordinate axis of the world coordinate system, it determines the relation according to the tangent of each incidence angle.
Preferably, when the relationship determination module 7044 determines the relation of each incidence angle with each coordinate axis of the world coordinate system according to the tangent of each incidence angle, it determines the relations of at least two incidence angles with each coordinate axis of the world coordinate system according to the at least two incidence angles.
Preferably, when the relationship determination module 7044 determines the relation between the target to be positioned and the image coordinate system, it determines the imaging coordinates of the target in the image coordinate system, and determines the azimuth of the target with the image coordinate system according to those imaging coordinates.
Preferably, when the relationship determination module 7044 determines the azimuth of the target to be positioned with the image coordinate system according to the target's imaging coordinates in the image coordinate system, it first determines the tangent of that azimuth according to the imaging coordinates, and then determines the azimuth according to the tangent.
Preferably, the three-dimensional coordinate determination module 704 further includes: a world-coordinate-system azimuth determination module 7048, configured, after the relationship determination module 7044 has determined the azimuth of the target to be positioned with the image coordinate system, to determine the azimuth of the target with the world coordinate system according to that azimuth and the fisheye lens equidistant projection imaging rule.
Preferably, when the coordinate determination module 7046 determines the three-dimensional coordinates of the target to be positioned according to the determined relations, it does so according to the tangent of the target's incidence angle with the world coordinate system, the relation of the incidence angle with each coordinate axis of the world coordinate system, the tangent of the target's azimuth with the world coordinate system, and the relation of the azimuth with at least two coordinate axes of the world coordinate system.
The positioning system of the embodiment of the present invention is used to implement the corresponding positioning methods in embodiments one and two above, and has the beneficial effects of the corresponding method embodiments, which are not repeated here.
The device embodiments described above are merely illustrative. The units described as separate components may or may not be physically separate, and the components shown as units may or may not be physical units; they may be located in one place or distributed over multiple network units. Some or all of the modules may be selected according to actual needs to achieve the purpose of the solution of the present embodiment, which those of ordinary skill in the art can understand and implement without creative effort.
Through the above description of the embodiments, those skilled in the art can clearly understand that each embodiment can be implemented by software plus a necessary general hardware platform, or, of course, by hardware. Based on this understanding, the part of the above technical solution that contributes beyond the prior art can be embodied in the form of a software product. This computer software product can be stored in a computer-readable storage medium, such as ROM/RAM, magnetic disk, or optical disc, and includes instructions for causing a computer device (which may be a personal computer, a server, a network device, or the like) to execute the method described in each embodiment or in certain parts of the embodiments.
Finally, it should be noted that the above embodiments are intended only to illustrate the technical solution of the present invention, not to limit it. Although the present invention has been described in detail with reference to the foregoing embodiments, those of ordinary skill in the art will understand that they may still modify the technical solutions described in the foregoing embodiments, or make equivalent replacements for some of the technical features therein; such modifications or replacements do not cause the essence of the corresponding technical solutions to depart from the spirit and scope of the technical solutions of the embodiments of the present invention.
Claims (18)
1. A positioning method, characterized by comprising:
using a video acquisition device to obtain the video image information of a target to be positioned, wherein the video acquisition device comprises a combination of multiple fixedly mounted ultra-wide-angle lenses, and the combination of the multiple ultra-wide-angle lenses is used to obtain, without moving the video acquisition device, the image information of any target within the range of the video acquisition device;
obtaining the three-dimensional coordinates of the target to be positioned according to its video image information; and
positioning the target to be positioned according to the obtained three-dimensional coordinates.
2. The method according to claim 1, characterized in that the combination of multiple fixedly mounted ultra-wide-angle lenses is a combination of four fixedly mounted fisheye lenses, wherein the four fisheye lenses are placed horizontally on a lens bracket, spaced 90 degrees apart.
3. The method according to claim 2, characterized in that the step of obtaining the three-dimensional coordinates of the target to be positioned according to its video image information comprises:
obtaining, according to the video image information of the target to be positioned, the incidence angle of the target relative to each fisheye lens's optical axis;
determining the relation of each incidence angle with each coordinate axis of a world coordinate system, and determining the relation between the target to be positioned and an image coordinate system; and
determining the three-dimensional coordinates of the target to be positioned according to the determined relations.
4. The method according to claim 3, characterized in that the step of determining the relation of each incidence angle with each coordinate axis of the world coordinate system comprises:
determining the relation of each incidence angle with each coordinate axis of the world coordinate system according to the tangent of each incidence angle.
5. The method according to claim 4, characterized in that the step of determining the relation of each incidence angle with each coordinate axis of the world coordinate system according to the tangent of each incidence angle comprises:
determining the relations of at least two incidence angles with each coordinate axis of the world coordinate system according to the at least two incidence angles.
6. The method according to any one of claims 3 to 5, characterized in that the step of determining the relation between the target to be positioned and the image coordinate system comprises:
determining the imaging coordinates of the target to be positioned in the image coordinate system; and
determining the azimuth of the target to be positioned with the image coordinate system according to the target's imaging coordinates in the image coordinate system.
7. The method according to claim 6, characterized in that the step of determining the azimuth of the target to be positioned with the image coordinate system according to the target's imaging coordinates in the image coordinate system comprises:
determining the tangent of the azimuth of the target with the image coordinate system according to the imaging coordinates; and
determining the azimuth of the target with the image coordinate system according to the tangent of the azimuth.
8. The method according to claim 7, characterized by further comprising, after the step of determining the azimuth of the target to be positioned with the image coordinate system:
determining the azimuth of the target to be positioned with the world coordinate system according to the determined azimuth of the target with the image coordinate system and the fisheye lens equidistant projection imaging rule.
9. The method according to claim 8, characterized in that the step of determining the three-dimensional coordinates of the target to be positioned according to the determined relations comprises:
determining the three-dimensional coordinates of the target to be positioned according to the tangent of each incidence angle, the relation of each incidence angle with each coordinate axis of the world coordinate system, the tangent of the azimuth of the target with the world coordinate system, and the relation of the azimuth with at least two coordinate axes of the world coordinate system.
10. A positioning system, characterized by comprising:
an acquisition module, configured to obtain the video image information of a target to be positioned using a video acquisition device, wherein the video acquisition device comprises a combination of multiple fixedly mounted ultra-wide-angle lenses, and the combination is used to obtain, without moving the video acquisition device, the image information of any target within the range of the video acquisition device;
a three-dimensional coordinate determination module, configured to obtain the three-dimensional coordinates of the target to be positioned according to its video image information; and
a target locating module, configured to position the target to be positioned according to the obtained three-dimensional coordinates.
11. The system according to claim 10, characterized in that the combination of multiple fixedly mounted ultra-wide-angle lenses is a combination of four fixedly mounted fisheye lenses, wherein the four fisheye lenses are placed horizontally on a lens bracket, spaced 90 degrees apart.
12. The system according to claim 11, characterized in that the three-dimensional coordinate determination module comprises:
an incidence angle acquisition module, configured to obtain, according to the video image information of the target to be positioned, the incidence angle of the target relative to each fisheye lens's optical axis;
a relationship determination module, configured to determine the relation of each incidence angle with each coordinate axis of a world coordinate system, and to determine the relation between the target to be positioned and an image coordinate system; and
a coordinate determination module, configured to determine the three-dimensional coordinates of the target to be positioned according to the determined relations.
13. The system according to claim 12, characterized in that, when determining the relation of each incidence angle with each coordinate axis of the world coordinate system, the relationship determination module determines the relation according to the tangent of each incidence angle.
14. The system according to claim 13, characterized in that, when determining the relation of each incidence angle with each coordinate axis of the world coordinate system according to the tangent of each incidence angle, the relationship determination module determines the relations of at least two incidence angles with each coordinate axis of the world coordinate system according to the at least two incidence angles.
15. The system according to any one of claims 12 to 14, characterized in that, when determining the relation between the target to be positioned and the image coordinate system, the relationship determination module determines the imaging coordinates of the target in the image coordinate system, and determines the azimuth of the target with the image coordinate system according to those imaging coordinates.
16. The system according to claim 15, characterized in that, when determining the azimuth of the target to be positioned with the image coordinate system according to the target's imaging coordinates in the image coordinate system, the relationship determination module determines the tangent of that azimuth according to the imaging coordinates, and then determines the azimuth according to the tangent.
17. The system according to claim 16, characterized in that the three-dimensional coordinate determination module further comprises:
a world-coordinate-system azimuth determination module, configured, after the relationship determination module has determined the azimuth of the target to be positioned with the image coordinate system, to determine the azimuth of the target with the world coordinate system according to that azimuth and the fisheye lens equidistant projection imaging rule.
18. The system according to claim 17, characterized in that, when determining the three-dimensional coordinates of the target to be positioned according to the determined relations, the coordinate determination module determines the three-dimensional coordinates according to the tangent of each incidence angle, the relation of each incidence angle with each coordinate axis of the world coordinate system, the tangent of the azimuth of the target with the world coordinate system, and the relation of the azimuth with at least two coordinate axes of the world coordinate system.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201510271989.4A CN105989354A (en) | 2015-05-25 | 2015-05-25 | Positioning method and system |
Publications (1)
Publication Number | Publication Date |
---|---|
CN105989354A true CN105989354A (en) | 2016-10-05 |
Family
ID=57040327
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201510271989.4A Pending CN105989354A (en) | 2015-05-25 | 2015-05-25 | Positioning method and system |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN105989354A (en) |
Patent Citations (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20090160940A1 (en) * | 2007-12-20 | 2009-06-25 | Alpine Electronics, Inc. | Image display method and image display apparatus |
CN102868853A (en) * | 2012-10-08 | 2013-01-09 | 长沙尼采网络科技有限公司 | 360-degree non-blind area panorama video shooting device based on regular polyhedron |
CN103456171A (en) * | 2013-09-04 | 2013-12-18 | 北京英泰智软件技术发展有限公司 | Vehicle flow detection system and method based on fish-eye lens and image correction method |
CN103903263A (en) * | 2014-03-26 | 2014-07-02 | 苏州科技学院 | Algorithm for 360-degree omnibearing distance measurement based on Ladybug panorama camera images |
Non-Patent Citations (1)
Title |
---|
高宇: "利用Ladybug3全景系统相机测量物方空间点的三维坐标", 《城市建设理论研究(电子版)》 * |
Cited By (10)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN107122770A (en) * | 2017-06-13 | 2017-09-01 | 驭势(上海)汽车科技有限公司 | Many mesh camera systems, intelligent driving system, automobile, method and storage medium |
CN107122770B (en) * | 2017-06-13 | 2023-06-27 | 驭势(上海)汽车科技有限公司 | Multi-camera system, intelligent driving system, automobile, method and storage medium |
CN108268850A (en) * | 2018-01-24 | 2018-07-10 | 成都鼎智汇科技有限公司 | A kind of big data processing method based on image |
CN108268850B (en) * | 2018-01-24 | 2022-04-12 | 贵州华泰智远大数据服务有限公司 | Big data processing method based on image |
CN112348884A (en) * | 2019-08-09 | 2021-02-09 | 华为技术有限公司 | Positioning method, terminal device and server |
CN112348884B (en) * | 2019-08-09 | 2024-06-04 | 华为技术有限公司 | Positioning method, terminal equipment and server |
WO2021035882A1 (en) * | 2019-08-26 | 2021-03-04 | 陈利君 | Sound source positioning method using fisheye lens and device thereof |
CN114178282A (en) * | 2021-11-12 | 2022-03-15 | 国能铁路装备有限责任公司 | Brake beam cleaning production line, identification and positioning system, device and method |
CN114178282B (en) * | 2021-11-12 | 2023-10-10 | 国能铁路装备有限责任公司 | Brake beam cleaning production line, identification positioning system, device and method |
CN114132409A (en) * | 2021-12-08 | 2022-03-04 | 北京理工大学 | Whistling sound identification snapshot unmanned patrol car and control method thereof |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN105989354A (en) | Positioning method and system | |
CN106251399B (en) | Outdoor scene three-dimensional reconstruction method and implementation device based on LSD-SLAM | |
US20170127045A1 (en) | Image calibrating, stitching and depth rebuilding method of a panoramic fish-eye camera and a system thereof | |
US20170134713A1 (en) | Image calibrating, stitching and depth rebuilding method of a panoramic fish-eye camera and a system thereof | |
CN106204443A (en) | Panoramic unmanned aerial system based on multi-camera multiplexing | |
CN104363438B (en) | Panoramic stereo video production method | |
CN105008995A (en) | Omnistereo imaging | |
CN106303283A (en) | Panoramic image synthesis method and system based on fish-eye camera | |
CN110889873A (en) | Target positioning method and device, electronic equipment and storage medium | |
CN110264528A (en) | Quick self-calibration method for fisheye lens binocular camera | |
CN111510621B (en) | Imaging system | |
CN103247020A (en) | Fisheye image unwrapping method based on radial characteristics | |
CN107504918B (en) | Radio telescope surface shape measurement method and device | |
CN105488766A (en) | Fish-eye lens image correcting method and device | |
CN112837207A (en) | Panoramic depth measuring method, four-eye fisheye camera and binocular fisheye camera | |
CN107545537A (en) | Method for generating 3D panoramic images from a dense point cloud | |
CN109325913A (en) | Unmanned plane image split-joint method and device | |
CN103338333B (en) | Optimal configuration method for orientation element of aerial camera | |
CN113763480B (en) | Combined calibration method for multi-lens panoramic camera | |
CN106990668A (en) | Imaging method, apparatus and system for panoramic stereo images | |
CN108074250A (en) | Matching cost computation method and device | |
CN110766752B (en) | Virtual reality interactive glasses with light reflecting mark points and space positioning method | |
CN111915741A (en) | VR generator based on three-dimensional reconstruction | |
CN107341764A (en) | Virtual Space localization method and system based on fish eye lens stereoscopic vision | |
JP7014175B2 (en) | Image processing device, image processing method, and program |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
C06 | Publication | ||
PB01 | Publication | ||
C10 | Entry into substantive examination | ||
SE01 | Entry into force of request for substantive examination | ||
WD01 | Invention patent application deemed withdrawn after publication | ||
Application publication date: 20161005 |