CN101464132B - Position confirming method and apparatus - Google Patents

Position confirming method and apparatus

Info

Publication number
CN101464132B
Authority
CN
China
Prior art keywords
coordinate
specified dimension
parameter
target object
image
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN200810247497.1A
Other languages
Chinese (zh)
Other versions
CN101464132A (en)
Inventor
邓亚峰
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Mid Star Technology Ltd By Share Ltd
Original Assignee
Vimicro Corp
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Vimicro Corp filed Critical Vimicro Corp
Priority to CN200810247497.1A priority Critical patent/CN101464132B/en
Publication of CN101464132A publication Critical patent/CN101464132A/en
Application granted granted Critical
Publication of CN101464132B publication Critical patent/CN101464132B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Landscapes

  • Length Measuring Devices By Optical Means (AREA)

Abstract

The embodiment of the invention provides a method and an apparatus for determining position. The method comprises the following steps: collecting an image containing an imaging of a target object; acquiring region description information that describes the region occupied by the imaging in the image; and determining the spatial position of the target object by using the region description information together with preset relation parameters that reflect the correlation between the region description information and the spatial position of the target object. The technical scheme of the embodiment of the invention has the advantages that the spatial position can be determined more accurately, and a unique spatial position of the target object relative to the image capture device can further be determined.

Description

Position determining method and position determining apparatus
Technical field
The present invention relates to the technical field of image processing, and in particular to a position determining method and a position determining apparatus.
Background technology
With the development of image processing technology, image capture devices are becoming increasingly digitized and are being applied in ever more fields. In some situations it is desirable, based on image processing techniques, to know the spatial position of a photographed target relative to the image capture device, and then to carry out other operations based on that known spatial position. For example, in a simulation application, the position description information of the spatial position can be used in related computations, such as predicting or judging whether a person in front of the image capture device will collide with an obstacle, in order to test the person's limb flexibility; for another example, the position description information can be supplied to a decision maker to assist related decision making, and so on.
In the prior art, for the case where the photographed target is a person, a technical scheme is provided for determining, based on the captured image, the position description information of the person relative to the image capture device. For ease of processing, the prior art abstracts the person as a cylinder, and the person's feature points, such as the eyes and the face point, are distributed on this cylinder.
Referring to Fig. 1, Fig. 1 is a schematic top-view projection of the person and the image capture device in the prior art. In Fig. 1, a two-dimensional rectangular coordinate system is used to calibrate the relevant position information: the origin O(0, 0) is set at the position of the image capture device; H(x, y) is the centre of the disc projected by the cylinder onto the coordinate plane, and the radius of this disc is r; the straight-line distance between H(x, y) and O(0, 0) is D. As shown in Fig. 1, the sector angle enclosed by the projection of the left eye, the projection of the face point and H(x, y) is α, and the component of the distance between the two projections along the x axis is a; the sector angle enclosed by the projection of the right eye, the projection of the face point and H(x, y) is also α, and the component of the distance between the two projections along the x axis is b. If the person's face is directed towards the image capture device, the coordinate origin, the circle centre and the projection of the face point can be assumed to lie roughly on a straight line, and the angle between this straight line and the optical axis of the image capture device is θ. Here α can be obtained from empirical statistics, and a, b can be calculated from the projection coordinates of the eyes and the face point. To determine the position of the person relative to the image capture device, D and θ must be solved.
The prior art mainly comprises the following steps:
calculating the straight-line distance D between the person and the image capture device according to the distance between the person's eyes on the image, the relevant imaging parameters of the image capture device (such as the resolution, the focal length and the physical size of the imaging sensor), and the imaging principle;
calculating r and θ from the following relational expressions:
r·sin(α+θ) − r·sinθ = a
r·sin(α−θ) + r·sinθ = b    (Formula 1)
Solving Formula 1 for θ gives:
θ = arctan[ (b − a)·sin α / ((b + a)·(1 − cos α)) ]    (Formula 2)
Further, based on the following Formula 3, the position information of the person relative to the image capture device in the rectangular coordinate system is obtained:
x = D·sin θ,  y = D·cos θ    (Formula 3)
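A minimal sketch of the prior-art computation in Formulas 2 and 3, assuming angles are given in radians and that D has already been obtained from the imaging parameters (the function and parameter names are illustrative only, not from the patent):

import math

def prior_art_position(a, b, alpha, D):
    """Prior-art scheme: angle theta (Formula 2) and planar position (Formula 3).

    a, b  - x-axis components of the left-eye/face-point and right-eye/face-point distances
    alpha - empirically known sector angle, in radians
    D     - straight-line distance between the person and the image capture device
    """
    theta = math.atan((b - a) * math.sin(alpha) / ((b + a) * (1 - math.cos(alpha))))  # Formula 2
    x = D * math.sin(theta)   # Formula 3
    y = D * math.cos(theta)
    return theta, x, y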
The above prior art can determine the position information of the photographed target relative to the image capture device only under a particular condition: the feature points of the photographed target must occupy a required position relative to the image capture device, e.g. the person's face must be directed towards the image capture device, before the above formulas can be applied to obtain the person's position information. In practical applications, however, it is usually difficult to guarantee such regularity of the relative position between the feature points and the image capture device. Moreover, based on the above Formula 3, it is difficult to determine unique position description information of the person relative to the image capture device.
Therefore, the technical scheme provided by the prior art for determining the position information of the photographed target relative to the image capture device still leaves much room for improvement.
Summary of the invention
The embodiments of the present invention provide a position determining method and a position determining apparatus, to solve the technical problem in the prior art that it is difficult to determine accurately the spatial position of a target object relative to an image capture device.
To solve the above technical problem, an embodiment of the present invention provides a position determining method comprising the following steps:
collecting an image containing an imaging of a target object;
acquiring region description information for describing the region occupied by the imaging in the image;
determining the spatial position of the target object by using the region description information and preset relation parameters that embody the correlation between the region description information and the spatial position of the target object.
Preferably, the region description information comprises: the position information of the region in the image, and/or, the size description information of the imaging.
Preferably, the relation parameters comprise: a first relation parameter embodying the size of the target object;
and determining the spatial position comprises:
using a preset algorithm, the first relation parameter, the position information and the size description information to determine the projected position of the target object in a plane perpendicular to the optical axis of the image device used for collecting the image.
Preferably, a two-dimensional coordinate system is preset, in which the coordinate value calibrated in a specified dimension direction represents the number of pixels in the specified dimension direction; or, the coordinate value calibrated in a specified dimension direction represents the ratio of the number of pixels in the specified dimension direction to the resolution of the image device in the specified dimension direction.
Preferably, the first relation parameter comprises: a first length of the target object in the specified dimension direction;
the position information comprises the coordinate value, denoted M, of the imaging in the specified dimension direction of the two-dimensional coordinate system;
the size description information comprises a second length of the imaging in the specified dimension direction;
and determining the projected position of the target object in the plane perpendicular to the optical axis of the image device comprises:
calibrating the space formed by the optical axis and the plane with a preset three-dimensional coordinate system, in which a first coordinate axis and a second coordinate axis calibrate the plane, a third coordinate axis coincides with the optical axis, and the positive direction of the third coordinate axis points away from the image;
calculating the coordinate value of the projection of the target object onto the first coordinate axis and/or the second coordinate axis in the plane by using the calculation formula: first length × M / second length.
Preferably, calculating the coordinate value of the projection of the target object onto the first coordinate axis and/or the second coordinate axis in the plane by using the calculation formula first length × M / second length comprises:
calculating, by using the calculation formula, a first coordinate value of the projection of the target object onto the first coordinate axis in the plane;
wherein the specified dimension direction comprises a first specified dimension direction and a second specified dimension direction perpendicular to the first specified dimension direction, the first specified dimension direction being parallel to the first coordinate axis and the second specified dimension direction being parallel to the second coordinate axis;
denoting the coordinate value in the first specified dimension direction as M1 and the coordinate value in the second specified dimension direction as M2;
calculating a second coordinate value of the projection onto the second coordinate axis according to the calculation formula: first coordinate value × M2 / M1.
Preferably, described first is related to that parameter is preset, and comprising:
Measure in advance preset reference thing with the parallel plane reference planes in described image place on projection, the physical length on assigned direction; Described physical length is used as described first and is related to parameter; Or,
From the reference picture that comprises described reference substance imaging collecting, measure the length in pixels of the picture of described the above projected correspondence of assigned direction, described reference substance be imaged on pixel coordinate on described assigned direction and described reference substance in the plane vertical with described optical axis be projected in described three-dimensional system of coordinate in coordinate figure in the coordinate axis parallel with described assigned direction;
According to calculating formula: the coordinate figure/pixel coordinate described in length in pixels in the parallel coordinate axis of assigned direction, calculate described first and be related to parameter.
Preferably, be describedly related to that parameter comprises:
The big or small descriptor and the described target object that embody the resolution of described collection image device used, described imaging are related to parameter apart from second of incidence relation between the distance of the plane vertical with described image device optical axis;
Described region description information comprises: the big or small descriptor of described imaging;
Described definite described locus comprises:
Utilize preset algorithm, described second to be related to parameter and described big or small descriptor, determine described target object apart from the distance of the plane vertical with described image device optical axis.
Preferably, acquiring the region description information comprises:
presetting a two-dimensional coordinate system capable of calibrating the position of each point in the image, and measuring the length Δl of the imaging in the specified dimension direction of the two-dimensional coordinate system;
and determining the distance of the target object from the plane perpendicular to the optical axis of the image device comprises:
calculating the distance by using a preset calculation formula;
wherein the coordinate value calibrated by the two-dimensional coordinate system in the specified dimension direction represents the number of pixels in the specified dimension direction, or represents the ratio of the number of pixels in the specified dimension direction to the resolution of the image device in the specified dimension direction.
Preferably, the second relation parameter is preset by:
measuring the reference distance of a preset reference object from the plane perpendicular to the optical axis of the image device;
measuring a third length, in an assigned direction, of the projection of the reference object onto a reference plane parallel to the plane of the image;
and calculating the second relation parameter by using a preset calculation formula.
Preferably, acquiring the information for describing the position and/or size of the imaging comprises automatically acquiring the position and/or size of the imaging in the image by using an object detection technique.
Preferably, the object is a human face and/or a human head.
To solve the above technical problem, an embodiment of the present invention further provides a piece of equipment comprising a position determining apparatus, the apparatus comprising: a collecting unit, an acquiring unit and a determining unit;
the collecting unit is configured to collect an image containing an imaging of a target object;
the acquiring unit is configured to acquire region description information for describing the region occupied by the imaging in the image;
the determining unit is configured to determine the spatial position of the target object by using the region description information acquired by the acquiring unit and preset relation parameters that embody the correlation between the region description information and the spatial position of the target object.
Preferably, the acquiring unit comprises:
a first acquiring subunit, configured to acquire the coordinates of the region in a preset two-dimensional coordinate system, the coordinates being used as the position information, wherein the preset two-dimensional coordinate system is capable of calibrating the position of each point in the image;
a second acquiring subunit, configured to measure the length Δl of the imaging in the specified dimension direction of the preset two-dimensional coordinate system.
Preferably, the determining unit comprises: a preset information receiving unit and a first calculating unit;
the preset information receiving unit is configured to receive a preset first relation parameter embodying the size of the target object;
the first calculating unit is configured to determine the projected position of the target object in the plane perpendicular to the optical axis of the image device used for collecting the image, by using a preset algorithm, the first relation parameter received by the preset information receiving unit, and the region description information acquired by the acquiring subunit.
Preferably, the determining unit comprises: a preset information receiving unit and a second calculating unit;
the preset information receiving unit is configured to receive a preset second relation parameter embodying the correlation among the resolution of the image device used by the collecting unit, the size description information of the imaging, and the distance of the target object from the plane perpendicular to the optical axis of the image device;
the second calculating unit is configured to determine the distance of the target object from the plane perpendicular to the optical axis of the image device by using a preset algorithm, the second relation parameter received by the preset information receiving unit, and the information acquired by the second acquiring subunit.
Compared with the prior art, the technical scheme provided by the embodiments of the present invention has the following beneficial effects:
the position determining method and the position determining apparatus provided by the embodiments of the present invention acquire, from the collected image, the region description information of the region occupied by the imaging of the target object, and determine the spatial position of the target object in combination with the preset relation parameters. Since both the region description information and the relation parameters can be obtained relatively accurately, compared with the prior art, the technical scheme provided by the embodiments of the present invention can determine the spatial position more accurately and can determine a unique spatial position of the target object relative to the image capture device.
Brief description of the drawings
Fig. 1 is a schematic top-view projection of a person and an image capture device in the prior art;
Fig. 2 is a flow chart of a position determining method in an embodiment of the present invention;
Fig. 3 is a schematic diagram for calibrating an object and the space where its imaging is located in an embodiment of the present invention;
Fig. 4 is a flow chart of determining the position of a target object in an embodiment of the present invention;
Fig. 5 is a schematic structural diagram of a position determining apparatus in an embodiment of the present invention.
Detailed description of the embodiments
In order to determine more accurately, based on a collected image, the spatial position of a target object relative to an image capture device, the inventor, in the course of implementing the present invention, carried out derivations on the projection law between a physical object and its imaging, converting relational expressions containing parameters that are difficult to determine into relational expressions composed of parameters that are relatively easy to determine. This simplifies the computation and operations required to determine the spatial position of an object relative to the image capture device, and effectively improves the accuracy of the determined position description information.
The technical scheme provided by the embodiments of the present invention is described in detail below with reference to specific embodiments and the accompanying drawings.
Referring to Fig. 2, Fig. 2 is a flow chart of the position determining method in an embodiment of the present invention. The flow may comprise the following steps:
Step 201: collecting an image containing an imaging of a target object.
Step 202: acquiring region description information for describing the imaging region occupied by the imaging in the image.
In the embodiments of the present invention, the region description information may comprise: the position information of the imaging region in the image, or the size description information of the imaging region, etc.
Step 203: determining the spatial position of the target object by using the region description information and preset relation parameters that embody the correlation between the region description information and the spatial position of the target object.
In the embodiments of the present invention, the relation parameters used can be preset; that is, in the process of determining the spatial position of the target object relative to the image capture device, the relation parameters determined in advance can be used as known quantities. Thus, based on the known relation parameters, the region description information obtained from the image, and the relatively simple calculation formulas derived in advance by the inventor, the spatial position of the target object relative to the image capture device can be determined.
In the process of presetting the relation parameters, the relation parameters can be determined in advance by collecting the imaging of a reference object. For convenience of distinction, the imaging of the reference object is called the reference imaging. The reference object may be the target object itself, or another object that can be treated as analogous to the target object.
Specifically, in the embodiments of the present invention, the relation parameters may comprise: a first relation parameter embodying the size of the target object, and/or a second relation parameter embodying the correlation among the resolution of the image device used for collecting the image, the size description information of the imaging, and the distance of the target object from the plane perpendicular to the optical axis of the image device.
In the process of determining the spatial position of the target object relative to the image capture device, if the selected region description information is the position information of the imaging region in the image, the first relation parameter is selected correspondingly; if the selected region description information is the size description information of the imaging region, the second relation parameter is selected correspondingly.
Compared with the prior art, the technical scheme provided by the embodiments of the present invention does not strictly restrict the category of the target object, so it is applicable not only to scenes where the target object is a human being but also to other scenes; moreover, for scenes where the target object is a human being, the person is not required to face the image capture device. In addition, since the region description information can be obtained relatively accurately from the image and the relation parameters can be preset relatively accurately, the spatial position of the target object relative to the image capture device can be determined more accurately than in the prior art.
The implementation of the technical scheme of the embodiments of the present invention is described in detail below with reference to the derivations carried out by the inventor; the physical object involved in the derivations is called the reference object.
In practical applications, an image can be represented by a two-dimensional matrix, and the concrete form of the region description information can be embodied by a preset image coordinate system; the coordinates calibrated by the image coordinate system can be regarded as matrix elements of the two-dimensional matrix. Usable image coordinate systems include the image pixel coordinate system, the image physical coordinate system, etc. In the embodiments of the present invention, for convenience of description, the image pixel coordinate system is called the first coordinate system, and the image physical coordinate system is called the second coordinate system.
The two-dimensional coordinates calibrated by the first coordinate system and the second coordinate system each comprise the coordinate values of a pixel position in two dimension directions; the coordinate value in one dimension direction can be called the column index of the two-dimensional matrix, and the coordinate value in the other dimension direction the row index. For the column index and row index calibrated by the first coordinate system, the unit is the number of pixels; for those calibrated by the second coordinate system, the unit is a length unit, such as centimetres (cm).
Once the image coordinate system is selected, the concrete form of the position information of the imaging region in the image comprises the coordinate values, in the different dimensions, of a representative point of the imaging region; the size description information of the imaging region can comprise the lengths of the imaging region in the different dimensions, for example the length of the imaging region in one specified dimension direction and its width in another specified dimension direction.
Furthermore, for the same pixel, there is a mapping relationship between the coordinates calibrated by the first coordinate system and those calibrated by the second coordinate system. Referring to Fig. 3, Fig. 3 is a schematic diagram for calibrating an object and the space where its imaging is located in an embodiment of the present invention. Suppose the first coordinate system comprises a first origin and, passing through the first origin and perpendicular to each other, a column axis (u axis) and a row axis (v axis), the unit of the u axis and the v axis being the number of pixels. Suppose the second coordinate system comprises a second origin and, passing through the second origin, an x axis and a y axis, the unit of the x axis and the y axis being a length unit, such as cm. Suppose the x axis of the second coordinate system is parallel to the column axis of the first coordinate system, and the y axis is parallel to the row axis. If the coordinates of a pixel in the first coordinate system are (u, v), the mapping relationship between its coordinates (x, y) in the second coordinate system and (u, v) is:
u = x/dx + u₀    (Formula 4)
v = y/dy + v₀    (Formula 5)
where dx and dy are respectively the physical lengths occupied by a single pixel in the x-axis direction and in the y-axis direction.
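As a minimal illustrative sketch of the mapping in Formulas 4 and 5 (and of its inverse, Formulas 8 and 9 below), assuming dx, dy, u₀ and v₀ are known for the image device; the function names are illustrative only:

def physical_to_pixel(x, y, dx, dy, u0=0.0, v0=0.0):
    """Formulas 4 and 5: image-physical coordinates (x, y) -> pixel coordinates (u, v)."""
    return x / dx + u0, y / dy + v0

def pixel_to_physical(u, v, dx, dy, u0=0.0, v0=0.0):
    """Formulas 8 and 9: pixel coordinates (u, v) -> image-physical coordinates (x, y)."""
    return (u - u0) * dx, (v - v0) * dy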
Taking the position of the image capture device as a reference, a three-dimensional coordinate system is set up to calibrate the spatial position of the reference object relative to the image capture device. This three-dimensional coordinate system is a three-dimensional rectangular coordinate system; its third origin can be set at the optical centre of the image device of the image capture device, its Z axis is set to coincide with the optical axis with its positive direction pointing towards the object, and the plane formed by its X axis and Y axis is perpendicular to the optical axis of the image device. Referring again to Fig. 3, the coordinates of the spatial position of the target object are denoted B(X_c, Y_c, Z_c); these coordinates can be used to describe the spatial position of the target object, and in the embodiments of the present invention, calculation formulas for determining one or more of these coordinate values need to be derived. This three-dimensional coordinate system can also be called the world coordinate system.
In practical applications where images are collected by an image capture device, a device coordinate system is often used to calibrate the relevant information. In the embodiments of the present invention, for convenience of derivation, the above three-dimensional coordinate system is set to coincide with the device coordinate system, and the perspective projection transformation of the imaging of the image capture device is simplified to:
Z_c·[x, y, 1]ᵀ = [[f, 0, 0, 0], [0, f, 0, 0], [0, 0, 1, 0]]·[X_c, Y_c, Z_c, 1]ᵀ
From this it follows that:
X_c = x·Z_c / f    (Formula 6)
Y_c = y·Z_c / f    (Formula 7)
From Formula 4 it follows that:
x = (u − u₀)·dx    (Formula 8)
From Formula 5 it follows that:
y = (v − v₀)·dy    (Formula 9)
Substituting Formula 8 into Formula 6 gives:
X_c = (u − u₀)·dx·Z_c / f    (Formula 10)
Substituting Formula 9 into Formula 7 gives:
Y_c = (v − v₀)·dy·Z_c / f    (Formula 11)
where u₀ and v₀ are constants; for convenience of computation, both can be set to zero.
According to the imaging principle of the image device of the image capture device, and approximating the image distance by the focal length f of the image device, we have:
h/H = f/A
where H is the length of the reference object in one dimension, h is the length of the corresponding image, and A is the object distance, i.e. the distance of the reference object from the plane containing the X axis and the Y axis of the three-dimensional coordinate system; this plane is called the XY plane.
If the length, in the X-axis direction, of the projection of the reference object onto the XY plane is W, the length of the corresponding image is h = dx·Δu, where Δu is the number of pixels of the image in the row (u-axis) direction of the first coordinate system; then:
Z_c = W·f / (Δu·dx)    (Formula 12)
If the length, in the Y-axis direction, of the projection of the reference object onto the XY plane is H, the length of the corresponding image is h = dy·Δv, where Δv is the number of pixels of the image in the column (v-axis) direction of the first coordinate system; then:
Z_c = H·f / (Δv·dy)    (Formula 13)
Substituting Formula 12 into Formula 10 gives:
X_c = (u − u₀) / Δu · W    (Formula 14)
Substituting Formula 13 into Formula 11 gives:
Y_c = (v − v₀) / Δv · H    (Formula 15)
The u in Formula 14 and the v in Formula 15 are concrete forms of the position information of the imaging region in the image within the region description information of the embodiments of the present invention; u and v can be referred to generally as the coordinate value M of the imaging in the specified dimension direction, and the value of M can be obtained from the collected image containing the imaging of the target object. The above Δu and Δv are the sizes of the imaging in the specified dimension directions, namely the width in the row-axis direction and the height in the column-axis direction, respectively.
For an object of fixed size, the above W and H are constants; in the embodiments of the present invention they are taken as concrete forms of the first relation parameter, denoted:
C₁ = W;  C₂ = H    (Formula 16)
For the case where the imaging sensor is fixed, the focal length is fixed, and the true height and width of the reference object are fixed, C₁ and C₂ are constants and can be determined in advance.
The inventor considered that, in practical applications, the resolution of the imaging sensor is usually set to different values in different situations. Therefore, in order to make the derived calculation formulas applicable to more scenes and to avoid the derived formulas becoming inapplicable when the resolution of the imaging sensor is not fixed, Formulas 12 to 15 above are further transformed as follows:
Suppose the physical size of the imaging sensor is P × Q and the corresponding resolution is U × V; then,
dx = P/U;  dy = Q/V    (Formula 17)
Substituting Formula 17 into Formula 12 and Formula 13 gives:
Z_c = W·f / ((Δu/U)·P)    (Formula 18)
or,
Z_c = H·f / ((Δv/V)·Q)    (Formula 19)
In the coordinates calibrated by the above first coordinate system, the coordinate value in the specified dimension direction represents the number of pixels in the specified dimension direction. A normalized coordinate system can also be defined, in which the coordinate value calibrated in the specified dimension direction represents the ratio of the number of pixels in the specified dimension direction to the resolution of the image device in the specified dimension direction.
The t axis of this normalized coordinate system coincides with the u axis, and its r axis coincides with the v axis, with t = u/U and r = v/V; then,
Z_c = W·f / (Δt·P), or Z_c = H·f / (Δr·Q)    (Formula 20)
Formulas 18 to 20 above can be written as:
Z_c = C₃/Δt = C₃/(Δu/U), or Z_c = C₄/Δr = C₄/(Δv/V), where C₃ = W·f/P and C₄ = H·f/Q    (Formula 21)
The Δu and Δv in Formula 21 above are concrete forms of the size description information of the specified region in the imaging of the target object in the embodiments of the present invention.
Formula 14 becomes:
X_c = (u − u₀)/Δu · W = (t − t₀)/Δt · W    (Formula 22)
Formula 15 becomes:
Y_c = (v − v₀)/Δv · H = (r − r₀)/Δr · H    (Formula 23)
Formulas 22 and 23 above can be expressed generally as: first length × M / second length, where the first length is, for example, W or H; the second length is, for example, Δu, Δv or Δt, Δr; and M is, for example, (u − u₀) or (t − t₀),
with t₀ = u₀/U and r₀ = v₀/V.
Δu and Δv can be referred to generally as the length Δl of the imaging in the specified dimension direction of the first coordinate system, and Δl can be obtained from the collected image containing the imaging of the target object. Formula 21 can be combined as: Z_c = C/(Δl/S),
where S is the resolution of the image device in the specified dimension direction, i.e. U or V, and C is the corresponding constant C₃ or C₄.
For the case where the imaging sensor is fixed, the focal length is fixed, and the true height and width of the reference object are fixed, C₃ and C₄ are constants; they are concrete forms of the above second relation parameter and can be determined in advance based on the following calculation formulas:
C₃ = Z_c·Δu / U;  C₄ = Z_c·Δv / V    (Formula 24)
This concludes the derivations carried out for the embodiments of the present invention. In practical applications, any of Formulas 14, 15, 21, 22 and 23 can be selected as needed; based on the relation parameters determined in advance, the region description information obtained from the currently collected image, and the parameter information of the image device, the corresponding coordinate values can be calculated — for example, only Z_c, or only Z_c and X_c, etc. Moreover, based on Formulas 14, 15, 22 and 23, a relatively unique world coordinate value can be determined.
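As an illustrative sketch only (assuming the first coordinate system is used with u₀ = v₀ = 0, and that the constants C₁ to C₄ have been preset as described further below), the selected formulas can be applied as follows; the function and parameter names are not from the patent:

def world_position(u, v, du, dv, C1, C2, C3, C4, U, V, u0=0.0, v0=0.0):
    """Compute the target object's world coordinates (Xc, Yc, Zc) from its imaging region.

    (u, v)   - representative pixel coordinates of the imaging region
    du, dv   - pixel width and height of the imaging region (Delta u, Delta v)
    C1 = W, C2 = H - first relation parameters (Formula 16)
    C3, C4   - second relation parameters (Formula 24)
    U, V     - horizontal and vertical resolution of the image device
    """
    Zc = C3 * U / du              # Formula 21: Z_c = C3 / (Delta u / U); C4 * V / dv is equivalent
    Xc = (u - u0) / du * C1       # Formula 14 / 22
    Yc = (v - v0) / dv * C2       # Formula 15 / 23
    return Xc, Yc, Zc

When only one of C₁, C₂ is available, the other planar coordinate can instead be derived through Formula 27 or Formula 28, as sketched after those formulas.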
Furthermore, the physical size of the object and the size of the corresponding imaging satisfy a proportional relationship: for different regions of the physical object, the ratio between the size of a region and the size of its imaging is usually constant, which can be expressed as W/Δt = H/Δr. From Formulas 14, 15, 22 and 23 it then follows that:
X_c / Y_c = (t − t₀) / (r − r₀)    (Formula 25)
When the horizontal and vertical resolutions are set so that the above proportional relationship also holds in pixel coordinates, i.e. dx = dy, we have:
X_c / Y_c = (u − u₀) / (v − v₀)    (Formula 26)
Thus it is possible to determine only C₁, calculate X_c, and then calculate Y_c according to
Y_c = X_c·(r − r₀)/(t − t₀), or Y_c = X_c·(v − v₀)/(u − u₀)    (Formula 27)
It is also possible to determine only C₂, calculate Y_c, and then calculate X_c according to
X_c = Y_c·(t − t₀)/(r − r₀), or X_c = Y_c·(u − u₀)/(v − v₀)    (Formula 28)
The description relating to Formulas 27 and 28 can be summarised as: calculating the coordinate value of the projection of the target object onto the X or Y coordinate axis in the XY plane;
then calculating the second coordinate value of the projection onto the second coordinate axis according to the calculation formula: first coordinate value × M2/M1, where the first coordinate value is taken as the coordinate value on the X axis and the second coordinate value is the coordinate value on the Y axis; M2 is, for example, (v − v₀) or (r − r₀), and M1 is, for example, (u − u₀) or (t − t₀).
In addition, if dx = dy, then C₃ = C₄.
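A small illustrative sketch of Formulas 27 and 28, assuming u₀ = v₀ = 0 and that only one of C₁, C₂ has been preset; names are illustrative only:

def xy_from_single_parameter(u, v, du, dv, C1=None, C2=None, u0=0.0, v0=0.0):
    """Use Formula 14 or 15 for the axis whose parameter is known, and Formula 27 or 28 for the other."""
    if C1 is not None:
        Xc = (u - u0) / du * C1            # Formula 14
        Yc = Xc * (v - v0) / (u - u0)      # Formula 27
    elif C2 is not None:
        Yc = (v - v0) / dv * C2            # Formula 15
        Xc = Yc * (u - u0) / (v - v0)      # Formula 28
    else:
        raise ValueError("either C1 or C2 must be preset")
    return Xc, Yc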
The process of presetting the relation parameters in the embodiments of the present invention is described in detail below.
A reference object is preset, and the distance of the reference object from the XY plane is measured; this distance is called the reference distance, i.e. the object distance Z_c.
The first length, in an assigned direction, of the projection of the reference object onto a reference plane parallel to the plane of the image is measured. Specifically, the reference object is placed parallel to the X axis of the three-dimensional coordinate system, its length W in the X-axis direction is measured, and C₁ is obtained according to Formula 16.
From a collected reference image containing the imaging of the reference object, the second length of the image corresponding to the projection in the assigned direction is measured; specifically, the corresponding number of pixels Δu of the imaging in the image coordinate system is measured.
Using Formula 24 above and the known U, C₃ is calculated.
Keeping the object distance unchanged, the reference object is placed parallel to the Y axis of the three-dimensional coordinate system, its first length H in the Y-axis direction is measured, and C₂ is obtained according to Formula 16; the corresponding number of pixels Δv of the imaging in the image coordinate system is measured.
Using Formula 24 above and the known V, C₄ is calculated.
The way of presetting the second relation parameter can thus be described as:
measuring the reference distance of the preset reference object from the plane perpendicular to the optical axis of the image device;
measuring a third length, in an assigned direction, of the projection of the reference object onto a reference plane parallel to the plane of the image;
calculating the second relation parameter by using the calculation formula,
where S is the resolution of the image device in the assigned direction.
In addition, C₁ and C₂ can also be preset in another way, as follows:
the coordinate of the reference object on the X axis and its coordinate on the Y axis can be measured, and C₁ and C₂ calculated based on
W = Δu/(u − u₀) · X_c,  H = Y_c·Δv/(v − v₀)    (Formula 29)
In this way, all the relation parameters can be determined. It should be noted that, in practical applications, the process of presetting the relation parameters does not need to be exactly the same as the process cited in the embodiments of the present invention, as long as the relation parameters can be determined relatively accurately based on the calculation formulas provided by the embodiments of the present invention. In addition, the pixel length of the above reference object in the horizontal direction and its pixel length in the vertical direction can be obtained either by manual calibration or by automatic image detection.
Furthermore, u₀ = 0 and v₀ = 0 is suitable for ordinary image devices. To improve accuracy, an object can be placed at a non-origin point on the Z axis of the camera coordinate system, and its pixel coordinate position in the image obtained; the abscissa of this position is u₀ and the ordinate is v₀.
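Purely as an illustrative sketch of the calibration just described, assuming the reference object's physical lengths W and H, its reference distance, and the pixel extents of its reference imaging have been measured (names are illustrative, not from the patent):

def preset_relation_parameters(W, H, Zc_ref, du_ref, dv_ref, U, V):
    """Preset C1..C4 from one reference measurement.

    W, H           - physical lengths of the reference object along the X and Y axes (Formula 16)
    Zc_ref         - measured reference distance (object distance) of the reference object
    du_ref, dv_ref - pixel width and height of the reference imaging
    U, V           - horizontal and vertical resolution of the image device
    """
    C1, C2 = W, H                      # Formula 16
    C3 = Zc_ref * du_ref / U           # Formula 24
    C4 = Zc_ref * dv_ref / V           # Formula 24
    return C1, C2, C3, C4

Equivalently, when the reference object's X-axis and Y-axis coordinates are known instead of its physical size, C₁ and C₂ can be recovered through Formula 29.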
Referring to Fig. 4, Fig. 4 is a flow chart of determining the position of a target object in an embodiment of the present invention; in this flow the target object is assumed to be a human being. The flow may comprise the following steps:
Step 401: collecting an image containing a face region, the face region being the region in the image corresponding to the imaging of an actual human face.
Step 402: determining the position (u, v) and the size description information of the face region in the image by using a face detection technique.
If another target is set as the reference target, the position and size information of that target in the image can be acquired automatically by using the corresponding object detection technique. For example, for the human head, a head detection technique can be used to determine the position and size of the head region in the image; if an automobile is used as the target, a vehicle detection technique can be used to acquire automatically the position and size information of the automobile region in the image.
The size description information comprises the length of the face region along the row axis, i.e. the width Δu, and its length along the column axis, i.e. the height Δv. In step 402, the centre point of each face imaging can be preset as the representative point of the face imaging, and the position information of this centre point is taken as the position information of the face region.
Step 403: determining, according to the preset algorithm and the relation parameters, the position description information (X_c, Y_c, Z_c) of each person's spatial position relative to the image capture device.
In practical applications, if X_c needs to be calculated, it can be solved based on Formula 14 above and the preset C₁, or according to Formula 22 above, depending on whether the coordinate system used in practice is the first coordinate system or the normalized coordinate system; or, if the condition for Formula 28 holds, it can be solved based on Formula 28 after Y_c has been calculated.
If Y_c needs to be calculated, it can be solved based on Formula 15 above and the preset C₂, or according to Formula 23 above, again depending on which coordinate system is used; or, if the condition for Formula 27 holds, it can be solved based on Formula 27 after X_c has been calculated.
If Z_c needs to be calculated, it can be solved according to Formula 21 above and the preset C₃ or C₄.
In addition, the Δu or Δv of the specified region needed in the above computation can be measured from the collected image.
Thus, based on the collected image, the spatial position of each person relative to the image capture device can be determined relatively accurately.
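Combining steps 401 to 403, a minimal end-to-end sketch might look as follows; detect_faces stands in for any face or head detector returning centre points and pixel sizes, and is an assumption of this illustration rather than part of the patent:

def locate_people(image, C1, C2, C3, C4, U, V, detect_faces, u0=0.0, v0=0.0):
    """Steps 401-403: detect face regions, then convert each to (Xc, Yc, Zc).

    detect_faces(image) is assumed to return a list of (u, v, du, dv) tuples,
    where (u, v) is the centre of a face region and (du, dv) is its pixel size.
    """
    positions = []
    for u, v, du, dv in detect_faces(image):
        Zc = C3 * U / du                 # Formula 21
        Xc = (u - u0) / du * C1          # Formula 14
        Yc = (v - v0) / dv * C2          # Formula 15
        positions.append((Xc, Yc, Zc))
    return positions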
In practical applications, when a person's posture changes, the face may not be detected, or the size of the specified region in the detected face imaging may become uncertain because of the person's movement, which affects the accuracy of the determined spatial position. A better approach is therefore to use a head detection algorithm to detect the position and size of the person's head imaging in the image, and then to determine the person's spatial position relative to the image capture device.
In practical applications, other operations can be carried out based on the determined spatial position of the target object relative to the image capture device; for example, when a person moves in front of the image capture device, whether the person's motion satisfies a condition can be judged according to the obtained spatial position of the person relative to the image capture device, and so on.
Referring to Fig. 5, Fig. 5 is a schematic structural diagram of the position determining apparatus in an embodiment of the present invention. In Fig. 5, the position determining apparatus 500 may comprise: a collecting unit 501, an acquiring unit 502 and a determining unit 503;
the collecting unit 501 is configured to collect an image containing an imaging of a target object;
the acquiring unit 502 is configured to acquire region description information for describing the region occupied by the imaging in the image;
the determining unit 503 is configured to determine the spatial position of the target object by using the region description information acquired by the acquiring unit 502 and preset relation parameters that embody the correlation between the region description information and the spatial position of the target object.
In Fig. 5, the acquiring unit 502 comprises:
a first acquiring subunit 5021, configured to acquire the coordinates of the region in a preset two-dimensional coordinate system, the coordinates being used as the position information, wherein the preset two-dimensional coordinate system is capable of calibrating the position of each point in the image;
a second acquiring subunit 5022, configured to measure the length Δl of the imaging in the specified dimension direction of the preset two-dimensional coordinate system.
In Fig. 5, the determining unit 503 comprises: a preset information receiving unit 5031 and a first calculating unit 5032;
the preset information receiving unit 5031 is configured to receive a preset first relation parameter embodying the proportional relationship between the size of the target object and the size of the imaging;
the first calculating unit 5032 is configured to determine the position of the target object in the plane perpendicular to the optical axis of the image device used for collecting the image, by using a preset algorithm, the first relation parameter received by the preset information receiving unit 5031, and the information acquired by the acquiring unit 502.
The determining unit 503 may further comprise: a second calculating unit 5033;
the preset information receiving unit 5031 is further configured to receive a preset second relation parameter embodying the correlation among the resolution of the image device used by the collecting unit 501, the size description information of the imaging, and the distance of the target object from the plane perpendicular to the optical axis of the image device;
the second calculating unit 5033 is configured to determine the distance of the target object from the plane perpendicular to the optical axis of the image device by using a preset algorithm, the second relation parameter received by the preset information receiving unit 5031, and the information acquired by the second acquiring subunit 5022.
In practical applications, the above position determining apparatus can be arranged in a specific piece of equipment, giving that equipment the corresponding function.
In summary, the position determining method and the equipment provided by the embodiments of the present invention acquire, from the collected image, the region description information of the region occupied by the imaging of the target object, and determine the spatial position of the target object in combination with the preset relation parameters. Since both the region description information and the relation parameters can be obtained relatively accurately, compared with the prior art, the technical scheme provided by the embodiments of the present invention can determine the spatial position more accurately and can determine a unique spatial position of the target object relative to the image capture device.
The above are only preferred embodiments of the present invention. It should be pointed out that those skilled in the art can make improvements and modifications without departing from the principles of the present invention, and such improvements and modifications should also be regarded as falling within the protection scope of the present invention.

Claims (11)

1. A position determining method, characterized by comprising:
collecting an image containing an imaging of a target object;
acquiring region description information for describing the region occupied by the imaging in the image;
determining the spatial position of the target object by using the region description information and preset relation parameters that embody the correlation between the region description information and the spatial position of the target object;
wherein the region description information comprises: the position information of the region in the image, and the size description information of the imaging;
the relation parameters comprise: a first relation parameter embodying the size of the target object;
determining the spatial position comprises:
using a preset algorithm, the first relation parameter, the position information and the size description information to determine the projected position of the target object in a plane perpendicular to the optical axis of the image device used for collecting the image;
a two-dimensional coordinate system is preset, in which the coordinate value calibrated in a specified dimension direction represents the number of pixels in the specified dimension direction, or represents the ratio of the number of pixels in the specified dimension direction to the resolution of the image device in the specified dimension direction;
the first relation parameter comprises: a first length of the target object in the specified dimension direction;
the position information comprises the coordinate value, denoted M, of the imaging in the specified dimension direction of the two-dimensional coordinate system;
the size description information comprises a second length of the imaging in the specified dimension direction;
determining the projected position of the target object in the plane perpendicular to the optical axis of the image device used for collecting the image comprises:
calibrating the space formed by the optical axis and the plane with a preset three-dimensional coordinate system, in which a first coordinate axis and a second coordinate axis calibrate the plane, a third coordinate axis coincides with the optical axis, and the positive direction of the third coordinate axis points away from the image;
calculating the coordinate value of the projection of the target object onto the first coordinate axis and/or the second coordinate axis in the plane by using the calculation formula: first length × M / second length.
2. The method according to claim 1, characterized in that calculating the coordinate value of the projection of the target object onto the first coordinate axis and/or the second coordinate axis in the plane by using the calculation formula first length × M / second length comprises:
calculating, by using the calculation formula, a first coordinate value of the projection of the target object onto the first coordinate axis in the plane;
wherein the specified dimension direction comprises a first specified dimension direction and a second specified dimension direction perpendicular to the first specified dimension direction, the first specified dimension direction being parallel to the first coordinate axis and the second specified dimension direction being parallel to the second coordinate axis;
denoting the coordinate value in the first specified dimension direction as M1 and the coordinate value in the second specified dimension direction as M2;
calculating a second coordinate value of the projection onto the second coordinate axis according to the calculation formula: first coordinate value × M2 / M1.
3. The method according to claim 1, characterized in that the first relation parameter is preset by:
measuring in advance the physical length, in an assigned direction, of the projection of a preset reference object onto a reference plane parallel to the plane of the image, and taking the physical length as the first relation parameter; or,
measuring, from a collected reference image containing an imaging of the reference object, the pixel length of the image corresponding to the projection in the assigned direction, the pixel coordinate of the imaging of the reference object in the assigned direction, and the coordinate value, on the coordinate axis of the three-dimensional coordinate system parallel to the assigned direction, of the projection of the reference object onto the plane perpendicular to the optical axis;
and calculating the first relation parameter according to the calculation formula: coordinate value on the coordinate axis parallel to the assigned direction × pixel length / pixel coordinate.
4. The method according to claim 1, characterized in that acquiring the information for describing the position and/or size of the imaging comprises automatically acquiring the position and/or size of the imaging in the image by using an object detection technique.
5. The method according to claim 4, characterized in that the object is a human face and/or a human head.
6. A position determining method, characterized by comprising:
collecting an image containing an imaging of a target object;
acquiring region description information for describing the region occupied by the imaging in the image;
determining the spatial position of the target object by using the region description information and preset relation parameters that embody the correlation between the region description information and the spatial position of the target object;
wherein the relation parameters comprise:
a second relation parameter embodying the correlation among the resolution of the image device used for collecting the image, the size description information of the imaging, and the distance of the target object from a plane perpendicular to the optical axis of the image device;
the region description information comprises: the size description information of the imaging;
determining the spatial position comprises:
determining the distance of the target object from the plane perpendicular to the optical axis of the image device by using a preset algorithm, the second relation parameter and the size description information;
acquiring the region description information comprises:
presetting a two-dimensional coordinate system capable of calibrating the position of each point in the image, and measuring the length Δl of the imaging in a specified dimension direction of the two-dimensional coordinate system;
determining the distance of the target object from the plane perpendicular to the optical axis of the image device comprises:
calculating the distance by using a preset calculation formula;
wherein the coordinate value calibrated by the two-dimensional coordinate system in the specified dimension direction represents the number of pixels in the specified dimension direction, or represents the ratio of the number of pixels in the specified dimension direction to the resolution of the image device in the specified dimension direction.
7. The method according to claim 6, characterized in that the second relation parameter is preset by:
measuring the reference distance of a preset reference object from the plane perpendicular to the optical axis of the image device;
measuring a third length, in an assigned direction, of the projection of the reference object onto a reference plane parallel to the plane of the image;
and calculating the second relation parameter by using a preset calculation formula.
8. A position determining apparatus, characterized in that it comprises:
a collecting unit, configured to collect an image that contains an imaging of a target object;
an acquiring unit, configured to obtain region description information describing the region occupied by the imaging on the image;
a determining unit, configured to determine a spatial position of the target object using the region description information obtained by the acquiring unit and a preset relation parameter that embodies an association between the region description information and the spatial position of the target object;
wherein the region description information comprises positional information of the region on the image and a size descriptor of the imaging;
the relation parameter comprises a first relation parameter embodying the size of the target object;
determining the spatial position comprises:
determining the projected position of the target object in a plane perpendicular to the optical axis of the image device used for the collecting, using a preset algorithm, the first relation parameter, the positional information and the size descriptor;
a two-dimensional coordinate system is preset, wherein the coordinate value, in a specified dimension direction, of a coordinate calibrated by the two-dimensional coordinate system represents the number of pixels in the specified dimension direction, or represents the ratio of the number of pixels in the specified dimension direction to the resolution of the image device in the specified dimension direction; the first relation parameter comprises a first length of the target object in the specified dimension direction;
the positional information comprises the coordinate value of the imaging in the specified dimension direction of the two-dimensional coordinate system, denoted as M;
the size descriptor comprises a second length of the imaging in the specified dimension direction;
determining the projected position of the target object in the plane perpendicular to the optical axis of the image device used for the collecting comprises:
calibrating the space formed by the optical axis and the plane with a preset three-dimensional coordinate system, wherein a first coordinate axis and a second coordinate axis calibrate the plane, a third coordinate axis coincides with the optical axis, and the positive direction of the third coordinate axis points away from the image;
calculating the coordinate value, on the first coordinate axis and/or the second coordinate axis, of the projection of the target object in the plane, using the calculation formula: first length × M / second length.
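For illustration only: the projection coordinate recited in claim 8 follows the calculation formula "first length × M / second length". The sketch below restates that relation in code; the function and variable names are illustrative, and the same call applies to the first and the second coordinate axis.

    # Coordinate of the target object's projection in the plane perpendicular to the
    # optical axis, per the formula recited in claim 8: first length * M / second length.
    def projected_coordinate(first_length: float, m_coordinate: float,
                             second_length: float) -> float:
        if second_length <= 0:
            raise ValueError("imaging length must be positive")
        return first_length * m_coordinate / second_length

    # Example: a target 0.4 units long in the specified dimension, imaged at coordinate
    # M = 0.25 with an imaging length of 0.05, projects to 0.4 * 0.25 / 0.05 = 2.0.
    print(projected_coordinate(0.4, 0.25, 0.05))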
9. The apparatus according to claim 8, characterized in that
the first relation parameter is preset by:
measuring in advance a physical length, in an assigned direction, of the projection of a preset reference object onto a reference plane parallel to the plane in which the image lies, and using the physical length as the first relation parameter; or,
measuring, from a collected reference image that contains an imaging of the reference object, the length in pixels, in the assigned direction, of the image corresponding to the projection, the pixel coordinate of the imaging of the reference object in the assigned direction, and the coordinate value, on the coordinate axis of the three-dimensional coordinate system parallel to the assigned direction, of the projection of the reference object in the plane perpendicular to the optical axis;
calculating the first relation parameter according to the calculation formula: length in pixels × coordinate value on the coordinate axis parallel to the assigned direction / pixel coordinate.
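For illustration only: reading the calculation formula in claim 9 as "length in pixels × coordinate value on the parallel coordinate axis / pixel coordinate" makes it the rearrangement of the relation in claim 8 applied to the reference object. The sketch below encodes that reading; the reading itself and the identifier names are assumptions.

    # First relation parameter L1 obtained from a reference image, assuming
    # L1 = length_in_pixels * reference_coordinate / pixel_coordinate
    # (i.e. claim 8's X = L1 * M / delta_l solved for L1 with the reference's values).
    def calibrate_first_relation_param(length_in_pixels: float,
                                       reference_coordinate: float,
                                       pixel_coordinate: float) -> float:
        if pixel_coordinate == 0:
            raise ValueError("pixel coordinate must be non-zero")
        return length_in_pixels * reference_coordinate / pixel_coordinate

    # Example: a projection 30 pixels long, imaged at pixel coordinate 150, whose
    # reference coordinate on the parallel axis is 1.0, gives L1 = 30 * 1.0 / 150 = 0.2.
    print(calibrate_first_relation_param(30.0, 1.0, 150.0))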
10. A position determining apparatus, characterized in that it comprises:
a collecting unit, configured to collect an image that contains an imaging of a target object;
an acquiring unit, configured to obtain region description information describing the region occupied by the imaging on the image;
a determining unit, configured to determine a spatial position of the target object using the region description information obtained by the acquiring unit and a preset relation parameter that embodies an association between the region description information and the spatial position of the target object;
wherein the relation parameter comprises:
a second relation parameter embodying an association among the resolution of the image device used for the collecting, the size descriptor of the imaging, and the distance from the target object to a plane perpendicular to the optical axis of the image device;
the region description information comprises the size descriptor of the imaging;
determining the spatial position comprises:
determining the distance from the target object to the plane perpendicular to the optical axis of the image device, using a preset algorithm, the second relation parameter and the size descriptor;
obtaining the region description information comprises:
presetting a two-dimensional coordinate system capable of calibrating the position of each point in the image, and measuring the length Δl of the imaging on this two-dimensional coordinate system in a specified dimension direction;
determining the distance from the target object to the plane perpendicular to the optical axis of the image device comprises:
calculating the distance using a preset calculation formula;
the coordinate value, in the specified dimension direction, of a coordinate calibrated by the two-dimensional coordinate system represents the number of pixels in the specified dimension direction; or the coordinate value, in the specified dimension direction, of a coordinate calibrated by the two-dimensional coordinate system represents the ratio of the number of pixels in the specified dimension direction to the resolution of the image device in the specified dimension direction.
11. The apparatus according to claim 10, characterized in that the second relation parameter is preset by:
measuring a reference distance from a preset reference object to the plane perpendicular to the optical axis of the image device;
measuring a third length, in an assigned direction, of the projection of the reference object onto a reference plane parallel to the plane in which the image lies;
calculating the second relation parameter using a preset calculation formula.
CN200810247497.1A 2008-12-31 2008-12-31 Position confirming method and apparatus Active CN101464132B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN200810247497.1A CN101464132B (en) 2008-12-31 2008-12-31 Position confirming method and apparatus

Publications (2)

Publication Number Publication Date
CN101464132A CN101464132A (en) 2009-06-24
CN101464132B true CN101464132B (en) 2014-09-10

Family

ID=40804822

Family Applications (1)

Application Number Title Priority Date Filing Date
CN200810247497.1A Active CN101464132B (en) 2008-12-31 2008-12-31 Position confirming method and apparatus

Country Status (1)

Country Link
CN (1) CN101464132B (en)

Families Citing this family (14)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102022979A (en) * 2009-09-21 2011-04-20 鸿富锦精密工业(深圳)有限公司 Three-dimensional optical sensing system
CN101876535B (en) * 2009-12-02 2015-11-25 北京中星微电子有限公司 A kind of height measurement method, device and supervisory system
US8407111B2 (en) * 2011-03-31 2013-03-26 General Electric Company Method, system and computer program product for correlating information and location
CN105588543B (en) * 2014-10-22 2019-10-18 中兴通讯股份有限公司 A kind of method, apparatus and positioning system for realizing positioning based on camera
CN104267203B (en) * 2014-10-30 2016-09-07 京东方科技集团股份有限公司 The method of testing of a kind of sample and device
CN105635555B (en) * 2014-11-07 2020-12-29 青岛海尔智能技术研发有限公司 Camera focusing control method, camera shooting device and wearable intelligent terminal
CN104776832B (en) * 2015-04-16 2017-02-22 浪潮软件集团有限公司 Method, set top box and system for positioning objects in space
CN105007396A (en) * 2015-08-14 2015-10-28 山东诚海电子科技有限公司 Positioning method and positioning device for classroom teaching
CN106949830A (en) * 2016-06-24 2017-07-14 广州市九州旗建筑科技有限公司 The measuring technology and its computational methods of scale built in a kind of imaging system and application
CN107478155A (en) * 2017-08-24 2017-12-15 苏州光照精密仪器有限公司 Product inspection method, apparatus and system
CN109961455B (en) 2017-12-22 2022-03-04 杭州萤石软件有限公司 Target detection method and device
CN109671190B (en) * 2018-11-27 2021-04-13 杭州天翼智慧城市科技有限公司 Multi-channel gate management method and system based on face recognition
CN111161339B (en) * 2019-11-18 2020-11-27 珠海随变科技有限公司 Distance measuring method, device, equipment and computer readable medium
CN111813984B (en) * 2020-06-23 2022-09-30 北京邮电大学 Method and device for realizing indoor positioning by using homography matrix and electronic equipment

Patent Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN1404016A (en) * 2002-10-18 2003-03-19 清华大学 Establishing method of human face 3D model by fusing multiple-visual angle and multiple-thread 2D information
CN1635545A (en) * 2003-12-30 2005-07-06 上海科技馆 Method and apparatus for changing human face image
CN101057257A (en) * 2004-11-12 2007-10-17 欧姆龙株式会社 Face feature point detector and feature point detector

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
JP Laid-Open Patent Publication No. 11-250267 A, 1999-09-17
JP Laid-Open Patent Publication No. 2005-122479 A, 2005-05-12

Also Published As

Publication number Publication date
CN101464132A (en) 2009-06-24

Similar Documents

Publication Publication Date Title
CN101464132B (en) Position confirming method and apparatus
CN110285793B (en) Intelligent vehicle track measuring method based on binocular stereo vision system
US10260862B2 (en) Pose estimation using sensors
KR102016636B1 (en) Calibration apparatus and method of camera and rader
US10909395B2 (en) Object detection apparatus
CN101408422B (en) Traffic accident on-site mapper based on binocular tridimensional all-directional vision
CN105043350A (en) Binocular vision measuring method
CN108733039A (en) The method and apparatus of navigator fix in a kind of robot chamber
CN103278139A (en) Variable-focus monocular and binocular vision sensing device
CN110334678A (en) A kind of pedestrian detection method of view-based access control model fusion
CN108279677B (en) Rail robot detection method based on binocular vision sensor
CN108107462A (en) The traffic sign bar gesture monitoring device and method that RTK is combined with high speed camera
CN110146030A (en) Side slope surface DEFORMATION MONITORING SYSTEM and method based on gridiron pattern notation
EP2476999B1 (en) Method for measuring displacement, device for measuring displacement, and program for measuring displacement
CN111996883B (en) Method for detecting width of road surface
CN101487702A (en) Binocular vision based traffic accident on-site photogrammetric survey method
JP3710548B2 (en) Vehicle detection device
JP2018036769A (en) Image processing apparatus, image processing method, and program for image processing
KR101255461B1 (en) Position Measuring Method for street facility
CN106169076A (en) A kind of angle license plate image storehouse based on perspective transform building method
CN103083089B (en) Virtual scale method and system of digital stereo-micrography system
CN111145262A (en) Vehicle-mounted monocular calibration method
CN105354828B (en) Read and write intelligent identification and the application thereof of reading matter three-dimensional coordinate in scene
CN111145260A (en) Vehicle-mounted binocular calibration method
CN111256651B (en) Week vehicle distance measuring method and device based on monocular vehicle-mounted camera

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
C14 Grant of patent or utility model
GR01 Patent grant
TR01 Transfer of patent right

Effective date of registration: 20180124

Address after: Room 105-23898, No. 6 Baohua Road, Hengqin, Zhuhai, Guangdong 519031 (centralized office area)

Patentee after: Zhongxing Technology Co., Ltd.

Address before: 15th Floor, Nanjing Ning Building, No. 35 Xueyuan Road, Haidian District, Beijing 100083

Patentee before: Beijing Vimicro Corporation

TR01 Transfer of patent right
CP01 Change in the name or title of a patent holder

Address after: Room 105-23898, No. 6 Baohua Road, Hengqin New District, Zhuhai, Guangdong 519031 (centralized office area)

Patentee after: Mid Star Technology Limited by Share Ltd

Address before: Room 105-23898, No. 6 Baohua Road, Hengqin New District, Zhuhai, Guangdong 519031 (centralized office area)

Patentee before: Zhongxing Technology Co., Ltd.

CP01 Change in the name or title of a patent holder