CN113034605B - Target object position determining method and device, electronic equipment and storage medium - Google Patents

Target object position determining method and device, electronic equipment and storage medium

Info

Publication number
CN113034605B
CN113034605B (application CN201911360878.5A)
Authority
CN
China
Prior art keywords
target object
determining
imaging plane
image
camera
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201911360878.5A
Other languages
Chinese (zh)
Other versions
CN113034605A (en)
Inventor
王洪伟
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Zhejiang Geely Holding Group Co Ltd
Ningbo Geely Automobile Research and Development Co Ltd
Original Assignee
Zhejiang Geely Holding Group Co Ltd
Ningbo Geely Automobile Research and Development Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Zhejiang Geely Holding Group Co Ltd, Ningbo Geely Automobile Research and Development Co Ltd filed Critical Zhejiang Geely Holding Group Co Ltd
Priority to CN201911360878.5A
Publication of CN113034605A
Application granted
Publication of CN113034605B
Legal status: Active

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 Image analysis
    • G06T7/80 Analysis of captured images to determine intrinsic or extrinsic camera parameters, i.e. camera calibration

Abstract

The application relates to a method, an apparatus, an electronic device and a storage medium for determining the position of a target object. An image shot by a camera is acquired, and the parameters of the camera are determined. The type of the target object and the feature point information of the target object are determined based on the image and the camera parameters; the feature point information includes the incident light angle of each feature point and the de-distorted focal length. Coordinate information of the feature points on a theoretical imaging plane is determined from the incident light angle and the de-distorted focal length; the theoretical imaging plane is the image after de-distortion. The height of the target object in the theoretical imaging plane is determined from the coordinate information of the feature points on that plane, and the actual height of the target object is determined from its type. The position of the target object is then determined from its height in the theoretical imaging plane, its actual height and the de-distorted focal length. In this way, the accuracy of detecting a target object with a monocular camera can be improved, with low complexity and high real-time performance.

Description

Target object position determining method and device, electronic equipment and storage medium
Technical Field
The present disclosure relates to the field of vehicle technologies, and in particular, to a method and an apparatus for determining a position of a target object, an electronic device, and a storage medium.
Background
Visual obstacle detection is a very important technology in autonomous driving: visually detecting the position of an obstacle in three-dimensional space improves driving safety. To achieve 360-degree field-of-view coverage around the vehicle body, multiple cameras are arranged, including long-focus, short-focus, wide-angle and fisheye cameras. The real world is a three-dimensional space, but imaging with a camera yields a two-dimensional image in which depth information is lost, so the position of an obstacle is commonly estimated by one of two methods: binocular ranging or monocular ranging. Binocular ranging, based on the parallax principle, uses imaging equipment to acquire two images of the measured object from different positions and recovers the object's three-dimensional geometric information by computing the positional deviation between corresponding image points. Monocular ranging mainly estimates the distance of an obstacle using the geometry of pinhole imaging and the internal parameters of the camera. As shown in fig. 1, the pinhole imaging model is the most common model, where O is the origin of the camera coordinate system, the Z axis is the principal axis of the camera, and O1 is the point at which the principal axis intersects the imaging plane. A point Q(X, Y, Z) in the world coordinate system is imaged at q(x, y, f) on the imaging plane. Given the actual height of the obstacle, its height in the image and the camera parameters, the distance of the obstacle can be estimated by the similar-triangle theorem.
The pinhole-model ranging scheme is an idealized model: it establishes a linear model from the mapping between imaging points and target points, so it estimates distance well for cameras with small lens distortion, such as long-focus cameras. However, because the pinhole-model scheme does not account for the influence of lens distortion, no linear model can be obtained for cameras such as wide-angle or fisheye cameras, and the error when estimating the distance of an obstacle with the pinhole-model scheme is therefore relatively large.
Disclosure of Invention
The embodiment of the application provides a method, a device, electronic equipment and a storage medium for determining the position of a target object, which can improve the precision of detecting the target object by a monocular camera, and have low complexity and high real-time performance.
In one aspect, an embodiment of the present application provides a method for determining a location of a target object, including:
acquiring an image shot by a camera;
determining parameters of a camera;
determining the type of the target object and the feature point information of the target object based on the image and the camera parameters; the feature point information comprises the incident light angle of the feature point and the de-distorted focal length;
determining coordinate information of the feature point on a theoretical imaging plane according to the incident light angle and the de-distorted focal length; the theoretical imaging plane is the image after de-distortion;
determining the height of the target object in the theoretical imaging plane according to the coordinate information of the feature point on the theoretical imaging plane;
determining the actual height of the target object according to the type of the target object;
determining the position of the target object according to the height of the target object in the theoretical imaging plane, the actual height and the de-distorted focal length.
In another aspect, an embodiment of the present application provides a location determining apparatus for a target object, including:
the acquisition module is used for acquiring an image shot by the camera;
a first determining module for determining parameters of the camera;
a second determining module for determining the type of the target object and the feature point information of the target object based on the image and the camera parameters; the feature point information comprises the incident light angle of the feature point and the de-distorted focal length;
a third determining module for determining coordinate information of the feature point on the theoretical imaging plane according to the incident light angle and the de-distorted focal length; the theoretical imaging plane is the image after de-distortion;
a fourth determining module, configured to determine a height of the target object in the theoretical imaging plane according to coordinate information of the feature point on the theoretical imaging plane;
a fifth determining module, configured to determine an actual height of the target object according to a type of the target object;
and a sixth determining module, configured to determine the position of the target object according to the height of the target object in the theoretical imaging plane, the actual height and the de-distorted focal length.
In another aspect, an embodiment of the present application provides an electronic device, where the device includes a processor and a memory, and at least one instruction or at least one program is stored in the memory, where the at least one instruction or at least one program is loaded and executed by the processor to implement the above method for determining the position of a target object.
In another aspect, an embodiment of the present application provides a computer storage medium, where at least one instruction or at least one program is stored, where the at least one instruction or the at least one program is loaded and executed by a processor to implement the method for determining a position of a target object.
The method, the device, the electronic equipment and the storage medium for determining the position of the target object have the following beneficial effects:
an image shot by a camera is acquired; the parameters of the camera are determined; the type of the target object and the feature point information of the target object are determined based on the image and the camera parameters, the feature point information comprising the incident light angle of the feature point and the de-distorted focal length; coordinate information of the feature points on a theoretical imaging plane is determined according to the incident light angle and the de-distorted focal length, the theoretical imaging plane being the image after de-distortion; the height of the target object in the theoretical imaging plane is determined according to the coordinate information of the feature points on the theoretical imaging plane; the actual height of the target object is determined according to the type of the target object; and the position of the target object is determined according to the height of the target object in the theoretical imaging plane, the actual height and the de-distorted focal length. In this way, the accuracy of detecting a target object with a monocular camera can be improved, with low complexity and high real-time performance.
Drawings
In order to more clearly illustrate the technical solutions of the embodiments of the present application, the drawings that are needed in the description of the embodiments will be briefly introduced below, and it is obvious that the drawings in the following description are only some embodiments of the present application, and that other drawings may be obtained according to these drawings without inventive effort for a person skilled in the art.
FIG. 1 is a schematic illustration of a pinhole imaging model provided in an embodiment of the present application;
fig. 2 is a schematic diagram of an application scenario provided in an embodiment of the present application;
fig. 3 is a flowchart of a method for determining a position of a target object according to an embodiment of the present application;
FIG. 4 is a schematic diagram of a bounding box of a target object in an image according to an embodiment of the present application;
fig. 5 is a schematic diagram of a fisheye camera model provided in an embodiment of the present application;
FIG. 6 is a schematic view of a target object projected onto a theoretical imaging plane according to an embodiment of the present application;
fig. 7 is a schematic structural diagram of a position determining device for a target object according to an embodiment of the present application.
Detailed Description
The following description of the embodiments of the present application will be made clearly and fully with reference to the accompanying drawings, in which it is evident that the embodiments described are only some, but not all, of the embodiments of the present application. All other embodiments, which can be made by one of ordinary skill in the art without undue burden from the present disclosure, are within the scope of the present application based on the embodiments herein.
It should be noted that the terms "first," "second," and the like in the description and claims of the present application and the above figures are used for distinguishing between similar objects and not necessarily for describing a particular sequential or chronological order. It is to be understood that the data so used may be interchanged where appropriate such that embodiments of the present application described herein may be implemented in sequences other than those illustrated or otherwise described herein. Furthermore, the terms "comprises," "comprising," and "having," and any variations thereof, are intended to cover a non-exclusive inclusion, such that a process, method, system, article, or server that comprises a list of steps or elements is not necessarily limited to those steps or elements expressly listed or inherent to such process, method, article, or apparatus, but may include other steps or elements not expressly listed or inherent to such process, method, article, or apparatus.
Referring to fig. 2, fig. 2 is a schematic diagram of an application scenario provided in an embodiment of the present application, including a vehicle 201 and a target object 202, where the vehicle 201 is configured with a plurality of cameras 2011. The vehicle 201 determines the position of the target object 202 via the in-vehicle system calculation based on the image captured by the camera 2011.
The in-vehicle system of the vehicle 201 acquires an image captured by the camera 2011 and determines parameters of the camera 2011. The in-vehicle system of the vehicle 201 determines the type of the target object 202 and feature point information of the target object 202, which includes the incident light angle of the feature point and the focal length after de-distortion, based on the image and the parameters of the camera 2011. The on-vehicle system of the vehicle 201 determines coordinate information of the feature points on a theoretical imaging plane, which is an image after image distortion removal, based on the angle of incident light and the focal length after distortion removal. The in-vehicle system of the vehicle 201 determines the height of the target object 202 in the theoretical imaging plane based on the coordinate information of the feature points on the theoretical imaging plane, and determines the actual height of the target object 202 based on the type of the target object 202. The on-board system of the vehicle 201 determines the position of the target object 202 based on the height of the target object 202 in the theoretical imaging plane, the actual height, and the de-distorted focal length.
Alternatively, the camera 2011 may be any one of a fisheye camera, a wide-angle camera, a short-focus camera and a long-focus camera.
A specific embodiment of the method for determining the position of a target object according to the present application is described below. Fig. 3 is a schematic flow chart of a method for determining the position of a target object according to an embodiment of the present application. The present specification provides the method operation steps of the embodiment or the flowchart, but more or fewer operation steps may be included based on conventional or non-inventive labor. The order of steps recited in the embodiment is merely one of many possible execution orders and does not represent a unique order of execution. When implemented in a real system or server product, the methods illustrated in the embodiments or figures may be executed sequentially or in parallel (e.g., in a parallel-processor or multithreaded environment). As shown in fig. 3, the method may include:
s301: and acquiring an image shot by the camera.
S303: parameters of the camera are determined.
The execution subject of the embodiments of the present application may be the on-board system of a vehicle. The vehicle is equipped with a camera; the on-board system acquires the images captured by the camera and determines the parameters of the camera. The camera parameters include the normalized focal length f, the image center position (u0, v0) and the radial distortion parameters k1, k2.
In an alternative embodiment, the normalized focal length f may be determined according to formula (1):
f = (fx + fy) / 2 ...... (1)
where fx represents the normalized focal length along the x axis of the camera coordinate system and fy represents the normalized focal length along the y axis of the camera coordinate system.
Optionally, the camera parameters fx, fy, the image center position (u0, v0) and the radial distortion parameters k1, k2 can be measured by calibration.
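As a minimal illustrative sketch (not part of the patent), the calibrated parameters and the normalized focal length of formula (1) can be bundled together as follows; all names and the example values are hypothetical:

```python
from dataclasses import dataclass

@dataclass
class CameraParams:
    fx: float   # normalized focal length along the x axis (from calibration)
    fy: float   # normalized focal length along the y axis (from calibration)
    u0: float   # image center, x
    v0: float   # image center, y
    k1: float   # radial distortion parameter
    k2: float   # radial distortion parameter

    @property
    def f(self) -> float:
        # formula (1): f = (fx + fy) / 2
        return (self.fx + self.fy) / 2

cam = CameraParams(fx=320.0, fy=322.0, u0=640.0, v0=360.0, k1=-0.01, k2=0.001)
print(cam.f)  # 321.0
```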
S305: determining the type of the target object and the feature point information of the target object based on the image and the camera parameters; the feature point information includes the incident light angle of the feature point and the de-distorted focal length.
In the embodiment of the application, the on-board system detects the image with a target detection algorithm to determine the type and bounding box of the target object, determines the feature points of the target object from the bounding box, and then calculates the incident light angle of each feature point and the de-distorted focal length from the camera's normalized focal length f, the image center position (u0, v0) and the radial distortion parameters k1, k2.
In an alternative embodiment of determining the type of the target object and the feature point information of the target object based on the image and the camera parameters, the on-board system may determine the type and bounding box of the target object using the YOLO algorithm. Referring to fig. 4, fig. 4 is a schematic diagram of the bounding box of a target object in an image according to an embodiment of the present application, where the image plane coordinate system is xoy. The on-board system first determines the top-left corner P1(x1, y1) and the bottom-right corner P2(x2, y2) of the bounding box, then selects the two endpoints Pt, Pb of the vertical center line of the bounding box as feature points and calculates their coordinates according to formula (2):
xt = (x1 + x2) / 2
yt = y1
xb = (x1 + x2) / 2
yb = y2 ...... (2)
where xt, yt are the horizontal and vertical coordinates of feature point Pt, and xb, yb are the horizontal and vertical coordinates of feature point Pb.
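Formula (2) can be sketched in a few lines of Python (an illustration, not the patent's implementation; the function name and sample bounding box are hypothetical):

```python
def feature_points(x1, y1, x2, y2):
    """Formula (2): the two endpoints Pt (top) and Pb (bottom) of the
    vertical center line of the bounding box P1(x1, y1)-P2(x2, y2)."""
    xt, yt = (x1 + x2) / 2, y1
    xb, yb = (x1 + x2) / 2, y2
    return (xt, yt), (xb, yb)

pt, pb = feature_points(100, 50, 200, 250)
print(pt, pb)  # (150.0, 50) (150.0, 250)
```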
Next, referring to fig. 5, fig. 5 is a schematic diagram of a fisheye camera model according to an embodiment of the present application. Based on the fisheye camera model, the on-board system determines the distance from each feature point Pt, Pb to the image center according to the feature point's coordinate information, the image center position (u0, v0) and the normalized focal length f. Specifically, the on-board system may determine the distance from the feature point to the image center according to formula (3):
dxi = (xi - u0) / f
dyi = (yi - v0) / f ...... (3)
where (xi, yi) is the coordinate information of the feature point, i = t, b.
Next, the on-board system determines the incident light angle of each feature point from its distances dxi, dyi to the image center and the radial distortion parameters k1, k2. Specifically, the on-board system may calculate the incident light angle according to formula (4):
θd = sqrt(dxi² + dyi²)
θi = θd / (1 + k1·θd² + k2·θd⁴) ...... (4)
where θi is the incident light angle of the feature point, i = t, b.
Next, the on-board system determines the de-distorted focal length of each feature point from its distance to the image center and the normalized focal length f. Specifically, the on-board system may calculate the de-distorted focal length according to formula (5):
fi = sqrt(dxi² + dyi² + f²) ...... (5)
where fi is the de-distorted focal length of the feature point, i = t, b.
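Formulas (3) to (5) form one per-feature-point computation, which can be sketched as follows (an illustration only; the function name and sample values are hypothetical):

```python
import math

def feature_point_info(x, y, f, u0, v0, k1, k2):
    """Formulas (3)-(5): normalized offset from the image center, incident
    light angle, and de-distorted focal length of one feature point."""
    dx = (x - u0) / f                                   # formula (3)
    dy = (y - v0) / f
    theta_d = math.sqrt(dx * dx + dy * dy)              # formula (4)
    theta_i = theta_d / (1 + k1 * theta_d**2 + k2 * theta_d**4)
    f_i = math.sqrt(dx * dx + dy * dy + f * f)          # formula (5)
    return dx, dy, theta_d, theta_i, f_i

# A point at the image center has zero offset and zero incident angle,
# and its de-distorted focal length reduces to f:
print(feature_point_info(640.0, 360.0, 320.0, 640.0, 360.0, -0.01, 0.001))
# (0.0, 0.0, 0.0, 0.0, 320.0)
```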
S307: determining coordinate information of the feature points on a theoretical imaging plane according to the angle of the incident light and the focal length after de-distortion; the theoretical imaging plane is the image after the image is de-distorted.
S309: and determining the height of the target object in the theoretical imaging plane according to the coordinate information of the characteristic points on the theoretical imaging plane.
In the embodiment of the present application, the number of feature points is 2. The on-board system determines the coordinate information of the feature points on the theoretical imaging plane, i.e., their coordinates after de-distortion, from the incident light angles of the feature points and the de-distorted focal lengths. It then calculates the distance between the two feature points using the two-point distance formula and takes this distance as the height of the target object in the theoretical imaging plane.
The following description proceeds based on the alternative embodiments described above. Referring to fig. 6, fig. 6 is a schematic diagram of a target object projected onto a theoretical imaging plane according to an embodiment of the present application. In an alternative embodiment, the on-board system may calculate the coordinate information of the feature points on the theoretical imaging plane according to formula (6):
x′i = dxi · scale · fi
y′i = dyi · scale · fi ...... (6)
where scale = tan(θi) / θd, and (x′i, y′i) are the coordinates of the feature point on the theoretical imaging plane, i = t, b.
After the coordinates P′t, P′b of the feature points on the theoretical imaging plane are obtained, the distance between P′t and P′b is calculated using the two-point distance formula, and this distance is taken as the height H1 of the target object in the theoretical imaging plane.
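A minimal sketch of the projection of formula (6) and of the height H1 (illustrative only; it assumes the feature point is off the image center so that θd ≠ 0, and all names are hypothetical):

```python
import math

def project_point(dx, dy, theta_d, theta_i, f_i):
    """Formula (6): feature point coordinates on the theoretical imaging
    plane, with scale = tan(theta_i) / theta_d (requires theta_d != 0)."""
    scale = math.tan(theta_i) / theta_d
    return dx * scale * f_i, dy * scale * f_i

def height_in_plane(p_top, p_bottom):
    """Height H1: two-point distance between the projected feature points."""
    return math.hypot(p_top[0] - p_bottom[0], p_top[1] - p_bottom[1])

print(height_in_plane((0.0, 10.0), (0.0, -20.0)))  # 30.0
```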
S311: the actual height of the target object is determined according to the type of the target object.
S313: the position of the target object is determined according to the height of the target object in the theoretical imaging plane, the actual height and the focus after de-distortion.
In the embodiment of the application, the on-board system estimates the actual height of the target object according to its type, and then determines the distance from the target object to the camera using the similar-triangle theorem, based on the height of the target object in the theoretical imaging plane, the actual height and the de-distorted focal length. Next, the position of the target object relative to the camera, i.e., its coordinate information in the camera coordinate system, is determined from the distance of the target object to the camera and the incident light angle.
The following description proceeds based on the alternative embodiments described above. In an alternative embodiment of determining the position of the target object based on the height of the target object in the theoretical imaging plane, the actual height and the de-distorted focal length, as shown in fig. 6, suppose the on-board system determines that the type of the target object is a car and estimates its actual height as H2. The distance of the car from the camera is then determined according to formula (7):
H1 / H2 = ft / d1 = fb / d2 ...... (7)
where d1, d2 represent the distance from the target object to the camera.
Optionally, the smaller of d1 and d2 is taken as the distance from the target object to the camera, i.e., the distance of the target object along the Z axis of the camera coordinate system.
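The similar-triangle step of formula (7) can be sketched as follows (illustrative only; the function name and sample values are hypothetical):

```python
def distance_to_camera(h1, h2, f_t, f_b):
    """Formula (7): H1/H2 = ft/d1 = fb/d2, so d1 = ft*H2/H1 and
    d2 = fb*H2/H1; the smaller value is taken as the distance along
    the camera Z axis."""
    d1 = f_t * h2 / h1
    d2 = f_b * h2 / h1
    return min(d1, d2)

# height in plane 30.0, actual height 1.5 m, de-distorted focal lengths 320/340
print(distance_to_camera(30.0, 1.5, 320.0, 340.0))  # 16.0
```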
Next, the coordinate information of the target object in the camera coordinate system is calculated according to formula (8):
X = sin(θi) · dn
Y = cos(θi) · dn ...... (8)
where dn is the distance of the target object along the Z axis of the camera coordinate system, n = 1 or 2; θi is the incident light angle of the feature point; and X, Y denote the distances of the target object along the X and Y axes of the camera coordinate system, respectively.
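Formula (8) is a direct trigonometric decomposition (sketch only; names hypothetical):

```python
import math

def camera_xy(theta_i, d_n):
    """Formula (8): X = sin(theta_i) * dn, Y = cos(theta_i) * dn."""
    return math.sin(theta_i) * d_n, math.cos(theta_i) * d_n

# For an incident angle of 0 the target lies straight ahead:
print(camera_xy(0.0, 16.0))  # (0.0, 16.0)
```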
In summary, the method provided by the embodiments of the application can be applied to the on-board system of a vehicle to determine the position of a target object based on an on-board camera. The on-board camera can be any one of a fisheye camera, a wide-angle camera, a short-focus camera and a long-focus camera, giving the method a wide application range, a simple algorithm and good real-time performance.
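Putting formulas (2) through (8) together, steps S301 to S313 can be sketched end to end as follows. This is an illustrative reconstruction under stated assumptions, not the patent's implementation: the function name, the sample values, and the choice of using the incident angle of the feature point with the smaller distance are all hypothetical.

```python
import math

def estimate_position(bbox, cam, actual_height):
    """End-to-end sketch of S301-S313 for one detected object.
    bbox = (x1, y1, x2, y2) in pixels; cam = (f, u0, v0, k1, k2).
    Returns (X, Y, Z) in the camera coordinate system."""
    f, u0, v0, k1, k2 = cam
    x1, y1, x2, y2 = bbox
    # formula (2): endpoints of the bounding box's vertical center line
    points = [((x1 + x2) / 2, y1), ((x1 + x2) / 2, y2)]
    projected, thetas, focals = [], [], []
    for x, y in points:
        dx, dy = (x - u0) / f, (y - v0) / f                          # (3)
        theta_d = math.hypot(dx, dy)
        theta_i = theta_d / (1 + k1 * theta_d**2 + k2 * theta_d**4)  # (4)
        f_i = math.sqrt(dx * dx + dy * dy + f * f)                   # (5)
        scale = math.tan(theta_i) / theta_d if theta_d else 1.0
        projected.append((dx * scale * f_i, dy * scale * f_i))       # (6)
        thetas.append(theta_i)
        focals.append(f_i)
    h1 = math.hypot(projected[0][0] - projected[1][0],
                    projected[0][1] - projected[1][1])
    d = [f_i * actual_height / h1 for f_i in focals]                 # (7)
    n = 0 if d[0] <= d[1] else 1        # take the smaller distance as Z
    z = d[n]
    x_cam = math.sin(thetas[n]) * z                                  # (8)
    y_cam = math.cos(thetas[n]) * z
    return x_cam, y_cam, z

pos = estimate_position((600, 300, 680, 420),
                        (320.0, 640.0, 360.0, -0.01, 0.001), 1.5)
```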
The embodiment of the application also provides a device for determining the position of the target object, and fig. 7 is a schematic structural diagram of the device for determining the position of the target object, as shown in fig. 7, where the device includes:
an acquisition module 701, configured to acquire an image captured by a camera;
a first determining module 702, configured to determine parameters of the camera;
a second determining module 703, configured to determine the type of the target object and the feature point information of the target object based on the image and the camera parameters; the feature point information comprises the incident light angle of the feature point and the de-distorted focal length;
a third determining module 704, configured to determine coordinate information of the feature point on the theoretical imaging plane according to the incident light angle and the de-distorted focal length; the theoretical imaging plane is the image after de-distortion;
a fourth determining module 705, configured to determine a height of the target object in the theoretical imaging plane according to coordinate information of the feature point in the theoretical imaging plane;
a fifth determining module 706, configured to determine an actual height of the target object according to the type of the target object;
a sixth determining module 707, configured to determine the position of the target object according to the height of the target object in the theoretical imaging plane, the actual height and the de-distorted focal length.
The apparatus and method embodiments in the embodiments of the present application are based on the same application concept.
An embodiment of the application provides an electronic device, which comprises a processor and a memory, wherein at least one instruction or at least one program is stored in the memory and is loaded and executed by the processor to implement the above method for determining the position of a target object.
Embodiments of the present application provide a computer storage medium having at least one instruction or at least one program stored therein, the at least one instruction or the at least one program loaded and executed by a processor to implement a method for determining a position of a target object as described above.
Alternatively, in this embodiment, the storage medium may be located in at least one of a plurality of network servers of a computer network. Alternatively, in this embodiment, the storage medium may include, but is not limited to: a USB flash drive, a read-only memory (ROM), a random access memory (RAM), a removable hard disk, a magnetic disk, an optical disk, or other media capable of storing program code.
As can be seen from the above embodiments of the method, apparatus, electronic device and storage medium for determining the position of a target object provided by the application: an image shot by a camera is acquired; the parameters of the camera are determined; the type of the target object and the feature point information of the target object are determined based on the image and the camera parameters, the feature point information comprising the incident light angle of the feature point and the de-distorted focal length; coordinate information of the feature points on a theoretical imaging plane is determined according to the incident light angle and the de-distorted focal length, the theoretical imaging plane being the image after de-distortion; the height of the target object in the theoretical imaging plane is determined according to the coordinate information of the feature points on the theoretical imaging plane; the actual height of the target object is determined according to the type of the target object; and the position of the target object is determined according to the height of the target object in the theoretical imaging plane, the actual height and the de-distorted focal length. In this way, the accuracy of detecting a target object with a monocular camera can be improved, with low complexity and high real-time performance.
It should be noted that: the foregoing sequence of the embodiments of the present application is only for describing, and does not represent the advantages and disadvantages of the embodiments. And the foregoing description has been directed to specific embodiments of this specification. Other embodiments are within the scope of the following claims. In some cases, the actions or steps recited in the claims can be performed in a different order than in the embodiments and still achieve desirable results. In addition, the processes depicted in the accompanying figures do not necessarily require the particular order shown, or sequential order, to achieve desirable results. In some embodiments, multitasking and parallel processing are also possible or may be advantageous.
In this specification, each embodiment is described in a progressive manner, and identical and similar parts of each embodiment are all referred to each other, and each embodiment mainly describes differences from other embodiments. In particular, for the apparatus embodiments, since they are substantially similar to the method embodiments, the description is relatively simple, and reference is made to the description of the method embodiments in part.
It will be understood by those skilled in the art that all or part of the steps for implementing the above embodiments may be implemented by hardware, or by a program instructing relevant hardware; the program may be stored in a computer-readable storage medium, and the storage medium may be a read-only memory, a magnetic disk, an optical disk, or the like.
The foregoing description covers only preferred embodiments of the present application and is not intended to limit the present application; any modification, equivalent replacement, or improvement made within the spirit and principles of the present application shall fall within its scope of protection.

Claims (8)

1. A method for determining a position of a target object, comprising:
acquiring an image shot by a camera;
determining parameters of the camera, wherein the parameters of the camera comprise normalized focal length, image center position information and radial distortion parameters;
determining a type and bounding box of the target object based on the image;
determining coordinate information of the feature points based on the bounding box;
determining the distance from the feature point to the center of the image according to the coordinate information of the feature point, the position information of the center of the image and the normalized focal length;
determining an incident light angle of the feature point according to the distance from the feature point to the center of the image and the radial distortion parameter;
determining the de-distorted focal length of the feature point according to the distance from the feature point to the center of the image and the normalized focal length;
determining coordinate information of the feature point on a theoretical imaging plane according to the incident light angle and the de-distorted focal length, wherein the theoretical imaging plane is the plane of the image after de-distortion;
determining the height of the target object in the theoretical imaging plane according to the coordinate information of the feature point on the theoretical imaging plane;
determining the actual height of the target object according to the type of the target object; and
determining the position of the target object according to the height of the target object in the theoretical imaging plane, the actual height and the de-distorted focal length.
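The de-distortion steps recited in claim 1 can be illustrated with one common radial model — an equidistant fisheye polynomial, assumed here for concreteness; the claim does not fix the exact distortion polynomial, and all names below are illustrative:

```python
import math

def incident_angle(r_px: float, f_norm: float, k: list) -> float:
    """Recover the incident-light angle theta from the distorted radius r_px,
    assuming the equidistant fisheye model r = f * (theta + k1*theta^3 + k2*theta^5 + ...).
    Solved by fixed-point iteration (converges for mild distortion)."""
    theta = r_px / f_norm  # initial guess: no distortion
    for _ in range(20):
        theta = (r_px / f_norm) / (1 + sum(ki * theta ** (2 * (i + 1)) for i, ki in enumerate(k)))
    return theta

def to_theoretical_plane(u, v, cx, cy, f_norm, k):
    """Map a distorted feature point (u, v) onto the theoretical (pinhole,
    de-distorted) imaging plane, given the image centre (cx, cy)."""
    dx, dy = u - cx, v - cy
    r = math.hypot(dx, dy)                  # distance from feature point to image centre
    if r == 0:
        return u, v
    theta = incident_angle(r, f_norm, k)
    r_ud = f_norm * math.tan(theta)         # pinhole radius after de-distortion
    return cx + dx * r_ud / r, cy + dy * r_ud / r
```

With zero distortion coefficients the mapping reduces to the pure pinhole re-projection r_ud = f · tan(r / f), which is a useful sanity check.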
2. The method according to claim 1, wherein the number of feature points is 2;
the determining the height of the target object in the theoretical imaging plane according to the coordinate information of the feature points on the theoretical imaging plane comprises:
determining the distance between the two feature points according to the coordinate information of the two feature points on the theoretical imaging plane; and
determining the distance between the two feature points as the height of the target object in the theoretical imaging plane.
3. The method according to claim 1, wherein the determining the position of the target object according to the height of the target object in the theoretical imaging plane, the actual height and the de-distorted focal length comprises:
determining the distance from the target object to the camera based on the similar-triangle theorem according to the height of the target object in the theoretical imaging plane, the actual height and the de-distorted focal length; and
determining the position of the target object relative to the camera according to the distance from the target object to the camera and the incident light angle.
4. A position determining apparatus of a target object, comprising:
the acquisition module is used for acquiring an image shot by the camera;
the first determining module is used for determining parameters of the camera, wherein the parameters of the camera comprise normalized focal length, image center position information and radial distortion parameters;
a second determining module for determining a type and a bounding box of the target object based on the image; determining coordinate information of the feature point based on the bounding box; determining the distance from the feature point to the center of the image according to the coordinate information of the feature point, the position information of the center of the image and the normalized focal length; determining an incident light angle of the feature point according to the distance from the feature point to the center of the image and the radial distortion parameter; and determining the de-distorted focal length of the feature point according to the distance from the feature point to the center of the image and the normalized focal length;
a third determining module, configured to determine coordinate information of the feature point on a theoretical imaging plane according to the incident light angle and the de-distorted focal length, wherein the theoretical imaging plane is the plane of the image after de-distortion;
a fourth determining module, configured to determine a height of the target object in a theoretical imaging plane according to coordinate information of the feature point on the theoretical imaging plane;
a fifth determining module, configured to determine an actual height of the target object according to a type of the target object;
and a sixth determining module, configured to determine the position of the target object according to the height of the target object in the theoretical imaging plane, the actual height and the de-distorted focal length.
5. The apparatus of claim 4, wherein the number of feature points is 2;
the fourth determining module is further configured to determine the distance between the two feature points according to the coordinate information of the two feature points on the theoretical imaging plane; and determine the distance between the two feature points as the height of the target object in the theoretical imaging plane.
6. The apparatus according to claim 4, wherein
the sixth determining module is further configured to determine, based on the similar-triangle theorem, the distance from the target object to the camera according to the height of the target object in the theoretical imaging plane, the actual height and the de-distorted focal length; and determine the position of the target object relative to the camera according to the distance from the target object to the camera and the incident light angle.
7. An electronic device comprising a processor and a memory, wherein the memory has stored therein at least one instruction or at least one program, the at least one instruction or the at least one program being loaded by the processor and performing the method of determining the position of a target object according to any of claims 1-3.
8. A computer storage medium having stored therein at least one instruction or at least one program loaded and executed by a processor to implement a method of determining the position of a target object according to any of claims 1-3.
CN201911360878.5A 2019-12-25 2019-12-25 Target object position determining method and device, electronic equipment and storage medium Active CN113034605B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201911360878.5A CN113034605B (en) 2019-12-25 2019-12-25 Target object position determining method and device, electronic equipment and storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201911360878.5A CN113034605B (en) 2019-12-25 2019-12-25 Target object position determining method and device, electronic equipment and storage medium

Publications (2)

Publication Number Publication Date
CN113034605A CN113034605A (en) 2021-06-25
CN113034605B true CN113034605B (en) 2024-04-16

Family

ID=76458480

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201911360878.5A Active CN113034605B (en) 2019-12-25 2019-12-25 Target object position determining method and device, electronic equipment and storage medium

Country Status (1)

Country Link
CN (1) CN113034605B (en)

Families Citing this family (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2023108385A1 (en) * 2021-12-14 2023-06-22 合肥英睿系统技术有限公司 Target object positioning method and apparatus, and device and computer-readable storage medium
CN115802159B (en) * 2023-02-01 2023-04-28 北京蓝色星际科技股份有限公司 Information display method and device, electronic equipment and storage medium
CN117111046B (en) * 2023-10-25 2024-01-12 深圳市安思疆科技有限公司 Distortion correction method, system, device and computer readable storage medium

Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101577002A (en) * 2009-06-16 2009-11-11 天津理工大学 Calibration method of fish-eye lens imaging system applied to target detection
WO2019000945A1 (en) * 2017-06-28 2019-01-03 京东方科技集团股份有限公司 On-board camera-based distance measurement method and apparatus, storage medium, and electronic device

Patent Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101577002A (en) * 2009-06-16 2009-11-11 天津理工大学 Calibration method of fish-eye lens imaging system applied to target detection
WO2019000945A1 (en) * 2017-06-28 2019-01-03 京东方科技集团股份有限公司 On-board camera-based distance measurement method and apparatus, storage medium, and electronic device

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
Target orientation measurement method based on computer vision; Sun Shaojie; Yang Xiaodong; Fire Control &amp; Command Control (03); full text *

Also Published As

Publication number Publication date
CN113034605A (en) 2021-06-25

Similar Documents

Publication Publication Date Title
CN110057352B (en) Camera attitude angle determination method and device
CN113034605B (en) Target object position determining method and device, electronic equipment and storage medium
CN110377015B (en) Robot positioning method and robot positioning device
JP4803449B2 (en) On-vehicle camera calibration device, calibration method, and vehicle production method using this calibration method
JP6767998B2 (en) Estimating external parameters of the camera from the lines of the image
KR101212419B1 (en) Calibration apparatus
JP4803450B2 (en) On-vehicle camera calibration device and vehicle production method using the device
JP2874710B2 (en) 3D position measuring device
JP5011528B2 (en) 3D distance measurement sensor and 3D distance measurement method
CN111274943B (en) Detection method, detection device, electronic equipment and storage medium
JP6328327B2 (en) Image processing apparatus and image processing method
JP4132068B2 (en) Image processing apparatus, three-dimensional measuring apparatus, and program for image processing apparatus
US10110822B2 (en) Method for tracking at least one object and method for replacing at least one object by a virtual object in a moving image signal recorded by a camera
JP5917767B2 (en) Stereoscopic correction method
CN111572633B (en) Steering angle detection method, device and system
CN112348890B (en) Space positioning method, device and computer readable storage medium
CN104949657A (en) Object detection device, object detection method, and computer readable storage medium comprising objection detection program
CN110825079A (en) Map construction method and device
KR101735325B1 (en) Apparatus for registration of cloud points
WO2019012004A1 (en) Method for determining a spatial uncertainty in images of an environmental area of a motor vehicle, driver assistance system as well as motor vehicle
Pollok et al. A visual SLAM-based approach for calibration of distributed camera networks
Schönbein et al. Environmental Perception for Intelligent Vehicles Using Catadioptric Stereo Vision Systems.
JPH07218251A (en) Method and device for measuring stereo image
CN111243021A (en) Vehicle-mounted visual positioning method and system based on multiple combined cameras and storage medium
CN110992291A (en) Distance measuring method, system and storage medium based on trinocular vision

Legal Events

Date Code Title Description
PB01 Publication
PB01 Publication
SE01 Entry into force of request for substantive examination
SE01 Entry into force of request for substantive examination
GR01 Patent grant