CN111882616B - Method, device and system for correcting target detection result, electronic equipment and storage medium - Google Patents


Info

Publication number
CN111882616B
CN111882616B (application CN202011036772.2A)
Authority
CN
China
Prior art keywords
image
processed
position information
standard deviation
luminosity
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202011036772.2A
Other languages
Chinese (zh)
Other versions
CN111882616A (en)
Inventor
柏道齐
叶浩
Current Assignee
AVL List Technical Center Shanghai Co Ltd
Original Assignee
AVL List Technical Center Shanghai Co Ltd
Priority date
Filing date
Publication date
Application filed by AVL List Technical Center Shanghai Co Ltd filed Critical AVL List Technical Center Shanghai Co Ltd
Priority to CN202011036772.2A
Publication of CN111882616A
Application granted
Publication of CN111882616B
Legal status: Active

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 7/00 Image analysis
    • G06T 7/70 Determining position or orientation of objects or cameras
    • G06T 7/77 Determining position or orientation of objects or cameras using statistical methods
    • G06T 2207/00 Indexing scheme for image analysis or image enhancement
    • G06T 2207/20 Special algorithmic details
    • G06T 2207/20076 Probabilistic image processing
    • G06T 2207/30 Subject of image; Context of image processing
    • G06T 2207/30248 Vehicle exterior or interior
    • G06T 2207/30252 Vehicle exterior; Vicinity of vehicle

Abstract

The application provides a method, an apparatus, a system, an electronic device and a storage medium for correcting a target detection result. The method includes: acquiring a photometric parameter at the time an image to be identified is acquired; evaluating the photometric characteristic function corresponding to the image to be identified at the photometric parameter to obtain a position precision standard deviation; and correcting the position information identified from the image to be identified based on the position precision standard deviation to obtain standard position information. According to the embodiments of the application, once the position precision standard deviation has been calculated from the photometric parameter collected when the image to be identified was captured, the position information can be corrected accordingly, so that more accurate standard position information is obtained.

Description

Method, device and system for correcting target detection result, electronic equipment and storage medium
Technical Field
The present application relates to the field of machine vision technologies, and in particular, to a method, an apparatus, a system, an electronic device, and a computer-readable storage medium for correcting a target detection result.
Background
Automatic driving and automatic vehicle control are among the major trends in the automotive industry. Target detection based on vehicle-mounted cameras (including classification and localization) is a necessary technology for achieving them. However, because noise is present in the images captured by a vehicle-mounted camera, the target position determined from the image to be recognized usually contains errors.
Disclosure of Invention
An object of the embodiments of the present application is to provide a method, an apparatus, a system, an electronic device, and a computer-readable storage medium for correcting a target detection result, which are used to correct a position determined by an image to be recognized acquired by a camera, so as to achieve accurate target positioning.
In one aspect, the present application provides a method for correcting a target detection result, including:
acquiring a photometric parameter at the time an image to be identified is acquired;
evaluating the photometric characteristic function corresponding to the image to be identified at the photometric parameter to obtain a position precision standard deviation;
and correcting the position information identified from the image to be identified based on the position precision standard deviation to obtain standard position information.
In one embodiment, the image to be recognized comprises at least two images to be processed which are acquired simultaneously;
the correcting the position information recognized from the image to be recognized based on the position precision standard deviation to obtain standard position information comprises:
determining a weight factor according to the position precision standard deviation corresponding to each image to be processed;
and according to the weight factor corresponding to each image to be processed, carrying out weighted summation on the position information identified from each image to be processed to obtain the standard position information.
In an embodiment, the correcting the position information identified from the image to be recognized based on the position precision standard deviation to obtain standard position information includes:
constructing a measurement noise covariance matrix according to the position precision standard deviation;
constructing a measurement vector according to the position information;
calculating the measurement noise covariance matrix and the measurement vector according to an extended Kalman filtering algorithm to obtain a current state vector; wherein the current state vector includes the standard location information.
In one embodiment, the image to be recognized comprises at least two images to be processed which are acquired simultaneously;
the correcting the position information recognized from the image to be recognized based on the position precision standard deviation to obtain standard position information comprises:
respectively constructing a measurement noise covariance matrix corresponding to each image to be processed according to the position precision standard deviation corresponding to each image to be processed;
respectively constructing a measurement vector corresponding to each image to be processed according to the position information corresponding to each image to be processed;
sequentially calculating a measurement noise covariance matrix and a measurement vector corresponding to each image to be processed according to an extended Kalman filtering algorithm to obtain a current state vector; wherein the current state vector includes the standard location information.
In an embodiment, before the calculation according to the extended Kalman filtering algorithm, the method further includes:
determining initial position information and initial speed information according to historical images at least two moments;
and constructing an initial state vector of the extended Kalman filtering algorithm according to the initial position information and the initial speed information.
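The initialization described above can be sketched as follows. This is a minimal illustration in Python; the function name `initial_state` and the finite-difference velocity estimate from two consecutive detections are assumptions for illustration, not taken from the patent.

```python
# Minimal sketch: build the initial EKF state vector [x, y, vx, vy] from the
# target positions identified in historical images at two consecutive moments.
# The finite-difference velocity estimate is an illustrative assumption.

def initial_state(pos_prev, pos_curr, dt=0.02):
    """pos_prev, pos_curr: (x, y) positions at times t - dt and t.
    Returns [x, y, vx, vy] for use as the initial state vector."""
    x0, y0 = pos_prev
    x1, y1 = pos_curr
    vx = (x1 - x0) / dt  # velocity along the X axis (vehicle travel direction)
    vy = (y1 - y0) / dt  # velocity along the Y axis (vehicle width direction)
    return [x1, y1, vx, vy]
```

With the 0.02 s cycle used later in the document, two detections 0.1 m apart along X would give an initial speed of 5 m/s.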
In an embodiment, before the photometric characteristic function is evaluated, the method further comprises:
obtaining a plurality of historical photometric parameters and the position precision standard deviation corresponding to each historical photometric parameter;
and fitting the photometric characteristic function to the plurality of historical photometric parameters and their position precision standard deviations.
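The offline fitting step can be sketched as follows, assuming, as the quadratic formulas later in the document suggest, a second-degree polynomial relation between the photometric parameter and the position precision standard deviation. The calibration data below is synthetic and purely illustrative.

```python
# Fit a quadratic photometric characteristic function sigma = a*L^2 + b*L + c
# to historical (photometric parameter, position precision standard deviation)
# pairs. Data values and units below are made up for illustration.
import numpy as np

def fit_photometric_feature(luminance, sigma):
    """Returns the coefficients (a, b, c) of the fitted quadratic."""
    a, b, c = np.polyfit(luminance, sigma, deg=2)
    return a, b, c

# Synthetic calibration set generated from a known quadratic:
L_hist = np.array([1.0, 5.0, 10.0, 20.0, 40.0])
sigma_hist = 1e-4 * L_hist**2 - 2e-3 * L_hist + 0.15
a, b, c = fit_photometric_feature(L_hist, sigma_hist)
```

At run time, evaluating `a*L**2 + b*L + c` at the measured photometric parameter yields the position precision standard deviation for the current frame.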
On the other hand, the present application further provides a device for correcting a target detection result, including:
the acquisition module is used for acquiring photometric parameters when the image to be identified is acquired;
the calculation module is used for calculating the luminosity parameters according to the luminosity characteristic function corresponding to the image to be identified to obtain the position precision standard deviation;
and the correction module is used for correcting the position information identified from the image to be identified based on the position precision standard deviation to obtain standard position information.
In another aspect, the present application further provides a system for correcting a target detection result, including:
the camera controller is used for identifying the position information of the target from the image to be identified;
the photometric sensor controller is used for determining the photometric parameter at the time the image to be identified is acquired;
and the central controller is connected with the camera controller, is connected with the luminosity sensor controller, and is used for acquiring the position information and the luminosity parameters, calculating the luminosity parameters according to the luminosity characteristic function corresponding to the image to be identified to acquire a position precision standard deviation, and correcting the position information according to the position precision standard deviation to acquire standard position information.
Further, the present application also provides an electronic device, including:
a processor;
a memory for storing processor-executable instructions;
wherein the processor is configured to perform the above method of correcting a target detection result.
In addition, the present application also provides a computer-readable storage medium storing a computer program executable by a processor to perform the above-mentioned method of correcting a target detection result.
In the embodiments of the application, a photometric parameter is acquired at the time the image to be identified is acquired; the photometric characteristic function corresponding to the image to be identified is evaluated at the photometric parameter to obtain a position precision standard deviation; and the position information recognized from the image to be recognized is corrected based on the position precision standard deviation to obtain standard position information. Because the noise that influences the target detection result in the image to be recognized is related to the illumination conditions under which the camera captures the image, once the position precision standard deviation has been calculated from the photometric parameter collected at capture time, the position information can be corrected accordingly and more accurate standard position information can be obtained.
Drawings
In order to more clearly illustrate the technical solutions of the embodiments of the present application, the drawings required to be used in the embodiments of the present application will be briefly described below.
Fig. 1 is a schematic view of an application scenario of a method for correcting a target detection result according to an embodiment of the present application;
fig. 2 is a schematic structural diagram of an electronic device according to an embodiment of the present application;
fig. 3 is a schematic flowchart of a method for correcting a target detection result according to an embodiment of the present disclosure;
FIG. 4 is a schematic diagram of target detection provided by an embodiment of the present application;
fig. 5 is a flowchart illustrating a method for correcting location information according to an embodiment of the present application;
fig. 6 is a flowchart illustrating a method for correcting location information according to another embodiment of the present application;
fig. 7 is a flowchart illustrating a method for correcting location information according to another embodiment of the present application;
FIG. 8 is a schematic diagram of photometric feature functions provided in an embodiment of the present application;
fig. 9 is a block diagram of an apparatus for correcting a target detection result according to an embodiment of the present application.
Detailed Description
The technical solutions in the embodiments of the present application will be described below with reference to the drawings in the embodiments of the present application.
Like reference numbers and letters refer to like items in the following figures, and thus, once an item is defined in one figure, it need not be further defined and explained in subsequent figures. Meanwhile, in the description of the present application, the terms "first", "second", and the like are used only for distinguishing the description, and are not to be construed as indicating or implying relative importance.
Fig. 1 is a schematic view of an application scenario of a method for correcting a target detection result according to an embodiment of the present application. As shown in fig. 1, the application scenario includes a camera controller 20, a photometric sensor controller 30 and a central controller 40, the camera controller 20 is connected to the camera for identifying the position of the target from the image captured by the camera and passing the position information to the central controller 40; the photometric sensor controller 30 is for converting an output signal of the photometric sensor into a photometric parameter represented by the output signal and communicating the photometric parameter to the central controller 40; the central controller 40 corrects the positional information based on the photometric parameters. The camera controller 20, the photometric sensor controller 30, and the central controller 40 constitute a system that corrects the target detection result.
As shown in fig. 2, the present embodiment provides an electronic apparatus 1 including: at least one processor 11 and a memory 12, one processor 11 being exemplified in fig. 2. The processor 11 and the memory 12 are connected by a bus 10, and the memory 12 stores instructions executable by the processor 11, and the instructions are executed by the processor 11, so that the electronic device 1 can execute all or part of the flow of the method in the embodiments described below. In an embodiment, the electronic device 1 may be a host computer interfaced with a camera controller and a photometric sensor controller for performing a method of correcting target detection results.
The Memory 12 may be implemented by any type of volatile or non-volatile Memory device or combination thereof, such as Static Random Access Memory (SRAM), Electrically Erasable Programmable Read-Only Memory (EEPROM), Erasable Programmable Read-Only Memory (EPROM), Programmable Read-Only Memory (PROM), Read-Only Memory (ROM), magnetic Memory, flash Memory, magnetic disk or optical disk.
The present application also provides a computer-readable storage medium storing a computer program executable by the processor 11 to perform the method of correcting target detection results provided herein.
Referring to fig. 3, a flowchart of a method for correcting a target detection result according to an embodiment of the present disclosure is shown in fig. 3, where the method may include the following steps 310 to 330.
Step 310: and acquiring photometric parameters when the image to be identified is acquired.
The photometric parameter may be any of several photometric quantities, such as luminous flux (unit: lumen), luminous intensity (unit: candela), luminance (unit: candela per square metre) or luminous emittance (unit: lumen per square metre). The central controller performing the method of correcting the target detection result may acquire the photometric parameter from the photometric sensor controller.
The image to be identified refers to an image acquired by the vehicle-mounted camera; the onboard camera may capture images of the front or periphery of the vehicle and the location of the target is identified from the images by a camera controller connected to the camera. The camera controller can deliver the position information to the central controller, so that the central controller can achieve the functions of avoiding targets, prompting drivers and the like based on the position information.
Step 320: and calculating the luminosity parameters according to the luminosity characteristic function corresponding to the image to be identified to obtain the position precision standard deviation.
At least one camera may be carried on the vehicle, each camera being preconfigured with a corresponding photometric feature function. The input of the luminosity characteristic function is luminosity parameters, and the output is the standard deviation of the position precision.
Step 330: and correcting the position information recognized from the image to be recognized based on the position precision standard deviation to obtain standard position information.
The central controller can correct the position information according to the position precision standard deviation to obtain corrected standard position information.
And the noise influencing the target detection result in the image to be recognized is related to the luminosity parameters when the camera shoots the image to be recognized. After the standard deviation of the position precision is calculated through the luminosity parameters, the position information can be corrected according to the standard deviation, and more accurate standard position information is obtained. Referring to fig. 4, a schematic diagram of object detection provided in an embodiment of the present application is shown in fig. 4, in which a vehicle 43 is equipped with a camera 42 and a photometric sensor 41. There is a target 44 in front of the vehicle 43. The camera 42 captures an image to be recognized containing a target 44. The position information of the target 44, which may be coordinates in the vehicle body coordinate system, is recognized from the image to be recognized as (x, y). The body coordinate system uses a position (for example, a head center point) on the vehicle 43 as an origin, the vehicle traveling direction as an X-axis, and the vehicle width direction as a Y-axis. The central controller on the vehicle 43 can correct the position information according to the standard deviation of the position accuracy of the abscissa and the ordinate determined by the photometric parameters, thereby obtaining standard position information.
In one embodiment, at least two cameras are mounted on the vehicle, and in this case, the image to be recognized includes at least two simultaneously acquired images to be processed. Each camera acquires an image to be processed, and each image to be processed can identify position information of a target. Referring to fig. 5, which is a flowchart illustrating a method for correcting location information according to an embodiment of the present disclosure, when the central controller executes step 330, the central controller may execute the following steps 331A to 332A.
Step 331A: and determining a weight factor according to the position precision standard deviation corresponding to each image to be processed.
The images to be processed correspond one-to-one to the cameras, and the position precision standard deviation corresponding to each image to be processed is calculated with the photometric characteristic function of the corresponding camera. Illustratively, the photometric characteristic function takes the form of a quadratic function, and the function corresponding to the abscissa can be expressed by the following formula (1):

σ_ix = f_x(L) = a_ix·L² + b_ix·L + c_ix    (1)

where σ_ix denotes the position precision standard deviation of the i-th camera in the X-axis direction; f_x(L) denotes the photometric characteristic function corresponding to the abscissa; L denotes the photometric parameter; and a_ix, b_ix and c_ix denote, respectively, the coefficient of the quadratic term, the coefficient of the first-order term and the constant in the photometric characteristic function of the i-th camera corresponding to the abscissa.
Illustratively, the photometric characteristic function corresponding to the ordinate can likewise be expressed by the following formula (2):

σ_iy = f_y(L) = a_iy·L² + b_iy·L + c_iy    (2)

where σ_iy denotes the position precision standard deviation of the i-th camera in the Y-axis direction; f_y(L) denotes the photometric characteristic function corresponding to the ordinate; L denotes the photometric parameter; and a_iy, b_iy and c_iy denote, respectively, the coefficient of the quadratic term, the coefficient of the first-order term and the constant in the photometric characteristic function of the i-th camera corresponding to the ordinate.
The weight factor corresponding to the abscissa can be calculated, for example, by inverse-variance weighting, as in formula (3):

w_ix = (1/σ_ix²) / Σ_{j=1..n} (1/σ_jx²)    (3)

where w_ix denotes the weight factor of the i-th camera corresponding to the X axis and n denotes the total number of cameras.
The weight factor corresponding to the ordinate can be calculated in the same way, as in formula (4):

w_iy = (1/σ_iy²) / Σ_{j=1..n} (1/σ_jy²)    (4)

where w_iy denotes the weight factor of the i-th camera corresponding to the Y axis and n denotes the total number of cameras.
Step 332A: and according to the weight factor corresponding to each image to be processed, carrying out weighted summation on the position information identified from each image to be processed to obtain standard position information.
The abscissa and the ordinate in each piece of position information are weighted and summed separately to obtain the standard position information.
The weighted summation corresponding to the abscissa can be expressed by the following formula (5):

x_estn = Σ_{i=1..n} w_ix·x_i    (5)

where x_estn denotes the abscissa in the standard position information; w_ix denotes the weight factor of the i-th camera corresponding to the X axis; n denotes the total number of cameras; and x_i denotes the abscissa in the position information identified from the image to be processed collected by the i-th camera.
The weighted summation corresponding to the ordinate can be expressed by the following formula (6):

y_estn = Σ_{i=1..n} w_iy·y_i    (6)

where y_estn denotes the ordinate in the standard position information; w_iy denotes the weight factor of the i-th camera corresponding to the Y axis; n denotes the total number of cameras; and y_i denotes the ordinate in the position information identified from the image to be processed collected by the i-th camera.
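Steps 331A and 332A together can be sketched as below. The inverse-variance form of the weights is an assumption (a standard choice when weights are derived from standard deviations); the patent text itself only states that the weight factors are determined from the position precision standard deviations.

```python
# Multi-camera fusion sketch: weight each camera's detection by the inverse
# variance of its position precision, separately for the X and Y axes, then
# take the weighted sum as the standard position information.

def fuse_positions(positions, sigmas):
    """positions: [(x_i, y_i)] per camera; sigmas: [(sigma_ix, sigma_iy)].
    Returns the fused (x_estn, y_estn)."""
    wx = [1.0 / sx ** 2 for sx, _ in sigmas]   # weights for the abscissa
    wy = [1.0 / sy ** 2 for _, sy in sigmas]   # weights for the ordinate
    x_estn = sum(w * x for w, (x, _) in zip(wx, positions)) / sum(wx)
    y_estn = sum(w * y for w, (_, y) in zip(wy, positions)) / sum(wy)
    return x_estn, y_estn
```

A camera with twice the standard deviation contributes a quarter of the weight, so its detection is pulled toward the more reliable camera's result.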
In one embodiment, a camera is mounted on the vehicle, and the camera collects position information of the target in the image to be recognized. Referring to fig. 6, which is a flowchart illustrating a method for correcting location information according to another embodiment of the present disclosure, when the central controller executes step 330, the central controller may execute the following steps 331B to 333B.
Step 331B: and constructing a measurement noise covariance matrix according to the position precision standard deviation.
The central controller constructs the measurement noise covariance matrix from the position precision standard deviation corresponding to the abscissa and the position precision standard deviation corresponding to the ordinate. The measurement noise covariance matrix R can be expressed as

R = [ σ_x²  0
      0     σ_y² ]

where σ_x denotes the position precision standard deviation in the X-axis direction and σ_y denotes the position precision standard deviation in the Y-axis direction.
Step 332B: and constructing a measurement vector according to the position information.
The measurement vector z can be expressed as

z = [x_1, y_1]^T

where (x_1, y_1) denotes the position information in coordinate form.
Step 333B: calculating a measurement noise covariance matrix and a measurement vector according to an extended Kalman filtering algorithm to obtain a current state vector; wherein the current state vector includes standard position information.
The central controller may periodically correct the position information according to the extended kalman filter algorithm, using a partial parameter of the previous cycle (see related description below) for each calculation, and the cycle duration may be a preconfigured empirical value. For example, the central controller may acquire the photometric parameters and the position information identified from the image to be identified every 0.02 seconds, and correct the position information according to the extended kalman filter algorithm.
Taking the current location information acquired for the kth time as an example, the correction process is described. The central controller may calculate the current state prediction vector according to the previously calculated state vector, the previous state transition matrix, the previous controlled variable transition matrix, and the control vector, and the calculation formula (7) may be expressed as:
X_{k|k-1} = F_{k-1}·X_{k-1} + G_{k-1}·u_{k-1}    (7)

where X_{k|k-1} is the state prediction vector calculated at the k-th time; F_{k-1} is the state transition matrix of the (k-1)-th time; X_{k-1} is the state vector calculated at the (k-1)-th time; G_{k-1} is the control-quantity transfer matrix of the (k-1)-th time; and u_{k-1} is the control vector of the previous time.

The state transition matrix F may be constant and expressed as

F = [ 1  0  Δt  0
      0  1  0   Δt
      0  0  1   0
      0  0  0   1 ]

with Δt = 0.02 seconds. The control-quantity transfer matrix and the control vector may both be 0.
The central controller may calculate the covariance prediction matrix of the current time according to the covariance matrix of the previous time, the state transition matrix of the previous time, and the process noise covariance matrix of the previous time, and the calculation formula (8) may be expressed as:
P_{k|k-1} = F_{k-1}·P_{k-1}·F_{k-1}^T + Q_{k-1}    (8)

where P_{k|k-1} is the covariance prediction matrix calculated at the k-th time; F_{k-1} is the state transition matrix of the (k-1)-th time; P_{k-1} is the covariance matrix calculated at the (k-1)-th time; and Q_{k-1} is the process noise covariance matrix of the (k-1)-th time.

The process noise covariance matrix Q may be constant, expressed for example as

Q = σ_w²·I₄

where the process noise variance σ_w² is 0.01 and I₄ is the 4×4 identity matrix.
The central controller may calculate the gain matrix of this time according to the covariance prediction matrix of this time, the observation matrix of this time, and the measured noise covariance matrix of this time, and the calculation formula (9) may be expressed as:
K_k = P_{k|k-1}·H_k^T·(H_k·P_{k|k-1}·H_k^T + R_k)^{-1}    (9)

where K_k is the gain matrix calculated at the k-th time; P_{k|k-1} is the covariance prediction matrix calculated at the k-th time; H_k is the observation matrix of the k-th time; and R_k is the measurement noise covariance matrix of the k-th time.

The observation matrix H is constant and expressed as

H = [ 1  0  0  0
      0  1  0  0 ]

The measurement noise covariance matrix R can be expressed as

R = [ σ_x²  0
      0     σ_y² ]

where σ_x denotes the position precision standard deviation in the X-axis direction and σ_y denotes the position precision standard deviation in the Y-axis direction.
The central controller may calculate the current state vector from the current state prediction vector, the current gain matrix, the current observation matrix and the current measurement vector; formula (10) may be expressed as:

X_k = X_{k|k-1} + K_k·(z_k - H_k·X_{k|k-1})    (10)

where X_k is the state vector calculated at the k-th time; X_{k|k-1} is the state prediction vector calculated at the k-th time; K_k is the gain matrix calculated at the k-th time; H_k is the observation matrix of the k-th time; and z_k is the measurement vector of the k-th time.

The state vector X_k can be expressed as

X_k = [x, y, v_x, v_y]^T

where (x, y) is the standard position information, and v_x and v_y are the velocities of the target relative to the vehicle in the X-axis and Y-axis directions, which are not of interest in this solution.
The central controller can then calculate the current covariance matrix, which is needed for the next round of calculation; formula (11) may be expressed as:

P_k = (I - K_k·H_k)·P_{k|k-1}    (11)

where P_k is the covariance matrix calculated at the k-th time; I is the identity matrix; K_k is the gain matrix calculated at the k-th time; H_k is the observation matrix of the k-th time; and P_{k|k-1} is the covariance prediction matrix calculated at the k-th time.
By performing the calculation processes of the above equations (7) to (11) in a loop, the central controller can correct the position information of the object determined from the image to be recognized according to the standard deviation of the position accuracy determined by the photometric parameter, thereby obtaining standard position information.
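The loop of equations (7) to (11) can be sketched in Python with NumPy as follows. The matrix values (Δt = 0.02 s, process noise variance 0.01, zero control terms) follow the text above; the function name `ekf_step` and the diagonal form of Q are illustrative assumptions.

```python
# Single-camera EKF correction cycle, following equations (7)-(11).
import numpy as np

dt = 0.02                                     # cycle duration in seconds
F = np.array([[1, 0, dt, 0],
              [0, 1, 0, dt],
              [0, 0, 1, 0],
              [0, 0, 0, 1]], dtype=float)     # state transition matrix
H = np.array([[1, 0, 0, 0],
              [0, 1, 0, 0]], dtype=float)     # observation matrix
Q = 0.01 * np.eye(4)                          # process noise covariance

def ekf_step(X, P, z, sigma_x, sigma_y):
    """X, P: state vector and covariance from the previous cycle.
    z: measurement [x1, y1]; sigma_x, sigma_y: position precision
    standard deviations from the photometric characteristic function."""
    R = np.diag([sigma_x ** 2, sigma_y ** 2])               # measurement noise
    X_pred = F @ X                                          # eq. (7), u = 0
    P_pred = F @ P @ F.T + Q                                # eq. (8)
    K = P_pred @ H.T @ np.linalg.inv(H @ P_pred @ H.T + R)  # eq. (9)
    X_new = X_pred + K @ (z - H @ X_pred)                   # eq. (10)
    P_new = (np.eye(4) - K @ H) @ P_pred                    # eq. (11)
    return X_new, P_new
```

When the photometric conditions are good (small sigmas), the gain pulls the state strongly toward the measurement; in poor light the filter trusts its prediction more.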
In one embodiment, at least two cameras are mounted on the vehicle, and in this case, the image to be recognized includes at least two simultaneously acquired images to be processed. Each camera acquires an image to be processed, and each image to be processed can identify position information of a target. Referring to fig. 7, which is a flowchart illustrating a method for correcting location information according to another embodiment of the present disclosure, when the central controller executes step 330, the central controller may execute the following steps 331C to 333C.
Step 331C: and respectively constructing a measurement noise covariance matrix corresponding to each image to be processed according to the position precision standard deviation corresponding to each image to be processed.
For each image to be processed, the central controller may construct a measurement noise covariance matrix from the position precision standard deviations corresponding to the abscissa and the ordinate. The measurement noise covariance matrix R_i can be expressed as

R_i = [ σ_ix²  0
        0      σ_iy² ]

where σ_ix denotes the position precision standard deviation in the X-axis direction corresponding to the i-th camera (image to be processed), and σ_iy denotes the position precision standard deviation in the Y-axis direction corresponding to the i-th camera (image to be processed).
Step 332C: and respectively constructing a measurement vector corresponding to each image to be processed according to the position information corresponding to each image to be processed.
The measurement vector z_i can be expressed as

z_i = [x_i, y_i]^T

where (x_i, y_i) is the position information, in coordinate form, identified from the i-th image to be processed.
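Steps 331C and 332C can be sketched as follows for the i-th image to be processed: the measurement vector is the identified coordinate pair, and the measurement noise covariance matrix is diagonal in the squared position accuracy standard deviations. The helper name is an assumption, not the patent's notation.

```python
import numpy as np

# Hypothetical helper sketching steps 331C and 332C for camera i.
def build_measurement(x_i, y_i, sigma_ix, sigma_iy):
    z_i = np.array([x_i, y_i])                 # measurement vector z_i
    R_i = np.diag([sigma_ix**2, sigma_iy**2])  # measurement noise covariance R_i
    return z_i, R_i
```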
Step 333C: sequentially calculating a measurement noise covariance matrix and a measurement vector corresponding to each image to be processed according to an extended Kalman filtering algorithm to obtain a current state vector; wherein the current state vector includes standard position information.
The central controller may periodically correct the position information according to the extended Kalman filter algorithm, and the period duration may be a preconfigured empirical value. For example, the central controller may acquire the photometric parameters and the position information identified from the at least two images to be processed every 0.02 seconds, and correct the position information according to the extended Kalman filter algorithm.
The difference from the correction method of steps 331B to 333B is that, when executing the correction method of steps 331C to 333C, the central controller uses, in each correction run, the plurality of position information items and position accuracy standard deviations corresponding to the plurality of cameras.
When executing step 333C, the central controller may substitute the state vector and the covariance matrix obtained in the previous calculation into equations (7) and (8) to calculate the current state prediction vector and covariance prediction matrix. Then, the central controller may sequentially substitute the measurement noise covariance matrix and the measurement vector corresponding to each image to be processed into equations (9) to (11), thereby obtaining the standard position information.
Illustratively, a vehicle carries 3 cameras, so 3 images to be processed can be obtained at the same time; accordingly, there are 3 measurement vectors and 3 measurement noise covariance matrices. The measurement vectors are denoted z_1k, z_2k and z_3k, and the measurement noise covariance matrices are denoted R_1k, R_2k and R_3k. The central controller can substitute the state vector and the covariance matrix calculated in the previous run into equation (7) and equation (8) to obtain the current state prediction vector X_{k|k-1} and covariance prediction matrix P_{k|k-1}.

Further, the central controller may substitute the measurement vector z_1k and the measurement noise covariance matrix R_1k into equations (9) to (11) to calculate the state vector X_k and covariance matrix P_k corresponding to the first image to be processed.

Then, using that state vector X_k as the state prediction vector X_{k|k-1} and that covariance matrix P_k as the covariance prediction matrix P_{k|k-1}, the central controller substitutes the measurement vector z_2k and the measurement noise covariance matrix R_2k into equations (9) to (11) to calculate the state vector X_k and covariance matrix P_k corresponding to the second image to be processed.

Further, using that state vector X_k as the state prediction vector X_{k|k-1} and that covariance matrix P_k as the covariance prediction matrix P_{k|k-1}, the central controller substitutes the measurement vector z_3k and the measurement noise covariance matrix R_3k into equations (9) to (11) to calculate the state vector X_k and covariance matrix P_k corresponding to the third image to be processed. At this point, the measurement vectors and measurement noise covariance matrices corresponding to all three images to be processed have participated in the calculation; the state vector X_k and covariance matrix P_k corresponding to the third image to be processed are the current state vector and covariance matrix of this run, which can be used for the next run. Moreover, the current state vector X_k corresponding to the third image to be processed includes the standard position information.
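The sequential per-camera update described above can be sketched as follows. This is a hypothetical illustration: the same observation matrix is assumed for every camera, measurements are supplied as (z, R) pairs, and the names are assumptions rather than the patent's notation.

```python
import numpy as np

# Sketch of step 333C: after one predict step, the measurement of each
# camera is folded in turn via equations (9) to (11); the posterior of
# one update serves as the prior (X_{k|k-1}, P_{k|k-1}) of the next.
def sequential_update(x_pred, P_pred, measurements):
    # measurements: list of (z, R) pairs, e.g. (z1k, R1k), (z2k, R2k), (z3k, R3k)
    H = np.array([[1.0, 0.0, 0.0, 0.0],
                  [0.0, 1.0, 0.0, 0.0]])  # positions observed, velocities not
    x, P = x_pred, P_pred
    for z, R in measurements:
        S = H @ P @ H.T + R
        K = P @ H.T @ np.linalg.inv(S)    # gain matrix, eq. (9)
        x = x + K @ (z - H @ x)           # state update, eq. (10)
        P = (np.eye(4) - K @ H) @ P       # covariance update, eq. (11)
    return x, P                           # current state vector X_k and P_k
```

The first two entries of the returned state vector are the standard position information; the covariance shrinks with each camera folded in.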
When there are a plurality of cameras, the calculation order of the images to be processed acquired by the respective cameras may be set in advance. For example, the plurality of cameras are arranged transversely, and the measurement vector and the measurement noise covariance matrix determined in the image to be processed collected by each camera can be calculated in the order from left to right.
In one embodiment, before performing steps 331B to 333B, or alternatively before performing steps 331C to 333C, the central controller may first initialize the parameters of the extended Kalman filter algorithm.
The central controller may determine the state transition matrix F, the observation matrix H, and the process noise covariance matrix Q from configuration information. The initial covariance matrix P_0 can be expressed as

P_0 = diag(σ_x², σ_y², 100, 100)

Since the present scheme does not measure the moving speed of the target relative to the vehicle, the covariance of the speed may be set to 100 in the initial covariance matrix; it will gradually converge in subsequent calculations. The position accuracy standard deviation σ_x corresponding to the X-axis direction and the position accuracy standard deviation σ_y corresponding to the Y-axis direction may be calculated from the photometric parameters at the time of initialization.
The central controller may determine initial position information and initial velocity information from historical images at at least two time instants. The historical images are images to be recognized collected by the camera before the central controller executes the method for correcting the target detection result. For example, the central controller may determine two pieces of position information of the target from historical images at two time instants, and may then determine the initial velocity information of the target in the X-axis direction and in the Y-axis direction from those two pieces of position information and the time difference between the acquisitions of the two historical images. The central controller may use the position information determined in the most recently acquired historical image as the initial position information of the target.
An initial state vector of the extended Kalman filter algorithm is then constructed from the initial position information and the initial velocity information. The initial state vector X_0 can be expressed as

X_0 = [x_0, y_0, v_x0, v_y0]^T

where (x_0, y_0) is the initial position information in coordinate form, v_x0 denotes the initial velocity information in the X-axis direction, and v_y0 denotes the initial velocity information in the Y-axis direction.
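The initialization described above can be sketched as follows. This is a hypothetical helper: the initial velocity is the coordinate difference of the target between two historical images divided by their acquisition time difference, and the initial covariance matrix sets the velocity variances to 100 as described above. Here sigma_x0 and sigma_y0 stand for the position accuracy standard deviations obtained from the photometric feature functions at initialization.

```python
import numpy as np

# Hypothetical sketch: build X_0 and P_0 from two historical positions.
def initialize(p_prev, p_last, dt, sigma_x0, sigma_y0):
    (x_last, y_last), (x_prev, y_prev) = p_last, p_prev
    vx0 = (x_last - x_prev) / dt               # initial velocity, X axis
    vy0 = (y_last - y_prev) / dt               # initial velocity, Y axis
    X0 = np.array([x_last, y_last, vx0, vy0])  # initial state vector X_0
    P0 = np.diag([sigma_x0**2, sigma_y0**2, 100.0, 100.0])  # initial covariance
    return X0, P0
```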
The central controller can calculate the position information and the luminosity parameters in the newly acquired image to be identified based on the initial state vector, the state transition matrix, the observation matrix, the process noise covariance matrix and the initial covariance matrix, so as to obtain standard position information through correction.
In one embodiment, the central controller may obtain a plurality of historical photometric parameters and a standard deviation of positional accuracy corresponding to each historical photometric parameter before calculating the photometric parameter according to a photometric feature function corresponding to an image to be identified. The historical luminosity parameters are luminosity parameters when a plurality of historical images are collected; the standard deviation of the position accuracy corresponding to the historical photometric parameter is calculated from a plurality of position information.
For a system for correcting the target detection result, a plurality of position information corresponding to each camera can be acquired through a plurality of tests under the same illumination condition (when the luminosity parameter is unchanged). For each camera, a standard deviation is calculated from the plurality of position information, and a position accuracy standard deviation corresponding to one photometric parameter is obtained.
The central controller can obtain a luminosity characteristic function through fitting according to a plurality of historical luminosity parameters and the position precision standard deviation.
Referring to fig. 8, a schematic diagram of a photometric feature function provided in an embodiment of the present application: as shown in fig. 8, the curve of the photometric feature function can be determined from a plurality of position accuracy standard deviations σ and photometric parameters L, and the photometric feature function can be fitted accordingly. In fig. 8, f(L) and g(L) denote the photometric feature functions corresponding to two cameras.
For each camera, fitting to obtain a luminosity characteristic function corresponding to the abscissa according to the position precision standard deviation in the X-axis direction corresponding to a plurality of historical luminosity parameters; and fitting to obtain a luminosity characteristic function corresponding to the ordinate according to the position precision standard deviation in the Y-axis direction corresponding to the plurality of historical luminosity parameters.
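The offline fitting described above can be sketched as follows for one camera. This is a hypothetical illustration: for each historical photometric parameter L, the position accuracy standard deviation per axis is the sample standard deviation of repeated detections under that illumination, and a low-order polynomial is fitted per axis as the photometric feature function. The polynomial form and degree are assumptions; the patent does not fix the functional form of f(L) and g(L).

```python
import numpy as np

# Hypothetical fitting sketch for one camera.
# L_values: list of historical photometric parameters.
# repeated_positions: for each L, a list of (x, y) detections from
# repeated tests under unchanged illumination.
def fit_photometric_function(L_values, repeated_positions, degree=2):
    sigmas = [np.std(np.asarray(p), axis=0) for p in repeated_positions]
    sig_x = [s[0] for s in sigmas]  # position accuracy std, X axis, per L
    sig_y = [s[1] for s in sigmas]  # position accuracy std, Y axis, per L
    f_x = np.poly1d(np.polyfit(L_values, sig_x, degree))  # sigma_x = f(L)
    f_y = np.poly1d(np.polyfit(L_values, sig_y, degree))  # sigma_y = g(L)
    return f_x, f_y
```

At run time, evaluating f_x and f_y at the current photometric parameter yields the standard deviations used to build the measurement noise covariance matrix.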
After the photometric feature function corresponding to each camera is obtained by fitting, the central controller may execute the method for correcting the target detection result according to the embodiment of the present application.
Fig. 9 is a block diagram of an apparatus for correcting a target detection result according to an embodiment of the present application, and as shown in fig. 9, the apparatus may include: an acquisition module 910, a calculation module 920, and a correction module 930.
The acquisition module is used for acquiring the luminosity parameters at the time the image to be identified is acquired.

The calculation module is used for calculating the luminosity parameters according to the luminosity characteristic function corresponding to the image to be identified, to obtain the position precision standard deviation.

The correction module is used for correcting the position information identified from the image to be identified based on the position precision standard deviation, to obtain standard position information.
The implementation process of the functions and actions of each module in the device is specifically detailed in the implementation process of the corresponding step in the method for correcting the target detection result, and is not described herein again.
In the embodiments provided in the present application, the disclosed apparatus and method can be implemented in other ways. The apparatus embodiments described above are merely illustrative, and for example, the flowchart and block diagrams in the figures illustrate the architecture, functionality, and operation of possible implementations of apparatus, methods and computer program products according to various embodiments of the present application. In this regard, each block in the flowchart or block diagrams may represent a module, segment, or portion of code, which comprises one or more executable instructions for implementing the specified logical function(s). In some alternative implementations, the functions noted in the block may occur out of the order noted in the figures. For example, two blocks shown in succession may, in fact, be executed substantially concurrently, or the blocks may sometimes be executed in the reverse order, depending upon the functionality involved. It will also be noted that each block of the block diagrams and/or flowchart illustration, and combinations of blocks in the block diagrams and/or flowchart illustration, can be implemented by special purpose hardware-based systems which perform the specified functions or acts, or combinations of special purpose hardware and computer instructions.
In addition, functional modules in the embodiments of the present application may be integrated together to form an independent part, or each module may exist separately, or two or more modules may be integrated to form an independent part.
The functions, if implemented in the form of software functional modules and sold or used as a stand-alone product, may be stored in a computer readable storage medium. Based on such understanding, the technical solution of the present application or portions thereof that substantially contribute to the prior art may be embodied in the form of a software product stored in a storage medium and including instructions for causing a computer device (which may be a personal computer, a server, or a network device) to execute all or part of the steps of the method according to the embodiments of the present application. And the aforementioned storage medium includes: a U-disk, a removable hard disk, a Read-Only Memory (ROM), a Random Access Memory (RAM), a magnetic disk or an optical disk, and other various media capable of storing program codes.

Claims (6)

1. A method of correcting a target detection result, comprising:
obtaining a plurality of historical photometric parameters and a position accuracy standard deviation corresponding to each historical photometric parameter;
fitting to obtain a luminosity characteristic function according to the plurality of historical luminosity parameters and the position precision standard deviation;
acquiring photometric parameters when an image to be identified is acquired; wherein, the luminosity parameter is any one of luminous flux, luminous intensity and luminous exitance;
calculating the luminosity parameters according to the luminosity characteristic function corresponding to the image to be identified to obtain a position precision standard deviation;
constructing a measurement noise covariance matrix according to the position precision standard deviation; constructing a measurement vector according to the position information; calculating the measurement noise covariance matrix and the measurement vector according to an extended Kalman filtering algorithm to obtain a current state vector; wherein the current state vector includes standard position information;
if the image to be identified comprises at least two images to be processed which are acquired simultaneously, determining a weight factor according to the position precision standard deviation corresponding to each image to be processed; according to the weight factor corresponding to each image to be processed, carrying out weighted summation on the position information identified from each image to be processed to obtain standard position information;
or if the image to be identified comprises at least two images to be processed which are acquired simultaneously, respectively constructing a measurement noise covariance matrix corresponding to each image to be processed according to the position precision standard deviation corresponding to each image to be processed; respectively constructing a measurement vector corresponding to each image to be processed according to the position information corresponding to each image to be processed; sequentially calculating a measurement noise covariance matrix and a measurement vector corresponding to each image to be processed according to an extended Kalman filtering algorithm to obtain a current state vector; wherein the current state vector includes standard position information.
2. The method of claim 1, wherein prior to performing calculations according to the extended Kalman filter algorithm, the method further comprises:
determining initial position information and initial speed information according to historical images at least two moments;
and constructing an initial state vector of the extended Kalman filtering algorithm according to the initial position information and the initial speed information.
3. An apparatus for correcting a target detection result, comprising:
an acquisition module for acquiring a plurality of historical photometric parameters and a standard deviation of position accuracy corresponding to each historical photometric parameter; fitting to obtain a luminosity characteristic function according to the plurality of historical luminosity parameters and the position precision standard deviation; acquiring photometric parameters when an image to be identified is acquired; wherein, the luminosity parameter is any one of luminous flux, luminous intensity and luminous exitance;
the calculation module is used for calculating the luminosity parameters according to the luminosity characteristic function corresponding to the image to be identified to obtain the position precision standard deviation;
the correction module is used for constructing a measurement noise covariance matrix according to the position precision standard deviation; constructing a measurement vector according to the position information; calculating the measurement noise covariance matrix and the measurement vector according to an extended Kalman filtering algorithm to obtain a current state vector; wherein the current state vector includes standard position information; if the image to be identified comprises at least two images to be processed which are acquired simultaneously, determining a weight factor according to the position precision standard deviation corresponding to each image to be processed; according to the weight factor corresponding to each image to be processed, carrying out weighted summation on the position information identified from each image to be processed to obtain standard position information; or if the image to be identified comprises at least two images to be processed which are acquired simultaneously, respectively constructing a measurement noise covariance matrix corresponding to each image to be processed according to the position precision standard deviation corresponding to each image to be processed; respectively constructing a measurement vector corresponding to each image to be processed according to the position information corresponding to each image to be processed; sequentially calculating a measurement noise covariance matrix and a measurement vector corresponding to each image to be processed according to an extended Kalman filtering algorithm to obtain a current state vector; wherein the current state vector includes standard position information.
4. A system for correcting a target detection result, comprising:
the camera controller is used for identifying the position information of the target from the image to be identified;
the luminosity sensor controller is used for determining and acquiring luminosity parameters in the image to be identified; wherein, the luminosity parameter is any one of luminous flux, luminous intensity and luminous exitance;
a central controller connected to said camera controller, to said photometric sensor controller, for acquiring a plurality of historical photometric parameters and a standard deviation of positional accuracy corresponding to each historical photometric parameter; fitting to obtain a luminosity characteristic function according to the plurality of historical luminosity parameters and the position precision standard deviation; acquiring the position information and the luminosity parameters, and calculating the luminosity parameters according to the luminosity characteristic function corresponding to the image to be identified to obtain a position precision standard deviation; constructing a measurement noise covariance matrix according to the position precision standard deviation; constructing a measurement vector according to the position information; calculating the measurement noise covariance matrix and the measurement vector according to an extended Kalman filtering algorithm to obtain a current state vector; wherein the current state vector includes standard position information; if the image to be identified comprises at least two images to be processed which are acquired simultaneously, determining a weight factor according to the position precision standard deviation corresponding to each image to be processed; according to the weight factor corresponding to each image to be processed, carrying out weighted summation on the position information identified from each image to be processed to obtain standard position information; or if the image to be identified comprises at least two images to be processed which are acquired simultaneously, respectively constructing a measurement noise covariance matrix corresponding to each image to be processed according to the position precision standard deviation corresponding to each image to be processed; respectively constructing a measurement vector corresponding to each image to be processed according to the position information corresponding to each image to be processed; sequentially calculating a measurement noise covariance matrix and a measurement vector corresponding to each image to be processed according to an extended Kalman filtering algorithm to obtain a current state vector; wherein the current state vector includes standard position information.
5. An electronic device, characterized in that the electronic device comprises:
a processor;
a memory for storing processor-executable instructions;
wherein the processor is configured to perform the method of correcting a target detection result of any one of claims 1-2.
6. A computer-readable storage medium, characterized in that the storage medium stores a computer program executable by a processor to perform the method of correcting a target detection result according to any one of claims 1-2.
CN202011036772.2A 2020-09-28 2020-09-28 Method, device and system for correcting target detection result, electronic equipment and storage medium Active CN111882616B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202011036772.2A CN111882616B (en) 2020-09-28 2020-09-28 Method, device and system for correcting target detection result, electronic equipment and storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202011036772.2A CN111882616B (en) 2020-09-28 2020-09-28 Method, device and system for correcting target detection result, electronic equipment and storage medium

Publications (2)

Publication Number Publication Date
CN111882616A CN111882616A (en) 2020-11-03
CN111882616B true CN111882616B (en) 2021-06-18

Family

ID=73199189

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202011036772.2A Active CN111882616B (en) 2020-09-28 2020-09-28 Method, device and system for correcting target detection result, electronic equipment and storage medium

Country Status (1)

Country Link
CN (1) CN111882616B (en)

Families Citing this family (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN115657052A (en) * 2021-07-07 2023-01-31 奥比中光科技集团股份有限公司 ITOF ranging system and method, device and equipment for determining relative precision of ITOF ranging system
CN114363219B (en) * 2022-01-07 2024-03-19 上海哔哩哔哩科技有限公司 Data processing method and device

Citations (2)

Publication number Priority date Publication date Assignee Title
CN101095163A (en) * 2005-01-04 2007-12-26 罗伯特·博世有限公司 Method for determining the displacement of a vehicle
CN106780297A (en) * 2016-11-30 2017-05-31 天津大学 Image high registration accuracy method under scene and Varying Illumination

Family Cites Families (1)

Publication number Priority date Publication date Assignee Title
CN109556633B (en) * 2018-11-26 2020-11-20 北方工业大学 Bionic polarization sensor multi-source error calibration method based on adaptive EKF

Patent Citations (2)

Publication number Priority date Publication date Assignee Title
CN101095163A (en) * 2005-01-04 2007-12-26 罗伯特·博世有限公司 Method for determining the displacement of a vehicle
CN106780297A (en) * 2016-11-30 2017-05-31 天津大学 Image high registration accuracy method under scene and Varying Illumination

Non-Patent Citations (2)

Title
Iterated extended Kalman filter based visual-inertial odometry using direct photometric feedback; Bloesch, Michael, et al.; International Journal of Robotics Research; 2017-09-01; 1053-1072 *
OpenCV target tracking (5): Kalman filter; 王大伟啊; https://blog.csdn.net/w12345_ww/article/details/45031163; 2015-04-13; 1-5 *

Also Published As

Publication number Publication date
CN111882616A (en) 2020-11-03

Similar Documents

Publication Publication Date Title
US11392146B2 (en) Method for detecting target object, detection apparatus and robot
CN109949372B (en) Laser radar and vision combined calibration method
US20200264011A1 (en) Drift calibration method and device for inertial measurement unit, and unmanned aerial vehicle
CN107481292B (en) Attitude error estimation method and device for vehicle-mounted camera
CN111882616B (en) Method, device and system for correcting target detection result, electronic equipment and storage medium
US10895458B2 (en) Method, apparatus, and system for determining a movement of a mobile platform
JP3880702B2 (en) Optical flow detection apparatus for image and self-position recognition system for moving object
CN111065043B (en) System and method for fusion positioning of vehicles in tunnel based on vehicle-road communication
KR101672732B1 (en) Apparatus and method for tracking object
CN105955308A (en) Aircraft control method and device
CN110287828B (en) Signal lamp detection method and device and electronic equipment
CN110488838B (en) Accurate repeated positioning method for indoor autonomous navigation robot
JP2019032218A (en) Location information recording method and device
CN111989631A (en) Self-position estimation method
CN111583342A (en) Target rapid positioning method and device based on binocular vision
CN113899364B (en) Positioning method and device, equipment and storage medium
CN110291771B (en) Depth information acquisition method of target object and movable platform
US20110261162A1 (en) Method for Automatically Generating a Three-Dimensional Reference Model as Terrain Information for an Imaging Device
CN114035187A (en) Perception fusion method of automatic driving system
JP5267100B2 (en) Motion estimation apparatus and program
EP3722749A1 (en) Navigation augmentation system and method
CN113252066A (en) Method and device for calibrating parameters of odometer equipment, storage medium and electronic device
CN114821372A (en) Monocular vision-based method for measuring relative pose of individuals in unmanned aerial vehicle formation
JP7336223B2 (en) Self-localization method
Zhong et al. CalQNet-detection of calibration quality for life-long stereo camera setups

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant