CN118115544A - Vehicle motion state determining method and device based on forward-looking monocular camera - Google Patents
Vehicle motion state determining method and device based on forward-looking monocular camera
- Publication number
- CN118115544A (application number CN202410067234.1A)
- Authority
- CN
- China
- Prior art keywords
- target vehicle
- target
- compensation
- vehicle
- frame picture
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/20—Analysis of motion
- G06T7/269—Analysis of motion using gradient-based methods
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/60—Analysis of geometric attributes
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V10/00—Arrangements for image or video recognition or understanding
- G06V10/20—Image preprocessing
- G06V10/25—Determination of region of interest [ROI] or a volume of interest [VOI]
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V10/00—Arrangements for image or video recognition or understanding
- G06V10/70—Arrangements for image or video recognition or understanding using pattern recognition or machine learning
- G06V10/74—Image or video pattern matching; Proximity measures in feature spaces
- G06V10/75—Organisation of the matching processes, e.g. simultaneous or sequential comparisons of image or video features; Coarse-fine approaches, e.g. multi-scale approaches; using context analysis; Selection of dictionaries
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V20/00—Scenes; Scene-specific elements
- G06V20/50—Context or environment of the image
- G06V20/56—Context or environment of the image exterior to a vehicle by using sensors mounted on the vehicle
- G06V20/58—Recognition of moving objects or obstacles, e.g. vehicles or pedestrians; Recognition of traffic objects, e.g. traffic signs, traffic lights or roads
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/10—Image acquisition modality
- G06T2207/10016—Video; Image sequence
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/30—Subject of image; Context of image processing
- G06T2207/30248—Vehicle exterior or interior
- G06T2207/30252—Vehicle exterior; Vicinity of vehicle
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V2201/00—Indexing scheme relating to image or video recognition or understanding
- G06V2201/08—Detecting or categorising vehicles
Abstract
The application relates to a vehicle motion state determining method and device based on a forward-looking monocular camera. The method comprises the following steps: extracting a target rectangular frame containing a target vehicle from a current frame picture acquired by a vehicle front-view camera; performing pixel compensation on the target vehicle in the current frame picture to obtain a pixel value after the target vehicle compensation, wherein the pixel compensation comprises: width compensation for the tail width of the target vehicle and optical flow compensation for the tail height ratio of the target vehicle; detecting whether a target track matched with the target rectangular frame exists in a history track library; and if the target track exists in the history track library, determining the motion state of the target vehicle in the current frame picture according to the target track and the pixel value after the target vehicle compensation. The application solves the problems of low accuracy and high cost of vehicle motion state estimation in the related art, and achieves the technical effects of improving the accuracy of vehicle motion state estimation and reducing cost.
Description
Technical Field
The invention relates to the technical field of intelligent driving, in particular to a vehicle motion state determining method and device based on a forward-looking monocular camera.
Background
Vehicle motion state estimation is a key problem in the technical field of intelligent driving and is a front-end task for many early warning and control tasks, including Forward Collision Warning (FCW), Adaptive Cruise Control (ACC), Autonomous Emergency Braking (AEB), and the like. The goal of vehicle motion state estimation is to estimate the distance and speed of a target vehicle relative to the host vehicle, and the result should satisfy three conditions:
Accuracy: the estimated values of distance and speed should agree with the true values as closely as possible;
Stability: viewed along the time axis, the curve of the estimated results should be smooth, without severe fluctuations or abnormal jump points;
Real-time performance: when the motion state of the target vehicle changes, the estimated value should respond rapidly and track the true value closely.
Currently, in the technical field of intelligent driving, mainstream motion state estimation is completed by fusing the sensing results of a camera and a radar; the execution flow is shown in fig. 1. The distance measured by the camera is an indirect result obtained by methods such as projection ranging and pinhole imaging ranging, so its precision and stability are influenced by many intermediate factors (including model precision, drift of the camera extrinsic parameters, inaccurate prior information, and the like), and the speed is essentially obtained by differencing the distance results. Consequently, the motion state measured by the camera alone is generally difficult to bring up to the precision required by the early warning and control links, and is only adequate for fusion with the information collected by the radar; this is also the reason for introducing the radar.
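For context (the patent does not give these formulas; this is only the standard pinhole relation underlying such ranging methods): with focal length f in pixels, physical tail width W, and observed tail width w in pixels, the estimated distance is Z = f·W/w, and the relative speed follows by differencing, v ≈ (Z_k − Z_(k−1))/Δt. Any pixel-level error in w is thus amplified twice, once by the division and once by the differencing, which is why camera-only speed estimates are noisy.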
This scheme fuses the direct measurements collected by the radar into the target motion state, so its performance indices (accuracy, stability, real-time performance, and the like) are relatively good. However, such schemes also have inherent drawbacks:
Firstly, the process of fusing the radar detection result and the camera prediction result consumes time and computing power;
Secondly, existing ranging methods do not fully consider target pose information, so camera ranging is not ideal in certain situations, which degrades the fusion of camera and radar detection information;
Finally, radar has its own limitations, including high cost, low resolution, susceptibility to adverse weather, and difficulty in target identification.
In summary, no effective solution has been proposed for the problems of low accuracy and high cost of vehicle motion state estimation in the related art.
Disclosure of Invention
The application aims at overcoming the defects in the prior art, and provides a vehicle motion state determining method, device, computer equipment and computer readable storage medium based on a forward-looking monocular camera, which are used for at least solving the problems of low accuracy and high cost in vehicle motion state estimation in the related art.
In order to achieve the above purpose, the technical scheme adopted by the application is as follows:
in a first aspect, an embodiment of the present application provides a method for determining a vehicle motion state based on a forward-looking monocular camera, including:
extracting a target rectangular frame containing a target vehicle from a current frame picture acquired by a vehicle front-view camera;
Performing pixel compensation on the target vehicle in the current frame picture to obtain a pixel value after the target vehicle compensation, wherein the pixel compensation comprises: width compensation for the tail width of the target vehicle and optical flow compensation for the tail height ratio of the target vehicle;
detecting whether a target track matched with the target rectangular frame exists in a history track library;
and if the target track matched with the target rectangular frame exists in the history track library, determining the motion state of the target vehicle in the current frame picture according to the target track and the pixel value of the target vehicle after compensation.
In some of these embodiments, the performing pixel compensation on the target vehicle in the current frame picture includes:
Determining side edge data, wheel line data and view angle data of the target vehicle according to the thumbnail of the target vehicle, wherein the side edge data are used for indicating the boundary line of two visible sides of the target vehicle, the wheel line data are used for indicating the connecting line of two visible wheels of the target vehicle, and the view angle data are used for indicating the relative position of the target vehicle;
performing width compensation on the tail width of the target vehicle in the current frame picture by using the wheel line data, the tail width observation value of the target vehicle and the parameters of the front-view camera to obtain the compensated tail width of the target vehicle;
Performing optical flow compensation on the tail height ratio of the target vehicle by utilizing the tail thumbnail of the target vehicle in the current frame picture and the previous frame picture to obtain the compensated tail height ratio of the target vehicle;
Wherein the pixel values after the target vehicle compensation include: the compensated tail width of the target vehicle and the compensated tail height ratio of the target vehicle.
In some embodiments, the performing width compensation on the tail width of the target vehicle in the current frame picture by using the wheel line data, the tail width observed value of the target vehicle and the parameters of the front-view camera to obtain the compensated tail width of the target vehicle includes:
determining the relative angle of the target vehicle by using the wheel line data and the parameters of the front-view camera;
And determining the tail width of the compensated target vehicle by using the relative angle of the target vehicle and the tail width observed value of the target vehicle.
In some embodiments, the performing optical flow compensation on the tail height ratio of the target vehicle by using the tail thumbnail of the target vehicle in the current frame picture and the previous frame picture to obtain the compensated tail height ratio of the target vehicle includes:
Generating a pixel position offset matrix according to the tail thumbnail of the target vehicle in the current frame picture and the previous frame picture, wherein the pixel position offset matrix comprises: a row coordinate pixel position offset matrix and a column coordinate pixel position offset matrix;
calculating a compensation coefficient according to the row coordinate pixel position offset matrix;
and carrying out optical flow compensation on the tail height ratio of the target vehicle by using the compensation coefficient to obtain the compensated tail height ratio of the target vehicle.
In some embodiments, the determining the motion state of the target vehicle in the current frame picture according to the target track and the pixel value compensated by the target vehicle includes:
Determining a predicted value of the target vehicle in the current frame picture, wherein the predicted value comprises: the motion state of the target vehicle in the current frame picture, which is obtained by prediction according to the motion state of the target vehicle in the previous frame picture, and the state covariance of the target vehicle in the current frame picture, which is obtained by prediction according to the state covariance of the target vehicle in the previous frame picture, wherein the motion state of the target vehicle comprises: the relative distance, relative speed, and relative acceleration of the target vehicle;
and updating the motion state and the state covariance of the target vehicle in the current frame picture according to the predicted value and the pixel value compensated by the target vehicle.
In some embodiments, the detecting whether the target track matched with the target rectangular box exists in the history track library includes:
for each track in the historical track library, the following steps are performed:
predicting the position of the rectangular frame of the vehicle in the current frame picture at the current moment based on the position of the rectangular frame of the vehicle in the picture at the historical moment of the track;
And if the predicted position of the vehicle's rectangular frame in the current frame picture at the current moment is consistent with the position of the target rectangular frame in the current frame picture, determining the track as the target track matched with the target rectangular frame.
In a second aspect, an embodiment of the present application provides a device for determining a vehicle motion state based on a forward-looking monocular camera, including:
The extraction unit is used for extracting a target rectangular frame containing a target vehicle from the current frame picture acquired by the vehicle front-view camera;
The compensation unit is configured to perform pixel compensation on the target vehicle in the current frame picture, and obtain a pixel value after the target vehicle compensation, where the pixel compensation includes: width compensation for the tail width of the target vehicle and optical flow compensation for the tail height ratio of the target vehicle;
the detection unit is used for detecting whether a target track matched with the target rectangular frame exists in the history track library;
And the determining unit is used for determining the motion state of the target vehicle in the current frame picture according to the target track and the pixel value after the target vehicle compensation if the target track matched with the target rectangular frame exists in the history track library.
In some of these embodiments, the compensation unit comprises:
a first determining module, configured to determine, according to a thumbnail of the target vehicle, side edge data, wheel line data, and view angle data of the target vehicle, where the side edge data is used to indicate a boundary line between two visible sides of the target vehicle, the wheel line data is used to indicate a connection line between two visible wheels of the target vehicle, and the view angle data is used to indicate a relative position of the target vehicle;
The first compensation module is used for carrying out width compensation on the tail width of the target vehicle in the current frame picture by utilizing the wheel line data, the tail width observation value of the target vehicle and the parameters of the front-view camera to obtain the tail width of the target vehicle after compensation;
The second compensation module is used for carrying out optical flow compensation on the tail height ratio of the target vehicle by utilizing the tail thumbnail of the target vehicle in the current frame picture and the previous frame picture to obtain the compensated tail height ratio of the target vehicle;
Wherein the pixel values after the target vehicle compensation include: the compensated tail width of the target vehicle and the compensated tail height ratio of the target vehicle.
In a third aspect, an embodiment of the present application provides a computer device, including a memory, a processor, and a computer program stored in the memory and executable on the processor, where the processor implements the method for determining a vehicle motion state based on a forward looking monocular camera according to the first aspect when the processor executes the computer program.
In a fourth aspect, an embodiment of the present application provides a computer readable storage medium having stored thereon a computer program which, when executed by a processor, implements a method for determining a vehicle motion state based on a forward looking monocular camera as described in the first aspect above.
Compared with the prior art, the vehicle motion state determining method based on the forward-looking monocular camera provided by the embodiment of the application extracts a target rectangular frame containing a target vehicle from the current frame picture acquired by the vehicle front-view camera; performs pixel compensation on the target vehicle in the current frame picture to obtain the pixel value after the target vehicle compensation, wherein the pixel compensation comprises width compensation for the tail width of the target vehicle and optical flow compensation for the tail height ratio of the target vehicle; detects whether a target track matched with the target rectangular frame exists in a history track library; and if the target track matched with the target rectangular frame exists in the history track library, determines the motion state of the target vehicle in the current frame picture according to the target track and the pixel value after the target vehicle compensation. This solves the problems of low accuracy and high cost of vehicle motion state estimation in the related art, and achieves the technical effects of improving the accuracy of vehicle motion state estimation and reducing cost.
The details of one or more embodiments of the application are set forth in the accompanying drawings and the description below to provide a more thorough understanding of the other features, objects, and advantages of the application.
Drawings
The accompanying drawings, which are included to provide a further understanding of the application and are incorporated in and constitute a part of this specification, illustrate embodiments of the application and together with the description serve to explain the application and do not constitute a limitation on the application. In the drawings:
fig. 1 is a schematic diagram of a flow of execution of a vehicle motion state estimation method according to the related art;
fig. 2 is a block diagram of a mobile terminal according to an embodiment of the present application;
FIG. 3 is a flow chart of a forward looking monocular camera based vehicle motion state determination method according to an embodiment of the present application;
Fig. 4 is a schematic diagram of a flow of execution of a forward-looking monocular camera-based vehicle motion state determination method according to a preferred embodiment of the present application;
Fig. 5 is a block diagram of a structure of a forward-looking monocular camera-based vehicle motion state determining apparatus according to an embodiment of the present application;
Fig. 6 is a schematic diagram of a hardware structure of a computer device according to an embodiment of the present application.
Detailed Description
The present application will be described and illustrated with reference to the accompanying drawings and examples in order to make the objects, technical solutions and advantages of the present application more apparent. It should be understood that the specific embodiments described herein are for purposes of illustration only and are not intended to limit the scope of the application. All other embodiments, which can be made by a person of ordinary skill in the art based on the embodiments provided by the present application without making any inventive effort, are intended to fall within the scope of the present application.
It is apparent that the drawings in the following description are only some examples or embodiments of the present application, and those of ordinary skill in the art may apply the present application to other similar situations according to these drawings without inventive effort. Moreover, it should be appreciated that while such a development effort might be complex and lengthy, it would nevertheless be a routine undertaking of design, fabrication, or manufacture for those of ordinary skill having the benefit of this disclosure, and thus should not be construed as going beyond the scope of this disclosure.
Reference in the specification to "an embodiment" means that a particular feature, structure, or characteristic described in connection with the embodiment may be included in at least one embodiment of the application. The appearances of such phrases in various places in the specification are not necessarily all referring to the same embodiment, nor are separate or alternative embodiments mutually exclusive of other embodiments. It is to be expressly and implicitly understood by those of ordinary skill in the art that the described embodiments of the application can be combined with other embodiments without conflict.
Unless defined otherwise, technical or scientific terms used herein should be given the ordinary meaning as understood by one of ordinary skill in the art to which this application belongs. The terms "a," "an," "the," and similar referents in the context of the application are not to be construed as limiting the quantity, but rather as singular or plural. The terms "comprising," "including," "having," and any variations thereof, are intended to cover a non-exclusive inclusion; for example, a process, method, system, article, or apparatus that comprises a list of steps or modules (elements) is not limited to only those steps or elements but may include other steps or elements not expressly listed or inherent to such process, method, article, or apparatus. The terms "connected," "coupled," and the like in connection with the present application are not limited to physical or mechanical connections, but may include electrical connections, whether direct or indirect. The term "plurality" as used herein means two or more. "and/or" describes an association relationship of an association object, meaning that there may be three relationships, e.g., "a and/or B" may mean: a exists alone, A and B exist together, and B exists alone. The character "/" generally indicates that the context-dependent object is an "or" relationship. The terms "first," "second," "third," and the like, as used herein, are merely distinguishing between similar objects and not representing a particular ordering of objects.
The embodiment provides a mobile terminal. Fig. 2 is a block diagram of a mobile terminal according to an embodiment of the present application. As shown in fig. 2, the mobile terminal includes: Radio Frequency (RF) circuitry 210, memory 220, input unit 230, display unit 240, sensor 250, audio circuitry 260, Wireless Fidelity (WiFi) module 270, processor 280, and power supply 290. Those skilled in the art will appreciate that the mobile terminal structure shown in fig. 2 is not limiting of the mobile terminal, and the mobile terminal may include more or fewer components than shown, combine certain components, or adopt a different arrangement of components.
The following describes the components of the mobile terminal in detail with reference to fig. 2:
The RF circuit 210 may be used for receiving and transmitting signals during information transmission and reception or during a call; specifically, after receiving downlink information of the base station, it passes the information to the processor 280 for processing, and in addition, it sends uplink data to the base station. Typically, RF circuitry includes, but is not limited to, an antenna, at least one amplifier, a transceiver, a coupler, a Low Noise Amplifier (LNA), a duplexer, and the like. In addition, the RF circuitry 210 may also communicate with networks and other devices via wireless communications. The wireless communication may use any communication standard or protocol, including but not limited to Global System for Mobile communications (GSM), General Packet Radio Service (GPRS), Code Division Multiple Access (CDMA), Wideband Code Division Multiple Access (WCDMA), Long Term Evolution (LTE), e-mail, Short Message Service (SMS), and the like.
The memory 220 may be used to store software programs and modules, and the processor 280 performs various functional applications and data processing of the mobile terminal by executing the software programs and modules stored in the memory 220. The memory 220 may mainly include a storage program area and a storage data area, wherein the storage program area may store an operating system, an application program (such as a sound playing function, an image playing function, etc.) required for at least one function, and the like; the storage data area may store data (such as audio data, phonebooks, etc.) created according to the use of the mobile terminal, etc. In addition, memory 220 may include high-speed random access memory, and may also include non-volatile memory, such as at least one magnetic disk storage device, flash memory device, or other volatile solid-state storage device.
The input unit 230 may be used to receive input numeric or character information and to generate key signal inputs related to user settings and function control of the mobile terminal. In particular, the input unit 230 may include a touch panel 231 and other input devices 232. The touch panel 231, also referred to as a touch screen, may collect touch operations thereon or thereabout by a user (e.g., operations of the user on the touch panel 231 or thereabout by using any suitable object or accessory such as a finger, a stylus, etc.), and drive the corresponding connection device according to a predetermined program. Alternatively, the touch panel 231 may include two parts of a touch detection device and a touch controller. The touch detection device detects the touch azimuth of a user, detects a signal brought by touch operation and transmits the signal to the touch controller; the touch controller receives touch information from the touch detection device and converts it into touch point coordinates, which are then sent to the processor 280, and can receive commands from the processor 280 and execute them. In addition, the touch panel 231 may be implemented in various types such as resistive, capacitive, infrared, and surface acoustic wave. The input unit 230 may include other input devices 232 in addition to the touch panel 231. In particular, other input devices 232 may include, but are not limited to, one or more of a physical keyboard, function keys (e.g., volume control keys, switch keys, etc.), a trackball, mouse, joystick, etc.
The display unit 240 may be used to display information input by the user or information provided to the user and various menus of the mobile terminal. The display unit 240 may include a display panel 241; optionally, the display panel 241 may be configured in the form of a Liquid Crystal Display (LCD), an Organic Light-Emitting Diode (OLED) display, or the like. Further, the touch panel 231 may cover the display panel 241; when the touch panel 231 detects a touch operation on or near it, the operation is transferred to the processor 280 to determine the type of the touch event, and the processor 280 then provides a corresponding visual output on the display panel 241 according to the type of the touch event. Although in fig. 2 the touch panel 231 and the display panel 241 implement the input and output functions of the mobile terminal as two separate components, in some embodiments the touch panel 231 and the display panel 241 may be integrated to implement the input and output functions of the mobile terminal.
The mobile terminal may also include at least one sensor 250, such as a light sensor, a motion sensor, and other sensors. Specifically, the light sensor may include an ambient light sensor and a proximity sensor: the ambient light sensor may adjust the brightness of the display panel 241 according to the brightness of ambient light, and the proximity sensor may turn off the display panel 241 and/or the backlight when the mobile terminal moves to the ear. As one kind of motion sensor, an accelerometer sensor can detect the magnitude of acceleration in all directions (generally three axes), can detect the magnitude and direction of gravity when stationary, and can be used for applications that recognize the attitude of the mobile terminal (such as landscape/portrait switching, related games, and magnetometer attitude calibration), vibration-recognition related functions (such as a pedometer and tapping), and the like; other sensors such as a gyroscope, barometer, hygrometer, thermometer, and infrared sensor that may also be configured in the mobile terminal are not described in detail herein.
The speaker 261 and microphone 262 in the audio circuit 260 may provide an audio interface between the user and the mobile terminal. The audio circuit 260 may transmit the electrical signal converted from received audio data to the speaker 261, which converts it into a sound signal for output; on the other hand, the microphone 262 converts collected sound signals into electrical signals, which are received by the audio circuit 260 and converted into audio data; the audio data are then processed by the processor 280 and either transmitted via the RF circuit 210 to, for example, another mobile terminal, or output to the memory 220 for further processing.
WiFi is a short-range wireless transmission technology, and through the WiFi module 270 the mobile terminal can help the user send and receive e-mail, browse web pages, access streaming media, and the like, providing the user with wireless broadband Internet access. Although fig. 2 shows a WiFi module 270, it will be understood that it is not an essential component of the mobile terminal, and may be omitted entirely or replaced with other short-range wireless transmission modules, such as a Zigbee module or a WAPI module, as required within the scope of not changing the essence of the invention.
The processor 280 is a control center of the mobile terminal, connects various parts of the entire mobile terminal using various interfaces and lines, and performs various functions of the mobile terminal and processes data by running or executing software programs and/or modules stored in the memory 220, and calling data stored in the memory 220, thereby performing overall monitoring of the mobile terminal. Optionally, the processor 280 may include one or more processing units; preferably, the processor 280 may integrate an application processor that primarily handles operating systems, user interfaces, applications, etc., with a modem processor that primarily handles wireless communications. It will be appreciated that the modem processor described above may not be integrated into the processor 280.
The mobile terminal also includes a power supply 290 (e.g., a battery) for powering the various components, which may be logically connected to the processor 280 by a power management system, such as a power management system for performing functions such as managing charging, discharging, and power consumption.
Although not shown, the mobile terminal may further include a camera, a bluetooth module, etc., which will not be described herein.
In this embodiment, the processor 280 is configured to:
extracting a target rectangular frame containing a target vehicle from a current frame picture acquired by a vehicle front-view camera;
Performing pixel compensation on the target vehicle in the current frame picture to obtain a pixel value after the target vehicle compensation, wherein the pixel compensation comprises: width compensation for the tail width of the target vehicle and optical flow compensation for the tail height ratio of the target vehicle;
detecting whether a target track matched with the target rectangular frame exists in a history track library;
and if the target track matched with the target rectangular frame exists in the history track library, determining the motion state of the target vehicle in the current frame picture according to the target track and the pixel value of the target vehicle after compensation.
In some of these embodiments, the processor 280 is further configured to:
Determining side edge data, wheel line data and view angle data of the target vehicle according to the thumbnail of the target vehicle, wherein the side edge data are used for indicating the boundary line of two visible sides of the target vehicle, the wheel line data are used for indicating the connecting line of two visible wheels of the target vehicle, and the view angle data are used for indicating the relative position of the target vehicle;
performing width compensation on the tail width of the target vehicle in the current frame picture by using the wheel line data, the tail width observation value of the target vehicle and the parameters of the front-view camera to obtain the compensated tail width of the target vehicle;
Performing optical flow compensation on the tail height ratio of the target vehicle by utilizing the tail thumbnail of the target vehicle in the current frame picture and the previous frame picture to obtain the compensated tail height ratio of the target vehicle;
Wherein the pixel values after the target vehicle compensation include: the compensated tail width of the target vehicle and the compensated tail height ratio of the target vehicle.
In some of these embodiments, the processor 280 is further configured to:
determining the relative angle of the target vehicle by using the wheel line data and the parameters of the front-view camera;
And determining the tail width of the compensated target vehicle by using the relative angle of the target vehicle and the tail width observed value of the target vehicle.
In some of these embodiments, the processor 280 is further configured to:
Generating a pixel position offset matrix according to the tail thumbnail of the target vehicle in the current frame picture and the previous frame picture, wherein the pixel position offset matrix comprises: a row coordinate pixel position offset matrix and a column coordinate pixel position offset matrix;
calculating a compensation coefficient according to the row coordinate pixel position offset matrix;
and carrying out optical flow compensation on the tail height ratio of the target vehicle by using the compensation coefficient to obtain the compensated tail height ratio of the target vehicle.
In some of these embodiments, the processor 280 is further configured to:
Determining a predicted value of the target vehicle in the current frame picture, wherein the predicted value comprises: the motion state of the target vehicle in the current frame picture, which is obtained by prediction according to the motion state of the target vehicle in the previous frame picture, and the state covariance of the target vehicle in the current frame picture, which is obtained by prediction according to the state covariance of the target vehicle in the previous frame picture, wherein the motion state of the target vehicle comprises: the relative distance, relative speed, and relative acceleration of the target vehicle;
and updating the motion state and the state covariance of the target vehicle in the current frame picture according to the predicted value and the pixel value compensated by the target vehicle.
In some of these embodiments, the processor 280 is further configured to:
for each track in the historical track library, the following steps are performed:
predicting the position of the rectangular frame of the vehicle in the current frame picture at the current moment based on the position of the rectangular frame of the vehicle in the picture at the historical moment of the track;
And if the predicted position of the vehicle's rectangular frame in the current frame picture at the current moment is consistent with the position of the target rectangular frame in the current frame picture, determining the track as the target track matched with the target rectangular frame.
The embodiment provides a vehicle motion state determining method based on a forward-looking monocular camera. Fig. 3 is a flowchart of a forward-looking monocular camera-based vehicle motion state determining method according to an embodiment of the present application, as shown in fig. 3, the flowchart including the steps of:
Step S301, extracting a target rectangular frame containing a target vehicle from a current frame picture acquired by a vehicle front-view camera;
Step S302, performing pixel compensation on the target vehicle in the current frame picture to obtain a pixel value after the target vehicle compensation, where the pixel compensation includes: width compensation for the tail width of the target vehicle and optical flow compensation for the tail height ratio of the target vehicle;
Step S303, detecting whether a target track matched with the target rectangular frame exists in a history track library;
step S304, if it is detected that the target track matched with the target rectangular frame exists in the history track library, determining a motion state of the target vehicle in the current frame picture according to the target track and the pixel value compensated by the target vehicle.
In the embodiment of the application, the front-view camera of the vehicle is mounted at the front middle position, and the front-view camera continuously collects images at successive moments; the current frame picture is the image collected by the front-view camera at the current moment. If the current frame picture includes the target vehicle, the target rectangular frame containing the target vehicle is extracted from the current frame picture. Optionally, the extracted target rectangular frame may be stored in a target rectangular frame set.
Due to factors such as vehicle shake or viewing angle, errors may exist in the observed relative speed and distance of the target vehicle; therefore, the embodiment of the application performs pixel compensation on the target vehicle in the current frame picture so as to improve the accuracy of determining the motion state of the target vehicle at the current moment.
In some of these embodiments, pixel compensation of the target vehicle in the current frame picture may include: width compensation is carried out on the tail width of the target vehicle; and compensating the optical flow of the tail height ratio of the target vehicle.
Specifically, the step S302 of performing pixel compensation on the target vehicle in the current frame picture may include:
Step S3021, determining, according to a thumbnail of the target vehicle, side edge data, wheel line data and view angle data of the target vehicle, where the side edge data is used to indicate a boundary line between two visible sides of the target vehicle, the wheel line data is used to indicate a connecting line between two visible wheels of the target vehicle, and the view angle data is used to indicate a relative position of the target vehicle;
Step S3022, performing width compensation on the tail width of the target vehicle in the current frame picture by using the wheel line data, the tail width observation value of the target vehicle, and the parameters of the front-view camera, to obtain a compensated tail width of the target vehicle;
Step S3023, performing optical flow compensation on the tail height ratio of the target vehicle by using the tail thumbnails of the target vehicle in the current frame picture and the previous frame picture, to obtain the compensated tail height ratio of the target vehicle.
Wherein the pixel values after the target vehicle compensation include: the compensated tail width of the target vehicle; and the compensated tail height ratio of the target vehicle.
In some embodiments, the specific process of step S3022 may include:
determining the relative angle of the target vehicle by using the wheel line data and the parameters of the front-view camera;
And determining the tail width of the compensated target vehicle by using the relative angle of the target vehicle and the tail width observed value of the target vehicle.
It should be noted that the relative angle of the target vehicle may include a plurality of angles, such as the angle between a side surface of the target vehicle and the vertical direction, the angle between the inclined tail surface of the target vehicle and the horizontal direction, and the angle between the horizontal direction and the line connecting the host vehicle with a breakpoint on one side of the tail of the target vehicle. The compensated tail width of the target vehicle is calculated by using the relative angle of the target vehicle with respect to the host vehicle in combination with the tail width observation value of the target vehicle.
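As a minimal sketch of the width compensation step (the patent does not disclose its equations, so the flat-ground back-projection and the cosine correction below are assumptions for illustration), the relative yaw of the target can be estimated from the wheel line and the camera parameters, and the observed tail width corrected for foreshortening:

```python
import numpy as np

def ground_point(u, v, fx, fy, cx, cy, cam_height):
    # Back-project a pixel assumed to lie on a flat ground plane (a
    # wheel-ground contact point, below the horizon so v > cy) into
    # camera coordinates.
    Z = fy * cam_height / (v - cy)   # forward distance
    X = (u - cx) * Z / fx            # lateral offset
    return X, Z

def compensated_tail_width(wheel_line, tail_width_obs, fx, fy, cx, cy, cam_height):
    (u1, v1), (u2, v2) = wheel_line  # the two visible wheel contact points
    X1, Z1 = ground_point(u1, v1, fx, fy, cx, cy, cam_height)
    X2, Z2 = ground_point(u2, v2, fx, fy, cx, cy, cam_height)
    # The wheel line runs along the target's longitudinal axis, so its
    # direction on the ground gives the target's yaw relative to the camera.
    yaw = np.arctan2(abs(X2 - X1), abs(Z2 - Z1))
    # Assumed correction: a yawed tail face appears foreshortened by
    # roughly cos(yaw), so divide the observation to recover full width.
    return tail_width_obs / max(np.cos(yaw), 1e-6)
```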
In some embodiments, the specific process of step S3023 may include:
Generating a pixel position offset matrix according to the tail thumbnail of the target vehicle in the current frame picture and the previous frame picture, wherein the pixel position offset matrix comprises: a row coordinate pixel position offset matrix and a column coordinate pixel position offset matrix;
calculating a compensation coefficient according to the row coordinate pixel position offset matrix;
and carrying out optical flow compensation on the tail height ratio of the target vehicle by using the compensation coefficient to obtain the compensated tail height ratio of the target vehicle.
It should be noted that the pixel position offset may be determined from the coordinate positions of the same pixel in the two successive frame pictures, the compensation coefficient for the tail height ratio of the target vehicle may be determined from the pixel position offset, and the compensated tail height ratio of the target vehicle is then obtained by performing optical flow compensation on the tail height ratio with this coefficient.
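A hedged sketch of this step follows; Farneback dense optical flow is used here only as one concrete way to obtain the row- and column-coordinate pixel position offset matrices, and the mapping from row offsets to a compensation coefficient is an assumption, since the patent does not fix the formula:

```python
import cv2
import numpy as np

def compensated_height_ratio(tail_prev, tail_curr, height_ratio_obs):
    # Resize the two tail thumbnails (BGR patches) to a common size so
    # dense optical flow can compare them pixel by pixel.
    h, w = 64, 64
    prev_gray = cv2.cvtColor(cv2.resize(tail_prev, (w, h)), cv2.COLOR_BGR2GRAY)
    curr_gray = cv2.cvtColor(cv2.resize(tail_curr, (w, h)), cv2.COLOR_BGR2GRAY)
    # Dense optical flow: flow[..., 0] is the column-coordinate pixel position
    # offset matrix, flow[..., 1] is the row-coordinate offset matrix.
    flow = cv2.calcOpticalFlowFarneback(prev_gray, curr_gray, None,
                                        0.5, 3, 15, 3, 5, 1.2, 0)
    row_offsets = flow[..., 1]
    # Assumed coefficient: vertical expansion of the patch between frames,
    # read off as the difference between bottom-row and top-row mean offsets.
    expansion = (row_offsets[-1, :].mean() - row_offsets[0, :].mean()) / h
    coeff = 1.0 + expansion
    return height_ratio_obs * coeff
```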
By performing width compensation and optical flow compensation on the target vehicle in the current frame picture, the compensated tail width and the compensated tail height ratio of the target vehicle can be taken into account when determining the motion state of the target vehicle at the current moment, which in turn improves the accuracy of the motion state estimation of the target vehicle.
The history track library in the embodiment of the application may include a plurality of tracks, and the parameters of each track include the positions of the vehicle's rectangular frame in the pictures, the relative speed of the vehicle, the relative distance of the vehicle, and the like, at different moments. The embodiment of the application judges whether a target track matched with the target rectangular frame containing the target vehicle in the current frame picture exists in the history track library, predicts the motion state and state covariance of the target vehicle at the current moment based on its motion state and state covariance at the historical moments, and combines this prediction with the compensated tail width obtained through width compensation and the compensated tail height ratio obtained through optical flow compensation, so that the motion state and state covariance of the target vehicle at the current moment can be accurately determined.
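For concreteness, one possible layout of a track record in such a library is sketched below (the field names are illustrative assumptions; the patent only enumerates the kinds of quantities each track stores):

```python
from dataclasses import dataclass, field
from typing import List, Tuple

@dataclass
class Track:
    track_id: int
    timestamps: List[float] = field(default_factory=list)
    # Rectangular frame of the vehicle per moment, as (x, y, w, h) in pixels.
    boxes: List[Tuple[float, float, float, float]] = field(default_factory=list)
    rel_distances: List[float] = field(default_factory=list)  # relative distance per moment
    rel_speeds: List[float] = field(default_factory=list)     # relative speed per moment
```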
In some embodiments, the step S303 detecting whether there is a target track matching the target rectangular box in the history track library may include:
for each track in the historical track library, the following steps are performed:
predicting the position of the rectangular frame of the vehicle in the current frame picture at the current moment based on the position of the rectangular frame of the vehicle in the picture at the historical moment of the track;
And if the predicted position of the vehicle's rectangular frame in the current frame picture at the current moment is consistent with the position of the target rectangular frame in the current frame picture, determining the track as the target track matched with the target rectangular frame (a sketch of this matching loop is given below).
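The patent does not define what "consistent" means, so the minimal sketch below assumes an intersection-over-union (IoU) threshold between the predicted box and the detected target box; predict_box stands for any extrapolation of a track's historical boxes to the current moment, for example constant-velocity extrapolation of the box centers:

```python
def iou(a, b):
    # Boxes are (x, y, w, h); returns intersection-over-union in [0, 1].
    x1, y1 = max(a[0], b[0]), max(a[1], b[1])
    x2 = min(a[0] + a[2], b[0] + b[2])
    y2 = min(a[1] + a[3], b[1] + b[3])
    inter = max(0.0, x2 - x1) * max(0.0, y2 - y1)
    union = a[2] * a[3] + b[2] * b[3] - inter
    return inter / union if union > 0 else 0.0

def find_target_track(track_library, target_box, predict_box, iou_thresh=0.5):
    # predict_box(track) extrapolates the track's historical boxes to the
    # current moment; the threshold value is an assumption for illustration.
    for track in track_library:
        if iou(predict_box(track), target_box) >= iou_thresh:
            return track
    return None  # no matching target track in the history track library
```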
In some embodiments, the step S304 determines the motion state of the target vehicle in the current frame picture according to the target track and the pixel value after the target vehicle compensation; that is, it combines the prediction of the motion state and state covariance of the target vehicle at the current moment, obtained from its motion state and state covariance at the historical moments, with the compensated tail width obtained through width compensation and the compensated tail height ratio obtained through optical flow compensation, so as to determine the motion state and state covariance of the target vehicle at the current moment. The specific process may include:
Determining a predicted value of the target vehicle in the current frame picture, wherein the predicted value comprises: the motion state of the target vehicle in the current frame picture, which is obtained by prediction according to the motion state of the target vehicle in the previous frame picture, and the state covariance of the target vehicle in the current frame picture, which is obtained by prediction according to the state covariance of the target vehicle in the previous frame picture, wherein the motion state of the target vehicle comprises: the relative distance, relative speed, and relative acceleration of the target vehicle;
and updating the motion state and the state covariance of the target vehicle in the current frame picture according to the predicted value and the compensated pixel values of the target vehicle (the compensated tail width and the compensated tail height ratio).
The motion state estimation process is iterative, and the tracks in the history track library are updated continuously over time.
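The predict/update cycle described above is the classical Kalman filter recursion. The sketch below assumes a constant-acceleration state model over [relative distance, relative speed, relative acceleration] and a linear measurement matrix H; both are simplifying assumptions, since the patent does not disclose its state equations and the true pixel measurements are nonlinear in distance (an extended variant would be needed in practice):

```python
import numpy as np

def predict(x, P, dt, q=1e-2):
    # Constant-acceleration motion model over x = [distance, speed, accel].
    F = np.array([[1.0, dt, 0.5 * dt * dt],
                  [0.0, 1.0, dt],
                  [0.0, 0.0, 1.0]])
    Q = q * np.eye(3)                      # process noise (assumed magnitude)
    return F @ x, F @ P @ F.T + Q

def update(x_pred, P_pred, z, H, r=1.0):
    # z holds the compensated pixel values (tail width, tail height ratio),
    # mapped into the state space by the assumed linear matrix H.
    R = r * np.eye(len(z))                 # measurement noise (assumed)
    S = H @ P_pred @ H.T + R
    K = P_pred @ H.T @ np.linalg.inv(S)    # Kalman gain
    x = x_pred + K @ (z - H @ x_pred)
    P = (np.eye(len(x_pred)) - K @ H) @ P_pred
    return x, P
```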
Fig. 4 is a schematic diagram of the execution flow of a vehicle motion state determining method based on a forward-looking monocular camera according to a preferred embodiment of the present application. As shown in fig. 4, compared with the execution flow of the vehicle motion state estimation method in the related art, the embodiment of the present application adds the processes of detailed vehicle attribute prediction (i.e., step S3021 in the above embodiment), width compensation (i.e., step S3022 in the above embodiment), and optical flow compensation (i.e., step S3023 in the above embodiment). The improved motion state estimation process in the embodiment of the application comprehensively considers the target track in the history track library matched with the target rectangular frame containing the target vehicle, together with the compensated tail width and tail height ratio of the target vehicle, so that the motion state of the target vehicle at the current moment can be accurately determined.
The embodiment of the application greatly improves the camera's perception performance for the motion state of a target, including accuracy, real-time performance, stability, and the like. A vehicle motion state sensing scheme constructed based on the embodiment of the application can realize radar-free high-precision motion state sensing and greatly reduce cost.
It should be noted that the steps illustrated in the above flows or in the flowcharts of the figures may be performed in a computer system, such as one executing a set of computer-executable instructions, and that, although a logical order is illustrated in the flowcharts, in some cases the steps illustrated or described may be performed in an order other than that illustrated herein.
The embodiment provides a vehicle motion state determining device based on a forward-looking monocular camera, which is used to implement the foregoing embodiments and preferred implementations; what has already been described will not be repeated. As used below, the terms "module," "unit," "sub-unit," and the like may be a combination of software and/or hardware that implements a predetermined function. While the means described in the following embodiments are preferably implemented in software, implementation in hardware, or a combination of software and hardware, is also possible and contemplated.
Fig. 5 is a block diagram of a vehicle motion state determining apparatus based on a forward looking monocular camera according to an embodiment of the present application, as shown in fig. 5, the apparatus comprising:
An extracting unit 51, configured to extract a target rectangular frame including a target vehicle from a current frame picture acquired by a front-view camera of the vehicle;
A compensation unit 52, configured to perform pixel compensation on the target vehicle in the current frame picture, to obtain a pixel value after the target vehicle compensation, where the pixel compensation includes: width compensation for the tail width of the target vehicle and optical flow compensation for the tail height ratio of the target vehicle;
A detecting unit 53, configured to detect whether a target track matched with the target rectangular frame exists in the history track library;
And the determining unit 54 is configured to determine, if the target track matched with the target rectangular frame is detected to exist in the history track library, a motion state of the target vehicle in the current frame picture according to the target track and the pixel value compensated by the target vehicle.
In some of these embodiments, the compensation unit 52 includes:
a first determining module, configured to determine, according to a thumbnail of the target vehicle, side edge data, wheel line data, and view angle data of the target vehicle, where the side edge data is used to indicate a boundary line between two visible sides of the target vehicle, the wheel line data is used to indicate a connection line between two visible wheels of the target vehicle, and the view angle data is used to indicate a relative position of the target vehicle;
The first compensation module is used for carrying out width compensation on the tail width of the target vehicle in the current frame picture by utilizing the wheel line data, the tail width observation value of the target vehicle and the parameters of the front-view camera to obtain the tail width of the target vehicle after compensation;
The second compensation module is used for carrying out optical flow compensation on the tail height ratio of the target vehicle by utilizing the tail thumbnail of the target vehicle in the current frame picture and the previous frame picture to obtain the compensated tail height ratio of the target vehicle;
Wherein the pixel values after the target vehicle compensation include: the compensated tail width of the target vehicle and the compensated tail height ratio of the target vehicle.
In some of these embodiments, the first compensation module comprises:
The first determining submodule is used for determining the relative angle of the target vehicle by utilizing the wheel line data and the parameters of the front-view camera;
And the second determining submodule is used for determining the tail width of the target vehicle after compensation by using the relative angle of the target vehicle and the tail width observed value of the target vehicle.
In some of these embodiments, the second compensation module comprises:
A generating sub-module, configured to generate a pixel position offset matrix according to the tail thumbnail of the target vehicle in the current frame picture and the previous frame picture, where the pixel position offset matrix includes: a row coordinate pixel position offset matrix and a column coordinate pixel position offset matrix;
the calculating sub-module is used for calculating a compensation coefficient according to the row coordinate pixel position offset matrix;
And the compensation sub-module is configured to perform optical flow compensation on the tail height ratio of the target vehicle by using the compensation coefficient, to obtain the compensated tail height ratio of the target vehicle.
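A minimal sketch of the generating, calculating, and compensation sub-modules follows. The patent does not name a flow algorithm, so dense Farneback optical flow is used here as an assumption, and the specific compensation coefficient (the vertical expansion of the tail patch between frames) is likewise an illustrative choice; the claim only requires row/column pixel position offset matrices and a coefficient derived from the row offsets.

```python
import cv2
import numpy as np

def compensated_height_ratio(tail_prev, tail_curr, height_ratio_obs):
    """Sketch of the claimed optical-flow compensation under the stated
    assumptions (Farneback flow, expansion-based coefficient)."""
    g0 = cv2.cvtColor(tail_prev, cv2.COLOR_BGR2GRAY)
    g1 = cv2.cvtColor(tail_curr, cv2.COLOR_BGR2GRAY)
    g1 = cv2.resize(g1, (g0.shape[1], g0.shape[0]))  # flow needs equal sizes
    flow = cv2.calcOpticalFlowFarneback(g0, g1, None,
                                        0.5, 3, 15, 3, 5, 1.2, 0)
    # flow[..., 0]: column-coordinate offsets; flow[..., 1]: row-coordinate
    # offsets -- together, the two claimed pixel position offset matrices.
    row_off = flow[..., 1]
    h = row_off.shape[0]
    q = max(h // 4, 1)
    # One plausible coefficient: how much the patch stretched vertically,
    # measured as bottom-quarter minus top-quarter median row offset.
    stretch = (np.median(row_off[-q:]) - np.median(row_off[:q])) / h
    return height_ratio_obs * (1.0 + stretch)
```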
In some of these embodiments, the determining unit 54 includes:
A second determining module, configured to determine a predicted value of the target vehicle in the current frame picture, where the predicted value includes: the motion state of the target vehicle in the current frame picture, predicted from the motion state of the target vehicle in the previous frame picture, and the state covariance of the target vehicle in the current frame picture, predicted from the state covariance of the target vehicle in the previous frame picture, and where the motion state of the target vehicle includes: the relative distance, relative speed, and relative acceleration of the target vehicle;
and the updating module is configured to update the motion state and the state covariance of the target vehicle in the current frame picture according to the predicted value and the compensated pixel values of the target vehicle.
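The predict/update structure described above is the classical Kalman recursion. The sketch below assumes a constant-acceleration model over the state (relative distance, relative speed, relative acceleration) and assumes the compensated pixel values have already been converted into a distance observation (e.g. via the pinhole relation distance ≈ fx · real tail width / compensated tail width in pixels); both the motion model and the conversion are illustrative assumptions, not quoted from the patent.

```python
import numpy as np

def kalman_predict_update(x, P, z_dist, dt, q=1.0, r=0.5):
    """One constant-acceleration Kalman step over state
    x = [relative distance, relative speed, relative acceleration]."""
    F = np.array([[1.0, dt, 0.5 * dt * dt],
                  [0.0, 1.0, dt],
                  [0.0, 0.0, 1.0]])
    H = np.array([[1.0, 0.0, 0.0]])   # only relative distance is observed
    Q = q * np.eye(3)                 # process noise (tuning assumption)
    R = np.array([[r]])               # measurement noise (tuning assumption)

    # Predict from the previous frame's state and covariance.
    x = F @ x
    P = F @ P @ F.T + Q

    # Update with the distance derived from the compensated pixel values.
    y = np.array([z_dist]) - H @ x
    S = H @ P @ H.T + R
    K = P @ H.T @ np.linalg.inv(S)
    x = x + K @ y
    P = (np.eye(3) - K @ H) @ P
    return x, P
```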
In some of these embodiments, the detection unit 53 is configured to perform, for each track in the historical track library, the following steps:
predicting, based on the position of a vehicle rectangular frame in the picture at the historical moment of the track, the position of that vehicle rectangular frame in the current frame picture at the current moment;
And if the predicted position of the vehicle rectangular frame in the current frame picture at the current moment is consistent with the position of the target rectangular frame in the current frame picture, determining the track as the target track matching the target rectangular frame.
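The disclosure leaves "consistent" unquantified; an intersection-over-union (IoU) gate between the predicted rectangle and the detected target rectangle is one common way to implement the comparison, sketched below with a hypothetical threshold.

```python
def iou(a, b):
    """Intersection-over-union of two rectangles given as (x1, y1, x2, y2)."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / (area_a + area_b - inter + 1e-9)

def boxes_consistent(predicted_box, target_box, iou_threshold=0.5):
    """One plausible consistency test; the 0.5 threshold is an assumption."""
    return iou(predicted_box, target_box) >= iou_threshold
```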
Each of the above modules may be a functional module or a program module, and may be implemented by software or by hardware. For modules implemented by hardware, the modules may be located in the same processor, or may be distributed across different processors in any combination.
An embodiment of the present application further provides a computer device, by which the vehicle motion state determining method based on the forward-looking monocular camera can be implemented. Fig. 6 is a schematic diagram of a hardware structure of the computer device according to an embodiment of the present application.
The computer device may include a processor 61 and a memory 62 storing computer program instructions.
In particular, the processor 61 may include a Central Processing Unit (CPU) or an Application-Specific Integrated Circuit (ASIC), or may be configured as one or more integrated circuits implementing embodiments of the present application.
Memory 62 may include, among other things, mass storage for data or instructions. By way of example, and not limitation, memory 62 may comprise a Hard Disk Drive (HDD), a floppy disk drive, a Solid State Drive (SSD), flash memory, an optical disk, a magneto-optical disk, magnetic tape, or a Universal Serial Bus (USB) drive, or a combination of two or more of these. The memory 62 may include removable or non-removable (or fixed) media, where appropriate. The memory 62 may be internal or external to the data processing apparatus, where appropriate. In a particular embodiment, the memory 62 is a non-volatile memory. In particular embodiments, memory 62 includes Read-Only Memory (ROM) and Random Access Memory (RAM). Where appropriate, the ROM may be a mask-programmed ROM, a Programmable ROM (PROM), an Erasable PROM (EPROM), an Electrically Erasable PROM (EEPROM), an Electrically Alterable ROM (EAROM), or flash memory, or a combination of two or more of these. The RAM may be a Static Random-Access Memory (SRAM) or a Dynamic Random-Access Memory (DRAM), where the DRAM may be a Fast Page Mode DRAM (FPMDRAM), an Extended Data Out DRAM (EDODRAM), a Synchronous DRAM (SDRAM), or the like, as appropriate.
Memory 62 may be used to store or cache various data files that need to be processed and/or communicated, as well as possible computer program instructions for execution by processor 61.
The processor 61 implements any of the vehicle motion state determination methods based on the forward-looking monocular camera in the above embodiments by reading and executing the computer program instructions stored in the memory 62.
In some of these embodiments, the computer device may further include a communication interface 63 and a bus 60. As shown in fig. 6, the processor 61, the memory 62, and the communication interface 63 are connected to each other through the bus 60 and communicate with each other.
The communication interface 63 is used to implement communication between the modules, apparatuses, units, and/or devices in embodiments of the application. The communication interface 63 may also be used for data communication with other components, such as external devices, image/data acquisition devices, databases, external storage, and image/data processing workstations.
Bus 60 includes hardware, software, or both, coupling components of the computer device to one another. Bus 60 includes, but is not limited to, at least one of: a data bus, an address bus, a control bus, an expansion bus, or a local bus. By way of example, and not limitation, bus 60 may include an Accelerated Graphics Port (AGP) or other graphics bus, an Extended Industry Standard Architecture (EISA) bus, a Front Side Bus (FSB), a HyperTransport (HT) interconnect, an Industry Standard Architecture (ISA) bus, an InfiniBand interconnect, a Low Pin Count (LPC) bus, a memory bus, a Micro Channel Architecture (MCA) bus, a Peripheral Component Interconnect (PCI) bus, a PCI-Express (PCIe) bus, a Serial Advanced Technology Attachment (SATA) bus, a Video Electronics Standards Association Local Bus (VLB), or another suitable bus, or a combination of two or more of these. Bus 60 may include one or more buses, where appropriate. Although embodiments of the application have been described and illustrated with respect to a particular bus, the application contemplates any suitable bus or interconnect.
In addition, in combination with the vehicle motion state determining method based on the forward-looking monocular camera in the above embodiments, an embodiment of the present application may provide a computer-readable storage medium having computer program instructions stored thereon; when executed by a processor, the computer program instructions implement any of the forward-looking monocular camera based vehicle motion state determining methods of the above embodiments.
The technical features of the above embodiments may be combined arbitrarily. For brevity of description, not all possible combinations of these technical features are described; however, as long as a combination of technical features contains no contradiction, it should be considered to be within the scope of this specification.
The above examples illustrate only a few embodiments of the application, which are described in detail but are not to be construed as limiting the scope of the application. It should be noted that those skilled in the art can make several variations and modifications without departing from the spirit of the application, all of which fall within the protection scope of the application. Accordingly, the scope of protection of the present application shall be subject to the appended claims.
Claims (10)
1. A vehicle motion state determining method based on a forward-looking monocular camera, characterized by comprising the following steps:
extracting a target rectangular frame containing a target vehicle from a current frame picture acquired by a vehicle front-view camera;
Performing pixel compensation on the target vehicle in the current frame picture to obtain compensated pixel values of the target vehicle, wherein the pixel compensation includes: width compensation for the tail width of the target vehicle and optical flow compensation for the tail height ratio of the target vehicle;
detecting whether a target track matching the target rectangular frame exists in a historical track library;
and if the target track matching the target rectangular frame exists in the historical track library, determining the motion state of the target vehicle in the current frame picture according to the target track and the compensated pixel values of the target vehicle.
2. The method of claim 1, wherein the pixel compensating the target vehicle in the current frame picture comprises:
Determining side edge data, wheel line data and view angle data of the target vehicle according to the thumbnail of the target vehicle, wherein the side edge data are used for indicating the boundary line of two visible sides of the target vehicle, the wheel line data are used for indicating the connecting line of two visible wheels of the target vehicle, and the view angle data are used for indicating the relative position of the target vehicle;
performing width compensation on the tail width of the target vehicle in the current frame picture by using the wheel line data, the tail width observation value of the target vehicle and the parameters of the front-view camera to obtain the compensated tail width of the target vehicle;
Performing optical flow compensation on the tail height ratio of the target vehicle by utilizing the tail thumbnail of the target vehicle in the current frame picture and the previous frame picture to obtain the compensated tail height ratio of the target vehicle;
Wherein the compensated pixel values of the target vehicle include: the compensated tail width of the target vehicle and the compensated tail height ratio of the target vehicle.
3. The method according to claim 2, wherein the performing width compensation on the tail width of the target vehicle in the current frame picture by using the wheel line data, the tail width observation value of the target vehicle, and the parameters of the front-view camera to obtain the compensated tail width of the target vehicle includes:
determining the relative angle of the target vehicle by using the wheel line data and the parameters of the front-view camera;
And determining the compensated tail width of the target vehicle by using the relative angle of the target vehicle and the tail width observation value of the target vehicle.
4. The method according to claim 2, wherein performing optical flow compensation on the tail height ratio of the target vehicle by using the tail thumbnail of the target vehicle in the current frame picture and the previous frame picture, to obtain the compensated tail height ratio of the target vehicle, includes:
Generating a pixel position offset matrix according to the tail thumbnail of the target vehicle in the current frame picture and the previous frame picture, wherein the pixel position offset matrix comprises: a row coordinate pixel position offset matrix and a column coordinate pixel position offset matrix;
calculating a compensation coefficient according to the row coordinate pixel position offset matrix;
and carrying out optical flow compensation on the tail height ratio of the target vehicle by using the compensation coefficient to obtain the compensated tail height ratio of the target vehicle.
5. The method of claim 1, wherein determining the motion state of the target vehicle in the current frame picture from the target trajectory and the compensated pixel values of the target vehicle comprises:
Determining a predicted value of the target vehicle in the current frame picture, wherein the predicted value includes: the motion state of the target vehicle in the current frame picture, predicted from the motion state of the target vehicle in the previous frame picture, and the state covariance of the target vehicle in the current frame picture, predicted from the state covariance of the target vehicle in the previous frame picture, and wherein the motion state of the target vehicle includes: the relative distance, relative speed, and relative acceleration of the target vehicle;
and updating the motion state and the state covariance of the target vehicle in the current frame picture according to the predicted value and the compensated pixel values of the target vehicle.
6. The method of claim 1, wherein detecting whether a target track matching the target rectangular frame exists in the historical track library comprises:
for each track in the historical track library, the following steps are performed:
predicting, based on the position of a vehicle rectangular frame in the picture at the historical moment of the track, the position of that vehicle rectangular frame in the current frame picture at the current moment;
And if the predicted position of the vehicle rectangular frame in the current frame picture at the current moment is consistent with the position of the target rectangular frame in the current frame picture, determining the track as the target track matching the target rectangular frame.
7. A forward-looking monocular camera-based vehicle motion state determining apparatus, comprising:
The extraction unit is used for extracting a target rectangular frame containing a target vehicle from the current frame picture acquired by the vehicle front-view camera;
The compensation unit is configured to perform pixel compensation on the target vehicle in the current frame picture to obtain compensated pixel values of the target vehicle, where the pixel compensation includes: width compensation for the tail width of the target vehicle and optical flow compensation for the tail height ratio of the target vehicle;
the detection unit is configured to detect whether a target track matching the target rectangular frame exists in the historical track library;
And the determining unit is configured to determine, if the target track matching the target rectangular frame exists in the historical track library, the motion state of the target vehicle in the current frame picture according to the target track and the compensated pixel values of the target vehicle.
8. The apparatus of claim 7, wherein the compensation unit comprises:
a first determining module, configured to determine, according to a thumbnail of the target vehicle, side edge data, wheel line data, and view angle data of the target vehicle, where the side edge data is used to indicate a boundary line between two visible sides of the target vehicle, the wheel line data is used to indicate a connection line between two visible wheels of the target vehicle, and the view angle data is used to indicate a relative position of the target vehicle;
The first compensation module is configured to perform width compensation on the tail width of the target vehicle in the current frame picture by using the wheel line data, the tail width observation value of the target vehicle, and the parameters of the front-view camera, to obtain the compensated tail width of the target vehicle;
The second compensation module is used for carrying out optical flow compensation on the tail height ratio of the target vehicle by utilizing the tail thumbnail of the target vehicle in the current frame picture and the previous frame picture to obtain the compensated tail height ratio of the target vehicle;
Wherein the compensated pixel values of the target vehicle include: the compensated tail width of the target vehicle and the compensated tail height ratio of the target vehicle.
9. A computer device comprising a memory, a processor and a computer program stored on the memory and executable on the processor, characterized in that the processor implements the method according to any of claims 1 to 6 when executing the computer program.
10. A computer readable storage medium, on which a computer program is stored, characterized in that the program, when being executed by a processor, implements the method according to any one of claims 1 to 6.
Priority Applications (1)

Application Number | Priority Date | Filing Date | Title
---|---|---|---
CN202410067234.1A | 2024-01-17 | 2024-01-17 | Vehicle motion state determining method and device based on forward-looking monocular camera
Publications (1)

Publication Number | Publication Date
---|---
CN118115544A | 2024-05-31
Family

ID=91218800

Family Applications (1)

Application Number | Title | Priority Date | Filing Date
---|---|---|---
CN202410067234.1A (pending) | | 2024-01-17 | 2024-01-17

Country Status (1)

Country | Link
---|---
CN | CN118115544A (en)
Legal Events

Date | Code | Title | Description
---|---|---|---
| PB01 | Publication |
| SE01 | Entry into force of request for substantive examination |