WO2022178802A1 - Method and apparatus for detecting the start of a preceding vehicle - Google Patents

Method and apparatus for detecting the start of a preceding vehicle

Info

Publication number
WO2022178802A1
Authority
WO
WIPO (PCT)
Prior art keywords
preceding vehicle
image frame
vehicle
image
optical flow
Prior art date
Application number
PCT/CN2021/078027
Other languages
English (en)
French (fr)
Inventor
侯谊
吕勇
郭�东
张珺
吉沐舟
Original Assignee
华为技术有限公司
Priority date
Filing date
Publication date
Application filed by 华为技术有限公司
Priority to CN202180050794.0A priority Critical patent/CN115884910A/zh
Priority to PCT/CN2021/078027 priority patent/WO2022178802A1/zh
Publication of WO2022178802A1 publication Critical patent/WO2022178802A1/zh

Classifications

    • BPERFORMING OPERATIONS; TRANSPORTING
    • B60VEHICLES IN GENERAL
    • B60WCONJOINT CONTROL OF VEHICLE SUB-UNITS OF DIFFERENT TYPE OR DIFFERENT FUNCTION; CONTROL SYSTEMS SPECIALLY ADAPTED FOR HYBRID VEHICLES; ROAD VEHICLE DRIVE CONTROL SYSTEMS FOR PURPOSES NOT RELATED TO THE CONTROL OF A PARTICULAR SUB-UNIT
    • B60W40/00Estimation or calculation of non-directly measurable driving parameters for road vehicle drive control systems not related to the control of a particular sub unit, e.g. by using mathematical models
    • B60W40/10Estimation or calculation of non-directly measurable driving parameters for road vehicle drive control systems not related to the control of a particular sub unit, e.g. by using mathematical models related to vehicle motion
    • GPHYSICS
    • G08SIGNALLING
    • G08GTRAFFIC CONTROL SYSTEMS
    • G08G1/00Traffic control systems for road vehicles
    • G08G1/09Arrangements for giving variable traffic instructions
    • G08G1/0962Arrangements for giving variable traffic instructions having an indicator mounted inside the vehicle, e.g. giving voice messages
    • G08G1/0965Arrangements for giving variable traffic instructions having an indicator mounted inside the vehicle, e.g. giving voice messages responding to signals from another vehicle, e.g. emergency vehicle

Definitions

  • the embodiments of the present application relate to the field of vehicles, and in particular, to a method and device for detecting the start of a preceding vehicle.
  • the prior art adopts the method of intelligently recognizing the license plate of the preceding vehicle, or the method of intelligently recognizing the motion trajectory of the preceding vehicle, so as to detect, while the host vehicle is still stationary, that the preceding vehicle has started, and to alert the user.
  • because the prior-art vehicle start determination methods are prone to misjudgment or failure, their reliability is low.
  • embodiments of the present application provide a method and device for detecting the start of a preceding vehicle.
  • the device can determine whether the preceding vehicle starts based on the optical flow information corresponding to the images of the preceding vehicle in two adjacent image frames, so as to improve the accuracy and reliability of start recognition for the preceding vehicle.
  • an embodiment of the present application provides a method for detecting the start of a preceding vehicle.
  • the method includes: acquiring a first image frame, where the first image frame includes an image of a preceding vehicle; acquiring first optical flow information according to the first image frame and a second image frame, where the second image frame is the image frame preceding the first image frame and also includes an image of the preceding vehicle, and the first optical flow information is used to indicate the optical flow movement trend between the feature points of the preceding vehicle in the first image frame and the feature points of the preceding vehicle in the second image frame; and detecting, according to the first optical flow information, whether the preceding vehicle starts.
  • identifying the preceding vehicle based on optical flow information allows the device to avoid the misjudgment and repeated identification caused by partial occlusion of the preceding vehicle when recognizing the relative state between the preceding vehicle and the host vehicle. Therefore, the efficiency and accuracy of recognizing the preceding vehicle's start can be improved.
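  • the first-aspect flow above can be illustrated with a minimal sketch. Assuming the feature points of the preceding vehicle have already been matched between the two adjacent frames (for example by a pyramidal Lucas-Kanade tracker), each optical flow vector's magnitude and direction can be derived from the matched point pairs. All names here are illustrative, not from the patent:

```python
import math

def flow_vectors(prev_pts, curr_pts):
    """Build optical flow vectors (magnitude, direction) from feature points
    matched between the previous and current image frames.

    prev_pts / curr_pts: lists of (x, y) pixel coordinates; index i in both
    lists refers to the same tracked feature point of the preceding vehicle.
    """
    vectors = []
    for (x0, y0), (x1, y1) in zip(prev_pts, curr_pts):
        dx, dy = x1 - x0, y1 - y0
        magnitude = math.hypot(dx, dy)   # amplitude information
        direction = math.atan2(dy, dx)   # direction information, in radians
        vectors.append((magnitude, direction))
    return vectors

# Example: three feature points that all shift 3 px to the right
prev = [(10.0, 20.0), (30.0, 20.0), (20.0, 40.0)]
curr = [(13.0, 20.0), (33.0, 20.0), (23.0, 40.0)]
vecs = flow_vectors(prev, curr)
```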
  • the apparatus may acquire image frames according to a set period.
  • the previous image frame is an image frame acquired in the previous cycle adjacent to the current cycle.
  • the starting of the preceding vehicle may optionally be that the preceding vehicle moves forward relative to the host vehicle.
  • before acquiring the first image frame, the method further includes: acquiring a third image frame and a fourth image frame, where both the third image frame and the fourth image frame include an image of the preceding vehicle and the third image frame is adjacent to the fourth image frame; acquiring second optical flow information according to the third image frame and the fourth image frame, where the second optical flow information is used to indicate the optical flow movement trend of the feature points of the preceding vehicle in the fourth image frame relative to the feature points of the preceding vehicle in the third image frame; and determining, according to the second optical flow information, that the host vehicle and the preceding vehicle are relatively stationary.
  • the device can judge the stationary state between the preceding vehicle and the own vehicle based on the optical flow information.
  • the start of the preceding vehicle can be understood as the transition of the preceding vehicle and the host vehicle from a relatively static state to a relatively moving state in which the preceding vehicle moves forward. Therefore, the device can further determine whether the preceding vehicle starts on the basis of determining that the preceding vehicle and the host vehicle are relatively stationary.
  • the third image frame and the fourth image frame are optionally acquired before the first image frame.
  • the third image frame and the fourth image frame are optionally acquired after the second image frame. That is, before it is determined that the preceding vehicle starts, or after the preceding vehicle starts, the apparatus may determine whether the preceding vehicle and the host vehicle are relatively stationary based on the acquired image frames.
  • detecting whether the preceding vehicle starts according to the first optical flow information includes: detecting, according to the first optical flow information, whether the host vehicle and the preceding vehicle change from a relatively static state to a relatively moving state; wherein the relatively moving state includes the preceding vehicle moving forward relative to the host vehicle or the preceding vehicle moving backward relative to the host vehicle.
  • when the device recognizes that the preceding vehicle and the host vehicle are in a relatively stationary state, it can further monitor their state based on the optical flow to determine whether they have changed from the relatively stationary state to a relatively moving state. In this way the transition between the static state and the moving state of the preceding vehicle and the host vehicle is accurately identified, providing an efficient and accurate method of recognizing the start of the preceding vehicle.
  • the first optical flow information includes first amplitude information and first direction information; the first amplitude information is used to indicate the motion amplitude between the feature points of the preceding vehicle in the first image frame and the feature points of the preceding vehicle in the second image frame; the first direction information is used to indicate the motion direction between the feature points of the preceding vehicle in the first image frame and the feature points of the preceding vehicle in the second image frame.
  • the device can determine the moving direction of the preceding vehicle relative to the own vehicle based on the amplitude information and direction information of the optical flow, so as to accurately identify whether the preceding vehicle is moving forward relative to the own vehicle.
  • detecting whether the preceding vehicle starts according to the first optical flow information includes: when the first amplitude information is greater than or equal to a set first threshold, determining that the vehicle and the preceding vehicle are in a relative motion state.
  • the device can determine whether the preceding vehicle and the host vehicle are in a relative motion state based on the amplitude information of the optical flow. On the basis of determining that the preceding vehicle and the own vehicle are in a relative motion state, the device may further determine the moving direction of the preceding vehicle relative to the own vehicle.
  • detecting whether the preceding vehicle starts according to the first optical flow information includes: when the first amplitude information is less than or equal to a set second threshold, determining that the host vehicle and the preceding vehicle are in a relatively stationary state; the second threshold is less than the first threshold.
  • the device can determine whether the preceding vehicle is relatively stationary relative to the host vehicle based on the amplitude information of the optical flow.
  • the device may further detect whether the preceding vehicle and the own vehicle change from a static state to a moving state based on the amplitude information of the optical flow.
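  • the two-threshold scheme above (a first threshold for entering the relatively moving state, a smaller second threshold for the relatively stationary state) behaves like a hysteresis classifier: amplitudes in the band between the two thresholds keep the previous decision instead of flapping. A minimal sketch, with illustrative threshold values not taken from the patent:

```python
def classify_state(magnitudes, t_motion=2.0, t_static=0.5, prev_state="static"):
    """Classify the relative state between the host and preceding vehicle from
    the amplitude information of the optical flow vectors.

    t_motion plays the role of the first threshold, t_static the smaller
    second threshold; both values here are illustrative.
    """
    mean_mag = sum(magnitudes) / len(magnitudes)
    if mean_mag >= t_motion:
        return "moving"        # relatively moving state
    if mean_mag <= t_static:
        return "static"        # relatively stationary state
    return prev_state          # dead band: keep the last decision
```

Using the mean amplitude is one simple aggregation choice; the patent only requires comparing the amplitude information against the two thresholds.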
  • detecting whether the preceding vehicle starts according to the first optical flow information further includes: when the first direction information is greater than or equal to a set third threshold, determining that the preceding vehicle moves forward relative to the host vehicle.
  • the device can further determine the direction of movement of the vehicle in front relative to the vehicle based on the direction information of the optical flow, so as to accurately identify whether the vehicle in front is moving forward relative to the vehicle.
  • detecting whether the preceding vehicle starts according to the first optical flow information further includes: when the first direction information is less than or equal to a set fourth threshold, determining that the preceding vehicle moves backward relative to the host vehicle; the fourth threshold is smaller than the third threshold.
  • the device can further determine the moving direction of the preceding vehicle relative to the host vehicle based on the direction information of the optical flow, so as to accurately identify whether the preceding vehicle is moving forward or backward relative to the host vehicle.
  • the first optical flow information is the set of optical flow vectors between the feature points of the preceding vehicle in the first image frame and the feature points of the preceding vehicle in the second image frame, wherein each optical flow vector includes magnitude information and direction information.
  • determining that the preceding vehicle moves forward relative to the host vehicle includes: determining a convergence point based on the direction information of all the optical flow vectors; then, based on the convergence point, traversing all the optical flow vectors and determining whether the number of optical flow vectors pointing to the convergence point is greater than or equal to the set third threshold. In this way, the detection device can identify the movement trend of the optical flow vectors based on their direction information: if the preceding vehicle moves forward relative to the host vehicle, the optical flow vectors show a convergent trend, that is, they point to the convergence point; if the preceding vehicle moves backward relative to the host vehicle, the optical flow vectors show a divergent trend.
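  • the convergence test can be sketched as follows. As a simplification, the convergence point is approximated here by the centroid of the tracked feature points, and a vector counts as "pointing to" the convergence point when its displacement has a positive projection toward that point (the image of a receding vehicle shrinks toward its centre). Both the names and this centroid approximation are assumptions, not the patent's exact construction:

```python
def count_converging(prev_pts, curr_pts):
    """Count how many optical flow vectors point toward the convergence point.

    prev_pts / curr_pts: matched (x, y) feature points of the preceding
    vehicle in two adjacent frames.  The convergence point is approximated
    by the centroid of the previous-frame points.
    """
    cx = sum(x for x, _ in prev_pts) / len(prev_pts)
    cy = sum(y for _, y in prev_pts) / len(prev_pts)
    count = 0
    for (x0, y0), (x1, y1) in zip(prev_pts, curr_pts):
        dx, dy = x1 - x0, y1 - y0      # flow displacement
        tx, ty = cx - x0, cy - y0      # direction toward convergence point
        if dx * tx + dy * ty > 0:      # positive projection => converging
            count += 1
    return count

def preceding_vehicle_moves_forward(prev_pts, curr_pts, threshold):
    """Forward start is declared when enough vectors converge (third threshold)."""
    return count_converging(prev_pts, curr_pts) >= threshold
```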
  • acquiring the first image frame includes: according to a set condition, detecting whether the image of the preceding vehicle is included in the first image frame. In this way, the misjudgment caused by the car moving too fast can be avoided, so that the subsequent detection steps can be performed under the condition that the preceding vehicle is included in the two image frames.
  • the set conditions include: the area of the image of the preceding vehicle in the preceding vehicle detection area of the first image frame is greater than or equal to the set preceding vehicle detection threshold; if there are multiple images whose areas are greater than or equal to the set preceding vehicle detection threshold, the image closest to the bottom edge of the first image frame is selected as the image of the preceding vehicle. In this way, based on the set conditions, it can be accurately identified whether the preceding vehicle is included in the image frame.
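  • the selection rule above (area at least the detection threshold; among several qualifying candidates, the one closest to the bottom edge of the frame) can be sketched as a small helper. The box format and all names are illustrative assumptions:

```python
def select_preceding_vehicle(boxes, area_threshold, frame_height):
    """Pick the preceding vehicle among detected bounding boxes.

    boxes: list of (x, y, w, h) with (x, y) the top-left corner in pixels.
    Keep boxes whose area meets the detection threshold; among those, choose
    the one whose bottom edge (y + h) is closest to the bottom of the frame,
    i.e. the vehicle nearest to the host vehicle.  Returns None when no box
    qualifies.
    """
    candidates = [b for b in boxes if b[2] * b[3] >= area_threshold]
    if not candidates:
        return None
    # smallest gap between the box's bottom edge and the frame's bottom edge
    return min(candidates, key=lambda b: frame_height - (b[1] + b[3]))
```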
  • the size of the distribution area of the optical flow information in the image frame is smaller than the size of the image of the preceding vehicle in the image frame. In this way, the influence of other objects in the image frame, such as street lights, car lights, etc., on the optical flow algorithm can be effectively avoided.
  • the method further includes: acquiring a first image size of the preceding vehicle in the first image frame; when the first image size is smaller than a set fifth threshold, determining that the preceding vehicle is moving forward relative to the host vehicle; when the first image size is greater than or equal to the fifth threshold, determining that the preceding vehicle is moving backward relative to the host vehicle.
  • the embodiment of the present application can also determine whether the preceding vehicle starts based on the size change of the preceding vehicle in two adjacent images. Exemplarily, if the preceding vehicle and the host vehicle are in a relatively stationary state, the size of the preceding vehicle in two adjacent images is substantially the same (there may be a small difference). If the preceding vehicle and the host vehicle are in a relatively moving state, the size of the preceding vehicle in two adjacent images differs, that is, the size changes.
  • when it is not detected, according to the first optical flow information, that the preceding vehicle starts, the method further includes: acquiring a first image size of the preceding vehicle in the first image frame; when the first image size is smaller than a set fifth threshold, determining that the preceding vehicle moves forward relative to the host vehicle; when the first image size is greater than or equal to the fifth threshold, determining that the preceding vehicle moves backward relative to the host vehicle. In this way, the embodiment of the present application can also determine whether the preceding vehicle starts based on the size of the preceding vehicle in two adjacent images.
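  • the size-based fallback can be sketched as a small helper; the image size would typically be the pixel height of the preceding vehicle's bounding box, and the threshold value is application-specific. Names are illustrative:

```python
def start_by_size(image_size, fifth_threshold):
    """Fallback judgment when optical flow does not detect a start.

    A receding (forward-moving) preceding vehicle appears smaller in the
    frame; an approaching (backward-moving) one appears larger or equal.
    """
    if image_size < fifth_threshold:
        return "forward"    # preceding vehicle moves forward relative to host
    return "backward"       # preceding vehicle moves backward relative to host
```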
  • in this way, fusion between the optical flow judgment and the image size judgment of the preceding vehicle is realized, further improving the accuracy of recognizing the start of the preceding vehicle and preventing misjudgment.
  • the embodiments of the present application provide a method for identifying the starting of a preceding vehicle.
  • the method includes: acquiring a first image frame; acquiring a first image size of the preceding vehicle in the first image frame; when the first image size is smaller than a set first threshold, determining that the preceding vehicle moves forward relative to the host vehicle; when the first image size is greater than or equal to a second threshold, determining that the preceding vehicle moves backward relative to the host vehicle.
  • before acquiring the first image frame, the method further includes: acquiring a second image frame; acquiring a second image size of the preceding vehicle in the second image frame; when the second image size is larger than a preset third threshold, determining that the preceding vehicle and the host vehicle are in a relatively static state. Wherein, the second threshold is greater than the third threshold, and the third threshold is greater than the first threshold.
  • an embodiment of the present application provides a device for detecting the start of a preceding vehicle.
  • the device includes: a first acquisition module, configured to acquire a first image frame, where the first image frame includes an image of a preceding vehicle; a second acquisition module, configured to acquire first optical flow information according to the first image frame and a second image frame, where the second image frame is the image frame preceding the first image frame and includes an image of the preceding vehicle, and the first optical flow information is used to indicate the optical flow movement trend of the feature points of the preceding vehicle in the first image frame relative to the feature points of the preceding vehicle in the second image frame; and a detection module, configured to detect, according to the first optical flow information, whether the preceding vehicle starts.
  • the first acquisition module is further configured to acquire a third image frame and a fourth image frame before acquiring the first image frame, where both the third image frame and the fourth image frame include an image of the preceding vehicle, and the third image frame is adjacent to the fourth image frame.
  • the second acquisition module is further configured to acquire second optical flow information according to the third image frame and the fourth image frame; the second optical flow information is used to indicate the optical flow movement trend of the feature points of the preceding vehicle in the fourth image frame relative to the feature points of the preceding vehicle in the third image frame.
  • the detection module is further configured to determine, according to the second optical flow information, that the host vehicle and the preceding vehicle are relatively stationary.
  • the detection module is specifically configured to: detect, according to the first optical flow information, whether the host vehicle and the preceding vehicle change from a relatively static state to a relatively moving state; wherein the relatively moving state includes the preceding vehicle moving forward relative to the host vehicle or the preceding vehicle moving backward relative to the host vehicle.
  • the first optical flow information includes first amplitude information and first direction information; the first amplitude information is used to indicate the motion amplitude between the feature points of the preceding vehicle in the first image frame and the feature points of the preceding vehicle in the second image frame; the first direction information is used to indicate the motion direction between the feature points of the preceding vehicle in the first image frame and the feature points of the preceding vehicle in the second image frame.
  • the detection module is specifically configured to: when the first amplitude information is greater than or equal to a set first threshold, determine that the vehicle and the preceding vehicle are in a relative motion state.
  • the detection module is specifically configured to: when the first amplitude information is less than or equal to a set second threshold, determine that the host vehicle and the preceding vehicle are in a relatively static state; the second threshold is smaller than the first threshold.
  • when the first amplitude information is greater than or equal to the set first threshold, the detection module is further configured to: when the first direction information is greater than or equal to the set third threshold, determine that the preceding vehicle moves forward relative to the host vehicle.
  • when the first amplitude information is greater than or equal to the set first threshold, the detection module is further configured to: when the first direction information is less than or equal to the set fourth threshold, determine that the preceding vehicle moves backward relative to the host vehicle; the fourth threshold is smaller than the third threshold.
  • the first optical flow information is the set of optical flow vectors between the feature points of the preceding vehicle in the first image frame and the feature points of the preceding vehicle in the second image frame, wherein each optical flow vector includes magnitude information and direction information.
  • the detection module is used to: determine the convergence point based on the direction information of all optical flow vectors. Based on the convergence point, all optical flow vectors are traversed, and it is determined whether the number of optical flow vectors in all optical flow vectors pointing to the convergence point is greater than or equal to the set third threshold.
  • the first acquisition module is configured to detect whether the image of the preceding vehicle is included in the first image frame according to a set condition.
  • the set conditions include: the area of the image of the preceding vehicle in the preceding vehicle detection area of the first image frame is greater than or equal to the set preceding vehicle detection threshold; if there are multiple images whose areas are greater than or equal to the set preceding vehicle detection threshold, the image closest to the bottom edge of the first image frame is selected as the image of the preceding vehicle.
  • the size of the distribution area of the optical flow information in the image frame is smaller than the size of the image of the preceding vehicle in the image frame.
  • the device further includes a third acquisition module: the third acquisition module is configured to acquire a first image size of the preceding vehicle in the first image frame; the detection module is further configured to determine that the preceding vehicle moves forward relative to the host vehicle when the first image size is smaller than a set fifth threshold, and to determine that the preceding vehicle moves backward relative to the host vehicle when the first image size is greater than or equal to the fifth threshold.
  • the device further includes a third acquisition module: when the detection module does not detect, according to the first optical flow information, that the preceding vehicle starts, the third acquisition module is configured to acquire the first image size of the preceding vehicle in the first image frame; when the first image size is smaller than the set fifth threshold, the preceding vehicle is determined to move forward relative to the host vehicle; when the first image size is greater than or equal to the fifth threshold, the preceding vehicle is determined to move backward relative to the host vehicle.
  • the third aspect and any implementation manner of the third aspect correspond to the first aspect and any implementation manner of the first aspect, respectively.
  • for the technical effects corresponding to the third aspect and any implementation manner of the third aspect, reference may be made to the technical effects corresponding to the first aspect and any implementation manner of the first aspect, which will not be repeated here.
  • an embodiment of the present application provides a device for detecting the start of a preceding vehicle.
  • the device includes: an acquisition module, configured to acquire a first image frame and to acquire a first image size of the preceding vehicle in the first image frame; and a determining module, configured to determine that the preceding vehicle moves forward relative to the host vehicle when the first image size is smaller than a set first threshold, and to determine that the preceding vehicle moves backward relative to the host vehicle when the first image size is greater than or equal to a second threshold.
  • the acquisition module is further configured to acquire a second image frame and to acquire a second image size of the preceding vehicle in the second image frame; the determining module is further configured to determine that the preceding vehicle and the host vehicle are in a relatively static state when the second image size is larger than a set third threshold.
  • the second threshold is greater than the third threshold, and the third threshold is greater than the first threshold.
  • an embodiment of the present application provides a device for detecting the start of a preceding vehicle.
  • the apparatus includes at least one processor and an interface; the processor receives or transmits data through the interface; and the at least one processor is configured to invoke a software program stored in a memory to perform the method in the first aspect or any possible implementation of the first aspect.
  • the fifth aspect and any implementation manner of the fifth aspect correspond to the first aspect and any implementation manner of the first aspect, respectively.
  • for the technical effects corresponding to the fifth aspect and any implementation manner of the fifth aspect, reference may be made to the technical effects corresponding to the first aspect and any implementation manner of the first aspect, which will not be repeated here.
  • an embodiment of the present application provides a computer-readable storage medium.
  • the computer-readable storage medium stores a computer program, which when run on a computer or processor, causes the computer or processor to execute the method in the first aspect or any possible implementation manner of the first aspect.
  • the sixth aspect and any implementation manner of the sixth aspect correspond to the first aspect and any implementation manner of the first aspect, respectively.
  • for the technical effects corresponding to the sixth aspect and any implementation manner of the sixth aspect, reference may be made to the technical effects corresponding to the first aspect and any implementation manner of the first aspect, which will not be repeated here.
  • a computer program product comprises a software program which, when executed by a computer or processor, causes the method of the first aspect or any possible implementation of the first aspect to be performed.
  • the seventh aspect and any implementation manner of the seventh aspect correspond to the first aspect and any implementation manner of the first aspect, respectively.
  • for the technical effects corresponding to the seventh aspect and any implementation manner of the seventh aspect, reference may be made to the technical effects corresponding to the first aspect and any implementation manner of the first aspect, which will not be repeated here.
  • FIG. 1 is a schematic diagram of an exemplary application scenario;
  • FIG. 2a is a schematic flowchart of a method for detecting the start of a preceding vehicle according to an embodiment of the present application;
  • FIG. 2b is a schematic flowchart of a method for detecting the start of a preceding vehicle according to an embodiment of the present application;
  • FIG. 3 is a schematic diagram of an exemplary preceding vehicle detection area;
  • FIG. 4 is a schematic diagram of an exemplary preceding vehicle detection method;
  • FIG. 5 is a schematic flowchart of an exemplary relative static state determination;
  • FIG. 6a is a schematic diagram of exemplary image frame recognition;
  • FIG. 6b is a schematic diagram of exemplary image frame recognition;
  • FIG. 7a is a schematic diagram of an exemplary image frame recognition result;
  • FIG. 7b is a schematic diagram of an exemplary image frame recognition result;
  • FIG. 8 is a schematic diagram of exemplary image frame recognition;
  • FIG. 9 is a schematic diagram of an exemplary correspondence of feature points;
  • FIG. 10 is a schematic flowchart of an exemplary relative motion state determination;
  • FIG. 11 is a schematic diagram of an exemplary image of a preceding vehicle;
  • FIG. 12a is a schematic diagram of an exemplary image frame recognition result;
  • FIG. 12b is a schematic diagram of an exemplary image frame recognition result;
  • FIG. 13 is a schematic diagram of an exemplary correspondence of feature points;
  • FIG. 14 is a schematic diagram of an exemplary optical flow vector;
  • FIG. 15 is a schematic flowchart of an exemplary method for detecting the start of a preceding vehicle;
  • FIG. 16 is a schematic flowchart of an exemplary method for detecting the start of a preceding vehicle;
  • FIG. 17 is a schematic flowchart of an exemplary method for detecting the start of a preceding vehicle;
  • FIG. 18 is a schematic flowchart of an exemplary method for detecting the start of a preceding vehicle;
  • FIG. 19 is a schematic flowchart of an exemplary method for detecting the start of a preceding vehicle;
  • FIG. 20 is a schematic flowchart of an exemplary method for detecting the start of a preceding vehicle;
  • FIG. 21 is a schematic diagram of an exemplary image size of a preceding vehicle in an image frame;
  • FIG. 22 is a schematic diagram of an exemplary image size of a preceding vehicle in an image frame;
  • FIG. 23 is a schematic structural diagram of a preceding vehicle start detection device provided by an embodiment of the present application;
  • FIG. 24 is a schematic structural diagram of a device provided by an embodiment of the present application;
  • FIG. 25 is a schematic structural diagram of a chip provided by an embodiment of the present application.
  • "first" and "second" in the description and claims of the embodiments of the present application are used to distinguish different objects, rather than to describe a specific order of the objects.
  • for example, the first target object, the second target object, etc. are used to distinguish different target objects, rather than to describe a specific order of the target objects.
  • words such as "exemplary" or "for example" are used to represent examples or illustrations. Any embodiment or design described in the embodiments of the present application as "exemplary" or "for example" should not be construed as preferred or advantageous over other embodiments or designs. Rather, the use of words such as "exemplary" or "for example" is intended to present the related concepts in a specific manner.
  • multiple processing units refers to two or more processing units; multiple systems refers to two or more systems.
  • FIG. 1 is a schematic diagram of an application scenario provided by an embodiment of the present application.
  • this application scenario includes the preceding vehicle and the host vehicle (also referred to as the own vehicle).
  • the host vehicle is optionally a vehicle being driven by the user.
  • the preceding vehicle is optionally a vehicle traveling or stopped in front of the own vehicle.
  • FIG. 1 is only a schematic example. In other embodiments, the scene may include multiple preceding vehicles, which is not limited in this application.
  • FIG. 2a is a schematic flowchart of a method for detecting the start of a preceding vehicle according to an embodiment of the present application. As shown in Figure 2a, the process of the present application for detecting the start of the preceding vehicle mainly includes:
  • S10 Acquire a first image frame, where the first image frame includes an image of the preceding vehicle.
  • S20 Acquire first optical flow information according to the first image frame and the second image frame.
  • the second image frame is an image frame preceding the first image frame
  • the second image frame includes an image of the preceding vehicle.
  • the first optical flow information is used to indicate the optical flow movement trend between the feature points of the preceding vehicle in the first image frame and the feature points of the preceding vehicle in the second image frame.
  • FIG. 2 b is a schematic flowchart of a method for detecting the start of a preceding vehicle according to an embodiment of the present application. Please refer to Figure 2b, which specifically includes:
  • the vehicle is equipped with a driving recorder, or other camera devices installed at the front of the vehicle.
  • the driving recorder or the camera device can be installed on the inner side of the front glass of the vehicle. It can also be installed in other parts, which is not limited in this application.
  • a driving recorder is installed on the vehicle (referring to the own vehicle), and the frame rate of the driving recorder is 60fps as an example.
  • the frame rate represents the number of images collected by the driving recorder per second.
  • 60fps means that the driving recorder collects 60 images per second.
  • the detection device may periodically acquire images collected by the driving recorder.
  • the period length is 500 ms (milliseconds) as an example for description.
  • the cycle duration may also be longer or shorter, which is not limited in this application.
  • the detection device described in the embodiments of the present application is optionally integrated into a driving recorder.
  • the detection device may be a chip of a driving recorder.
  • the detection device may be a module integrated on the chip of the driving recorder.
  • the detection device may also be program code (or program instructions), and the program instructions may be executed by the processor of the driving recorder.
  • the detection device may also be a chip of the vehicle, a module integrated on the chip of the vehicle, or a program code executed by a processor in the vehicle.
  • the detection device is taken as an example of a hardened optical flow realization module in a vehicle.
  • the hardened optical flow realization module can be integrated on the chip where the processor of the vehicle is located, or can be outside the chip where the processor is located.
  • implementing the identification method in the embodiments of the present application through a separate hardened optical flow implementation module can reduce resource occupation of a central processing unit (Central Processing Unit, CPU) and ensure the efficiency of the identification method.
  • the detection device acquires an image (also referred to as an image frame) currently captured by the driving recorder at the arrival time of each cycle.
  • the image frame acquired by the detection device in this cycle is optionally the first image frame described in FIG. 2a. That is to say, while the driving recorder collects images at 60fps, the detection device can obtain the image currently collected by the driving recorder from the driving recorder every 500ms (ie, the cycle duration).
  • each time the detection apparatus acquires an image, it may execute the process shown in FIG. 2b once. That is to say, the flow shown in FIG. 2b is executed cyclically.
  • the detection device may detect, based on the last detection result, whether the own vehicle and the preceding vehicle are in a relatively stationary state.
  • the manner of acquiring the detection result will be described in detail in the following embodiments.
  • the previous detection result may be of two types: one indicates that the preceding vehicle and the own vehicle are in a relatively stationary state; the other indicates that the preceding vehicle and the own vehicle are not in a relatively stationary state.
  • the process proceeds to S103 .
  • the “non-relatively stationary state” is optionally a state where the own vehicle and the preceding vehicle are in a relative motion state.
  • the “non-relatively stationary state” may also mean that no detection result has been queried. The case of “no detection result found” will be described in the following embodiments.
  • the process proceeds to S105 .
  • the detection device may first identify whether the preceding vehicle is included in the current image frame.
  • FIG. 3 is an exemplary schematic diagram of a preceding vehicle detection area. Please refer to FIG. 3 , for example, an image frame 301 is taken as an example.
  • the shaded portion in FIG. 3 is the preceding vehicle detection area 302 .
  • the position of the preceding vehicle detection area 302 is optionally the middle of the image frame 301 .
  • the width of the preceding vehicle detection area 302 is optionally a quarter of the width of the image frame 301 .
  • the width and position of the preceding vehicle detection area 302 in the embodiment of the present application are only schematic examples, and are not limited in the present application.
  • FIG. 4 is an exemplary schematic diagram of a preceding vehicle detection manner. Please refer to FIG. 4, still taking the image frame 301 as an example.
  • the image frame 301 includes an image of a vehicle 401 (hereinafter referred to as vehicle 401 ) and an image of a vehicle 402 (hereinafter referred to as vehicle 402 ).
  • the detection device is pre-configured with a preceding vehicle judgment condition
  • the preceding vehicle judgment condition includes:
  • 1) the area of the image of the preceding vehicle in the preceding vehicle detection area is greater than or equal to the set preceding vehicle detection threshold.
  • the preceding vehicle judgment condition may further include: the distance between the preceding vehicle and the own vehicle is less than a set distance threshold (for example, 3m, which can be set according to actual needs, which is not limited in this application). It should be noted that the distance between the vehicle in front and the vehicle can be obtained by any distance detection method, which is not repeated in this application.
  • the detection device may determine whether there is a preceding vehicle in the image frame based on the above conditions. That is to say, in the embodiment of the present application, the "preceding vehicle” refers to a vehicle that satisfies the preceding vehicle judgment condition.
  • the set front vehicle detection threshold is 70% of the overall area of the vehicle image.
  • the area of the image of the vehicle 401 in the preceding vehicle detection area is larger than 70% of the image area of the vehicle 401 .
  • the area of the image of the vehicle 402 in the preceding vehicle detection area is larger than 70% of the area of the image of the vehicle 402 .
  • the method of recognizing the image area of the vehicle may refer to any image recognition method in the prior art, for example, an edge recognition method, which is not limited in this application.
  • both the vehicle 401 and the vehicle 402 in FIG. 4 satisfy the preceding vehicle judgment condition 1). Accordingly, the detection device may determine that there are currently two vehicles that satisfy the preceding vehicle judgment condition 1). The detection device then determines the preceding vehicle based on the preceding vehicle judgment condition 2).
  • the image of the vehicle 401 is closer to the bottom border of the image frame 301 than the image of the vehicle 402 .
  • the detection device may determine that the vehicle 401 is the preceding vehicle.
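The preceding vehicle judgment described above (area overlap with the detection area, then proximity to the bottom border) can be sketched as follows. This is a simplified illustration: the rectangle format, function names, and the defaults (the quarter-width centered detection area and the 70% threshold) follow the examples in this description and are not a definitive implementation.

```python
# Sketch of the preceding-vehicle judgment (illustrative names and formats).
# A detected vehicle box qualifies if >= 70% of its area lies inside the
# detection area; among qualifying boxes, the one closest to the bottom
# border of the frame is taken as the preceding vehicle.

def detection_area(frame_w, frame_h):
    """(x0, y0, x1, y1) of a centered detection area, 1/4 of the frame width."""
    w = frame_w // 4
    x0 = (frame_w - w) // 2
    return (x0, 0, x0 + w, frame_h)

def overlap_ratio(box, area):
    """Fraction of the vehicle box's area that falls inside `area`."""
    bx0, by0, bx1, by1 = box
    ax0, ay0, ax1, ay1 = area
    iw = max(0, min(bx1, ax1) - max(bx0, ax0))
    ih = max(0, min(by1, ay1) - max(by0, ay0))
    box_area = (bx1 - bx0) * (by1 - by0)
    return (iw * ih) / box_area if box_area else 0.0

def select_preceding_vehicle(boxes, frame_w, frame_h, thresh=0.7):
    area = detection_area(frame_w, frame_h)
    candidates = [b for b in boxes if overlap_ratio(b, area) >= thresh]
    if not candidates:
        return None
    # Condition 2): the candidate closest to the bottom border (largest y1).
    return max(candidates, key=lambda b: b[3])
```

In the FIG. 4 example, both vehicle boxes pass the 70% test, and the box nearer the bottom border (vehicle 401) is selected.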
  • if the detection device does not recognize that the image frame includes the preceding vehicle, the detection device does not perform processing, and the processing flow of this cycle ends. Correspondingly, the flow returns to S101 to continue processing the next frame. That is to say, in the current cycle, the detection device will not cache any detection result.
  • when the detection device executes the process shown in FIG. 2b in the next cycle and proceeds to S102, the detection device determines that no detection result has been queried, and can thus determine that the own vehicle and the preceding vehicle are in a non-relatively stationary state. That is, failure to recognize the preceding vehicle is one of the causes of the above-mentioned “no detection result found”.
  • the detection device may determine whether the vehicle and the preceding vehicle are relatively stationary based on the cached previous image frame and the current image frame.
  • FIG. 5 is a schematic flowchart of an exemplary relative static state determination. Please refer to Figure 5, which includes:
  • the detection device caches image information of a previous frame of image, and the image information optionally includes a preceding vehicle image and optical flow information in the previous frame of image.
  • the detection device may acquire the preceding vehicle image in the current image frame based on the size and position of the preceding vehicle image in the previous frame image. The manner in which the detection device acquires the preceding vehicle image and the optical flow image of the previous frame image in the previous cycle will be described below.
  • the exemplary previous image frame is optionally the second image frame described in FIG. 2a.
  • Fig. 6a is a schematic diagram exemplarily showing image frame recognition.
  • the image frame 301 shown in FIG. 4 is the image frame acquired last time.
  • the image frame acquired last time refers to the image frame acquired by the detection device in the previous period adjacent to the current period based on the image frame acquisition period (eg, 500 ms as described above).
  • the detection device may recognize the vehicle 401 in the image 301 based on an image recognition method (refer to the prior art, which is not limited in this application).
  • the detection device may further acquire the preceding vehicle area 601 based on the identified center point of the vehicle 401.
  • the center point of the vehicle 401 may be obtained by the detection device constructing a rectangular frame based on the edge of the vehicle 401 , and selecting the center of the rectangular frame as the center point of the vehicle 401 . This acquisition method is only a schematic example, and is not limited in this application.
  • the size of the preceding vehicle area 601 is a set size.
  • it can be 80*80 pixels.
  • the size of the preceding vehicle area may be set according to actual needs, which is not limited in this application.
  • the size of the front vehicle area is smaller than the size of the vehicle in the image to avoid the influence of other objects in the image, such as street lights, car lights, etc., on the optical flow algorithm.
  • the preceding vehicle area may also have other shapes, which are not limited in this application.
  • the detection device acquires the optical flow information of the preceding vehicle area 601 .
  • the detection device may identify feature points in the preceding vehicle area 601 based on the STCorner algorithm, and the feature points in the preceding vehicle area 601 are optical flow information.
  • the recognition result can be shown in Figure 7a.
  • the feature point may be an edge point, a center point, or a reflective point in an image, etc., which is not limited in this application.
  • a plurality of feature points in the preceding vehicle area 601 are the optical flow information of the previous image frame. It should be noted that the detection device may also acquire the feature points in the preceding vehicle area 601 based on other algorithms.
  • the algorithm in this application is only a schematic example, which is not limited in this application.
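The feature point identification step can be illustrated with a minimal corner-response computation. It is assumed here that the STCorner algorithm referenced above corresponds to the Shi-Tomasi minimum-eigenvalue corner measure; the window size and the synthetic example are illustrative only.

```python
import numpy as np

def box_sum(a, win):
    """Sum of `a` over a (2*win+1) x (2*win+1) window around each pixel."""
    k = 2 * win + 1
    p = np.pad(a, win)
    s = np.zeros_like(a)
    for dy in range(k):
        for dx in range(k):
            s += p[dy:dy + a.shape[0], dx:dx + a.shape[1]]
    return s

def shi_tomasi_response(img, win=2):
    """Shi-Tomasi corner response: the minimum eigenvalue of the local
    structure tensor. Large values indicate corner-like feature points."""
    img = np.asarray(img, dtype=np.float64)
    iy, ix = np.gradient(img)            # image gradients (rows, cols)
    ixx = box_sum(ix * ix, win)          # structure-tensor entries
    iyy = box_sum(iy * iy, win)
    ixy = box_sum(ix * iy, win)
    half_trace = (ixx + iyy) / 2.0
    return half_trace - np.sqrt(((ixx - iyy) / 2.0) ** 2 + ixy ** 2)
```

Thresholding this response (or taking its top-N peaks) inside the preceding vehicle area 601 yields the feature points used as optical flow information; a flat region scores zero and a straight edge scores far below a true corner.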
  • Fig. 6b is a schematic diagram exemplarily shown for image frame recognition.
  • the detection device may acquire an image 602 of the preceding vehicle based on the center point of the preceding vehicle area 601 .
  • the center point of the preceding vehicle area 601 may be the center of the rectangular frame of the preceding vehicle area 601 , eg, the intersection of two diagonal lines.
  • the acquisition method of the center thereof may be set according to actual requirements, which is not limited in this application.
  • the detection device may acquire a preceding vehicle image 602 with a set size based on the center point of the preceding vehicle area 601 .
  • the set size may be 120*120 pixels.
  • the size of the preceding vehicle image 602 is optionally larger than the size of the preceding vehicle area 601 to improve the accuracy of subsequent optical flow calculation.
  • the center of the preceding vehicle image 602 coincides with the center of the preceding vehicle area 601 .
  • the shape and size of the preceding vehicle image in the embodiments of the present application are schematic illustrations, and the shape and size of the preceding vehicle image may be set based on actual requirements, which are not limited in this application.
  • the detection device stores the acquired optical flow information of the previous frame image and the preceding vehicle image in the memory.
  • the preceding vehicle image saved by the detection device includes: the image content of the preceding vehicle image 602 , the size of the preceding vehicle image 602 and the position in the image frame 301 .
  • the position of the preceding vehicle image 602 in the image frame 301 may be the coordinates of the four vertices of the preceding vehicle image 602 in a coordinate system constructed with the bottom and side edges of the image frame 301 , which is not limited in this application.
  • the above-mentioned processing of the previous frame image, including image recognition, optical flow information acquisition, storage, etc., is all completed in the previous cycle. That is to say, in the previous cycle, the detection device has performed corresponding processing on the previous frame image, and obtained the corresponding image information (including the preceding vehicle image and the optical flow information).
  • FIG. 8 is a schematic diagram exemplarily shown for image frame recognition.
  • the image frame 801 is an image frame acquired in the current cycle.
  • the detection device may determine the preceding vehicle image based on the previous image frame, that is, the image frame 301.
  • based on the preceding vehicle image 602, the preceding vehicle image 802 is obtained in the image frame acquired in this cycle (ie, the image frame 801).
  • the detection device may acquire the preceding vehicle image 802 in the image frame 801 based on the size of the preceding vehicle image 602 and its position in the image frame 301.
  • the size and position of the preceding vehicle image 802 in the image frame 801 are the same as the size and position of the preceding vehicle image 602 in the image frame 301.
  • the preceding vehicle image 602 and the preceding vehicle image 802 may include all or part of the image of the vehicle 401, and may also include other background images.
  • the preceding vehicle image 802 also includes partial images of tires of the vehicle 402 . It should be noted that other background images in the preceding vehicle image 602 and the preceding vehicle image 802 will not affect the detection result in a high probability.
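Taking the same-size, same-position crop in the current frame (the step that produces the preceding vehicle image 802 from the image frame 801) reduces to plain array slicing. A minimal sketch, with an assumed (x, y, w, h) position format and clamping at the frame borders:

```python
import numpy as np

def crop_same_position(frame, pos):
    """Crop the region `pos` = (x, y, w, h), saved from the previous frame,
    out of the current frame, clamping to the frame borders."""
    x, y, w, h = pos
    fh, fw = frame.shape[:2]
    x0, y0 = max(0, x), max(0, y)
    x1, y1 = min(fw, x + w), min(fh, y + h)
    return frame[y0:y1, x0:x1]
```

With the position of the preceding vehicle image 602 saved in the previous cycle, calling this on the current frame yields the co-located crop used as the preceding vehicle image 802.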
  • the detection device may obtain the optical flow information in the preceding vehicle image 802 through an optical flow algorithm, based on the obtained preceding vehicle image 602 of the previous image frame, its optical flow information, and the preceding vehicle image 802 of the current image frame, and then obtain the optical flow vectors based on the optical flow information of the preceding vehicle image 602 and the optical flow information of the preceding vehicle image 802.
  • the optical flow algorithm may determine, based on the input optical flow information (ie, the multiple feature points in the preceding vehicle area 601) and the preceding vehicle image 602, the feature point in the preceding vehicle image 802 corresponding to each feature point in the preceding vehicle image 602 (ie, the optical flow information corresponding to the preceding vehicle image 802).
  • the optical flow algorithm may obtain an optical flow vector corresponding to each feature point based on the acquired multiple feature points of the preceding vehicle image 602 and the correspondence between the multiple feature points in the preceding vehicle image 802 .
  • FIG. 9 is a schematic diagram of the corresponding relationship of feature points exemplarily shown.
  • one of the feature points (ie, the feature point 901 ) in the multiple feature points (ie, the optical flow information) in the preceding vehicle image 602 obtained by the detection device is used as an example for description.
  • the feature point corresponding to the feature point 901 in the preceding vehicle image 802 obtained by the detection device through the optical flow algorithm is the feature point 902 .
  • the detection device obtains a corresponding vector (ie, an optical flow vector) based on the feature point 901 and the feature point 902 according to the optical flow algorithm, wherein the vector points to the feature point corresponding to the current image frame, that is, the feature point 902.
  • the detection device may acquire optical flow vectors corresponding to all or part of the feature points in the preceding vehicle image 602 .
  • tracking the optical flow based on the feature points in the preceding vehicle image 602 and the preceding vehicle image 802 can handle the case where the preceding vehicle moves rapidly: when the preceding vehicle moves rapidly, the corresponding feature points may fall outside the coverage of the preceding vehicle area 603. Therefore, using a tracking range larger than the preceding vehicle area 603 (ie, the feature point identification range) can effectively reduce misjudgments.
  • the detection device may also determine the corresponding preceding vehicle area in the image frame 801 based on the preceding vehicle area 601, and search for corresponding feature points. That is to say, in this example, the image features saved by the detection device are the preceding vehicle area 601 and the optical flow information, and there is no need to save the preceding vehicle image.
  • the optical flow algorithm described in the embodiments of the present application is only a schematic example.
  • the purpose of the optical flow algorithm is to track the optical flow based on the optical flow in the previous image frame, which can also be called feature point tracking, so as to obtain the movement trend between the optical flow in the current image frame and the optical flow in the previous image frame.
  • This application does not limit the specific calculation method of the optical flow algorithm.
  • the detection device may further acquire an image pyramid of the preceding vehicle image 602 and an image pyramid of the preceding vehicle image 802, and use the image pyramids of the preceding vehicle image 602 and the preceding vehicle image 802 as the input parameters of the optical flow algorithm, thereby improving the accuracy of feature point recognition.
  • the length of each optical flow vector is used to indicate the motion magnitude between the two feature points that construct the optical flow vector, and may also be referred to as an offset.
  • the detection device may obtain the median value of the lengths of all optical flow vectors, which may also be understood as the median value of the motion amplitudes of the feature points.
  • the detection device may take the average value of the lengths of all vectors, etc., which is not limited in this application.
  • the detection device detects that the median value is less than or equal to the set static threshold.
  • the flow proceeds to S206.
  • the static threshold is optionally 2 pixels. In other embodiments, the static threshold may also be other values, which are not limited in this application.
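The offset statistics above (optical flow vector lengths, their median, and the threshold comparison) can be sketched as follows. The matched point pairs are assumed to come from the optical flow algorithm described above (the tracker itself is not shown); the 2-pixel static threshold here, and the 6-pixel motion threshold used later in this description, are the example values given in the text.

```python
import numpy as np

# prev_pts and curr_pts are matched feature points, e.g. the output of the
# optical flow (feature point tracking) step applied to the preceding-vehicle
# images of the previous and current frames.

STATIC_THRESHOLD = 2.0  # pixels (S103: relatively stationary)
MOTION_THRESHOLD = 6.0  # pixels (S105: relative motion)

def median_offset(prev_pts, curr_pts):
    """Median length of the optical flow vectors (per-point motion amplitude)."""
    vectors = np.asarray(curr_pts, float) - np.asarray(prev_pts, float)
    lengths = np.linalg.norm(vectors, axis=1)
    return float(np.median(lengths))

def is_relatively_stationary(prev_pts, curr_pts):
    return median_offset(prev_pts, curr_pts) <= STATIC_THRESHOLD

def is_relative_motion(prev_pts, curr_pts):
    return median_offset(prev_pts, curr_pts) > MOTION_THRESHOLD
```

Using the median rather than the mean, as the description notes, makes the statistic robust to a few badly tracked feature points.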
  • the detection device obtains the detection result.
  • the detection result includes indicating that the host vehicle and the preceding vehicle are in a relatively stationary state, or indicating that the host vehicle and the preceding vehicle are in a relative motion state.
  • the detection device detects that the vehicle and the preceding vehicle are in a relatively stationary state within this cycle.
  • the detection device saves the detection result, as well as the optical flow information of the current image frame and the preceding vehicle image 802 .
  • the detection result indicates that the preceding vehicle and the host vehicle are relatively stationary.
  • the optical flow information of the current image frame may be obtained by the detection device performing the optical flow information acquisition step on the current image frame again, or may be obtained when the detection device executes the optical flow algorithm, such as the feature point 902. This is not limited in this application.
  • the detection device detects that the host vehicle and the preceding vehicle are in a relative motion state within the current cycle.
  • the detection device saves the detection result, as well as the optical flow information of the current image frame and the preceding vehicle image 802 .
  • the detection result indicates that the preceding vehicle and the own vehicle are in a relative motion state.
  • this cycle of processing ends, and the flow returns to S101. That is to say, in the next cycle, the detection device can judge the relative motion state of the image frame acquired in the next cycle based on the detection result stored this time (indicating that the preceding vehicle and the own vehicle are in a relatively stationary state or a relative motion state) and the image information of the image frame (ie, the optical flow information of the current image frame).
  • the detection device detects that the vehicle and the preceding vehicle are in a relative motion state within this cycle.
  • the detection apparatus may not save the detection result, but only save the optical flow information of the current image frame and the preceding vehicle image 802. That is to say, in the next cycle, when the flow proceeds to S102, the detection device does not obtain the previous detection result (ie, the detection result of this cycle), and thus determines that the preceding vehicle and the own vehicle are in a non-relatively stationary state; the flow then proceeds to S103.
  • S105 based on the previous image frame and the current image frame, detect whether the own vehicle and the preceding vehicle change from a relatively stationary state to a relative motion state.
  • the detection device may further detect whether the own vehicle and the preceding vehicle have changed from a relatively stationary state to a relative motion state. It should be noted that, in the embodiment of the present application, to determine whether the preceding vehicle starts, it is necessary to first determine whether the preceding vehicle is moving relative to the own vehicle, and then determine whether the preceding vehicle is moving forward. Therefore, in this embodiment of the present application, the detection device first performs detection of the relatively stationary state of the preceding vehicle and the own vehicle, that is, the step described in S103, and only after the preceding vehicle and the own vehicle were in a relatively stationary state in the last detection does it perform detection of the relative motion state, that is, the step described in S105.
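The cyclic flow just described (cache check, relative-static detection, then static-to-motion detection) can be condensed into a small state machine. This is a sketch rather than the claimed implementation: detect_vehicle and median_offset stand in for the preceding-vehicle recognition and optical flow steps described above, and the threshold values follow the examples in the text.

```python
# Condensed sketch of the per-cycle flow of FIG. 2b (S101-S106).
# detect_vehicle(frame) -> preceding-vehicle image info, or None if absent.
# median_offset(prev_info, frame) -> median optical-flow offset in pixels.
# Both callables are assumptions standing in for the steps described above.

STATIC_THRESHOLD = 2.0   # pixels
MOTION_THRESHOLD = 6.0   # pixels

class StartDetector:
    def __init__(self, detect_vehicle, median_offset):
        self.detect_vehicle = detect_vehicle
        self.median_offset = median_offset
        self.prev_info = None      # cached preceding-vehicle image / optical flow
        self.last_static = False   # cached detection result

    def on_cycle(self, frame):
        """Called once per acquisition period (e.g. every 500 ms). Returns
        'started' when a stationary-to-moving transition is detected."""
        if not self.last_static:                    # S102 -> S103
            info = self.detect_vehicle(frame)
            if info is None:                        # no preceding vehicle found
                self.prev_info, self.last_static = None, False
                return None
            if self.prev_info is not None:          # compare with last frame
                offset = self.median_offset(self.prev_info, frame)
                self.last_static = offset <= STATIC_THRESHOLD
            self.prev_info = info
            return None
        # Last result was "relatively stationary": S105.
        offset = self.median_offset(self.prev_info, frame)
        if offset > MOTION_THRESHOLD:               # static -> moving
            self.prev_info, self.last_static = None, False  # clear cache
            return "started"                        # S106: direction check / alarm
        self.prev_info = self.detect_vehicle(frame)
        return None
```

A toy run with scalar "frames" (detection returns the frame itself, offset is the absolute difference) shows the stationary state being cached in the second cycle and the start detected in the third.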
  • FIG. 10 is an exemplary schematic flowchart of relative motion state determination. Please refer to Figure 10, including:
  • FIG. 11 is a schematic diagram of an exemplarily shown image of a preceding vehicle.
  • the image frame 301 in FIG. 6 is still the previous image frame.
  • in FIG. 11, an image frame 1101 is acquired in the current cycle.
  • the detection device may obtain the preceding vehicle image 1102 of the current image frame 1101 based on the preceding vehicle image 602 of the image frame 301, and the specific obtaining method may refer to the relevant content in S201, which will not be repeated here.
  • the optical flow information (ie, a plurality of feature points) in the preceding vehicle area 601 in the previous image frame 301 is stored in the memory of the detection device.
  • the detection device can obtain, through an optical flow algorithm, the optical flow information in the preceding vehicle image 1102 and the corresponding optical flow vectors, based on the preceding vehicle image 602 and the optical flow information of the previous image frame and the preceding vehicle image 1102 of the current image frame 1101.
  • the optical flow algorithm may obtain an optical flow vector corresponding to each feature point based on the acquired multiple feature points of the preceding vehicle image 602 and the correspondence between the multiple feature points in the preceding vehicle image 1102 .
  • FIG. 13 is a schematic diagram of the corresponding relationship of feature points exemplarily shown.
  • the feature points (ie, optical flow information) in the preceding vehicle image 602 obtained by the detection device include the feature point 1302, and the feature point corresponding to the feature point 1302 in the preceding vehicle image 1102, obtained by the detection device through the optical flow algorithm, is the feature point 1301.
  • the detection device obtains a corresponding vector (ie, an optical flow vector) based on the feature point 1301 and the feature point 1302 according to the optical flow algorithm, wherein the vector points to the feature point corresponding to the current image frame, that is, the feature point 1301 .
  • the processing of other feature points is similar and will not be repeated here.
  • the motion threshold is greater than the stationary threshold described above.
  • the motion threshold can be set to 6 pixels. In other embodiments, the motion threshold may also be other values, which are not limited in this application.
  • the detection device has determined that in the last detection result, the host vehicle and the preceding vehicle have been in a relatively stationary state.
  • the detection device determines that the own vehicle and the preceding vehicle are in a relatively stationary state. It can be understood that the own vehicle and the preceding vehicle have remained relatively stationary within two period intervals, for example, within 500 ms.
  • the detection device has determined that in the last detection result, the host vehicle and the preceding vehicle have been in a relatively stationary state.
  • the detection device determines that the own vehicle and the preceding vehicle are in a relative motion state
  • the detection device can determine that the own vehicle and the preceding vehicle have changed from a relatively stationary state to a relative motion state. The flow proceeds to S106 in FIG. 2b.
  • the detection apparatus may acquire vectors corresponding to all or part of the feature points in the preceding vehicle area 603 in the image frame 1101 .
  • the lengths, directions and quantities of the optical flow vectors in FIG. 14 are only schematic examples, and are not limited in this application.
  • each optical flow vector has a length and a direction.
  • the length is used to indicate the motion amplitude between the feature points.
  • the direction of the optical flow vector is optionally used to indicate the direction of movement between feature points.
  • the detection device can use the direction information of all optical flow vectors to obtain the motion direction between the feature points in the preceding vehicle area of the current image frame and the feature points in the preceding vehicle area of the previous image frame.
  • the detection device may further determine whether the preceding vehicle moves forward relative to the own vehicle, that is, whether the preceding vehicle starts, based on the acquired moving directions between the feature points.
  • the specific steps of the detection device detecting whether the preceding vehicle moves forward may include:
  • the detection device may find a convergence point 1401 (also referred to as a convergence center) based on the intersection of a plurality of optical flow vectors in all optical flow vectors. It should be noted that, ideally, the distance between the convergence point and all vectors should be 0. Therefore, when there are multiple intersections formed by all the optical flow vectors, the convergence point can be determined based on the sum of the distances from each obtained intersection to all the vectors. Exemplarily, the intersection with the smallest sum of distances is the convergence point.
  • after the detection device determines the convergence point, it can traverse all the optical flow vectors and, according to the direction information of each optical flow vector, determine the number of optical flow vectors pointing to the convergence point among all the optical flow vectors.
  • if the number of optical flow vectors pointing to the convergence point is greater than or equal to the first optical flow threshold, the detection device may determine that the preceding vehicle moves forward.
  • the first optical flow threshold is optionally 60% of the total optical flow. In other embodiments, the first optical flow threshold may also be other values, which are not limited in this application.
  • otherwise, the detection device may determine that the preceding vehicle moves backward. It should be noted that if the preceding vehicle moves backward, the directions of the optical flow vectors are divergent. Therefore, the number of optical flow vectors pointing to the convergence point is very small or zero.
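The convergence point search and the forward/backward decision can be sketched as follows. Assumptions in this sketch: the convergence point is taken as the pairwise line intersection with the smallest summed distance to all optical flow lines (as described above), a vector is counted as "pointing to" the convergence point when it moves its feature point toward that point, and the 60% ratio is the first optical flow threshold given in the text.

```python
import numpy as np

def line_point_distance(p, a, d):
    """Distance from point p to the line through a with direction d."""
    d = d / np.linalg.norm(d)
    v = p - a
    return abs(v[0] * d[1] - v[1] * d[0])

def intersect(a1, d1, a2, d2):
    """Intersection of the lines a1 + t*d1 and a2 + s*d2, or None if parallel."""
    m = np.array([[d1[0], -d2[0]], [d1[1], -d2[1]]], float)
    if abs(np.linalg.det(m)) < 1e-9:
        return None
    t, _ = np.linalg.solve(m, a2 - a1)
    return a1 + t * d1

def preceding_vehicle_direction(prev_pts, curr_pts, ratio_thresh=0.6):
    prev = np.asarray(prev_pts, float)
    curr = np.asarray(curr_pts, float)
    dirs = curr - prev                       # optical flow vectors
    n = len(prev)
    # Candidate convergence points: all pairwise line intersections.
    candidates = [intersect(prev[i], dirs[i], prev[j], dirs[j])
                  for i in range(n) for j in range(i + 1, n)]
    candidates = [c for c in candidates if c is not None]
    if not candidates:
        return "unknown"
    # Pick the intersection minimizing the summed distance to all flow lines.
    conv = min(candidates,
               key=lambda c: sum(line_point_distance(c, prev[k], dirs[k])
                                 for k in range(n)))
    # Count vectors that move their feature point toward the convergence point.
    pointing = sum(np.dot(dirs[k], conv - prev[k]) > 0 for k in range(n))
    return "forward" if pointing / n >= ratio_thresh else "backward"
```

With synthetic converging flow (all points moving toward a common point) the result is "forward"; with the same lines but divergent motion the count of pointing vectors drops to zero and the result is "backward", matching the description above.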
  • after the detection device detects that the preceding vehicle starts, the detection device gives an alarm to prompt the user that the preceding vehicle has started.
  • the detection device may give an alarm through the audio device of the vehicle or through the prompt sound of the driving recorder.
  • the detection device may also issue an alarm to prompt the user that the vehicle in front is moving backward.
  • the detection apparatus clears the cached information, and executes S101 again.
  • the emptied cache information includes but is not limited to: the recorded at least one detection result, the recorded image information of the previous image frame (for example, including the optical flow information of the previous image frame and the image of the preceding vehicle) and the like.
• the detection device may determine, based on multiple cached detection results, whether the host vehicle and the preceding vehicle have been in a relatively stationary state for a set period of time, i.e., for multiple periods. And after it is detected that the multiple cached detection results all indicate that the host vehicle and the preceding vehicle are relatively stationary, S105 is executed.
• the detection device may determine that the preceding vehicle and the host vehicle have changed from a relatively static state to a moving state based on image frames of multiple consecutive periods, and then perform S106, to prevent misjudgment.
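The multi-cycle confirmation described above can be sketched with a small ring buffer of per-cycle detection results. The class name, return strings, and the confirmation window of 3 cycles are illustrative assumptions; the description above does not fix a specific window size.

```python
from collections import deque

class StartConfirmer:
    """Confirm states over multiple consecutive detection cycles: only
    report a possible start when a confirmed relatively-stationary run
    is followed by a relative-motion result."""

    def __init__(self, n_confirm=3):
        # Keeps only the last n_confirm per-cycle results.
        self.history = deque(maxlen=n_confirm)

    def update(self, relatively_stationary: bool) -> str:
        was_stationary = (len(self.history) == self.history.maxlen
                          and all(self.history))
        self.history.append(relatively_stationary)
        if was_stationary and not relatively_stationary:
            return "preceding vehicle may have started"
        if len(self.history) == self.history.maxlen and all(self.history):
            return "relatively stationary (confirmed)"
        return "undetermined"
```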
• the detection device may first detect whether the host vehicle is in a stationary state. And after it is determined that the host vehicle is in a stationary state, it is determined whether the host vehicle and the preceding vehicle are in a relatively stationary state. It can also be understood that both the host vehicle and the preceding vehicle are in an absolutely stationary state relative to the ground.
• the detection device may acquire parameters collected by an accelerometer, a gyroscope, etc. integrated in the vehicle, so as to determine whether the vehicle is absolutely stationary relative to the ground based on the acquired parameters.
• if the detection device detects that the host vehicle is not in a stationary state, it may determine that the detection result is a non-relatively-stationary state.
• FIGS. 15 to 19 are schematic flowcharts of an exemplary method for detecting the start of a preceding vehicle. The method for detecting the start of a preceding vehicle in the embodiments of the present application will be described in detail below in turn based on FIGS. 15 to 19.
  • FIG. 15 is a schematic flowchart of an exemplary method for detecting the start of a preceding vehicle. Please refer to Figure 15, which includes:
  • the driving recorder starts to collect images.
  • the detection device acquires the first image frame at the trigger moment of the current detection cycle.
  • S402 Determine whether the detection result and/or the image information of the image frame are cached.
• take the case where the detection device does not detect a cached detection result or cached image information of an image frame as an example. It should be noted that there are various reasons why the detection device does not detect a buffered detection result or buffered image information. In one example, as described above, the detection device clears the cache after the start of the preceding vehicle was detected last time. In another example, when the detection device did not detect the preceding vehicle in the last image frame, it also did not cache the detection result and the image information. In yet another example, when the vehicle is just started, that is, after the driving recorder and the detection device are initially started, the detection device also has not stored any detection result or image information.
  • S403 Detect whether the first image frame includes a preceding vehicle.
• take the case where the first image frame includes the preceding vehicle as an example for description. The flow proceeds to S404.
• if the detection device detects that the first image frame includes a preceding vehicle, it may further acquire the feature points in the preceding vehicle area of the first image frame, that is, the optical flow information, and acquire the image pyramid of the preceding vehicle image. For specific details, reference may be made to the above, which will not be repeated here.
  • the detection device buffers the image information of the first image frame, that is, the optical flow information of the first image frame and the image pyramid of the preceding vehicle image.
  • FIG. 16 is a schematic flowchart of an exemplary method for detecting the start of a preceding vehicle. Please refer to Figure 16, which includes:
  • the detection device obtains the image collected at the current moment from the driving recorder.
  • the duration of the interval between the acquisition of the first image frame and the second image frame is the above-mentioned period duration (which may also be referred to as the detection period duration).
  • the period duration is 500ms.
  • S502 Determine whether the detection result and/or the image information of the image frame are cached.
• the detection apparatus buffered the image information of the first image frame in the previous cycle, such as the optical flow information and the image pyramid of the preceding vehicle image.
• therefore, the detection device may detect that the image information of the image frame is buffered. The flow proceeds to S503.
  • the detection apparatus only buffers the image information of the image frame, but does not buffer the detection result. Therefore, in S503, the detection device determines that the host vehicle and the preceding vehicle are not relatively stationary in the previous cycle. For a specific description, reference may be made to the related content of S202, which will not be repeated here. The flow proceeds to S504.
  • S504 Detect whether a preceding vehicle is included in the second image frame.
• take the case where the second image frame includes the preceding vehicle as an example for description. The flow proceeds to S505.
  • the detection device detects that the image information of the image frame acquired in the previous cycle, that is, the image information corresponding to the first image frame, is buffered in the memory.
• the detection device may acquire the image information of the image frame acquired in the current cycle, that is, the image information corresponding to the second image frame.
• the detection device may detect whether the host vehicle and the preceding vehicle are relatively stationary based on the image information of the first image frame and the image information of the second image frame.
• for the detection method, please refer to the relevant content of S103 above, which will not be repeated here.
• take the case where the obtained median value (see above for the concept, which will not be repeated here) is greater than the set static threshold (2 pixels) as an example for description.
  • the detection device determines that the vehicle and the preceding vehicle are in a relative motion state. The flow proceeds to S506.
  • the detection device caches the current detection result, that is, the own vehicle and the preceding vehicle are in a relative motion state.
  • the detection device caches the image information of the second image frame, that is, the optical flow information of the second image frame and the image pyramid of the preceding vehicle image.
• the meaning of the image information below is similar, and the description will not be repeated below.
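The relative-stationary check in S505 can be sketched as below. The sketch assumes the matched feature points of the preceding-vehicle area in the two frames are already given; in practice they would come from pyramidal optical flow tracking on the buffered image pyramids (e.g. OpenCV's `cv2.calcOpticalFlowPyrLK`), and the 2-pixel static threshold is the example value used in this description.

```python
from statistics import median

STATIC_THRESHOLD_PX = 2.0  # example static threshold from S505

def relative_state(prev_pts, curr_pts):
    """prev_pts / curr_pts: matched (x, y) feature points of the
    preceding-vehicle area in two consecutive detection cycles."""
    displacements = [
        ((cx - px) ** 2 + (cy - py) ** 2) ** 0.5
        for (px, py), (cx, cy) in zip(prev_pts, curr_pts)
    ]
    # The median suppresses outlier feature tracks.
    return ("motion" if median(displacements) > STATIC_THRESHOLD_PX
            else "stationary")
```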
  • FIG. 17 is a schematic flowchart of an exemplary method for detecting the start of a preceding vehicle. Please refer to Figure 17, including:
  • the detection device obtains the image collected at the current moment from the driving recorder.
• the duration of the interval between the acquisition of the second image frame and the third image frame is the above-mentioned cycle duration (which may also be referred to as the detection cycle duration).
  • the period duration is 500ms.
  • S602 Determine whether the detection result and/or the image information of the image frame are cached.
  • the detection apparatus buffers the detection result and the image information of the second image frame.
  • the detection apparatus may detect that the previous detection result and the image information of the image frame are cached. The flow proceeds to S603.
• the detection result buffered by the detection device in the previous cycle indicates that the host vehicle and the preceding vehicle are in a relative motion state. Therefore, in this step, the detection device determines that the host vehicle and the preceding vehicle were not relatively stationary in the previous cycle. For a specific description, reference may be made to the related content of S202, which will not be repeated here. The flow proceeds to S604.
• take the case where the third image frame includes the preceding vehicle as an example for description. The flow proceeds to S605.
  • S605 based on the second image frame and the third image frame, detect whether the vehicle and the preceding vehicle are in a relatively stationary state.
  • the detection device detects that the detection result obtained in the previous cycle and the image information of the image frame, that is, the image information corresponding to the second image frame, are buffered in the memory.
• the detection device may acquire the image information of the image frame acquired in the current cycle, that is, the image information corresponding to the third image frame.
• the detection device may detect whether the host vehicle and the preceding vehicle are relatively stationary based on the image information of the second image frame and the image information of the third image frame.
• for the detection method, please refer to the relevant content of S103 above, which will not be repeated here.
• take the case where the obtained median value (see above for the concept, which will not be repeated here) is less than the set static threshold (2 pixels) as an example for description.
  • the detection device determines that the vehicle and the preceding vehicle are relatively stationary. The flow proceeds to S606.
  • the detection device caches the current detection result, that is, the own vehicle and the preceding vehicle are in a relatively stationary state. And, the detection device buffers the image information of the third image frame.
  • FIG. 18 is a schematic flowchart of an exemplary method for detecting the start of a preceding vehicle. Please refer to Figure 18, which includes:
  • the detection device obtains the image collected at the current moment from the driving recorder.
• the duration of the interval between the acquisition of the third image frame and the fourth image frame is the above-mentioned cycle duration (which may also be referred to as the detection cycle duration).
  • the period duration is 500ms.
  • S702 Determine whether the detection result and/or the image information of the image frame are cached.
  • the detection apparatus buffers the detection result and the image information of the third image frame.
  • the detection apparatus may detect that the previous detection result and the image information of the image frame are cached. The flow proceeds to S703.
• the detection result buffered by the detection device in the previous cycle indicates that the host vehicle and the preceding vehicle are in a relatively stationary state. Therefore, in this step, the detection device determines that the host vehicle and the preceding vehicle were relatively stationary in the previous cycle. For a specific description, reference may be made to the related content of S202, which will not be repeated here.
  • the flow proceeds to S704.
• take the case where the fourth image frame includes the preceding vehicle as an example for description. The flow proceeds to S705.
  • S705 based on the third image frame and the fourth image frame, detect whether the vehicle and the preceding vehicle change from a relatively static state to a relatively moving state.
• the detection device detects that the detection result obtained in the previous cycle and the image information of the image frame, that is, the image information corresponding to the third image frame, are buffered in the memory, such as the optical flow information and the image of the preceding vehicle.
• the detection device may acquire the image information of the image frame acquired in the current cycle, that is, the image information corresponding to the fourth image frame.
• the detection device may detect whether the host vehicle and the preceding vehicle are in a relative motion state based on the image information of the third image frame and the image information of the fourth image frame. That is to say, whether the host vehicle and the preceding vehicle have changed from the relatively static state of the previous cycle to a relative motion state.
• for the detection method, please refer to the relevant content of S105 above, which will not be repeated here.
• take the case where the obtained median value (see above for the concept, which will not be repeated here) is less than the set motion threshold (6 pixels) as an example for description.
  • the detection device determines that the vehicle and the preceding vehicle are still relatively stationary. That is, it does not change from a relatively stationary state to a relatively moving state. The flow proceeds to S706.
  • the detection device caches the current detection result, that is, the own vehicle and the preceding vehicle are in a relatively stationary state. And, the detection device buffers the image information of the fourth image frame.
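The two thresholds used in the examples above (a static threshold of 2 pixels for entering the relatively-stationary state, S505/S605, and a larger motion threshold of 6 pixels for leaving it, S705) form a hysteresis. A minimal sketch of that state transition, with function and state names as illustrative assumptions:

```python
STATIC_THRESHOLD_PX = 2.0  # enter "stationary" below this (S505/S605)
MOTION_THRESHOLD_PX = 6.0  # leave "stationary" only above this (S705)

def next_state(prev_state, median_flow_px):
    """Two-threshold transition: the larger motion threshold prevents
    small jitters from ending the relatively-stationary state early."""
    if prev_state == "stationary":
        return ("motion" if median_flow_px > MOTION_THRESHOLD_PX
                else "stationary")
    return ("stationary" if median_flow_px < STATIC_THRESHOLD_PX
            else "motion")
```

For example, a median flow of 4 pixels keeps a stationary pair stationary (as in S705's example here), while the same 4 pixels would keep a moving pair in the relative motion state.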
  • FIG. 19 is a schematic flowchart of an exemplary method for detecting the start of a preceding vehicle. Please refer to Figure 19, including:
  • the detection device obtains the image collected at the current moment from the driving recorder.
• the duration of the interval between the acquisition of the fourth image frame and the fifth image frame is the above-mentioned cycle duration (which may also be referred to as the detection cycle duration).
  • the period duration is 500ms.
  • the detection apparatus buffers the detection result and the image information of the fourth image frame.
  • the detection apparatus may detect that the previous detection result and the image information of the image frame are cached. The flow proceeds to S803.
• the detection result buffered by the detection device in the previous cycle indicates that the host vehicle and the preceding vehicle are in a relatively stationary state. Therefore, in this step, the detection device determines that the host vehicle and the preceding vehicle were relatively stationary in the previous cycle. For a specific description, reference may be made to the related content of S202, which will not be repeated here. The flow proceeds to S804.
• take the case where the fifth image frame includes the preceding vehicle as an example for description. The flow proceeds to S805.
  • S805 based on the fourth image frame and the fifth image frame, detect whether the vehicle and the preceding vehicle change from a relatively static state to a relatively moving state.
  • the detection device detects that the detection result obtained in the previous cycle and the image information of the image frame, that is, the image information corresponding to the fourth image frame, are buffered in the memory. For example, the feature points of the preceding vehicle region and the image pyramid of the preceding vehicle image.
• the detection device may acquire the image information of the image frame acquired in the current cycle, that is, the image information corresponding to the fifth image frame.
• the detection device may detect whether the host vehicle and the preceding vehicle are in a relative motion state based on the image information of the fourth image frame and the image information of the fifth image frame. That is to say, whether the host vehicle and the preceding vehicle have changed from the relatively static state of the previous cycle to a relative motion state.
• for the detection method, please refer to the relevant content of S105 above, which will not be repeated here.
• take the case where the obtained median value (see above for the concept, which will not be repeated here) is greater than the set motion threshold (6 pixels) as an example for description.
  • the detection device determines that the vehicle and the preceding vehicle are in a relative motion state. That is, the vehicle and the preceding vehicle change from a relatively static state to a moving state. The flow proceeds to S806.
  • the detection device may further determine whether the preceding vehicle is moving forward based on the image information of the fourth image frame and the image information of the fifth image frame.
  • the specific determination method reference may be made to the description in S106, which will not be repeated here.
• take the case where the detection device determines that the preceding vehicle moves forward, that is, the preceding vehicle starts, as an example for description. The flow proceeds to S807.
• the detection device clears the cached detection results and the image information of all the image frames. And at the time when the next cycle arrives, S401 to S808 are repeatedly executed.
  • FIG. 20 is a schematic flowchart of a method for detecting the start of a preceding vehicle according to an embodiment of the present application. Please refer to Figure 20, which includes:
  • the detection device acquires the image frame in the current cycle.
• for a specific description, reference may be made to S101, which is not repeated here.
  • the process proceeds to S903.
  • FIG. 21 is a schematic diagram exemplarily showing the image size of the preceding vehicle in the image frame.
  • the detection device can determine the preceding vehicle area 2103 corresponding to the preceding vehicle 2102 through image recognition, and obtain the image size of the preceding vehicle area 2103 , which is the image size of the preceding vehicle 2102 .
  • the size of the preceding vehicle area 2103 includes the height H1 of the preceding vehicle area 2103 .
  • the embodiments of the present application only take a rectangular frame as an example for description.
• in other embodiments, the image size of the preceding vehicle 2102 may also be the size corresponding to the image formed by the edges of the preceding vehicle 2102, which is not limited in this application. It should be noted that what is shown in FIG. 21 is the image size of the preceding vehicle 2102 in the image frame 2101 acquired in the previous cycle.
  • the detection device may obtain the preceding vehicle area in the current image frame based on the image frame obtained in the current cycle, and the specific obtaining method may refer to the above, which will not be repeated here.
• the detection device may compare the image size of the preceding vehicle area in the current image frame, e.g., its height, with the image size of the preceding vehicle area 2103, e.g., the height H1.
• if the difference between the image sizes (e.g., heights) of the preceding vehicle area in the two image frames is less than or equal to a set stationary threshold, it is determined that the preceding vehicle and the host vehicle are relatively stationary.
  • the static threshold may be set based on actual requirements, which is not limited in this application.
• the detection device may also determine whether the preceding vehicle and the host vehicle are relatively stationary based on the ratio between the image size (e.g., height) of the preceding vehicle area in the current image frame and the image size (e.g., height) of the preceding vehicle area in the previous image frame, that is, the image size (e.g., height) of the preceding vehicle area in the current image frame divided by the image size (e.g., height) of the preceding vehicle area in the previous image frame.
• if the ratio is greater than or equal to the set stationary threshold, it is determined that the preceding vehicle and the host vehicle are relatively stationary. If the ratio is smaller than the set stationary threshold, it is determined that the preceding vehicle and the host vehicle are in relative motion.
• it should be noted that for different determination manners, the set static thresholds are also different.
• FIG. 21 also shows the image size of the preceding vehicle 2102 in the image frame 2104 acquired by the exemplary detection device in the current cycle.
• after the preceding vehicle 2102 starts, its image size in the image frame 2104 becomes smaller, that is, the preceding vehicle area 2105 becomes smaller.
• the image size of the preceding vehicle area 2105 is smaller than that of the preceding vehicle area 2103. That is, the height H2 of the preceding vehicle area 2105 is smaller than the height H1 of the preceding vehicle area 2103.
  • the detection device may be configured with a set motion threshold.
  • the detection device obtains the ratio of the height ( H2 ) of the preceding vehicle area 2105 to the height ( H1 ) of the preceding vehicle area 2103 , that is, the height value of the preceding vehicle area 2105 is divided by the height value of the preceding vehicle area 2103 .
• in one example, if the ratio is smaller than the set motion threshold, the detection device can determine that the preceding vehicle starts, that is, the preceding vehicle moves forward relative to the host vehicle.
• in another example, if the ratio is greater than or equal to the set motion threshold but not greater than 1, the detection device may determine that the preceding vehicle and the host vehicle are still relatively stationary. In yet another example, if the ratio is greater than 1, the detection device may determine that the preceding vehicle is moving backward relative to the host vehicle.
  • the detection device may give an alarm, that is, perform S906.
  • S901 is repeatedly performed.
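The image-size comparison of FIG. 21 can be sketched as follows. The ratio threshold value 0.95 is an illustrative assumption (the description above only fixes the direction of the comparison: a ratio below the motion threshold means the image shrank and the preceding vehicle moved forward, a ratio above 1 means it moved backward).

```python
# Illustrative ratio threshold; the description does not fix a value.
MOTION_RATIO_THRESHOLD = 0.95

def size_based_state(prev_height, curr_height):
    """Compare the preceding-vehicle area height between two cycles
    (H1 = previous frame, H2 = current frame, as in FIG. 21)."""
    ratio = curr_height / prev_height  # H2 / H1
    if ratio < MOTION_RATIO_THRESHOLD:
        # The image shrank: the preceding vehicle moved away (started).
        return "forward (preceding vehicle started)"
    if ratio > 1 / MOTION_RATIO_THRESHOLD:
        # The image grew: the preceding vehicle moved backward.
        return "backward"
    return "relatively stationary"
```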
• the optical flow-based vehicle start detection method in scene 1 can also be combined with the image-size-based vehicle start detection method in scene 2 to avoid misjudgment of the motion state by the optical flow detection method, thereby improving the accuracy of preceding vehicle start detection.
  • the detection device may determine whether the median value is greater than or equal to the set motion threshold. In one example, if the median value is greater than or equal to the set motion threshold, the process proceeds to S306. In this embodiment, if the median value is smaller than the set motion threshold, the detection device can combine the relative motion judgment method in the second scene.
• the detection device can detect whether the preceding vehicle starts based on the image size of the preceding vehicle in the previous image frame and the image size of the preceding vehicle in the current image frame, which will not be repeated here. In one example, if the detection device determines, based on the image size of the preceding vehicle, that the preceding vehicle and the host vehicle are relatively stationary, then both the optical flow detection method and the image size detection method indicate that the preceding vehicle and the host vehicle remain relatively stationary. In another example, the detection device may determine, based on the image size of the preceding vehicle, that the preceding vehicle and the host vehicle are in a relative motion state, that is, have changed from a relatively static state to a relative motion state.
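The scene-1 + scene-2 combination described above can be sketched as a simple decision function. The function name and return strings are illustrative assumptions; the inputs are the per-method results already computed by the two detection paths.

```python
def combined_start_check(optical_flow_result, size_result):
    """Combine the optical-flow check (scene 1) with the image-size
    check (scene 2): the size check is consulted only when optical
    flow alone does not report motion."""
    if optical_flow_result == "motion":
        return "started"  # optical flow is decisive (flow to S306)
    # Optical flow says stationary: cross-check with the image size.
    if size_result == "motion":
        return "started (confirmed by image size)"
    return "still stationary"
```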
  • the detection apparatus includes corresponding hardware structures and/or software modules for performing each function.
  • the embodiments of the present application can be implemented in hardware or a combination of hardware and computer software. Whether a function is performed by hardware or computer software driving hardware depends on the specific application and design constraints of the technical solution. Skilled artisans may implement the described functionality using different methods for each particular application, but such implementations should not be considered beyond the scope of this application.
  • the detection device may be divided into functional modules according to the above method examples.
  • each functional module may be divided corresponding to each function, or two or more functions may be integrated into one processing module.
  • the above-mentioned integrated modules can be implemented in the form of hardware, and can also be implemented in the form of software function modules. It should be noted that, the division of modules in the embodiments of the present application is schematic, and is only a logical function division, and there may be other division manners in actual implementation.
  • FIG. 23 shows a possible schematic structural diagram of the detection apparatus 2300 involved in the above embodiment, as shown in the figure.
  • the apparatus includes: a first acquisition module 2301 for acquiring a first image frame, where the first image frame includes an image of a preceding vehicle.
  • the second obtaining module 2302 is configured to obtain the first optical flow information according to the first image frame and the second image frame; wherein the second image frame is an image frame preceding the first image frame, and the second image frame includes The image of the preceding vehicle; the first optical flow information is used to indicate the optical flow movement trend between the feature points of the preceding vehicle in the first image frame and the feature points of the preceding vehicle in the second image frame.
  • the detection module 2303 is configured to detect whether the preceding vehicle starts according to the first optical flow information.
• the first acquisition module 2301 is further configured to acquire a third image frame and a fourth image frame before acquiring the first image frame, where both the third image frame and the fourth image frame include the image of the preceding vehicle, and the third image frame is adjacent to the fourth image frame. The second obtaining module 2302 is further configured to obtain second optical flow information according to the third image frame and the fourth image frame, where the second optical flow information is used to indicate the optical flow movement trend of the feature points of the preceding vehicle in the fourth image frame relative to the feature points of the preceding vehicle in the third image frame. The detection module 2303 is further configured to determine, according to the second optical flow information, that the host vehicle and the preceding vehicle are relatively stationary.
  • the detection module 2303 is specifically configured to: according to the first optical flow information, detect whether the vehicle and the preceding vehicle change from a relative static state to a relative motion state; wherein, the relative motion state includes the preceding The vehicle moves forward relative to the vehicle or the vehicle in front moves backward relative to the vehicle.
• the first optical flow information includes first amplitude information and first direction information; the first amplitude information is used to indicate the movement amplitude of the feature points of the preceding vehicle in the first image frame relative to the feature points of the preceding vehicle in the second image frame.
  • the detection module 2303 is specifically configured to: when the first amplitude information is greater than or equal to the set first threshold, determine that the vehicle and the preceding vehicle are in a relative motion state.
• the detection module 2303 is specifically configured to: when the first amplitude information is less than or equal to the set second threshold, determine that the host vehicle and the preceding vehicle are in a relatively static state; the second threshold is less than or equal to the first threshold.
  • the detection module 2303 is further configured to: when the first direction information is greater than or equal to the set third threshold, determine The vehicle in front moves forward relative to the vehicle.
  • the detection module 2303 is further configured to: when the first direction information is less than or equal to the set fourth threshold, determine The preceding vehicle moves backward relative to the own vehicle; the fourth threshold is smaller than the third threshold.
• the first optical flow information is an optical flow vector between the feature points of the preceding vehicle in the first image frame and the feature points of the preceding vehicle in the second image frame, wherein each optical flow vector includes magnitude information and direction information.
  • the detection module 2303 is used to: determine the convergence point based on the direction information of all the optical flow vectors. Based on the convergence point, all optical flow vectors are traversed, and it is determined whether the number of optical flow vectors in all optical flow vectors pointing to the convergence point is greater than or equal to the set third threshold.
  • the first acquisition module 2301 is configured to detect whether the first image frame includes an image of the preceding vehicle according to a set condition.
  • the set conditions include: the area of the image of the preceding vehicle in the preceding vehicle detection area of the first image frame is greater than or equal to the set preceding vehicle detection threshold; if there are multiple areas For images that are greater than or equal to the set preceding vehicle detection threshold, the image closest to the bottom edge of the first image frame is selected as the image of the preceding vehicle.
  • the size of the distribution area of the optical flow information in the image frame is smaller than the size of the image of the preceding vehicle in the image frame.
• the device further includes a third acquisition module 2304, which is used to acquire the first image size of the preceding vehicle in the first image frame; the detection module is also used to determine that the preceding vehicle moves forward relative to the host vehicle when the first image size is smaller than the set fifth threshold; the detection module is also used to determine that the preceding vehicle moves backward relative to the host vehicle when the first image size is greater than or equal to the fifth threshold.
  • the device further includes a third acquisition module 2304, configured to acquire a first image of the preceding vehicle in the first image frame when the detection module does not detect that the preceding vehicle starts according to the first optical flow information size; the detection module is also used to determine that the preceding vehicle moves forward relative to the vehicle when the size of the first image is smaller than the set fifth threshold; the detection module is also used to determine when the size of the first image is greater than or equal to the fifth threshold The vehicle in front moves backward relative to the vehicle.
  • FIG. 24 is a schematic structural diagram of a communication apparatus according to an embodiment of the present application.
  • the communication apparatus 2400 may include: a processor 2401 , a transceiver 2405 , and optionally a memory 2402 .
  • the transceiver 2405 may be referred to as a transceiver unit, a transceiver, or a transceiver circuit, etc., for implementing a transceiver function.
  • the transceiver 2405 may include a receiver and a transmitter, the receiver may be called a receiver or a receiving circuit, etc., for implementing the receiving function; the transmitter may be called a transmitter or a transmitting circuit, etc., for implementing the transmitting function.
• the processor 2401 can control the MAC layer and the PHY layer by running the computer program or software code or instructions 2403 therein, or by calling the computer program or software code or instructions 2404 stored in the memory 2402, so as to implement the method provided by the embodiments of the present application.
  • the processor 2401 can be a central processing unit (central processing unit, CPU), and the memory 2402 can be, for example, a read-only memory (read-only memory, ROM), or a random access memory (random access memory, RAM).
  • the processor 2401 and transceiver 2405 described in this application may be implemented in an integrated circuit (IC), an analog IC, a radio frequency integrated circuit (RFIC), a mixed-signal IC, an application-specific integrated circuit (ASIC), a printed circuit board (PCB), an electronic device, or the like.
  • the above-mentioned communication apparatus 2400 may further include an antenna 2406; the modules included in the communication apparatus 2400 are merely illustrative examples and are not limited in this application.
  • the communication device described in the above embodiments may be a terminal, but the scope of the communication device described in this application is not limited thereto, and the structure of the communication device is not limited by FIG. 24 .
  • the communication apparatus may be a stand-alone device or may be part of a larger device.
  • the implementation form of the communication device may be:
  • (1) an independent integrated circuit (IC), a chip, or a chip system or subsystem; (2) a set of one or more ICs, where optionally the IC set may also include a storage component for storing data and instructions; (3) a module that can be embedded in other devices; (4) an in-vehicle device; (5) others.
  • for the case where the communication device is implemented as a chip or a chip system, refer to FIG. 25; the chip shown in FIG. 25 includes a processor 2501 and an interface 2502 .
  • the number of processors 2501 may be one or more, and the number of interfaces 2502 may be multiple.
  • the chip or chip system may include memory 2503 .
  • embodiments of the present application further provide a computer-readable storage medium, where a computer program is stored in the computer-readable storage medium; the computer program includes at least one piece of code, and the at least one piece of code can be executed by a terminal device to control the terminal device to implement the above method embodiments.
  • the embodiments of the present application further provide a computer program, which is used to implement the above method embodiments when the computer program is executed by a terminal device.
  • the program may be stored, in whole or in part, on a storage medium packaged with the processor, or may be stored, in whole or in part, in a memory not packaged with the processor.
  • an embodiment of the present application further provides a processor, and the processor is used to implement the above method embodiments.
  • the above-mentioned processor may be a chip.
  • the steps of the method or algorithm described in conjunction with the disclosure of the embodiments of this application may be implemented in a hardware manner, or may be implemented in a manner in which a processor executes software instructions.
  • Software instructions can be composed of corresponding software modules, and software modules can be stored in random access memory (RAM), flash memory, read-only memory (ROM), erasable programmable ROM (EPROM), electrically erasable programmable ROM (EEPROM), registers, a hard disk, a removable hard disk, a CD-ROM, or any other form of storage medium known in the art.
  • An exemplary storage medium is coupled to the processor, such that the processor can read information from, and write information to, the storage medium.
  • the storage medium can also be an integral part of the processor.
  • the processor and storage medium may reside in an ASIC.
  • the ASIC may be located in a network device.
  • the processor and storage medium may also exist in the network device as discrete components.
  • Computer-readable media includes both computer storage media and communication media including any medium that facilitates transfer of a computer program from one place to another.
  • a storage medium can be any available medium that can be accessed by a general purpose or special purpose computer.

Abstract

Embodiments of this application provide a preceding vehicle start detection method and apparatus. The method includes: obtaining optical flow information of the preceding vehicle between two adjacent image frames, and tracking the position of the preceding vehicle between the two adjacent images to determine whether the preceding vehicle moves forward relative to the ego vehicle. This provides an optical-flow-based way of recognizing a preceding vehicle start, effectively improving the efficiency and accuracy of preceding vehicle start recognition.

Description

Preceding Vehicle Start Detection Method and Apparatus — Technical Field
The embodiments of this application relate to the field of vehicles, and in particular, to a preceding vehicle start detection method and apparatus.
Background
With the development of intelligent recognition technology, its application scenarios have become increasingly wide. Road congestion keeps rising. For example, at an intersection, when the ego vehicle and the preceding vehicle are waiting at a traffic light, congestion increases if the driver of the ego vehicle fails to notice that the preceding vehicle has already started. Therefore, to minimize congestion caused by human factors, the prior art detects that the preceding vehicle has started while the ego vehicle remains stationary, and alerts the user, by intelligently recognizing the preceding vehicle's license plate or its motion trajectory. However, because these prior-art ways of judging a preceding vehicle start are prone to misjudgment or failure, the reliability of the prior-art detection is low.
Summary
To solve the above technical problem, the embodiments of this application provide a preceding vehicle start detection method and apparatus. In the method, the apparatus may determine whether the preceding vehicle starts based on the optical flow information corresponding to the images of the preceding vehicle in two adjacent image frames, so as to improve the accuracy and reliability of preceding vehicle start recognition.
According to a first aspect, an embodiment of this application provides a preceding vehicle start detection method. The method includes: obtaining a first image frame, where the first image frame includes an image of the preceding vehicle; obtaining first optical flow information based on the first image frame and a second image frame, where the second image frame is the image frame immediately preceding the first image frame and also includes an image of the preceding vehicle, and the first optical flow information indicates the optical-flow motion trend of feature points of the preceding vehicle in the first image frame relative to feature points of the preceding vehicle in the second image frame; and detecting, based on the first optical flow information, whether the preceding vehicle starts.
In this way, recognizing a preceding vehicle start based on optical flow enables the apparatus, when recognizing the relative state between the preceding vehicle and the ego vehicle, to avoid the misjudgment and repeated-recognition problems caused by partial occlusion of the preceding vehicle, thereby improving the efficiency and accuracy of preceding vehicle start recognition.
For example, the apparatus may obtain image frames at a set period, where the previous image frame is the image frame obtained in the period immediately preceding the current period.
For example, a preceding vehicle start may optionally mean that the preceding vehicle moves forward relative to the ego vehicle.
In a possible implementation, before the obtaining of the first image frame, the method further includes: obtaining a third image frame and a fourth image frame, both of which include an image of the preceding vehicle and which are adjacent; obtaining second optical flow information based on the third and fourth image frames, where the second optical flow information indicates the optical-flow motion trend of feature points of the preceding vehicle in the fourth image frame relative to feature points of the preceding vehicle in the third image frame; and determining, based on the second optical flow information, that the ego vehicle and the preceding vehicle are in a relatively stationary state.
In this way, the apparatus can judge the stationary state between the preceding vehicle and the ego vehicle based on optical flow information. For example, a preceding vehicle start may be understood as the preceding vehicle and the ego vehicle changing from a relatively stationary state to a relatively moving state, with the preceding vehicle moving forward. Therefore, the apparatus may further judge whether the preceding vehicle starts on the basis of having determined that the preceding vehicle and the ego vehicle are relatively stationary.
For example, the third and fourth image frames may optionally be obtained before the first image frame.
For example, the third and fourth image frames may optionally be obtained after the second image frame. In other words, before determining the preceding vehicle start, or after the preceding vehicle starts, the apparatus may determine, based on the obtained image frames, whether the preceding vehicle and the ego vehicle are relatively stationary.
In a possible implementation, detecting whether the preceding vehicle starts based on the first optical flow information includes: detecting, based on the first optical flow information, whether the ego vehicle and the preceding vehicle change from the relatively stationary state to a relatively moving state, where the relatively moving state includes the preceding vehicle moving forward relative to the ego vehicle or the preceding vehicle moving backward relative to the ego vehicle.
In this way, when the apparatus has recognized that the preceding vehicle and the ego vehicle are already stationary relative to each other, it can further monitor their state based on optical flow to judge whether they change from the relatively stationary state to a relatively moving state, so as to accurately recognize the transitions between the stationary and moving states of the two vehicles and thereby provide an efficient and accurate way of recognizing a preceding vehicle start.
In a possible implementation, the first optical flow information includes first magnitude information and first direction information. The first magnitude information indicates the motion magnitude between the feature points of the preceding vehicle in the first image frame and those in the second image frame; the first direction information indicates the motion direction between the feature points of the preceding vehicle in the first image frame and those in the second image frame.
In this way, the apparatus can determine the motion direction of the preceding vehicle relative to the ego vehicle based on the magnitude and direction information of the optical flow, so as to accurately recognize whether the preceding vehicle moves forward relative to the ego vehicle.
In a possible implementation, detecting whether the preceding vehicle starts based on the first optical flow information includes: when the first magnitude information is greater than or equal to a set first threshold, determining that the ego vehicle and the preceding vehicle are in a relatively moving state.
In this way, the apparatus can determine, based on the magnitude information of the optical flow, whether the two vehicles are relatively moving, and on that basis can further determine the motion direction of the preceding vehicle relative to the ego vehicle.
In a possible implementation, detecting whether the preceding vehicle starts based on the first optical flow information includes: when the first magnitude information is less than or equal to a set second threshold, determining that the ego vehicle and the preceding vehicle are relatively stationary, where the second threshold is less than the first threshold.
In this way, the apparatus can determine, based on the magnitude information of the optical flow, whether the preceding vehicle is stationary relative to the ego vehicle. In the embodiments of this application, after judging that the two vehicles are relatively stationary, the apparatus can further detect, based on the magnitude information of the optical flow, whether they change from the stationary state to a moving state.
In a possible implementation, when the first magnitude information is greater than or equal to the set first threshold, detecting whether the preceding vehicle starts based on the first optical flow information further includes: when the first direction information is greater than or equal to a set third threshold, determining that the preceding vehicle moves forward relative to the ego vehicle.
In this way, having determined that the two vehicles are in a relatively moving state, the apparatus can further determine, based on the direction information of the optical flow, the motion direction of the preceding vehicle relative to the ego vehicle, so as to accurately recognize whether the preceding vehicle moves forward.
In a possible implementation, when the first magnitude information is greater than or equal to the set first threshold, detecting whether the preceding vehicle starts based on the first optical flow information further includes: when the first direction information is less than or equal to a set fourth threshold, determining that the preceding vehicle moves backward relative to the ego vehicle, where the fourth threshold is less than the third threshold.
In this way, having determined that the two vehicles are in a relatively moving state, the apparatus can further determine, based on the direction information of the optical flow, the motion direction of the preceding vehicle relative to the ego vehicle, so as to accurately recognize whether the preceding vehicle moves forward or backward relative to the ego vehicle.
In a possible implementation, the first optical flow information is the optical flow vectors between the feature points of the preceding vehicle in the first image frame and those in the second image frame, where each optical flow vector includes magnitude information and direction information. Determining that the preceding vehicle moves forward relative to the ego vehicle when the first direction information is greater than or equal to the set third threshold includes: determining a convergence point based on the direction information of all the optical flow vectors; and traversing all the optical flow vectors based on the convergence point to judge whether the number of optical flow vectors pointing to the convergence point is greater than or equal to the set third threshold. In this way, the detection apparatus can recognize the motion trend of the optical flow vectors based on their direction information: if the preceding vehicle moves forward relative to the ego vehicle, the optical flow vectors converge, i.e., point toward the convergence point; if the preceding vehicle moves backward relative to the ego vehicle, the optical flow vectors diverge.
In a possible implementation, obtaining the first image frame includes: detecting, according to a set condition, whether the first image frame includes an image of the preceding vehicle. This avoids misjudgment caused by a vehicle moving too fast, so that the subsequent detection steps are performed only when both image frames include the preceding vehicle.
In a possible implementation, the set condition includes: the area of the image of the preceding vehicle within the preceding-vehicle detection region of the first image frame is greater than or equal to a set preceding-vehicle detection threshold; and if there are multiple images whose areas are greater than or equal to the set detection threshold, the image closest to the bottom edge of the first image frame is selected as the image of the preceding vehicle. In this way, whether the image frame includes a preceding vehicle can be accurately recognized based on the set condition.
In a possible implementation, the size of the region over which the optical flow information is distributed in the image frame is smaller than the size of the image of the preceding vehicle in the image frame. This effectively avoids the influence of other objects in the image frame, such as street lamps and vehicle lights, on the optical flow algorithm.
In a possible implementation, the method further includes: obtaining a first image size of the preceding vehicle in the first image frame; when the first image size is smaller than a set fifth threshold, determining that the preceding vehicle moves forward relative to the ego vehicle; and when the first image size is greater than or equal to the fifth threshold, determining that the preceding vehicle moves backward relative to the ego vehicle.
In this way, the embodiments of this application can also determine whether the preceding vehicle starts based on how its size changes between two adjacent images. For example, if the preceding vehicle and the ego vehicle are relatively stationary, the sizes of the preceding vehicle in the two adjacent images are essentially the same (small differences may exist); if they are relatively moving, the sizes in the two adjacent images differ, that is, the size changes.
In a possible implementation, when no preceding vehicle start is detected based on the first optical flow information, the method further includes: obtaining a first image size of the preceding vehicle in the first image frame; when the first image size is smaller than the set fifth threshold, determining that the preceding vehicle moves forward relative to the ego vehicle; and when the first image size is greater than or equal to the fifth threshold, determining that the preceding vehicle moves backward relative to the ego vehicle. In this way, the embodiments of this application can also determine whether the preceding vehicle starts based on its sizes in two adjacent images.
This fuses the optical-flow judgment with the judgment based on the image height of the preceding vehicle, further improving the accuracy of preceding vehicle start recognition and preventing misjudgment.
According to a second aspect, an embodiment of this application provides a preceding vehicle start recognition method. The method includes: obtaining a first image frame; obtaining a first image size of the preceding vehicle in the first image frame; when the first image size is smaller than a set first threshold, determining that the preceding vehicle moves forward relative to the ego vehicle; and when the first image size is greater than or equal to a second threshold, determining that the preceding vehicle moves backward relative to the ego vehicle.
In a possible implementation, before the obtaining of the first image frame, the method further includes: obtaining a second image frame; obtaining a second image size of the preceding vehicle in the second image frame; and when the second image size is greater than a set third threshold, determining that the preceding vehicle and the ego vehicle are relatively stationary, where the second threshold is greater than the third threshold, and the third threshold is greater than the first threshold.
According to a third aspect, an embodiment of this application provides a preceding vehicle start detection apparatus. The apparatus includes: a first acquisition module, configured to obtain a first image frame that includes an image of the preceding vehicle; a second acquisition module, configured to obtain first optical flow information based on the first image frame and a second image frame, where the second image frame is the image frame immediately preceding the first image frame and also includes an image of the preceding vehicle, and the first optical flow information indicates the optical-flow motion trend of feature points of the preceding vehicle in the first image frame relative to those in the second image frame; and a detection module, configured to detect, based on the first optical flow information, whether the preceding vehicle starts.
In a possible implementation, the first acquisition module is further configured to obtain, before the first image frame is obtained, a third image frame and a fourth image frame, both including an image of the preceding vehicle, the two frames being adjacent; the second acquisition module is further configured to obtain second optical flow information based on the third and fourth image frames, the second optical flow information indicating the optical-flow motion trend of feature points of the preceding vehicle in the fourth image frame relative to those in the third image frame; and the detection module is further configured to determine, based on the second optical flow information, that the ego vehicle and the preceding vehicle are relatively stationary.
In a possible implementation, the detection module is specifically configured to detect, based on the first optical flow information, whether the ego vehicle and the preceding vehicle change from a relatively stationary state to a relatively moving state, where the relatively moving state includes the preceding vehicle moving forward or backward relative to the ego vehicle.
In a possible implementation, the first optical flow information includes first magnitude information and first direction information; the first magnitude information indicates the motion magnitude between the feature points of the preceding vehicle in the first image frame and those in the second image frame, and the first direction information indicates the motion direction between them.
In a possible implementation, the detection module is specifically configured to determine that the ego vehicle and the preceding vehicle are in a relatively moving state when the first magnitude information is greater than or equal to a set first threshold.
In a possible implementation, the detection module is specifically configured to determine that the ego vehicle and the preceding vehicle are relatively stationary when the first magnitude information is less than or equal to a set second threshold, the second threshold being less than the first threshold.
In a possible implementation, when the first magnitude information is greater than or equal to the set first threshold, the detection module is further configured to determine that the preceding vehicle moves forward relative to the ego vehicle when the first direction information is greater than or equal to a set third threshold.
In a possible implementation, when the first magnitude information is greater than or equal to the set first threshold, the detection module is further configured to determine that the preceding vehicle moves backward relative to the ego vehicle when the first direction information is less than or equal to a set fourth threshold, the fourth threshold being less than the third threshold.
In a possible implementation, the first optical flow information is the optical flow vectors between the feature points of the preceding vehicle in the first image frame and those in the second image frame, each optical flow vector including magnitude information and direction information. The detection module is configured to: determine a convergence point based on the direction information of all the optical flow vectors; and traverse all the optical flow vectors based on the convergence point to judge whether the number of optical flow vectors pointing to the convergence point is greater than or equal to the set third threshold.
In a possible implementation, the first acquisition module is configured to detect, according to a set condition, whether the first image frame includes an image of the preceding vehicle.
In a possible implementation, the set condition includes: the area of the image of the preceding vehicle within the preceding-vehicle detection region of the first image frame is greater than or equal to a set preceding-vehicle detection threshold; and if there are multiple images whose areas are greater than or equal to the set detection threshold, the image closest to the bottom edge of the first image frame is selected as the image of the preceding vehicle.
In a possible implementation, the size of the region over which the optical flow information is distributed in the image frame is smaller than the size of the image of the preceding vehicle in the image frame.
In a possible implementation, the apparatus further includes a third acquisition module, configured to obtain a first image size of the preceding vehicle in the first image frame; the detection module is further configured to determine that the preceding vehicle moves forward relative to the ego vehicle when the first image size is smaller than a set fifth threshold, and to determine that the preceding vehicle moves backward relative to the ego vehicle when the first image size is greater than or equal to the fifth threshold.
In a possible implementation, the apparatus further includes a third acquisition module, configured to obtain the first image size of the preceding vehicle in the first image frame when the detection module does not detect a preceding vehicle start based on the first optical flow information; the detection module is further configured to determine that the preceding vehicle moves forward relative to the ego vehicle when the first image size is smaller than the set fifth threshold, and to determine that the preceding vehicle moves backward relative to the ego vehicle when the first image size is greater than or equal to the fifth threshold.
The third aspect and any implementation of the third aspect correspond respectively to the first aspect and any implementation of the first aspect. For the technical effects corresponding to the third aspect and any implementation thereof, refer to those of the first aspect and its implementations, which are not repeated here.
According to a fourth aspect, an embodiment of this application provides a preceding vehicle start detection apparatus. The apparatus includes: an acquisition module, configured to obtain a first image frame, and further configured to obtain a first image size of the preceding vehicle in the first image frame; and a determination module, configured to determine that the preceding vehicle moves forward relative to the ego vehicle when the first image size is smaller than a set first threshold, and that the preceding vehicle moves backward relative to the ego vehicle when the first image size is greater than or equal to a second threshold.
In a possible implementation, the acquisition module is further configured to obtain a second image frame, and further configured to obtain a second image size of the preceding vehicle in the second image frame; the determination module is further configured to determine that the preceding vehicle and the ego vehicle are relatively stationary when the second image size is greater than a set third threshold, where the second threshold is greater than the third threshold, and the third threshold is greater than the first threshold.
According to a fifth aspect, an embodiment of this application provides a preceding vehicle start detection apparatus. The apparatus includes at least one processor and an interface; the processor receives or sends data through the interface; and the at least one processor is configured to invoke a software program stored in a memory to perform the method in the first aspect or any possible implementation thereof.
The fifth aspect and any implementation of the fifth aspect correspond respectively to the first aspect and any implementation of the first aspect. For the corresponding technical effects, refer to those of the first aspect and its implementations, which are not repeated here.
According to a sixth aspect, an embodiment of this application provides a computer-readable storage medium. The computer-readable storage medium stores a computer program; when the computer program runs on a computer or a processor, the computer or the processor is caused to perform the method in the first aspect or any possible implementation thereof.
The sixth aspect and any implementation of the sixth aspect correspond respectively to the first aspect and any implementation of the first aspect. For the corresponding technical effects, refer to those of the first aspect and its implementations, which are not repeated here.
According to a seventh aspect, an embodiment of this application provides a computer program product. The computer program product contains a software program; when the software program is executed by a computer or a processor, the method in the first aspect or any possible implementation thereof is performed.
The seventh aspect and any implementation of the seventh aspect correspond respectively to the first aspect and any implementation of the first aspect. For the corresponding technical effects, refer to those of the first aspect and its implementations, which are not repeated here.
Brief Description of Drawings
FIG. 1 is a schematic diagram of an exemplary application scenario;
FIG. 2a is a schematic flowchart of a preceding vehicle start detection method according to an embodiment of this application;
FIG. 2b is a schematic flowchart of a preceding vehicle start detection method according to an embodiment of this application;
FIG. 3 is a schematic diagram of an exemplary preceding-vehicle detection region;
FIG. 4 is a schematic diagram of an exemplary preceding-vehicle detection manner;
FIG. 5 is a schematic flowchart of an exemplary relatively-stationary-state determination;
FIG. 6a is a schematic diagram of exemplary image frame recognition;
FIG. 6b is a schematic diagram of exemplary image frame recognition;
FIG. 7a is a schematic diagram of an exemplary image frame recognition result;
FIG. 7b is a schematic diagram of an exemplary image frame recognition result;
FIG. 8 is a schematic diagram of exemplary image frame recognition;
FIG. 9 is a schematic diagram of exemplary feature-point correspondences;
FIG. 10 is a schematic flowchart of an exemplary relatively-moving-state determination;
FIG. 11 is a schematic diagram of an exemplary preceding-vehicle image;
FIG. 12a is a schematic diagram of an exemplary image frame recognition result;
FIG. 12b is a schematic diagram of an exemplary image frame recognition result;
FIG. 13 is a schematic diagram of exemplary feature-point correspondences;
FIG. 14 is a schematic diagram of exemplary optical flow vectors;
FIG. 15 is a schematic flowchart of an exemplary preceding vehicle start detection method;
FIG. 16 is a schematic flowchart of an exemplary preceding vehicle start detection method;
FIG. 17 is a schematic flowchart of an exemplary preceding vehicle start detection method;
FIG. 18 is a schematic flowchart of an exemplary preceding vehicle start detection method;
FIG. 19 is a schematic flowchart of an exemplary preceding vehicle start detection method;
FIG. 20 is a schematic flowchart of an exemplary preceding vehicle start detection method;
FIG. 21 is a schematic diagram of an exemplary image size of the preceding vehicle in an image frame;
FIG. 22 is a schematic diagram of an exemplary image size of the preceding vehicle in an image frame;
FIG. 23 is a schematic structural diagram of a preceding vehicle start detection apparatus according to an embodiment of this application;
FIG. 24 is a schematic structural diagram of an apparatus according to an embodiment of this application;
FIG. 25 is a schematic structural diagram of a chip according to an embodiment of this application.
Detailed Description
The technical solutions in the embodiments of this application are described below clearly and completely with reference to the accompanying drawings. Obviously, the described embodiments are only some rather than all of the embodiments of this application. All other embodiments obtained by a person of ordinary skill in the art based on the embodiments of this application without creative effort fall within the protection scope of this application.
The term "and/or" herein merely describes an association relationship between associated objects and indicates that three relationships may exist. For example, "A and/or B" may indicate the following three cases: only A exists, both A and B exist, and only B exists.
The terms "first", "second", and the like in the specification and claims of the embodiments of this application are used to distinguish different objects, not to describe a particular order of the objects. For example, a first target object and a second target object are used to distinguish different target objects, not to describe a particular order of the target objects.
In the embodiments of this application, words such as "exemplary" or "for example" are used to give examples, illustrations, or explanations. Any embodiment or design described as "exemplary" or "for example" should not be construed as being preferred or advantageous over other embodiments or designs; rather, these words are intended to present related concepts in a concrete manner.
In the description of the embodiments of this application, unless otherwise stated, "multiple" means two or more. For example, multiple processing units means two or more processing units, and multiple systems means two or more systems.
Before the technical solutions of the embodiments of this application are described, the application scenario of the embodiments is first described with reference to the drawings. FIG. 1 is a schematic diagram of an application scenario provided by an embodiment of this application. The scenario includes a preceding vehicle and an ego vehicle (also called the host vehicle). The ego vehicle may optionally be the vehicle the user is driving, and the preceding vehicle may optionally be a vehicle driving or stopped in front of the ego vehicle. Note that the scenario shown in FIG. 1 is only an illustrative example; in other embodiments the scenario may include multiple preceding vehicles, which is not limited in this application.
Scenario One
FIG. 2a is a schematic flowchart of a preceding vehicle start detection method according to an embodiment of this application. As shown in FIG. 2a, the procedure for detecting a preceding vehicle start in this application mainly includes:
S10: Obtain a first image frame, where the first image frame includes an image of the preceding vehicle.
S20: Obtain first optical flow information based on the first image frame and a second image frame, where the second image frame is the image frame immediately preceding the first image frame and also includes an image of the preceding vehicle, and the first optical flow information indicates the optical-flow motion trend of feature points of the preceding vehicle in the first image frame relative to feature points of the preceding vehicle in the second image frame.
S30: Detect, based on the first optical flow information, whether the preceding vehicle starts.
The method in FIG. 2a is described in detail below with reference to specific embodiments. In conjunction with FIG. 1, FIG. 2b is a schematic flowchart of a preceding vehicle start detection method according to an embodiment of this application. Referring to FIG. 2b, the method specifically includes:
S101: Obtain an image frame.
For example, the ego vehicle is fitted with a dashcam, or with another camera apparatus mounted at the front of the vehicle. The dashcam or camera apparatus may be mounted on the inside of the front windshield, or at another location, which is not limited in this application.
For example, take the case where the vehicle (the ego vehicle) is fitted with a dashcam whose frame rate is 60 fps, where the frame rate is the number of images the dashcam captures per second; 60 fps means the dashcam captures 60 images per second. The frame rate described in this embodiment is only an illustrative example and is not limited in this application.
For example, the detection apparatus may periodically obtain the images captured by the dashcam. In this embodiment, a period length of 500 ms (milliseconds) is used as an example; in other embodiments the period may be longer or shorter, which is not limited in this application.
It should be noted that the detection apparatus in the embodiments of this application may optionally be integrated in the dashcam. For example, the detection apparatus may be a chip of the dashcam; optionally, it may be a module integrated on the chip of the dashcam; optionally, it may also be program code (or program instructions) executable by the processor of the dashcam. For example, the detection apparatus may also be a chip of the vehicle, a module integrated on a chip of the vehicle, or program code executed by a processor in the vehicle. In the embodiments of this application, the detection apparatus is taken to be a hardened optical-flow implementation module in the vehicle, which may be integrated on the chip where the vehicle's processor resides, or be outside that chip. For example, implementing the recognition method of the embodiments of this application with a separate hardened optical-flow module can reduce central processing unit (CPU) resource usage while keeping the recognition method efficient.
For example, at each period arrival time, the detection apparatus obtains the image (also called an image frame) currently captured by the dashcam. The image frame obtained by the detection apparatus within the current period may optionally be the first image frame described in FIG. 2a. That is, while the dashcam captures images at 60 fps, the detection apparatus may obtain the currently captured image from the dashcam every 500 ms (the period length).
S102: Judge whether the previous detection result indicates that the ego vehicle and the preceding vehicle are already in a relatively stationary state.
For example, each time the detection apparatus obtains an image, it executes the flow shown in FIG. 2b once; that is, the flow in FIG. 2b is executed cyclically.
For example, the detection apparatus may check, based on the previous detection result, whether the ego vehicle and the preceding vehicle are already relatively stationary, that is, whether the previous detection result is that the two vehicles are relatively stationary. How the detection result is obtained is described in detail in the embodiments below.
For example, the previous detection result may be of two kinds: one indicating that the preceding vehicle and the ego vehicle are in a stationary state, and the other indicating that they are in a non-stationary state.
In one example, if the previous detection result obtained by the detection apparatus indicates that the ego vehicle and the preceding vehicle are in a non-relatively-stationary state, the flow proceeds to S103. For example, the "non-relatively-stationary state" may optionally be that the two vehicles are in a relatively moving state; optionally, it may also be that no detection result was found, where the "no detection result found" case is explained in the embodiments below.
In another example, if the previous detection result obtained by the detection apparatus indicates that the ego vehicle and the preceding vehicle are relatively stationary, the flow proceeds to S105.
S103: Based on the previous image frame and the current image frame, detect whether the ego vehicle and the preceding vehicle are relatively stationary.
For example, before detecting the relative state of the ego vehicle and the preceding vehicle, the detection apparatus may first recognize whether the current image frame includes a preceding vehicle.
The preceding-vehicle recognition manner is described below:
FIG. 3 is a schematic diagram of an exemplary preceding-vehicle detection region. Referring to FIG. 3, take image frame 301 as an example. The shaded part in FIG. 3 is the preceding-vehicle detection region 302. For example, the detection region 302 may optionally be located in the middle of image frame 301, and its width may optionally be one quarter of the width of image frame 301. The width and position of the detection region 302 in this embodiment are only illustrative examples and are not limited in this application.
FIG. 4 is a schematic diagram of an exemplary preceding-vehicle detection manner. Referring to FIG. 4, still taking image frame 301 as an example, image frame 301 includes an image of vehicle 401 (hereinafter vehicle 401) and an image of vehicle 402 (hereinafter vehicle 402).
For example, the detection apparatus is preconfigured with preceding-vehicle judgment conditions, which include:
1) The area of the image of the preceding vehicle within the preceding-vehicle detection region is greater than or equal to a set preceding-vehicle detection threshold.
2) When multiple vehicles satisfy condition 1), the one closest to the bottom edge of the image frame is the preceding vehicle.
Optionally, the judgment conditions may further include: the distance between the preceding vehicle and the ego vehicle is smaller than a set distance threshold (for example 3 m, configurable according to actual needs and not limited in this application). Note that the distance between the two vehicles may be obtained by any distance detection method, which is not described again here.
For example, the detection apparatus may judge, based on the above conditions, whether a preceding vehicle exists in the image frame. That is, in the embodiments of this application, the "preceding vehicle" is the vehicle that satisfies the above judgment conditions.
Still referring to FIG. 4, take the set detection threshold to be 70% of the overall area of the vehicle image. The area of vehicle 401's image within the detection region exceeds 70% of vehicle 401's image area, and the area of vehicle 402's image within the detection region exceeds 70% of vehicle 402's image area. Note that the image area of a vehicle may be recognized by any existing image recognition method, such as edge recognition, which is not limited in this application.
For example, both vehicle 401 and vehicle 402 in FIG. 4 satisfy judgment condition 1). The detection apparatus can determine that there are currently two vehicles satisfying condition 1), and further determines the preceding vehicle based on condition 2).
Continuing with FIG. 4, the image of vehicle 401 is closer to the bottom edge of image frame 301 than the image of vehicle 402. Accordingly, the detection apparatus can determine that vehicle 401 is the preceding vehicle.
In one example, if the detection apparatus does not recognize a preceding vehicle in the image frame, it does nothing, and the processing flow of this period ends. Accordingly, the flow returns to S101 to process the next frame. That is, within the current period, the detection apparatus does not cache any detection result. Accordingly, when the detection apparatus executes the flow of FIG. 2b in the next period and reaches S102, it can determine that no detection result is found, i.e., that the ego vehicle and the preceding vehicle are in a non-relatively-stationary state. In other words, failing to recognize a preceding vehicle is one of the reasons for the "no detection result found" case mentioned above.
In another example, if the detection apparatus recognizes a preceding vehicle in the current image frame, it may judge, based on the cached previous image frame and the current image frame, whether the ego vehicle and the preceding vehicle are relatively stationary. The recognition of the relatively stationary state between the two vehicles is described in detail below with reference to the drawings.
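As a rough, non-authoritative sketch of the two judgment conditions above (the box representation, the function name, and using the 70% area fraction as a default are illustrative assumptions, not the patented implementation), the preceding-vehicle selection could look like:

```python
def pick_front_vehicle(boxes, region, area_frac=0.7):
    """Select the preceding vehicle from candidate boxes (x1, y1, x2, y2).

    Condition 1): the part of a box inside the detection region must cover
    at least `area_frac` of the box's own area.
    Condition 2): among qualifying boxes, the one closest to the bottom
    edge of the frame (largest y2, with y growing downward) is chosen.
    """
    rx1, ry1, rx2, ry2 = region
    candidates = []
    for (x1, y1, x2, y2) in boxes:
        box_area = (x2 - x1) * (y2 - y1)
        # overlap of the candidate box with the detection region
        ox = max(0, min(x2, rx2) - max(x1, rx1))
        oy = max(0, min(y2, ry2) - max(y1, ry1))
        if box_area > 0 and ox * oy >= area_frac * box_area:
            candidates.append((x1, y1, x2, y2))
    if not candidates:
        return None  # no preceding vehicle in this frame
    return max(candidates, key=lambda b: b[3])  # nearest to the bottom edge
```

Returning `None` mirrors the "no preceding vehicle recognized, do nothing this period" branch described above.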
FIG. 5 is a schematic flowchart of an exemplary relatively-stationary-state determination. Referring to FIG. 5, it specifically includes:
S201: Obtain the preceding-vehicle image in the current image frame.
For example, the detection apparatus caches the image information of the previous frame, which may optionally include the preceding-vehicle image and the optical flow information of the previous frame. The detection apparatus can obtain the preceding-vehicle image in the current image frame based on the size and position of the preceding-vehicle image in the previous frame. The manner in which the detection apparatus obtained the preceding-vehicle image and optical flow of the previous frame in the previous period is described below. The previous image frame may optionally be the second image frame described in FIG. 2a.
FIG. 6a is a schematic diagram of exemplary image frame recognition. Referring to FIG. 6a, assume that image frame 301 shown in FIG. 4 is the previously obtained image frame. Note that "the previously obtained image frame" refers to the image frame the detection apparatus obtained, based on the image-frame acquisition period (for example the 500 ms above), in the period immediately preceding the current one.
For example, the detection apparatus may recognize vehicle 401 in image 301 using an image recognition method (refer to the prior art; not limited in this application). The detection apparatus may further obtain the preceding-vehicle region 601 based on the recognized center point of vehicle 401. Note that the center point of vehicle 401 may for example be obtained by constructing a rectangular box from the edges of vehicle 401 and taking the center of the box as the center point; this manner is only an illustrative example and is not limited in this application.
Optionally, the size of the preceding-vehicle region 601 is a set size, for example 80*80 pixels, configurable according to actual needs and not limited in this application. Typically, the size of the preceding-vehicle region is smaller than the size of the vehicle in the image, to avoid the influence of other objects in the image, such as street lamps and vehicle lights, on the optical flow algorithm. Note that the preceding-vehicle region may also have other shapes, which is not limited in this application.
For example, the detection apparatus obtains the optical flow information of the preceding-vehicle region 601. Optionally, the detection apparatus may recognize the feature points in region 601 based on the STCorner (Shi-Tomasi corner) algorithm; the feature points in region 601 are the optical flow information. The recognition result may be as shown in FIG. 7a. Optionally, the feature points may be edge points, center points, or reflective points in the image, which is not limited in this application.
Referring to FIG. 7a, the multiple feature points in the preceding-vehicle region 601 are the optical flow information of the previous image frame. Note that the detection apparatus may also obtain the feature points in region 601 based on other algorithms; the algorithm in this application is only an illustrative example and is not limited.
FIG. 6b is a schematic diagram of exemplary image frame recognition. Referring to FIG. 6b, the detection apparatus may obtain the preceding-vehicle image 602 based on the center point of region 601. Optionally, the center point of region 601 may be the center of its rectangular box, for example the intersection of its two diagonals. Optionally, if region 601 has another shape, the manner of obtaining its center may be set according to actual needs, which is not limited in this application.
Referring to FIG. 6b, the detection apparatus may obtain, based on the center point of region 601, a preceding-vehicle image 602 with a set size, optionally 120*120 pixels. The size of image 602 may optionally be larger than that of region 601, to improve the precision of the subsequent optical flow computation. For example, the center of image 602 coincides with the center of region 601. Note that the shape and size of the preceding-vehicle image in this embodiment are illustrative and may be set according to actual needs, which is not limited in this application.
For example, the detection apparatus saves the obtained optical flow information and preceding-vehicle image of the previous frame in a memory. The saved preceding-vehicle image includes: the image content of image 602, its size, and its position in image frame 301. The position of image 602 in image frame 301 may be the coordinates of its four vertices in a coordinate system constructed from the bottom and side edges of image frame 301, which is not limited in this application.
Note that the above processing of the previous frame, including image recognition, optical flow acquisition, and storage, was all completed in the previous period. That is, in the previous period the detection apparatus already processed the previous frame accordingly and obtained the corresponding image information (including the preceding-vehicle image and the optical flow information).
FIG. 8 is a schematic diagram of exemplary image frame recognition. Referring to FIG. 8, assume image frame 801 is the image frame obtained in the current period. After recognizing that image frame 801 contains a preceding vehicle satisfying the judgment conditions (see above for the specific steps), the detection apparatus may obtain the preceding-vehicle image 802 in the frame obtained in this period (image frame 801), based on the preceding-vehicle image 602 determined in the previous frame, image frame 301.
For example, the detection apparatus may obtain image 802 in image frame 801 based on the size of image 602 and its position in image frame 301, where the size of image 802 and its position in image frame 801 are the same as the size of image 602 and its position in image frame 301.
Note that images 602 and 802 may include all or part of the image of vehicle 401 and may also include other background content; for example, image 802 also includes part of the image of vehicle 402's tire. Note that the other background content in images 602 and 802 will most likely not affect the detection result.
S202: Obtain optical flow vectors based on the preceding-vehicle image and optical flow information of the previous image frame, and the preceding-vehicle image in the current image frame.
For example, the detection apparatus may obtain, via an optical flow algorithm, the optical flow information in preceding-vehicle image 802 based on the obtained preceding-vehicle image 602 of the previous frame, its optical flow information, and the preceding-vehicle image 802 of the current frame, and obtain the optical flow vectors based on the optical flow information of image 602 and that of image 802.
Referring to FIG. 7b, the optical flow algorithm may, based on the input optical flow information (the multiple feature points in region 601) and image 602, determine in image 802 the feature point corresponding to each feature point in image 602 (i.e., the optical flow information corresponding to image 802).
Next, the optical flow algorithm may obtain the optical flow vector corresponding to each feature point based on the correspondence between the obtained feature points of image 602 and the feature points in image 802.
For example, FIG. 9 is a schematic diagram of exemplary feature-point correspondences. Referring to FIG. 9, take feature point 901, one of the multiple feature points (the optical flow information) in image 602 obtained by the detection apparatus, as an example. The feature point in image 802 corresponding to feature point 901, obtained via the optical flow algorithm, is feature point 902. According to the optical flow algorithm, the detection apparatus obtains, based on feature points 902 and 901, the corresponding vector (the optical flow vector), where the vector points to the feature point corresponding to the current image frame, i.e., feature point 902. The processing of the other feature points may refer to FIG. 9 and is not illustrated one by one. Accordingly, the detection apparatus can obtain the optical flow vectors corresponding to all or some of the feature points in image 602.
Note that tracking the optical flow based on the feature points in images 602 and 802 avoids problems caused by the preceding vehicle moving fast; for example, if the preceding vehicle moves fast, the corresponding feature points may fall outside the coverage of preceding-vehicle region 603. Therefore, using a tracking range, i.e., a feature-point recognition range, larger than the preceding-vehicle region 603 can effectively reduce misjudgment. In other embodiments, the detection apparatus may also determine the corresponding preceding-vehicle region in image frame 801 based on region 601 and search for the corresponding feature points there; in that example, the image features saved by the detection apparatus are region 601 and the optical flow information, with no need to save the preceding-vehicle image.
It should further be noted that the flow of the optical flow algorithm described in the embodiments of this application is only an illustrative example. The purpose of the optical flow algorithm is to perform optical flow tracking, also called feature-point tracking, based on the optical flow in the previous image frame, so as to obtain the motion trend between the optical flow in the current frame and that in the previous frame. The specific computation of the optical flow algorithm is not limited in this application.
In a possible implementation, the detection apparatus may also obtain the image pyramid of image 602 and the image pyramid of image 802, and use the two image pyramids as input parameters of the optical flow algorithm, thereby improving the accuracy of feature-point recognition.
S203: Obtain the median of the motion magnitudes of all the optical flow vectors.
For example, the length of each optical flow vector indicates the motion magnitude, also called the offset, between the two feature points that construct the vector. The detection apparatus may obtain the median of the lengths of all the optical flow vectors, which may also be understood as the median of the motion magnitudes of the feature points.
Optionally, the detection apparatus may instead take the mean of the lengths of all the vectors, etc., which is not limited in this application.
S204: Judge whether the median is less than or equal to a set static threshold.
In one example, if the detection apparatus detects that the median is less than or equal to the set static threshold, the flow proceeds to S206. For example, the static threshold may optionally be 2 pixels; in other embodiments it may be another value, which is not limited in this application.
In another example, if the detection apparatus detects that the median is greater than the set static threshold, the flow proceeds to S205.
S205: Determine that the ego vehicle and the preceding vehicle are in a relatively moving state.
S206: Determine that the ego vehicle and the preceding vehicle are in a relatively stationary state.
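The magnitude-based state decisions of S204-S206 (and the later S304-S306) can be sketched as follows. This is a simplified illustration, not the patented implementation: the 2-pixel and 6-pixel thresholds are the example values from the text, and the function names are hypothetical:

```python
import math
from statistics import median

STATIC_THRESH = 2.0  # pixels; example static threshold from the text
MOTION_THRESH = 6.0  # pixels; example motion threshold from the text

def flow_magnitude_median(prev_pts, curr_pts):
    """Median displacement magnitude over matched feature-point pairs."""
    mags = [math.hypot(cx - px, cy - py)
            for (px, py), (cx, cy) in zip(prev_pts, curr_pts)]
    return median(mags)

def relative_state(prev_pts, curr_pts, was_static):
    """Classify the ego/preceding relative state for one detection period.

    Before the vehicles are known to be relatively stationary, a median at
    or below STATIC_THRESH declares them stationary (S204/S206); once
    stationary, a median at or above MOTION_THRESH declares a change to
    relative motion (S304/S306).
    """
    m = flow_magnitude_median(prev_pts, curr_pts)
    if not was_static:
        return "static" if m <= STATIC_THRESH else "moving"
    return "moving" if m >= MOTION_THRESH else "static"
```

The asymmetric thresholds give a hysteresis band (2 to 6 pixels) that keeps small jitter from toggling the state between periods.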
Continuing with FIG. 2b, after the detection apparatus obtains the detection result, the flow proceeds to S104.
S104: Save the detection result.
For example, the detection apparatus obtains the detection result, which indicates either that the ego vehicle and the preceding vehicle are relatively stationary, or that they are relatively moving.
In one example, the detection apparatus detects within this period that the ego vehicle and the preceding vehicle are relatively stationary. The detection apparatus saves the detection result, along with the optical flow information of the current image frame and preceding-vehicle image 802, where the detection result indicates that the two vehicles are relatively stationary.
Note that the optical flow information of the current image frame may be obtained by the detection apparatus re-executing the optical-flow acquisition step on the current frame, or may have been obtained while running the optical flow algorithm, for example feature point 902. This is not limited in this application.
In another example, the detection apparatus detects within this period that the ego vehicle and the preceding vehicle are relatively moving. The detection apparatus saves the detection result, along with the optical flow information of the current image frame and preceding-vehicle image 802, where the detection result indicates that the two vehicles are relatively moving.
For example, the processing of this period ends and the flow returns to S101. That is, in the next period, the detection apparatus can judge the relative motion state of the image frame obtained in the next period based on the detection result stored this time (indicating that the two vehicles are relatively stationary or relatively moving) and the image information of the frame (the optical flow information of the current image frame).
In a possible implementation, when the detection apparatus detects within this period that the two vehicles are relatively moving, it may not save the detection result and only save the optical flow information of the current frame and image 802. That is, in the next period, when the flow reaches S102, the detection apparatus will not find a previous detection result (the result of this period), judge that the two vehicles are in a non-relatively-stationary state, and proceed to S103.
S105: Based on the previous image frame and the current image frame, detect whether the ego vehicle and the preceding vehicle change from the relatively stationary state to a relatively moving state.
For example, after determining, based on the detection result obtained in the previous period, that the two vehicles are already relatively stationary, the detection apparatus may further detect whether they change from the relatively stationary state to a relatively moving state. Note that in the embodiments of this application, judging whether the preceding vehicle starts requires first judging whether the preceding vehicle moves relative to the ego vehicle, and then judging whether it moves forward. Therefore, the detection apparatus must first perform the relatively-stationary-state detection of the two vehicles, i.e., the step of S103, and only after the two vehicles were already relatively stationary in the previous detection, perform the relatively-moving-state detection, i.e., the step of S105.
FIG. 10 is a schematic flowchart of an exemplary relatively-moving-state determination. Referring to FIG. 10, it specifically includes:
S301: Obtain the preceding-vehicle image in the current image frame.
For example, FIG. 11 is a schematic diagram of an exemplary preceding-vehicle image. Assume that in this step, image frame 301 in FIG. 6 is still the previous image frame, and FIG. 11 shows image frame 1101 obtained in the current period. The detection apparatus may obtain preceding-vehicle image 1102 of the current image frame 1101 based on preceding-vehicle image 602 of image frame 301; for the specific manner, refer to the related content of S201, which is not repeated here.
S302: Obtain optical flow vectors based on the preceding-vehicle image and optical flow information of the previous image frame, and the preceding-vehicle image in the current image frame.
For example, as shown in FIG. 12a, the memory of the detection apparatus stores the optical flow information (i.e., multiple feature points) within region 601 of the previous image frame 301. As shown in FIG. 12b, based on the preceding-vehicle image 602 and optical flow information of the previous frame, and the preceding-vehicle image 1102 of current image frame 1101, the detection apparatus may obtain, via the optical flow algorithm, the feature points in image 1102 corresponding to all or some of the feature points in the optical flow information (the multiple feature points) of frame 301.
For example, the optical flow algorithm may obtain the optical flow vector corresponding to each feature point based on the correspondence between the feature points of image 602 and those of image 1102. For the parts of the optical flow algorithm not described here, refer to the description above, which is not repeated here.
For example, FIG. 13 is a schematic diagram of exemplary feature-point correspondences. Referring to FIG. 13, the feature points (the optical flow information) in image 602 obtained by the detection apparatus include feature point 1302, and the feature point in image 1102 corresponding to feature point 1302, obtained via the optical flow algorithm, is feature point 1301. According to the optical flow algorithm, the detection apparatus obtains, based on feature points 1301 and 1302, the corresponding vector (the optical flow vector), where the vector points to the feature point corresponding to the current image frame, i.e., feature point 1301. The processing of the other feature points is similar and not repeated here.
For the parts not described, refer to the related content of S202, which is not repeated here.
S303: Obtain the median of the motion magnitudes of all the optical flow vectors.
For the specific details of S303, refer to the related description of S203, which is not repeated here.
S304: Judge whether the median is greater than or equal to a set motion threshold.
In one example, if the detection apparatus detects that the median is greater than or equal to the set motion threshold, the flow proceeds to S306. For example, the motion threshold is greater than the static threshold described above; for example, the motion threshold may be set to 6 pixels. In other embodiments it may be another value, which is not limited in this application.
In another example, if the detection apparatus detects that the median is less than the set motion threshold, the flow proceeds to S305.
S305: Determine that the ego vehicle and the preceding vehicle are in a relatively stationary state.
For example, as described above, the detection apparatus has determined that in the previous detection result the two vehicles were already relatively stationary. Accordingly, if in this detection the apparatus determines that they are relatively stationary, it can be understood that the two vehicles remained relatively stationary over the two-period interval, for example within 500 ms.
S306: Determine that the ego vehicle and the preceding vehicle are in a relatively moving state.
For example, as described above, the detection apparatus has determined that in the previous detection result the two vehicles were already relatively stationary. Accordingly, if in this detection the apparatus determines that they are relatively moving, the detection apparatus can determine that the preceding vehicle and the ego vehicle changed from the relatively stationary state to a relatively moving state. The flow proceeds to S106 in FIG. 2b.
Continuing with FIG. 2b, after the detection apparatus detects that the preceding vehicle changed from the stationary state to a moving state relative to the ego vehicle, the flow proceeds to S106.
S106: Detect whether the preceding vehicle moves forward.
For example, as shown in FIG. 14, in S302 the detection apparatus can obtain the vectors corresponding to all or some of the feature points within preceding-vehicle region 603 of image frame 1101. Note that the lengths, directions, and number of the optical flow vectors in FIG. 14 are only illustrative examples and are not limited in this application.
Referring to FIG. 14, each optical flow vector has a length and a direction. As described above, the length indicates the motion magnitude between the feature points, and the direction of the optical flow vector may optionally indicate the motion direction between the feature points. Accordingly, the detection apparatus can obtain, from the direction information of all the optical flow vectors, the motion direction between the feature points in the preceding-vehicle region of the current frame and those in the preceding-vehicle region of the previous frame, and can further determine, based on the obtained motion direction between the feature points, whether the preceding vehicle moves forward relative to the ego vehicle, i.e., whether the preceding vehicle starts.
For example, the specific steps by which the detection apparatus detects whether the preceding vehicle moves forward may include:
1) Determine a convergence point based on the direction information of all the optical flow vectors.
For example, if the preceding vehicle moves forward relative to the ego vehicle, the optical flow corresponding to its feature points shows a converging trend. Continuing with FIG. 14, the detection apparatus may find the convergence point 1401 (also called the convergence center) based on the intersections of multiple optical flow vectors among all the vectors. Note that, ideally, the distances from the convergence point to all the vectors should all be 0. Therefore, when the optical flow vectors form multiple intersections, the convergence point may be determined from the sum of the distances from each obtained intersection to all the vectors: the intersection with the smallest sum of distances is the convergence point.
2) Based on the convergence point, traverse all the optical flow vectors and judge whether the number of vectors pointing to the convergence point is greater than or equal to a set first optical-flow threshold.
For example, after determining the convergence point, the detection apparatus may traverse all the optical flow vectors and determine, from the direction information of each vector, the number of optical flow vectors pointing to the convergence point.
In one example, if the number of vectors pointing to the convergence point is greater than or equal to the set first optical-flow threshold, the detection apparatus can determine that the preceding vehicle moves forward. For example, the first optical-flow threshold may optionally be 60% of the total number of flows; in other embodiments it may be another value, which is not limited in this application.
In another example, if the number of vectors pointing to the convergence point is less than or equal to a set second optical-flow threshold, the detection apparatus can determine that the preceding vehicle moves backward. Note that if the vehicle moves backward, the directions of the optical flow vectors are divergent, so the number of vectors pointing to the convergence point is very small or 0.
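Steps 1) and 2) above can be illustrated with the following simplified sketch. The angular tolerance, the use of 60% as a default fraction, and the function names are assumptions for illustration, and the convergence point is taken as given rather than computed from vector intersections:

```python
import math

def points_toward(start, end, target, tol_deg=15.0):
    """True if the flow vector start->end points toward `target`
    within an angular tolerance of tol_deg degrees."""
    vx, vy = end[0] - start[0], end[1] - start[1]
    tx, ty = target[0] - start[0], target[1] - start[1]
    # signed angle between the flow direction and the direction to target
    ang = abs(math.atan2(vx * ty - vy * tx, vx * tx + vy * ty))
    return math.degrees(ang) <= tol_deg

def preceding_moves_forward(vectors, convergence, min_frac=0.6):
    """Forward motion if at least min_frac of the flow vectors point at
    the convergence point (converging flow); diverging flow, with few or
    no vectors pointing at it, would indicate backward motion."""
    n = sum(points_toward(s, e, convergence) for s, e in vectors)
    return n >= min_frac * len(vectors)
```

Each element of `vectors` is a (start, end) pair of matched feature-point positions, with `end` in the current frame, matching the convention that the vector points to the current frame's feature point.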
S107: Alert.
For example, after detecting the preceding vehicle start, the detection apparatus raises an alert to remind the user that the preceding vehicle has started. Optionally, the detection apparatus may alert through the vehicle's audio system or through the prompt tone of the dashcam.
For example, if the detection apparatus detects that the preceding vehicle is backing up relative to the ego vehicle, the detection apparatus may likewise raise an alert to remind the user that the preceding vehicle is reversing.
For example, the detection apparatus clears the cached information and re-executes S101. Optionally, the cleared cache information includes but is not limited to: the recorded detection result(s), and the recorded image information of the previous image frame (for example, including the optical flow information and preceding-vehicle image of the previous frame).
In a possible implementation, in S102 the detection apparatus may determine, based on multiple cached detection results, whether the two vehicles have been relatively stationary for a set duration, i.e., over multiple periods, and execute S105 only after detecting that the multiple cached detection results all indicate that the two vehicles are relatively stationary.
In another possible implementation, in S105 the detection apparatus may execute S106 only after determining, based on the image frames of multiple consecutive periods, that the preceding vehicle changed from the relatively stationary state to a moving state, so as to prevent misjudgment.
In yet another possible implementation, in S103 the detection apparatus may first detect whether the ego vehicle is stationary and, after determining that the ego vehicle is stationary, judge whether the two vehicles are relatively stationary. This may also be understood as both vehicles being absolutely stationary relative to the ground. Optionally, the detection apparatus may obtain parameters collected by an accelerometer, gyroscope, etc. integrated in the vehicle, so as to determine, based on the obtained parameters, whether the ego vehicle is absolutely stationary relative to the ground. Optionally, if the detection apparatus detects that the ego vehicle is not stationary, it may determine the detection result to be the non-relatively-stationary state.
To help those skilled in the art better understand the technical solutions in the embodiments of this application, the preceding vehicle start detection method in the embodiments is described in detail below with specific embodiments. FIG. 15 to FIG. 19 are schematic flowcharts of the exemplary preceding vehicle start detection method, which is described in detail below based on FIG. 15 to FIG. 19 in turn.
FIG. 15 is a schematic flowchart of an exemplary preceding vehicle start detection method. Referring to FIG. 15, it specifically includes:
S401: Obtain a first image frame.
For example, as shown in the scenario in FIG. 1, after the ego vehicle starts, the dashcam begins capturing images. The detection apparatus obtains the first image frame at the trigger time of the current detection period.
S402: Judge whether a detection result and/or image information of an image frame is cached.
For example, in this embodiment, take the case where the detection apparatus finds no cached detection result and no cached image information of an image frame. Note that there are multiple reasons why the detection apparatus finds neither. In one example, as described above, the detection apparatus clears the cache after detecting a preceding vehicle start last time. In another example, when no preceding vehicle was detected in the previous image frame, no detection result or image information was cached either. In yet another example, after the ego vehicle starts up, i.e., after the dashcam and the detection apparatus initially start, the detection apparatus likewise stores no detection result or image information.
S403: Detect whether the first image frame includes a preceding vehicle.
For the detection manner, refer to the description above, which is not repeated here. This embodiment is described taking the case where the first image frame includes a preceding vehicle. The flow proceeds to S404.
S404: Cache the image information of the first image frame.
For example, having detected that the first image frame includes a preceding vehicle, the detection apparatus may further obtain the feature points within the preceding-vehicle region of the first frame, i.e., the optical flow information, and obtain the image pyramid of the preceding-vehicle image. For the specific details, refer to the description above, which is not repeated here.
For example, the detection apparatus caches the image information of the first image frame, i.e., the optical flow information of the first frame and the image pyramid of the preceding-vehicle image.
Following FIG. 15, FIG. 16 is a schematic flowchart of the exemplary preceding vehicle start detection method. Referring to FIG. 16, it specifically includes:
S501: Obtain a second image frame.
For example, at the period arrival time, the detection apparatus obtains from the dashcam the image captured at the current moment. The acquisition interval between the first and second image frames is the period length described above (also called the detection period length); optionally, the period length is 500 ms.
S502: Judge whether a detection result and/or image information of an image frame is cached.
For example, as described in S404 above, the detection apparatus cached the image information of the first image frame in the previous period, for example the optical flow information and the image pyramid of the preceding-vehicle image. In this step, the detection apparatus can detect that image information of an image frame is cached. The flow proceeds to S503.
S503: Judge whether the previous detection result is that the ego vehicle and the preceding vehicle are relatively stationary.
For example, as described in S404 of FIG. 15, the detection apparatus cached only the image information of the frame, not a detection result. Therefore, in S503 the detection apparatus judges that the two vehicles were not relatively stationary in the previous period. For the specific description, refer to the related content of S202, which is not repeated here. The flow proceeds to S504.
S504: Detect whether the second image frame includes a preceding vehicle.
For the detection manner, refer to the description above, which is not repeated here. This embodiment is described taking the case where the second image frame includes a preceding vehicle. The flow proceeds to S505.
S505: Based on the first and second image frames, detect whether the ego vehicle and the preceding vehicle are relatively stationary.
For example, as described in S502, the detection apparatus detects that the memory caches the image information of the frame obtained in the previous period, i.e., the image information corresponding to the first image frame, for example the optical flow information and the image pyramid of the preceding-vehicle image. Accordingly, the detection apparatus can obtain the image information corresponding to the frame obtained in the current period, i.e., the second image frame, for example the image pyramid of the preceding-vehicle image and the optical flow information of the second frame. Then, based on the image information of the first and second frames, the detection apparatus can detect whether the two vehicles are relatively stationary. For the specific detection manner, refer to the related content of S103 above, which is not repeated here.
For example, this embodiment is described taking the case where the obtained median (see above for the concept, not repeated here) is greater than the set static threshold (2 pixels). Accordingly, the detection apparatus determines that the two vehicles are relatively moving. The flow proceeds to S506.
S506: Cache the detection result and the image information of the second image frame.
For example, after detecting that the two vehicles are relatively moving, the detection apparatus caches this detection result, i.e., that the two vehicles are relatively moving, and caches the image information of the second image frame, i.e., the optical flow information of the second frame and the image pyramid of the preceding-vehicle image. The image information below is similar and is not described repeatedly.
Following FIG. 16, FIG. 17 is a schematic flowchart of the exemplary preceding vehicle start detection method. Referring to FIG. 17, it specifically includes:
S601: Obtain a third image frame.
For example, at the period arrival time, the detection apparatus obtains from the dashcam the image captured at the current moment. The acquisition interval between the second and third image frames is the period length described above (also called the detection period length); optionally, the period length is 500 ms.
S602: Judge whether a detection result and/or image information of an image frame is cached.
For example, as described in S506 above, the detection apparatus cached the detection result and the image information of the second image frame in the previous period. In this step, the detection apparatus can detect that the previous detection result and the image information of an image frame are cached. The flow proceeds to S603.
S603: Judge whether the previous detection result is that the ego vehicle and the preceding vehicle are relatively stationary.
For example, as described in S506 above, the detection result cached in the previous period indicates that the two vehicles are relatively moving. Therefore, in this step the detection apparatus judges that the two vehicles were not relatively stationary in the previous period. For the specific description, refer to the related content of S202, which is not repeated here. The flow proceeds to S604.
S604: Detect whether the third image frame includes a preceding vehicle.
For the detection manner, refer to the description above, which is not repeated here. This embodiment is described taking the case where the third image frame includes a preceding vehicle. The flow proceeds to S605.
S605: Based on the second and third image frames, detect whether the ego vehicle and the preceding vehicle are relatively stationary.
For example, as described in S602, the detection apparatus detects that the memory caches the detection result obtained in the previous period and the image information of the frame, i.e., the image information corresponding to the second image frame. Accordingly, the detection apparatus can obtain the image information corresponding to the frame obtained in the current period, i.e., the third image frame. Then, based on the image information of the second and third frames, the detection apparatus can detect whether the two vehicles are relatively stationary. For the specific detection manner, refer to the related content of S103 above, which is not repeated here.
For example, this embodiment is described taking the case where the obtained median (see above for the concept, not repeated here) is less than the set static threshold (2 pixels). Accordingly, the detection apparatus determines that the two vehicles are relatively stationary. The flow proceeds to S606.
S606: Cache the detection result and the image information of the third image frame.
For example, after detecting that the two vehicles are relatively stationary, the detection apparatus caches this detection result, i.e., that the two vehicles are relatively stationary, and caches the image information of the third image frame.
Following FIG. 17, FIG. 18 is a schematic flowchart of the exemplary preceding vehicle start detection method. Referring to FIG. 18, it specifically includes:
S701: Obtain a fourth image frame.
For example, at the period arrival time, the detection apparatus obtains from the dashcam the image captured at the current moment. The acquisition interval between the third and fourth image frames is the period length described above (also called the detection period length); optionally, the period length is 500 ms.
S702: Judge whether a detection result and/or image information of an image frame is cached.
For example, as described in S606 above, the detection apparatus cached the detection result and the image information of the third image frame in the previous period. In this step, the detection apparatus can detect that the previous detection result and the image information of an image frame are cached. The flow proceeds to S703.
S703: Judge whether the previous detection result is that the ego vehicle and the preceding vehicle are relatively stationary.
For example, as described in S606 above, the detection result cached in the previous period indicates that the two vehicles are relatively stationary. Therefore, in this step the detection apparatus judges that the two vehicles were relatively stationary in the previous period. For the specific description, refer to the related content of S202, which is not repeated here. The flow proceeds to S704.
S704: Detect whether the fourth image frame includes a preceding vehicle.
For the detection manner, refer to the description above, which is not repeated here. This embodiment is described taking the case where the fourth image frame includes a preceding vehicle. The flow proceeds to S705.
S705: Based on the third and fourth image frames, detect whether the ego vehicle and the preceding vehicle change from the relatively stationary state to a relatively moving state.
For example, as described in S702, the detection apparatus detects that the memory caches the detection result obtained in the previous period and the image information of the frame, i.e., the image information corresponding to the third image frame, for example the optical flow information and the preceding-vehicle image. Accordingly, the detection apparatus can obtain the image information corresponding to the frame obtained in the current period, i.e., the fourth image frame. Then, based on the image information of the third and fourth frames, the detection apparatus can detect whether the two vehicles are relatively moving, that is, whether they changed from the relatively stationary state of the previous period to a relatively moving state. For the specific detection manner, refer to the related content of S105 above, which is not repeated here.
For example, this embodiment is described taking the case where the obtained median (see above for the concept, not repeated here) is less than the set motion threshold (6 pixels). Accordingly, the detection apparatus determines that the two vehicles are still relatively stationary, i.e., have not changed from the relatively stationary state to a relatively moving state. The flow proceeds to S706.
S706: Cache the detection result and the image information of the fourth image frame.
For example, after detecting that the two vehicles are relatively stationary, the detection apparatus caches this detection result, i.e., that the two vehicles are relatively stationary, and caches the image information of the fourth image frame.
Following FIG. 18, FIG. 19 is a schematic flowchart of the exemplary preceding vehicle start detection method. Referring to FIG. 19, it specifically includes:
S801: Obtain a fifth image frame.
For example, at the period arrival time, the detection apparatus obtains from the dashcam the image captured at the current moment. The acquisition interval between the fourth and fifth image frames is the period length described above (also called the detection period length); optionally, the period length is 500 ms.
S802: Judge whether a detection result and/or image information of an image frame is cached.
For example, as described in S706 above, the detection apparatus cached the detection result and the image information of the fourth image frame in the previous period. In this step, the detection apparatus can detect that the previous detection result and the image information of an image frame are cached. The flow proceeds to S803.
S803: Judge whether the previous detection result is that the ego vehicle and the preceding vehicle are relatively stationary.
For example, as described in S706 above, the detection result cached in the previous period indicates that the two vehicles are relatively stationary. Therefore, in this step the detection apparatus judges that the two vehicles were relatively stationary in the previous period. For the specific description, refer to the related content of S202, which is not repeated here. The flow proceeds to S804.
S804: Detect whether the fifth image frame includes a preceding vehicle.
For the detection manner, refer to the description above, which is not repeated here. This embodiment is described taking the case where the fifth image frame includes a preceding vehicle. The flow proceeds to S805.
S805: Based on the fourth and fifth image frames, detect whether the ego vehicle and the preceding vehicle change from the relatively stationary state to a relatively moving state.
For example, as described in S802, the detection apparatus detects that the memory caches the detection result obtained in the previous period and the image information of the frame, i.e., the image information corresponding to the fourth image frame, for example the feature points of the preceding-vehicle region and the image pyramid of the preceding-vehicle image. Accordingly, the detection apparatus can obtain the image information corresponding to the frame obtained in the current period, i.e., the fifth image frame. Then, based on the image information of the fourth and fifth frames, the detection apparatus can detect whether the two vehicles are relatively moving, that is, whether they changed from the relatively stationary state of the previous period to a relatively moving state. For the specific detection manner, refer to the related content of S105 above, which is not repeated here.
For example, this embodiment is described taking the case where the obtained median (see above for the concept, not repeated here) is greater than the set motion threshold (6 pixels). Accordingly, the detection apparatus determines that the two vehicles are relatively moving, i.e., the two vehicles changed from the relatively stationary state to a moving state. The flow proceeds to S806.
S806: Judge whether the preceding vehicle moves forward.
For example, the detection apparatus may further judge, based on the image information of the fourth and fifth frames, whether the preceding vehicle moves forward. For the specific judgment manner, refer to the description in S106, which is not repeated here.
For example, this step is described taking the case where the detection apparatus determines that the preceding vehicle moves forward, i.e., the preceding vehicle starts. The flow proceeds to S807.
S807: Alert.
For the specific description, refer to the related content of S107, which is not repeated here.
S808: Clear the cache.
For example, the detection apparatus clears the cached detection results and the image information of all image frames, and at the next period arrival time repeats S401 to S808.
Scenario Two
In conjunction with FIG. 1, FIG. 20 is a schematic flowchart of a preceding vehicle start detection method according to an embodiment of this application. Referring to FIG. 20, it specifically includes:
S901: Obtain an image frame.
For example, the detection apparatus obtains the image frame in the current period. For the specific description, refer to S101, which is not repeated here.
S902: Judge whether the previous detection result indicates that the ego vehicle and the preceding vehicle are already relatively stationary.
In one example, if the previous detection result indicates that the two vehicles are in a non-relatively-stationary state (for the concept, refer to Scenario One, not repeated here), the flow proceeds to S903.
In another example, if the previous detection result indicates that the two vehicles are already relatively stationary, the flow proceeds to S905.
For other undescribed content, refer to S102, which is not repeated here.
S903: Based on the image size of the preceding vehicle in the previous image frame and its image size in the current image frame, detect whether the ego vehicle and the preceding vehicle are relatively stationary.
For example, FIG. 21 is a schematic diagram of an exemplary image size of the preceding vehicle in an image frame. Referring to FIG. 21, the detection apparatus may determine, through image recognition, the preceding-vehicle region 2103 corresponding to preceding vehicle 2102 and obtain the image size of region 2103, which is the image size of vehicle 2102. Optionally, the size of region 2103 includes the height H1 of region 2103.
As described above, this embodiment uses a rectangular box only as an example. In other embodiments, the image size of vehicle 2102 may also be the size corresponding to the image formed by the edges of vehicle 2102, which is not limited in this application. Note that FIG. 21 shows the image size of vehicle 2102 in the image frame 2101 obtained in the previous period.
For example, in the current period, the detection apparatus may obtain the preceding-vehicle region in the current frame based on the frame obtained in the current period; for the specific manner, refer to the description above, which is not repeated here. For example, the detection apparatus may compare the size of the current frame's preceding-vehicle region, for example the height of the preceding-vehicle region, with the image size of region 2103, for example its height H1. For example, if the difference between the preceding-vehicle-region image sizes (e.g., heights) of the two frames is less than or equal to a set static threshold, the preceding vehicle and the ego vehicle are determined to be relatively stationary; if the difference is greater than the set static threshold, they are determined to be relatively moving. Note that the static threshold may be set according to actual needs and is not limited in this application.
Optionally, the detection apparatus may also determine whether the two vehicles are relatively stationary based on the ratio of the current frame's preceding-vehicle-region image size (e.g., height) to that of the previous frame, i.e., the current preceding-vehicle-region image size (e.g., height) divided by the previous frame's preceding-vehicle-region image size (e.g., height). Accordingly, if the ratio is greater than or equal to a set static threshold, the preceding vehicle is determined to be stationary; if the ratio is less than the set static threshold, the two vehicles are determined to be relatively moving. Note that the set static threshold differs depending on the comparison method (difference vs. ratio).
S904: Save the detection result.
S905: Based on the image size of the preceding vehicle in the previous image frame and its image size in the current image frame, detect whether the preceding vehicle starts.
For example, still take FIG. 21 as the frame obtained in the previous period. FIG. 22 exemplarily shows the image size, obtained by the detection apparatus, of vehicle 2102 in the image frame 2104 obtained in the current period.
Referring to FIG. 22, if vehicle 2102 changes from a stationary state to a moving state relative to the ego vehicle, the image size of its preceding-vehicle region 2105 in frame 2104 becomes smaller, i.e., the image size of region 2105 is smaller than that of region 2103. That is, the height H2 of region 2105 is smaller than the height H1 of region 2103.
For example, in this embodiment of this application, the detection apparatus may be configured with a set motion threshold. For example, the detection apparatus obtains the ratio of the height (H2) of region 2105 to the height (H1) of region 2103, i.e., the height value of region 2105 divided by the height value of region 2103. In one example, if the ratio is less than or equal to the set motion threshold (configurable according to the actual situation and not limited in this application), the detection apparatus can determine that the preceding vehicle starts, i.e., the preceding vehicle moves forward relative to the ego vehicle. In another example, if the ratio is greater than the set motion threshold, the detection apparatus can determine that the two vehicles are still relatively stationary. In yet another example, if the ratio is greater than 1, the detection apparatus can determine that the preceding vehicle moves backward relative to the ego vehicle.
For example, when the detection apparatus detects that the preceding vehicle moves forward or backward, the detection apparatus may alert, i.e., execute S906. For example, if the detection apparatus detects that the two vehicles are still relatively stationary, S901 is repeated.
S906: Alert.
For the specific details, refer to S107, which is not repeated here.
Note that in the embodiments of this application, the optical-flow-based preceding vehicle start detection method of Scenario One may also be combined with the image-size-based method of Scenario Two, so as to avoid misjudgment of the motion state by the optical-flow detection and improve the accuracy of preceding vehicle start detection. Specifically, in S304 described above, the detection apparatus may judge whether the median is greater than or equal to the set motion threshold. In one example, if the median is greater than or equal to the set motion threshold, the flow proceeds to S306. In this embodiment, if the median is less than the set motion threshold, the detection apparatus may incorporate the relative-motion judgment of Scenario Two. For example, the detection apparatus may detect whether the preceding vehicle starts based on its image size in the previous image frame and its image size in the current image frame; for the specific detection manner, refer to the description of Scenario Two, which is not repeated here. In one example, if the detection apparatus determines, based on the preceding vehicle's image size, that the two vehicles are relatively stationary, i.e., both the optical-flow detection and the image-size detection find that the two vehicles remain relatively stationary, the detection apparatus can judge that in the current period the two vehicles remain relatively stationary. In another example, the detection apparatus may determine, based on the preceding vehicle's image size, that the two vehicles are relatively moving, i.e., have changed from the relatively stationary state to a relatively moving state.
The solutions provided by the embodiments of this application have been described above mainly from the perspective of interaction between network elements. It can be understood that, to implement the above functions, the detection apparatus includes corresponding hardware structures and/or software modules for performing each function. Those skilled in the art should readily appreciate that, in combination with the units and algorithm steps of the examples described in the embodiments disclosed herein, the embodiments of this application can be implemented in hardware or a combination of hardware and computer software. Whether a function is performed by hardware or by computer software driving hardware depends on the particular application and design constraints of the technical solution. Skilled professionals may use different methods to implement the described functions for each particular application, but such implementations should not be considered to be beyond the scope of this application.
The embodiments of this application may divide the detection apparatus into functional modules according to the above method examples. For example, each functional module may be divided corresponding to each function, or two or more functions may be integrated in one processing module. The integrated module may be implemented in the form of hardware or in the form of a software functional module. Note that the division of modules in the embodiments of this application is illustrative and is only a logical functional division; there may be other division manners in actual implementation.
In the case where functional modules are divided corresponding to the functions, FIG. 23 shows a possible schematic structural diagram of the detection apparatus 2300 involved in the above embodiments. As shown in FIG. 23, the apparatus includes: a first acquisition module 2301, configured to obtain a first image frame that includes an image of the preceding vehicle; a second acquisition module 2302, configured to obtain first optical flow information based on the first image frame and a second image frame, where the second image frame is the image frame immediately preceding the first image frame and also includes an image of the preceding vehicle, and the first optical flow information indicates the optical-flow motion trend of feature points of the preceding vehicle in the first image frame relative to those in the second image frame; and a detection module 2303, configured to detect, based on the first optical flow information, whether the preceding vehicle starts.
On the basis of the above method embodiments, the first acquisition module 2301 is further configured to obtain, before the first image frame is obtained, a third image frame and a fourth image frame, both including an image of the preceding vehicle, the two frames being adjacent; the second acquisition module 2302 is further configured to obtain second optical flow information based on the third and fourth image frames, the second optical flow information indicating the optical-flow motion trend of feature points of the preceding vehicle in the fourth image frame relative to those in the third image frame; and the detection module 2303 is further configured to determine, based on the second optical flow information, that the ego vehicle and the preceding vehicle are relatively stationary.
On the basis of the above method embodiments, the detection module 2303 is specifically configured to detect, based on the first optical flow information, whether the ego vehicle and the preceding vehicle change from a relatively stationary state to a relatively moving state, where the relatively moving state includes the preceding vehicle moving forward or backward relative to the ego vehicle.
On the basis of the above method embodiments, the first optical flow information includes first magnitude information and first direction information; the first magnitude information indicates the motion magnitude between the feature points of the preceding vehicle in the first image frame and those in the second image frame, and the first direction information indicates the motion direction between them.
On the basis of the above method embodiments, the detection module 2303 is specifically configured to determine that the ego vehicle and the preceding vehicle are in a relatively moving state when the first magnitude information is greater than or equal to a set first threshold.
On the basis of the above method embodiments, the detection module 2303 is specifically configured to determine that the ego vehicle and the preceding vehicle are relatively stationary when the first magnitude information is less than or equal to a set second threshold, the second threshold being less than the first threshold.
On the basis of the above method embodiments, when the first magnitude information is greater than or equal to the set first threshold, the detection module 2303 is further configured to determine that the preceding vehicle moves forward relative to the ego vehicle when the first direction information is greater than or equal to a set third threshold.
On the basis of the above method embodiments, when the first magnitude information is greater than or equal to the set first threshold, the detection module 2303 is further configured to determine that the preceding vehicle moves backward relative to the ego vehicle when the first direction information is less than or equal to a set fourth threshold, the fourth threshold being less than the third threshold.
On the basis of the above method embodiments, the first optical flow information is the optical flow vectors between the feature points of the preceding vehicle in the first image frame and those in the second image frame, each optical flow vector including magnitude information and direction information. The detection module 2303 is configured to: determine a convergence point based on the direction information of all the optical flow vectors; and traverse all the optical flow vectors based on the convergence point to judge whether the number of optical flow vectors pointing to the convergence point is greater than or equal to the set third threshold.
On the basis of the above method embodiments, the first acquisition module 2301 is configured to detect, according to a set condition, whether the first image frame includes an image of the preceding vehicle.
On the basis of the above method embodiments, the set condition includes: the area of the image of the preceding vehicle within the preceding-vehicle detection region of the first image frame is greater than or equal to a set preceding-vehicle detection threshold; and if there are multiple images whose areas are greater than or equal to the set detection threshold, the image closest to the bottom edge of the first image frame is selected as the image of the preceding vehicle.
On the basis of the above method embodiments, the size of the region over which the optical flow information is distributed in the image frame is smaller than the size of the image of the preceding vehicle in the image frame.
On the basis of the above method embodiments, the apparatus further includes a third acquisition module 2304, configured to obtain a first image size of the preceding vehicle in the first image frame; the detection module is further configured to determine that the preceding vehicle moves forward relative to the ego vehicle when the first image size is smaller than a set fifth threshold, and to determine that the preceding vehicle moves backward relative to the ego vehicle when the first image size is greater than or equal to the fifth threshold.
On the basis of the above method embodiments, the apparatus further includes a third acquisition module 2304, configured to obtain the first image size of the preceding vehicle in the first image frame when the detection module does not detect a preceding vehicle start based on the first optical flow information; the detection module is further configured to determine that the preceding vehicle moves forward relative to the ego vehicle when the first image size is smaller than the set fifth threshold, and to determine that the preceding vehicle moves backward relative to the ego vehicle when the first image size is greater than or equal to the fifth threshold.
下面介绍本申请实施例提供的一种装置。如图24所示:
图24为本申请实施例提供的一种通信装置的结构示意图。如图24所示,该通信装置2400可包括:处理器2401、收发器2405,可选的还包括存储器2402。
所述收发器2405可以称为收发单元、收发机、或收发电路等,用于实现收发功能。收发器2405可以包括接收器和发送器,接收器可以称为接收机或接收电路等,用于实现 接收功能;发送器可以称为发送机或发送电路等,用于实现发送功能。
存储器2402中可存储计算机程序或软件代码或指令2404,该计算机程序或软件代码或指令2404还可称为固件。处理器2401可通过运行其中的计算机程序或软件代码或指令2403,或通过调用存储器2402中存储的计算机程序或软件代码或指令2404,对MAC层和PHY层进行控制,以实现本申请下述各实施例提供的OM协商方法。其中,处理器2401可以为中央处理器(central processing unit,CPU),存储器2402例如可以为只读存储器(read-only memory,ROM),或为随机存取存储器(random access memory,RAM)。
本申请中描述的处理器2401和收发器2405可实现在集成电路(integrated circuit,IC)、模拟IC、射频集成电路RFIC、混合信号IC、专用集成电路(application specific integrated circuit,ASIC)、印刷电路板(printed circuit board,PCB)、电子设备等上。
上述通信装置2400还可以包括天线2406,该通信装置2400所包括的各模块仅为示例说明,本申请不对此进行限制。
如前所述,以上实施例描述中的通信装置可以是终端,但本申请中描述的通信装置的范围并不限于此,而且通信装置的结构可以不受图24的限制。通信装置可以是独立的设备或者可以是较大设备的一部分。例如所述通信装置的实现形式可以是:
(1)独立的集成电路IC,或芯片,或,芯片系统或子系统;(2)具有一个或多个IC的集合,可选的,该IC集合也可以包括用于存储数据,指令的存储部件;(3)可嵌入在其他设备内的模块;(4)车载设备等等;(5)其他等等。
对于通信装置的实现形式是芯片或芯片系统的情况,可参见图25所示的芯片的结构示意图。图25所示的芯片包括处理器2501和接口2502。其中,处理器2501的数量可以是一个或多个,接口2502的数量可以是多个。可选的,该芯片或芯片系统可以包括存储器2503。
其中,上述方法实施例涉及的各步骤的所有相关内容均可以援引到对应功能模块的功能描述,在此不再赘述。
基于相同的技术构思,本申请实施例还提供一种计算机可读存储介质,该计算机可读存储介质存储有计算机程序,该计算机程序包含至少一段代码,该至少一段代码可由终端设备执行,以控制终端设备用以实现上述方法实施例。
基于相同的技术构思,本申请实施例还提供一种计算机程序,当该计算机程序被终端设备执行时,用以实现上述方法实施例。
所述程序可以全部或者部分存储在与处理器封装在一起的存储介质上,也可以部分或者全部存储在不与处理器封装在一起的存储器上。
Based on the same technical concept, an embodiment of this application further provides a processor configured to implement the above method embodiments. The processor may be a chip.
The steps of the methods or algorithms described in connection with the disclosure of the embodiments of this application may be implemented in hardware, or by a processor executing software instructions. The software instructions may consist of corresponding software modules, which may be stored in random access memory (RAM), flash memory, read-only memory (ROM), erasable programmable ROM (EPROM), electrically erasable programmable ROM (EEPROM), registers, a hard disk, a removable hard disk, a CD-ROM, or any other form of storage medium well known in the art. An exemplary storage medium is coupled to the processor so that the processor can read information from, and write information to, the storage medium. Of course, the storage medium may also be an integral part of the processor. The processor and the storage medium may reside in an ASIC, which in turn may reside in a network device. Of course, the processor and the storage medium may also exist as discrete components in a network device.
Those skilled in the art should appreciate that, in one or more of the above examples, the functions described in the embodiments of this application may be implemented in hardware, software, firmware, or any combination thereof. When implemented in software, these functions may be stored in, or transmitted as one or more instructions or pieces of code on, a computer-readable medium. Computer-readable media include computer storage media and communication media, the latter including any medium that facilitates transfer of a computer program from one place to another. A storage medium may be any available medium accessible by a general-purpose or special-purpose computer.
The embodiments of this application have been described above with reference to the accompanying drawings, but this application is not limited to the specific embodiments described, which are illustrative rather than restrictive. Inspired by this application, those of ordinary skill in the art may devise many further forms without departing from the spirit of this application and the scope protected by the claims, all of which fall within the protection of this application.

Claims (23)

  1. A preceding vehicle start detection method, characterized by comprising:
    acquiring a first image frame, the first image frame including an image of a preceding vehicle;
    acquiring first optical flow information from the first image frame and a second image frame, wherein the second image frame is the image frame immediately preceding the first image frame and also includes the image of the preceding vehicle, and the first optical flow information indicates the optical flow motion trend of feature points of the preceding vehicle in the first image frame relative to feature points of the preceding vehicle in the second image frame;
    detecting, from the first optical flow information, whether the preceding vehicle has started.
  2. The method according to claim 1, characterized in that, before acquiring the first image frame, the method further comprises:
    acquiring a third image frame and a fourth image frame, both including the image of the preceding vehicle, the third image frame and the fourth image frame being adjacent;
    acquiring second optical flow information from the third image frame and the fourth image frame, the second optical flow information indicating the optical flow motion trend of the feature points of the preceding vehicle in the fourth image frame relative to those in the third image frame;
    determining, from the second optical flow information, that the ego vehicle and the preceding vehicle are in a relatively stationary state.
  3. The method according to claim 1 or 2, characterized in that detecting, from the first optical flow information, whether the preceding vehicle has started comprises:
    detecting, from the first optical flow information, whether the ego vehicle and the preceding vehicle have changed from the relatively stationary state to a relative motion state;
    wherein the relative motion state includes the preceding vehicle moving forward relative to the ego vehicle or the preceding vehicle moving backward relative to the ego vehicle.
  4. The method according to any one of claims 1 to 3, characterized in that the first optical flow information includes first amplitude information and first direction information;
    the first amplitude information indicates the motion amplitude between the feature points of the preceding vehicle in the first image frame and those in the second image frame;
    the first direction information indicates the motion direction between the feature points of the preceding vehicle in the first image frame and those in the second image frame.
  5. The method according to claim 4, characterized in that detecting, from the first optical flow information, whether the preceding vehicle has started comprises:
    determining that the ego vehicle and the preceding vehicle are in a relative motion state when the first amplitude information is greater than or equal to a set first threshold.
  6. The method according to claim 5, characterized in that detecting, from the first optical flow information, whether the preceding vehicle has started comprises:
    determining that the ego vehicle and the preceding vehicle are in a relatively stationary state when the first amplitude information is less than or equal to a set second threshold, the second threshold being less than the first threshold.
  7. The method according to claim 5, characterized in that, when the first amplitude information is greater than or equal to the set first threshold, detecting, from the first optical flow information, whether the preceding vehicle has started further comprises:
    determining that the preceding vehicle is moving forward relative to the ego vehicle when the first direction information is greater than or equal to a set third threshold.
  8. The method according to claim 7, characterized in that, when the first amplitude information is greater than or equal to the set first threshold, detecting, from the first optical flow information, whether the preceding vehicle has started further comprises:
    determining that the preceding vehicle is moving backward relative to the ego vehicle when the first direction information is less than or equal to a set fourth threshold, the fourth threshold being less than the third threshold.
  9. The method according to any one of claims 1 to 8, characterized in that the method further comprises:
    acquiring a first image size of the preceding vehicle in the first image frame;
    determining that the preceding vehicle is moving forward relative to the ego vehicle when the first image size is smaller than a set fifth threshold;
    determining that the preceding vehicle is moving backward relative to the ego vehicle when the first image size is greater than or equal to the fifth threshold.
  10. The method according to any one of claims 1 to 8, characterized in that, when the preceding vehicle's start is not detected from the first optical flow information, the method further comprises:
    acquiring a first image size of the preceding vehicle in the first image frame;
    determining that the preceding vehicle is moving forward relative to the ego vehicle when the first image size is smaller than a set fifth threshold;
    determining that the preceding vehicle is moving backward relative to the ego vehicle when the first image size is greater than or equal to the fifth threshold.
  11. A preceding vehicle start detection apparatus, characterized by comprising:
    a first acquisition module configured to acquire a first image frame, the first image frame including an image of a preceding vehicle;
    a second acquisition module configured to acquire first optical flow information from the first image frame and a second image frame, wherein the second image frame is the image frame immediately preceding the first image frame and also includes the image of the preceding vehicle, and the first optical flow information indicates the optical flow motion trend of feature points of the preceding vehicle in the first image frame relative to feature points of the preceding vehicle in the second image frame;
    a detection module configured to detect, from the first optical flow information, whether the preceding vehicle has started.
  12. The apparatus according to claim 11, characterized in that:
    the first acquisition module is further configured to acquire, before acquiring the first image frame, a third image frame and a fourth image frame, both including the image of the preceding vehicle, the third image frame and the fourth image frame being adjacent;
    the second acquisition module is further configured to acquire second optical flow information from the third image frame and the fourth image frame, the second optical flow information indicating the optical flow motion trend of the feature points of the preceding vehicle in the fourth image frame relative to those in the third image frame;
    the detection module is further configured to determine, from the second optical flow information, that the ego vehicle and the preceding vehicle are in a relatively stationary state.
  13. The apparatus according to claim 11 or 12, characterized in that the detection module is specifically configured to:
    detect, from the first optical flow information, whether the ego vehicle and the preceding vehicle have changed from the relatively stationary state to a relative motion state;
    wherein the relative motion state includes the preceding vehicle moving forward relative to the ego vehicle or the preceding vehicle moving backward relative to the ego vehicle.
  14. The apparatus according to any one of claims 11 to 13, characterized in that the first optical flow information includes first amplitude information and first direction information;
    the first amplitude information indicates the motion amplitude between the feature points of the preceding vehicle in the first image frame and those in the second image frame;
    the first direction information indicates the motion direction between the feature points of the preceding vehicle in the first image frame and those in the second image frame.
  15. The apparatus according to claim 14, characterized in that the detection module is specifically configured to:
    determine that the ego vehicle and the preceding vehicle are in a relative motion state when the first amplitude information is greater than or equal to a set first threshold.
  16. The apparatus according to claim 15, characterized in that the detection module is specifically configured to:
    determine that the ego vehicle and the preceding vehicle are in a relatively stationary state when the first amplitude information is less than or equal to a set second threshold, the second threshold being less than the first threshold.
  17. The apparatus according to claim 15, characterized in that, when the first amplitude information is greater than or equal to the set first threshold, the detection module is further specifically configured to:
    determine that the preceding vehicle is moving forward relative to the ego vehicle when the first direction information is greater than or equal to a set third threshold.
  18. The apparatus according to claim 17, characterized in that, when the first amplitude information is greater than or equal to the set first threshold, the detection module is further specifically configured to:
    determine that the preceding vehicle is moving backward relative to the ego vehicle when the first direction information is less than or equal to a set fourth threshold, the fourth threshold being less than the third threshold.
  19. The apparatus according to any one of claims 11 to 18, characterized in that the apparatus further comprises a third acquisition module:
    the third acquisition module is configured to acquire a first image size of the preceding vehicle in the first image frame;
    the detection module is further configured to determine that the preceding vehicle is moving forward relative to the ego vehicle when the first image size is smaller than a set fifth threshold;
    the detection module is further configured to determine that the preceding vehicle is moving backward relative to the ego vehicle when the first image size is greater than or equal to the fifth threshold.
  20. The apparatus according to any one of claims 11 to 18, characterized in that the apparatus further comprises a third acquisition module:
    the third acquisition module is configured to acquire a first image size of the preceding vehicle in the first image frame when the detection module does not detect the preceding vehicle starting from the first optical flow information;
    the detection module is further configured to determine that the preceding vehicle is moving forward relative to the ego vehicle when the first image size is smaller than a set fifth threshold;
    the detection module is further configured to determine that the preceding vehicle is moving backward relative to the ego vehicle when the first image size is greater than or equal to the fifth threshold.
  21. A preceding vehicle start detection apparatus, characterized by comprising at least one processor and an interface; the processor receives or sends data through the interface; and the at least one processor is configured to invoke a software program stored in a memory to perform the method according to any one of claims 1 to 10.
  22. A computer-readable storage medium, characterized in that the computer-readable storage medium stores a computer program which, when run on a computer or processor, causes the computer or processor to perform the method according to any one of claims 1 to 10.
  23. A computer program product, characterized in that the computer program product comprises a software program which, when executed by a computer or processor, causes the method according to any one of claims 1 to 10 to be performed.
PCT/CN2021/078027 2021-02-26 2021-02-26 前车起步检测方法及装置 WO2022178802A1 (zh)

Priority Applications (2)

Application Number Priority Date Filing Date Title
CN202180050794.0A CN115884910A (zh) 2021-02-26 2021-02-26 前车起步检测方法及装置
PCT/CN2021/078027 WO2022178802A1 (zh) 2021-02-26 2021-02-26 前车起步检测方法及装置

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
PCT/CN2021/078027 WO2022178802A1 (zh) 2021-02-26 2021-02-26 前车起步检测方法及装置

Publications (1)

Publication Number Publication Date
WO2022178802A1 true WO2022178802A1 (zh) 2022-09-01

Family

ID=83047746

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2021/078027 WO2022178802A1 (zh) 2021-02-26 2021-02-26 前车起步检测方法及装置

Country Status (2)

Country Link
CN (1) CN115884910A (zh)
WO (1) WO2022178802A1 (zh)

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2009230560A (ja) * 2008-03-24 2009-10-08 Casio Comput Co Ltd 信号認識装置及び信号認識処理のプログラム
KR101344056B1 (ko) * 2013-09-25 2014-01-16 주식회사 피엘케이 테크놀로지 차량의 출발 정지 지원 장치 및 그 방법
CN104508720A (zh) * 2012-08-01 2015-04-08 丰田自动车株式会社 驾驶辅助装置
CN104827968A (zh) * 2015-04-08 2015-08-12 上海交通大学 一种基于安卓的低成本停车等待驾驶提醒系统
CN106611512A (zh) * 2015-10-23 2017-05-03 杭州海康威视数字技术股份有限公司 前车起步的处理方法、装置和系统
CN111179608A (zh) * 2019-12-25 2020-05-19 广州方纬智慧大脑研究开发有限公司 一种路口溢出检测方法、系统及存储介质

Also Published As

Publication number Publication date
CN115884910A (zh) 2023-03-31


Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 21927240

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 21927240

Country of ref document: EP

Kind code of ref document: A1