WO2020098004A1 - Lane traffic status reminder method and device - Google Patents

Lane traffic status reminder method and device Download PDF

Info

Publication number
WO2020098004A1
WO2020098004A1 (PCT/CN2018/118619)
Authority
WO
WIPO (PCT)
Prior art keywords
image
lane
target
information
feature
Prior art date
Application number
PCT/CN2018/118619
Other languages
French (fr)
Chinese (zh)
Inventor
谢伟龙
李少鹏
陈少彬
黄广进
Original Assignee
惠州市德赛西威汽车电子股份有限公司
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 惠州市德赛西威汽车电子股份有限公司 filed Critical 惠州市德赛西威汽车电子股份有限公司
Publication of WO2020098004A1

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 18/00 Pattern recognition
    • G PHYSICS
    • G08 SIGNALLING
    • G08B SIGNALLING OR CALLING SYSTEMS; ORDER TELEGRAPHS; ALARM SYSTEMS
    • G08B 21/00 Alarms responsive to a single specified undesired or abnormal condition and not otherwise provided for
    • G08B 21/18 Status alarms
    • G08B 21/24 Reminder alarms, e.g. anti-loss alarms
    • G PHYSICS
    • G08 SIGNALLING
    • G08G TRAFFIC CONTROL SYSTEMS
    • G08G 1/00 Traffic control systems for road vehicles
    • G08G 1/09 Arrangements for giving variable traffic instructions
    • G08G 1/0962 Arrangements for giving variable traffic instructions having an indicator mounted inside the vehicle, e.g. giving voice messages
    • G PHYSICS
    • G08 SIGNALLING
    • G08G TRAFFIC CONTROL SYSTEMS
    • G08G 1/00 Traffic control systems for road vehicles
    • G08G 1/09 Arrangements for giving variable traffic instructions
    • G08G 1/0962 Arrangements for giving variable traffic instructions having an indicator mounted inside the vehicle, e.g. giving voice messages
    • G08G 1/0967 Systems involving transmission of highway information, e.g. weather, speed limits

Definitions

  • the invention relates to the field of data processing, in particular to a target tracking processing method and electronic equipment.
  • targets may include vehicles, pedestrians, and the like.
  • target recognition is responsible for detecting the type and position of each target in every frame of the scene, while tracking associates the targets detected in consecutive frames and assigns each an identity; detection and tracking working together can also estimate the target location more accurately.
  • the invention provides a target tracking processing method, which can improve the data processing efficiency of the device.
  • An embodiment of the present invention provides a target tracking processing method, which is applied to an electronic device including at least three processing units.
  • the method includes:
  • through the first processing unit, acquiring a plurality of corner points of the target image, tracking the corresponding positions of the corner points in multiple frames of the target image, and obtaining tracking data of the corner points;
  • the second processing unit detects the target object in the target image and determines the image range corresponding to the target object
  • the third processing unit determines the target range of the target object according to the tracking data of the corner point and the image range of the target object.
  • the acquiring multiple corner points of the target image and tracking the corresponding positions of the corner points in the multi-frame target image to obtain the tracking data of the corner points include:
  • the acquiring multiple corner points of the target image and tracking the corresponding positions of the corner points in the target image in multiple frames includes:
  • the determining the target range of the target object according to the tracking data of the corner point and the image range of the target object includes:
  • the tracking data of the corner point includes a tracking range corresponding to each corner point in the corner point set
  • the determining the target range of the target object according to the tracking data of the target corner point and the image range of the target object includes:
  • the target range of the target object is determined by weighting the tracking range of the corner point and the image range of the target object according to preset weighting values.
  • the determining the target corner point corresponding to the target object according to the tracking data of the corner point and the image range of the target object includes:
  • according to a preset mapping relationship, determining a plurality of mapped pixels of the image range in the target image
  • the corner point located within the pixel range is taken as the target corner point.
  • before acquiring the preset mapping relationship between the target image and the image range, the method further includes: constructing, by the first processing unit, a mapping relationship between pixels in the target image and the image range.
  • the first processing unit is a vector operation processor.
  • the invention also provides an electronic device.
  • the electronic device includes at least three processing units, wherein:
  • the first processing unit is configured to acquire multiple corner points of the target image, track the corresponding positions of the corner points in the target image in multiple frames, and obtain tracking data of the corner points;
  • the second processing unit is configured to detect a target object in the target image and determine an image range corresponding to the target object;
  • the third processing unit is configured to determine the target range of the target object according to the tracking data of the corner point and the image range of the target object.
  • the first processing unit is a vector operation processor.
  • FIG. 1 is an implementation flowchart of a target tracking processing method provided by an embodiment of the present invention.
  • FIG. 2 is a flowchart of implementing corner tracking data provided by an embodiment of the present invention.
  • FIG. 3 is a flowchart of determining a target range provided by an embodiment of the present invention.
  • FIG. 4 is a flowchart of determining a target corner provided by an embodiment of the present invention.
  • FIG. 5 is a structural frame diagram of an electronic device provided by an embodiment of the present invention.
  • FIG. 1 shows an implementation process of a target tracking processing method provided by an embodiment of the present invention.
  • a target tracking processing method is applied to an electronic device.
  • the electronic device includes at least three processing units.
  • the method includes:
  • 101. Through the first processing unit, acquire a plurality of corner points of the target image, track the corresponding positions of the corner points in multiple frames of the target image, and obtain tracking data of the corner points.
  • the corner point may be a feature point, that is, an image part with certain characteristics in the image, such as a person's face or hand in the image, which can be set as required.
  • the target image may be acquired by an image sensor of an electronic device, such as a camera.
  • multiple corner points of the target image are obtained, and feature parts in the image can be extracted through a preset algorithm, and these feature parts are matched through a database to determine the corner point positions.
  • the corner points are identified by the fast9 corner detection algorithm.
  • to track the corresponding position of a corner point across multiple frames of the target image and obtain its tracking data, the position of the corner point can first be determined in one frame of the target image (such as the first frame), and a preset target tracking algorithm can then track the change in the corner point's relative position in other frames to obtain the tracking data of the corner point.
  • the tracking data of the corner point may be parameters such as the movement trajectory and relative displacement distance of the corner point.
  • the target tracking algorithm may be a pyramid LK optical flow tracking algorithm: an image pyramid of the target image is acquired, and the optical flow of the corner points in the image pyramid is then tracked by the pyramid LK algorithm to follow the movement of the corner points.
  • This step may be implemented in the first processing unit of the electronic device, and the first processing unit may be a processing unit dedicated to vector operations, such as a GPU (Graphics Processing Unit).
  • the tracking algorithm is relatively simple but involves a large number of pixel-level operations, so when step 101 is executed by the vector operation processor, the operation efficiency can be greatly improved.
  • this step can also be computed on devices such as a CPU (Central Processing Unit) or a DSP (Digital Signal Processor).
  • 102. Through the second processing unit, detect and identify the target object in the target image, and determine the image range corresponding to the target object.
  • there may be one or more target objects, such as a human body, a vehicle, or other obstacles. Identifying the target object can be achieved by a preset recognition algorithm, such as a neural network algorithm; the specific recognition algorithm can be selected according to the actual situation.
  • the target object can be marked in advance by a recognition algorithm, and the outline of the marked object, or its relative position in the target image, can be framed with a detection frame to determine the image range corresponding to the target object.
  • This step may be implemented in a second processing unit of the electronic device. The second processing unit and the first processing unit are relatively independent of each other, so that step 102 and step 101 run as different processes; the second processing unit may be a general-purpose computing processor, such as a CPU (Central Processing Unit) or a DSP (Digital Signal Processor).
  • 103. Through the third processing unit, determine the target range of the target object according to the tracking data of the corner points and the image range of the target object.
  • the target range may be the range occupied by the target object in the image.
  • for example, the target range frames the maximum width and maximum height of the human body.
  • the displacement of each corner point in the target image can be obtained.
  • the corner points within the image range can be selected through the image range frame of the target object, yielding the displacement of the corner points belonging to the target object within that range.
  • the image range of the target object and the displacement of the corner points within the target object range are weighted to obtain the target range of the target object.
  • This step may be implemented in the third processing unit of the electronic device. The third processing unit is relatively independent of the second and first processing units, so that steps 103, 102, and 101 run as different processes, distributing tasks to different cores to improve data processing efficiency.
  • the third processing unit and the second processing unit may also be the same processing unit.
  • FIG. 2 shows an implementation process of obtaining corner tracking data provided by an embodiment of the present application.
  • the acquiring multiple corner points of the target image, tracking the corresponding positions of the corner points in the target image in multiple frames, and obtaining the tracking data of the corner points include:
  • the preset area may be a certain area or the entire area in the target image manually preset to reduce redundant data of the target image and improve algorithm efficiency.
  • the preset area may be an image area covering the middle two-thirds of the height/width of the target image, or the position of the preset area in the target image may be determined by identifying the likely moving direction of the host vehicle in the target image.
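The central two-thirds crop described above can be sketched as follows (the `frac` parameter and helper name are illustrative assumptions, not from the source):

```python
import numpy as np

def central_roi(image, frac=2 / 3):
    """Crop the central `frac` of an image's height and width, returning the
    ROI and its top-left origin (x0, y0) in the full image."""
    h, w = image.shape[:2]
    y0 = int(round(h * (1 - frac) / 2)); y1 = y0 + int(round(h * frac))
    x0 = int(round(w * (1 - frac) / 2)); x1 = x0 + int(round(w * frac))
    return image[y0:y1, x0:x1], (x0, y0)

img = np.zeros((90, 120), dtype=np.uint8)
roi, origin = central_roi(img)   # roi is 60x80, origin is (20, 15)
```

Corner detection would then run only on `roi`, with `origin` used to map corner coordinates back into the full target image.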
  • the size, position, etc. of the area can also be set according to actual needs.
  • an image pyramid of 4 or more layers can be constructed for the target image, and the corner points of the images in the pyramid are extracted by an algorithm to obtain the corner point positions. The corresponding positions of these corner points are then tracked in multiple frames of the target image to obtain the tracking data of the corner points.
  • the tracking algorithm may be a fast9 corner tracking algorithm to provide more accurate tracking data.
  • other tracking algorithms can be used in addition.
  • tracking the corresponding position of the corner point in the multi-frame target image to obtain the tracking data of the corner point includes:
  • forward pyramid LK tracking and reverse pyramid LK tracking are performed on the corner points to obtain preliminary tracking data; the preliminary tracking data is cross-validated to obtain tracking data for those corner points, among the multiple corner points, that satisfy preset conditions.
  • if the cross-validation fails, the tracking is considered invalid and the relevant parameters of the corner point are deleted.
  • the above method can greatly improve the accuracy of tracking data of corners.
  • corners with poor tracking effects can be removed, which further reduces waste of resources and improves data processing efficiency.
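The forward-backward cross-validation can be sketched with a simple distance check: a corner is kept only if tracking it forward and then backward returns close to its original position. The pixel threshold `max_err` is an assumed parameter; the point values are made up for illustration.

```python
import numpy as np

def cross_validate(p0, p_bwd, max_err=1.0):
    """Keep corners whose backward-tracked position returns near the original
    location. p0 / p_bwd: (N, 2) arrays of original and round-trip positions."""
    err = np.linalg.norm(p_bwd - p0, axis=1)
    return err <= max_err  # boolean mask of valid corners

p0 = np.array([[10.0, 10.0], [30.0, 40.0], [50.0, 20.0]])
p_bwd = np.array([[10.2, 9.9], [35.0, 44.0], [50.4, 20.3]])
keep = cross_validate(p0, p_bwd)   # middle corner drifted, so it is dropped
```

Corners where `keep` is `False` would have their parameters deleted, as described above.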
  • FIG. 3 shows an implementation process of determining a target range provided by an embodiment of the present application.
  • the determination of the target range of the target object based on the tracking data of the corner points and the image range of the target object includes:
  • the corner points in the image range of the target object can be determined, and the corner points in the image range are defined as the target corner points.
  • the image range A of the target object is acquired, and if the target image has a corner point B and a corner point C within the image range A, the corner points B and C can be used as the target corner point. If there are image ranges of N target objects, the image ranges of the N target objects may be associated with corner points in the corresponding ranges.
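Selecting corners B and C inside image range A reduces to a point-in-box test; a minimal sketch (box format `(x0, y0, x1, y1)` is an assumption):

```python
import numpy as np

def corners_in_range(corners, box):
    """Return the corners lying inside a detection box (x0, y0, x1, y1)."""
    x0, y0, x1, y1 = box
    inside = ((corners[:, 0] >= x0) & (corners[:, 0] <= x1) &
              (corners[:, 1] >= y0) & (corners[:, 1] <= y1))
    return corners[inside]

corners = np.array([[12.0, 15.0], [40.0, 8.0], [22.0, 30.0]])
target = corners_in_range(corners, (10, 10, 30, 35))  # keeps the 1st and 3rd
```

With N target objects, the same test is applied once per detection box to associate each box with its target corners.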
  • 302. Determine the target range of the target object according to the tracking data of the target corner point and the image range of the target object.
  • After obtaining the target corner points, the more precise movement and shape change of the target object (such as a vehicle turning or a person walking sideways) can be determined from the corner tracking and the position change of the image range, and the target range of the target object can then be determined.
  • the target object can be selected by a tracking frame representing the target range.
  • the displacement of the tracking frame can be obtained from the average displacement between the corner coordinates in the next frame and those in the previous frame, and the size change of the tracking frame can be obtained from the average distance between the target corner points in the next frame.
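That update rule can be sketched as shifting the frame by the mean corner displacement and rescaling it by the change in corner spread (the center-size box format and the spread measure used here are illustrative assumptions):

```python
import numpy as np

def update_box(box, prev_pts, next_pts):
    """Shift a (cx, cy, w, h) tracking box by the mean corner displacement and
    rescale it by the change in the corners' spread about their centroid."""
    shift = (next_pts - prev_pts).mean(axis=0)

    def spread(p):  # mean distance of corners from their centroid
        return np.linalg.norm(p - p.mean(axis=0), axis=1).mean()

    scale = spread(next_pts) / spread(prev_pts)
    cx, cy, w, h = box
    return (cx + shift[0], cy + shift[1], w * scale, h * scale)

prev_pts = np.array([[10.0, 10.0], [30.0, 10.0], [20.0, 30.0]])
next_pts = prev_pts + np.array([5.0, 2.0])        # pure translation
box = update_box((20.0, 16.0, 20.0, 20.0), prev_pts, next_pts)
```

For a pure translation the spread is unchanged, so only the box center moves; if the target grew or shrank, `scale` would resize the frame accordingly.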
  • the position of the image range A can be combined with the position change of the corner points B and C, so that the length/width of the tracking frame represents the current posture and movement of the human body; the length/width of the tracking frame changes as the posture and movement of the human body change.
  • the desired corner points are selected as target corner points and tracked, so that the device does not need to repeatedly perform image range recognition and recognition of the corner points within the image range, which greatly reduces the time complexity of this embodiment and improves data processing efficiency.
  • step 302 may include:
  • the tracking range of the corner point is the approximate area that the corner point occupies in the target image.
  • the approximate area can be framed by the tracking frame.
  • if the confidence of the tracking range of the corner point is high, the tracking range of the corner point may be used as the basis, combined with the preset weighting value, to determine the target range of the target object; if the confidence of the tracking range of the corner point is low and the confidence of the image range of the target object is high, the image range of the target object may be used as the basis, combined with the preset weighting value, to determine the target range of the target object.
  • the specific weighting method can be determined according to the actual situation.
  • the time complexity is linear, which improves operation efficiency.
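The confidence-weighted combination of the corner-tracking range and the detector's image range can be sketched as a per-coordinate blend (the weight `w_track` and box values are assumptions standing in for the preset weighting values):

```python
import numpy as np

def fuse_boxes(track_box, det_box, w_track=0.5):
    """Blend a corner-tracking box and a detection box (x0, y0, x1, y1),
    weighting toward whichever source is more trusted via `w_track`."""
    t = np.asarray(track_box, dtype=float)
    d = np.asarray(det_box, dtype=float)
    return w_track * t + (1.0 - w_track) * d

# High confidence in the tracking range: weight it 0.75 vs. 0.25.
fused = fuse_boxes((10, 10, 50, 50), (14, 10, 54, 54), w_track=0.75)
```

Raising `w_track` pulls the fused target range toward the tracked box, matching the rule described above; the specific weighting scheme would be chosen per application.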
  • FIG. 4 shows an implementation process of determining a target corner point provided by an embodiment of the present application.
  • the determination of the target corner corresponding to the target object based on the tracking data of the corner point and the image range of the target object includes:
  • according to a preset mapping relationship, determine a plurality of mapped pixels of the image range in the target image.
  • the image range can be a parameter representing the relative position, including the approximate outline range of the target object.
  • the pixel range enclosed by the image range in the target image can be obtained from it, so as to better determine the position of the target object in the target image; the corresponding corner points within the rough outline frame of the target object in the target image are then selected as target corner points, combining the image range with the corner tracking data to obtain the tracking data of the corner points within the image range of the target object.
  • the embodiment in FIG. 4 quickly determines the corner points within the image range through the mapping relationship, so that only a small amount of resources is required to obtain a better tracking effect on the target object and improve the calculation efficiency.
  • the method further includes:
  • Through the first processing unit, a mapping relationship between pixels in the target image and the image range is constructed.
  • the first processing unit is a vector operation processor; constructing the mapping relationship between the pixels in the target image and the image range on the vector operation processor takes advantage of parallel operation and can greatly improve processing efficiency.
  • FIG. 5 shows a structural framework of an electronic device provided by an embodiment of the present application.
  • the electronic device 50 includes at least three processing units, wherein:
  • the first processing unit 51 is configured to acquire multiple corner points of the target image, track the corresponding positions of the corner points in the multi-frame target image, and obtain tracking data of the corner points;
  • the second processing unit 52 is used to detect the target object in the target image and determine the image range corresponding to the target object;
  • the third processing unit 53 is used to determine the target range of the target object based on the tracking data of the corner point and the image range of the target object.
  • the electronic device 50 may be an in-vehicle electronic device, such as an ADAS (Advanced Driver Assistance System) device.
  • the electronic device 50 may also include a memory.
  • the processing unit is electrically connected to the memory.
  • the processing unit is the control center of the electronic device 50; it uses various interfaces and lines to connect the parts of the entire electronic device 50, and, by running or loading computer programs stored in the memory and calling data stored in the memory, executes the various functions of the electronic device 50 and processes data, so as to monitor the electronic device 50 as a whole.
  • the first processing unit 51 is a vector operation processor.
  • the processing unit in the electronic device 50 loads the instructions corresponding to the processes of one or more computer programs into the memory according to the following steps, and the processing unit runs the computer programs stored in the memory to achieve various functions, such as:
  • through the first processing unit 51, acquire a plurality of corner points of the target image, and track the corresponding positions of the corner points in multiple frames of the target image to obtain tracking data of the corner points; through the second processing unit 52, detect and identify the target object in the target image to determine the image range corresponding to the target object; through the third processing unit 53, determine the target range of the target object according to the tracking data of the corner points and the image range of the target object.
  • a storage medium stores a plurality of instructions.
  • the instructions are suitable to be loaded by the processing unit to perform any of the above target tracking processing methods, for example:
  • through the first processing unit 51, acquire a plurality of corner points of the target image, and track the corresponding positions of the corner points in multiple frames of the target image to obtain tracking data of the corner points; through the second processing unit 52, detect and identify the target object in the target image to determine the image range corresponding to the target object; through the third processing unit 53, determine the target range of the target object according to the tracking data of the corner points and the image range of the target object.
  • the program may be stored in a computer-readable storage medium, and the storage medium may include: read-only memory (ROM), random access memory (RAM), a magnetic disk, an optical disk, etc.
  • the electronic device and the target tracking processing method in the above embodiments belong to the same concept, and any of the method steps provided in the target tracking processing method embodiments may be run on the electronic device.
  • any combination can be used to form an optional embodiment of the present application, which will not be repeated here.
  • TECHNICAL FIELD [0001] The present application relates to the field of traffic data processing, and in particular, to a method and device for reminding traffic lane status.
  • BACKGROUND [0002] With the increasing number of cars, traffic accidents and violations become more and more frequent, most of them at signal-light intersections. Signal lights are therefore installed at intersections to reduce traffic accidents. [0003] To make them easy for drivers to see, signal lights are generally installed at intersections in high places or at roadside positions where they are easier to observe.
  • the present application provides a lane traffic state reminding method and device, which can remind the driver of the current signal light status.
  • the present application provides a lane passing state reminding method, which is applied to an electronic device and includes: acquiring a forward-looking image of a vehicle and extracting image features of the forward-looking image; identifying lane information and signal light information in the forward-looking image based on the image features; determining the current traffic state of the vehicle according to the lane information and signal light information; and generating reminder information according to the traffic state and displaying the reminder information.
  • identifying lane information and signal light information in the forward-looking image based on the image features includes: acquiring a lane reference feature; comparing the image features with the lane reference feature to determine whether there is a target feature among the image features matching the lane reference feature, where the target feature includes a lane direction feature; and if so, extracting information corresponding to the target feature to obtain the lane information, where the lane information includes lane direction information.
  • the target feature further includes a lane line feature; comparing the image features with the lane reference feature to determine whether there is a target feature matching the lane reference feature includes: judging whether the lane line feature exists among the image features; if the lane line feature exists, determining the lane area of the lane based on the lane line feature; and judging whether the lane direction feature exists among the image features within the lane area.
  • the lane direction feature is related to the shape feature of the driving arrow of the lane.
  • the signal light information includes the lane direction type of the signal light and the corresponding signal type; determining the current traffic state of the vehicle according to the lane information and the signal light information includes: determining, according to the lane direction information, the direction in which the vehicle is about to pass; acquiring the signal type of the signal light corresponding to that direction; and determining the current traffic state of the vehicle according to the signal type, where the traffic state includes a traffic-allowed state and a traffic-prohibited state.
  • the traffic state includes a normal driving state and an attention reminding state
  • determining the current traffic state of the vehicle according to the lane information and the signal light information includes: if neither the lane information nor the signal light information can be identified, the vehicle is currently in the normal driving state; if the lane information cannot be identified but the signal light information is present, the vehicle is currently in the attention reminding state; and if the lane information is present but the signal light information cannot be identified, the vehicle is currently in the normal driving state.
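The decision rules above can be sketched as a small function (the state names, the direction-to-color mapping, and the handling of an unrecognized direction are illustrative assumptions, not terms fixed by the source):

```python
def traffic_state(lane_dir, signal):
    """Decide the vehicle's traffic state. `lane_dir` is the recognized lane
    direction or None; `signal` maps direction -> 'red'/'green', or is None
    when no signal light information was recognized."""
    if signal is None:             # no signal light info: normal driving
        return "normal_driving"
    if lane_dir is None:           # signal seen but lane unknown: remind driver
        return "attention_reminder"
    colour = signal.get(lane_dir)
    if colour is None:             # no light recognized for this direction
        return "normal_driving"
    return "traffic_allowed" if colour == "green" else "traffic_prohibited"

state = traffic_state("left", {"left": "red", "straight": "green"})
```

Here the vehicle is in a left-turn lane whose light is red, so the prohibited state is returned and the reminder module would warn the driver.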
  • generating reminder information according to the traffic state and displaying the reminder information includes: acquiring voice information corresponding to the traffic state and playing the voice information; acquiring text or pattern information corresponding to the traffic state and displaying the text or pattern information; or acquiring a level signal corresponding to the traffic state and turning a preset indicator on and off through the level signal.
  • the present application also provides a lane passing state reminding device, including a forward-looking camera, a processing circuit electrically connected to the forward-looking camera, and a reminder module electrically connected to the processing circuit, wherein: the forward-looking camera is used for acquiring a forward-looking image of the vehicle; the processing circuit is used for extracting image features of the forward-looking image, identifying lane information and signal light information in the forward-looking image based on the image features, determining the current traffic state of the vehicle according to the lane information and signal light information, and generating reminder information according to the traffic state; and the reminder module is used for displaying the reminder information.
  • the processing circuit is specifically configured to: obtain a lane reference feature; compare the image features with the lane reference feature and determine whether there is a target feature among the image features matching the lane reference feature, where the target feature includes a lane line feature and a lane direction feature; and if so, extract information corresponding to the target feature to obtain the lane information, where the lane information includes lane direction information.
  • the reminder module includes one of a sound generator, a display, or an indicator light; the sound generator is used to play voice information corresponding to the traffic state; the display is used to display text or pattern information corresponding to the traffic state; and the indicator light is used to turn on and off under the control of a level signal from the reminder module, the level signal corresponding to the traffic state.
  • the image features in the forward-looking image are extracted, lane information and signal light information are identified based on the image features, the current traffic state of the vehicle is determined based on the lane information and signal light information, and reminder information is generated according to the traffic state and displayed.
  • FIG. 1 is a flowchart of an implementation method of a lane traffic state reminding method provided by an embodiment of the present application.
  • FIG. 2 is an application scenario diagram of a lane passing state reminding method provided by an embodiment of the present application.
  • FIG. 3 is an implementation flowchart of image feature recognition provided by an embodiment of the present application.
  • FIG. 4 is a flowchart of obtaining lane information according to an embodiment of the present application.
  • FIG. 5 is a flowchart of an implementation of determining a passing state provided by an embodiment of the present application.
  • FIG. 6 is a schematic diagram of a functional structure of a passing state provided by an embodiment of the present application.
  • DETAILED DESCRIPTION [0022] The following describes the preferred embodiments of the present application in detail with reference to the accompanying drawings, so that the advantages and features of the present application can be more easily understood by those skilled in the art, thereby making the protection scope of the present application more clearly defined.
  • FIG. 1 shows an implementation process of a method for reminding a lane traffic state provided by an embodiment of the present application.
  • the lane passing state reminding method is applied to an electronic device, and the electronic device may be an on-board electronic device installed on a car.
  • the vehicle-mounted electronic device may include a front-view camera, a processing circuit, and a reminder module.
  • the forward-looking camera may be mounted directly at the front of the vehicle to obtain a forward-looking image of the scene ahead.
  • the processing circuit may analyze the forward-looking image to determine the traffic state of the vehicle based on the forward-looking image.
  • the reminder module may be a sound generator, a display, or an indicator light, etc., and issues a corresponding reminder according to the traffic state. [0028] Please refer to FIG. 1.
  • a lane passing state reminding method is applied to an electronic device.
  • the electronic device may be the electronic device described in the above embodiment.
  • the method includes: 101. Obtain a forward-looking image of the vehicle, and extract image features of the forward-looking image.
  • [0030] the forward-looking image may be acquired by a forward-looking camera provided on the vehicle. [0031] The image features may be the shape, color, position, and other features of each object in the image. [0032] In some embodiments, the image features may be extracted by a preset image processing algorithm.
  • the image processing may include steps such as graying, image filtering, image edge enhancement, image edge detection, and feature extraction on the forward-looking image.
  • the specific implementation may be determined according to the actual situation and the algorithm selected.
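The graying and edge-detection steps listed above can be sketched in a few lines. The luminance formula and the 3×3 Sobel operator below are common illustrative choices, not algorithms specified by this application.

```python
def to_gray(pixel):
    """ITU-R BT.601 luminance; one common choice of graying formula."""
    r, g, b = pixel
    return 0.299 * r + 0.587 * g + 0.114 * b

def sobel_magnitude(gray):
    """Edge magnitude for interior pixels of a 2D grayscale list (3x3 Sobel)."""
    kx = [[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]]
    ky = [[-1, -2, -1], [0, 0, 0], [1, 2, 1]]
    h, w = len(gray), len(gray[0])
    out = [[0.0] * w for _ in range(h)]
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            gx = sum(kx[j][i] * gray[y + j - 1][x + i - 1]
                     for j in range(3) for i in range(3))
            gy = sum(ky[j][i] * gray[y + j - 1][x + i - 1]
                     for j in range(3) for i in range(3))
            out[y][x] = (gx * gx + gy * gy) ** 0.5
    return out
```

A vertical brightness step (road surface next to a painted lane line, say) produces a strong response exactly along the boundary columns, which is what the later feature-extraction steps consume.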
  • 102. Identify the lane information and the signal light information in the forward-looking image based on the image features.
  • the lane information may include lane direction information and lane line information, etc., so as to obtain the lane position and lane travel direction related information through the above lane information.
  • the signal light information may include the lane direction type of the signal light and the corresponding signal type, so as to obtain the lane direction associated with the signal light and the corresponding signal type through the signal light information.
  • the features of the lane and the signal light can be identified, and the lane information and the signal light information are obtained after recognition.
  • the lane information and the signal light information may be obtained by a feature recognition algorithm. For example, the feature recognition algorithm identifies the shape and contour characteristics of a signal light to determine whether an object is a signal light and where it is located; the lane direction indicated by the signal light and whether its signal is a red light, green light, or yellow light are then determined, and the resulting information constitutes the signal light information.
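As one concrete, simplified possibility for the color part of such a feature recognition step, the hue of the lamp region's average color can be thresholded. The thresholds below are illustrative assumptions, not values taken from this application.

```python
import colorsys

def classify_signal_color(r, g, b):
    """Classify a lamp region's average RGB color as 'red', 'yellow' or 'green'
    by its HSV hue; thresholds are illustrative, not from the application."""
    h, s, v = colorsys.rgb_to_hsv(r / 255.0, g / 255.0, b / 255.0)
    if v < 0.2 or s < 0.3:
        return "unknown"   # too dark or washed out to decide
    if h < 0.08 or h > 0.92:
        return "red"
    if h < 0.21:
        return "yellow"
    if h < 0.45:
        return "green"
    return "unknown"
```

A real system would of course combine this with the shape/contour checks described above before trusting the color alone.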
  • the current traffic state of the vehicle refers to whether there is a signal light during the vehicle's current driving; if there is a signal light, the signal light information and the lane information are combined to determine whether the conditions for passing are met at this time.
  • the traffic state needs to be determined according to the lane direction of the current lane and the corresponding signal light type of the lane direction.
  • FIG. 2 illustrates an application scenario of a lane traffic state reminding method provided by an embodiment of the present application.
  • the application scenario in the figure shows a forward-looking image, which includes a lane 11 and a signal light 12 in the forward-looking image.
  • the lane 11 includes a lane line 111 and a driving arrow 112; the travel direction of the lane is indicated by the driving arrow 112 provided on the road, for example the right-turn arrow 112 in the figure.
  • the signal lamp 12 may include a plurality of lane direction types, such as a forward signal lamp in the forward direction shown in the figure and a right turn signal lamp 121 in the right turn direction.
  • a corresponding reminder operation may be performed according to the passing state.
  • 104. Generate reminder information according to the passing state, and display the reminder information.
  • the reminder information may be voice information related to the passing state of the vehicle; for example, if there is a red light ahead, the voice prompt "Red light ahead, please wait" is played. The reminder information may also be text or graphics related to the passing state of the vehicle, or a control instruction that turns the indicator light on and off. The specific implementation of the reminder information may be determined according to actual conditions.
  • Displaying the reminder information may be to obtain voice information corresponding to the passing state and play the voice information.
  • the specific display method can be designed according to the needs.
  • the driver can know the traffic state of the current traffic and road conditions in a timely manner, so that the reminder information is not easily overlooked.
  • image features of the forward-looking image are extracted; the lane information and the signal light information are identified based on the image features; the current traffic state of the vehicle is determined according to the lane information and the signal light information; and reminder information is generated according to the traffic state and displayed.
  • by means of the forward-looking image obtained by the device, the situation at a signal light intersection can be intelligently recognized and the driver can be reminded whether the vehicle can pass safely at this time, thereby improving safety during driving.
  • FIG. 3 shows an implementation process of image feature recognition provided by an embodiment of the present application.
  • the recognition of the lane information and the signal light information in the forward-looking image based on the image features includes: 201. Acquiring a lane reference feature.
  • the lane reference feature may be a preset feature parameter, and the feature parameter may be stored in a feature database at a specific location.
  • 202. Compare the image features with the lane reference feature to determine whether there is a target feature among the image features that matches the lane reference feature.
  • the target feature includes a lane direction feature.
  • the lane direction feature is related to an object or graphic that indicates the travel direction of the lane.
  • the lane direction feature may be related to the shape feature of the lane's driving arrow. For example, by extracting the shape feature of the driving arrow in the forward-looking image and determining whether that shape matches a left-turn arrow, a forward arrow, or a right-turn arrow, the travel direction of the lane can be determined.
  • the driving arrow may be a driving arrow set on the road of the lane, or may be a driving arrow indicated on a street sign. The present application does not limit the location of the driving arrow.
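A toy version of matching a driving arrow's shape feature against per-direction templates might look as follows. The 3×3 binary templates are purely illustrative stand-ins for real arrow shapes, which the application does not specify.

```python
# Hypothetical 3x3 binary templates standing in for real arrow shapes:
# 1 marks painted pixels, 0 marks road surface.
TEMPLATES = {
    "forward": [[0, 1, 0],
                [1, 1, 1],
                [1, 1, 1]],
    "left":    [[1, 0, 0],
                [1, 1, 1],
                [1, 0, 0]],
    "right":   [[0, 0, 1],
                [1, 1, 1],
                [0, 0, 1]],
}

def match_arrow(shape):
    """Return the template whose cells agree with `shape` in the most positions."""
    def score(a, b):
        return sum(x == y for ra, rb in zip(a, b) for x, y in zip(ra, rb))
    return max(TEMPLATES, key=lambda name: score(TEMPLATES[name], shape))
```

Because the best-scoring template wins, the match tolerates a few noisy cells in the extracted shape.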
  • information corresponding to the target feature is extracted, and information having a mapping relationship with the target feature may be extracted from an information database related to the target feature.
  • for example, if the target feature is a lane feature and its corresponding entry in the information database is "right-turn lane", then "right-turn lane" is the information corresponding to the target feature.
  • the "right-turn lane" information indicating the forward direction of the lane may be the lane direction information.
  • the lane information of the lane can be quickly and accurately obtained.
  • FIG. 4 shows an implementation process of obtaining lane information provided by an embodiment of the present application.
  • the forward-looking image contains a large number of image features. If the image features need to be processed in real time during the vehicle forward process, a large amount of data calculation is required, resulting in a large delay.
  • comparing the image features with the lane reference features to determine whether there is a target feature among the image features that matches the lane reference features includes: 301. Determine whether there is a lane line feature among the image features.
  • the lane line is a line used for dividing the lane, and may be a line segment.
  • the lane area of the lane is determined according to the lane line feature.
  • once the lane line feature corresponding to a lane line is detected, the lane area delimited by the lane line can be determined.
  • two lane lines 111 are shown in the figure, and the area enclosed by the lane line 111 is a lane area.
  • the object features outside the lane area can be ignored, that is, they are not recognized, which greatly reduces the number of features to be recognized and the amount of computation in the recognition process, saving calculation time and further improving reaction speed.
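The lane-area filtering described above can be sketched as follows, modelling each lane line as a straight line x = m·y + c. That linear model is a simplifying assumption for illustration; real lane lines may be curved.

```python
def lane_x(line, y):
    """line = (m, c): x-coordinate of the lane line at image row y, x = m*y + c."""
    m, c = line
    return m * y + c

def filter_to_lane_area(points, left_line, right_line):
    """Keep only feature points lying between the left and right lane lines,
    so features outside the lane area are never passed to recognition."""
    return [(x, y) for (x, y) in points
            if lane_x(left_line, y) <= x <= lane_x(right_line, y)]
```

Only the surviving points are fed to the (comparatively expensive) feature recognition, which is where the computation saving comes from.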
  • FIG. 5 shows an implementation process of determining a passing state provided by an embodiment of the present application.
  • the signal light information includes the lane direction type of the signal light and the corresponding signal type.
  • the current traffic state of the vehicle is determined based on the lane information and the signal light information as follows: 401. Determine the traffic direction of the vehicle in the lane according to the lane direction information.
  • for example, if the obtained lane direction information is "right-turn lane", it may be determined that the traffic direction of the vehicle in the lane is a right turn.
  • 402. Acquire a signal type of a signal light corresponding to a traffic direction.
  • the signal lamp corresponding to the traffic direction, for example the right-turn signal lamp 121, is found through the preset relationship in the signal light information, and it is determined whether the signal type of that lamp is a red light or a green light.
  • the traffic state includes a traffic allowed state and a traffic prohibited state.
  • if the signal type is a red light at this time, the current traffic state of the vehicle is the prohibited-traffic state; if it is a green light at this time, the current traffic state of the vehicle is the permitted-traffic state.
  • the traffic state may also include a normal driving state and an attention reminder state. Determining the current traffic state of the vehicle based on the lane information and the signal light information then includes: if neither the lane information nor the signal light information can be identified, the vehicle is currently in the normal driving state; if the lane information cannot be recognized but signal light information is present, the vehicle is currently in the attention reminder state; if lane information is present but the signal light information cannot be recognized, the vehicle is currently in the normal driving state.
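Putting the states above together, a minimal decision function might look like this. Mapping a yellow light to the attention reminder state is our assumption, since the application does not spell that case out.

```python
def decide_state(lane_direction, signals):
    """lane_direction: recognized lane direction (e.g. 'right') or None if unrecognized.
    signals: dict mapping lane-direction type -> 'red' | 'green' | 'yellow';
    empty when no signal light information could be identified."""
    if not signals:
        # No signal light information: keep driving normally.
        return "normal driving"
    if lane_direction is None:
        # Lights visible but the lane could not be recognized.
        return "attention reminder"
    light = signals.get(lane_direction)
    if light == "green":
        return "permitted traffic"
    if light == "red":
        return "prohibited traffic"
    # Yellow, or no lamp for this direction: treated as caution (assumption).
    return "attention reminder"
```

The reminder module then maps each returned state to a voice prompt, on-screen text or pattern, or an indicator-light level signal.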
  • the lane passing state reminding device 5 includes a forward-looking camera 51, a processing circuit 52 electrically connected to the forward-looking camera 51, and a reminding module 53 electrically connected to the processing circuit 52, wherein :
  • the front-view camera 51 is used to obtain a front-view image of the vehicle.
  • the front-view camera 51 may be provided with a CCD (Charge-Coupled Device) image sensor or a CMOS (Complementary Metal-Oxide-Semiconductor) image sensor; the specific type is not limited in this application.
  • the processing circuit 52 is used to extract image features of the forward-looking image; identify lane information and signal information in the forward-looking image based on the image features; determine the current traffic state of the vehicle based on the lane information and signal information; according to the traffic state Generate reminder information.
  • the processing circuit 52 may include a processor, a memory, and corresponding circuit function modules; the processor is electrically connected to the memory.
  • the memory may be used to store computer programs and data.
  • the computer program stored in the memory contains instructions executable by the processor. By calling the computer program stored in the memory, the processor can execute the lane traffic state reminding method described above.
  • the processing circuit 52 is specifically configured to: obtain a lane reference feature; compare the image features with the lane reference feature to determine whether there is a target feature among the image features that matches the lane reference feature, where the target feature includes a lane line feature and a lane direction feature; and if so, extract information corresponding to the target feature to obtain the lane information, where the lane information includes lane direction information.
  • the reminder module 53 is used to display reminder information.
  • the reminder module may include one of a sounder, a display, or an indicator light.
  • the sound generator is used for playing voice information, and the voice information corresponds to a passing state. For example, if the signal light ahead is a red light, the sound generator can play the voice message "Red light ahead, please wait".
  • the display is used to display text or pattern information, and the text or pattern information corresponds to a passing state. For example, if the signal light ahead is a red light, the text "Red light ahead, please wait" or a pattern representing that meaning is displayed.
  • the indicator light is used to turn on and off under the control of the level signal of the reminder module, and the level signal corresponds to the passing state.
  • the lane traffic state reminding device extracts image features from the forward-looking image, identifies lane information and signal light information based on the image features, determines the current traffic state of the vehicle based on the lane information and the signal light information, and generates and displays reminder information according to the traffic state.
  • by means of the forward-looking image obtained by the device, the situation at a signal light intersection can be intelligently recognized and the driver can be reminded whether the vehicle can pass safely at this time, thereby improving safety during driving.
  • the lane traffic state reminding device belongs to the same concept as the lane traffic state reminding method in the above embodiments, and any of the methods provided in the lane traffic state reminding method embodiments can be run on the lane traffic state reminding device.
  • the specific implementation process is described in detail in the lane passing state reminding method embodiments, and any combination may be used to form an optional embodiment of the present application, which is not repeated here.
  • the embodiments of the present application have been described in detail above with reference to the drawings, but the present application is not limited to the above embodiments; within the scope of knowledge possessed by those of ordinary skill in the art, various changes may also be made without departing from the purpose of the present application.

Abstract

Provided in the present application are a lane traffic status reminder method and device. The method comprises: acquiring a front-view image of a vehicle, and extracting image features of the front-view image; identifying lane information and signal light information in the front-view image based on the image features; determining the current traffic status of the vehicle according to the lane information and the signal light information; and generating reminder information according to the traffic status, and displaying the reminder information. By means of the front-view image obtained by the device, the present application can, according to the front-view image, intelligently identify and remind a driver whether the vehicle can pass safely at the intersection, thereby improving safety while driving.

Description

Target tracking processing method and electronic device
Technical field
The present invention relates to the field of data processing, and in particular to a target tracking processing method and an electronic device.
Background
The identification and tracking of targets (vehicles, pedestrians, etc.) are key technologies for current ADAS applications. Target recognition detects the type and position of each target in every frame of the scene, while tracking associates the targets detected in consecutive frames and assigns each an identity; the cooperation of detection and tracking also allows the target position to be estimated more accurately.
However, existing algorithms require a large amount of computation to recognize and track targets simultaneously, which makes the data processing efficiency of the device low and prevents it from coping well with complex scenes.
Summary of the invention
The present invention provides a target tracking processing method that can improve the data processing efficiency of a device.
An embodiment of the present invention provides a target tracking processing method, applied to an electronic device including at least three processing units. The method includes:
acquiring, by a first processing unit, a plurality of corner points of a target image, and tracking the corresponding positions of the corner points in multiple frames of the target image to obtain tracking data of the corner points;
detecting, by a second processing unit, a target object in the target image, and determining the image range corresponding to the target object;
determining, by a third processing unit, the target range of the target object according to the tracking data of the corner points and the image range of the target object.
Optionally, acquiring the plurality of corner points of the target image and tracking the corresponding positions of the corner points in multiple frames of the target image to obtain the tracking data of the corner points includes:
acquiring a preset area of the target image;
constructing an image pyramid of the preset area;
determining a plurality of corner points according to the image pyramid;
tracking the corresponding positions of the corner points in multiple frames of the target image to obtain the tracking data of the corner points.
Optionally, acquiring the plurality of corner points of the target image and tracking the corresponding positions of the corner points in multiple frames of the target image includes:
performing forward pyramid LK tracking and backward pyramid LK tracking on the corner points respectively to obtain preliminary tracking data;
cross-validating the preliminary tracking data to obtain the tracking data of those corner points, among the plurality of corner points, that satisfy a preset condition.
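The forward-backward cross-validation in the optional step above is commonly implemented by tracking each corner forward one frame, tracking the result back again, and keeping only corners that return close to where they started. A sketch follows; the error threshold is an assumed parameter, not a value from this disclosure.

```python
def cross_validate(points, forward, backward, max_error=1.0):
    """points: original corner positions; forward: positions tracked into the
    next frame; backward: positions obtained by tracking the forward points
    back to the first frame. A corner is kept when the backward-tracked point
    returns within max_error pixels of the original."""
    kept = []
    for p, f, b in zip(points, forward, backward):
        err = ((p[0] - b[0]) ** 2 + (p[1] - b[1]) ** 2) ** 0.5
        if err <= max_error:
            kept.append((p, f))
    return kept
```

Corners whose backward track diverges (occlusions, repetitive texture) fail the check and are discarded from the tracking data.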
Optionally, determining the target range of the target object according to the tracking data of the corner points and the image range of the target object includes:
determining the target corner points corresponding to the target object according to the tracking data of the corner points and the image range of the target object; and determining the target range of the target object according to the tracking data of the target corner points and the image range of the target object.
Optionally, the tracking data of the corner points includes a tracking range corresponding to each corner point in the corner point set;
determining the target range of the target object according to the tracking data of the target corner points and the image range of the target object includes:
determining the confidence of the tracking range of the corner points and of the image range of the target object;
determining the target range of the target object with preset weighting values according to the confidence of the tracking range and of the image range.
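The confidence-weighted determination of the target range can be sketched as a per-coordinate weighted average of the tracking box and the detection box. The (x, y, w, h) box format and the averaging scheme are our illustrative assumptions; the disclosure only states that preset weighting values are applied according to the confidences.

```python
def fuse_boxes(track_box, det_box, track_conf, det_conf):
    """Each box is (x, y, w, h); returns the confidence-weighted average box."""
    total = track_conf + det_conf
    return tuple((t * track_conf + d * det_conf) / total
                 for t, d in zip(track_box, det_box))
```

With equal confidences the result is the plain midpoint of the two boxes; a more trusted tracker pulls the fused box toward its own estimate.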
Optionally, determining the target corner points corresponding to the target object according to the tracking data of the corner points and the image range of the target object includes:
acquiring a preset mapping relationship between the target image and the image range;
determining, according to the mapping relationship, a plurality of mapped pixels of the image range in the target image;
determining, according to the mapped pixels, the pixel range enclosed by the image range in the target image;
taking the corner points located within the pixel range as the target corner points.
Optionally, before acquiring the preset mapping relationship between the target image and the image range, the method further includes: constructing, by the first processing unit, the mapping relationship between the pixels in the target image and the image range.
Optionally, the first processing unit is a vector operation processor.
The present invention also provides an electronic device. The electronic device includes at least three processing units, wherein:
the first processing unit is configured to acquire a plurality of corner points of the target image, and track the corresponding positions of the corner points in multiple frames of the target image to obtain tracking data of the corner points;
the second processing unit is configured to detect a target object in the target image, and determine the image range corresponding to the target object;
the third processing unit is configured to determine the target range of the target object according to the tracking data of the corner points and the image range of the target object.
Optionally, the first processing unit is a vector operation processor.
It can be seen from the above that, by allocating corner tracking and image range recognition to different processing units and fusing the results to obtain the target range of the target object, the performance of a multi-core processor can be used effectively to improve data processing efficiency and better cope with complex scenes.
Brief description of the drawings
FIG. 1 is an implementation flowchart of a target tracking processing method provided by an embodiment of the present invention.
FIG. 2 is an implementation flowchart of obtaining corner tracking data provided by an embodiment of the present invention.
FIG. 3 is an implementation flowchart of determining a target range provided by an embodiment of the present invention.
FIG. 4 is an implementation flowchart of determining target corner points provided by an embodiment of the present invention.
FIG. 5 is a structural block diagram of an electronic device provided by an embodiment of the present invention.
Detailed description
The following describes the preferred embodiments of the present invention in detail with reference to the accompanying drawings, so that the advantages and features of the present invention can be more easily understood by those skilled in the art and the protection scope of the present invention can be more clearly defined.
Please refer to FIG. 1, which shows the implementation process of the target tracking processing method provided by an embodiment of the present invention.
As shown in FIG. 1, a target tracking processing method is applied to an electronic device that includes at least three processing units. The method includes:
101. Acquire, by the first processing unit, a plurality of corner points of the target image, and track the corresponding positions of the corner points in multiple frames of the target image to obtain tracking data of the corner points.
A corner point may be a feature point, that is, a part of the image with a certain characteristic, such as a person's face or hand in the image; it can be set as required.
The target image may be acquired by an image sensor of the electronic device, such as a camera.
In some embodiments, to acquire the plurality of corner points of the target image, feature parts of the image may be extracted by a preset algorithm and matched against a database to determine the corner positions. For example, the corner points may be identified by the FAST-9 corner detection algorithm.
In some embodiments, to track the corresponding positions of the corner points in multiple frames of the target image and obtain their tracking data, the position of a corner point in one frame (such as the first frame) is first determined, and a preset target tracking algorithm then tracks the change of the corner point's relative position in the other frames. The tracking data of a corner point may be parameters such as its movement trajectory and relative displacement distance.
Optionally, the target tracking algorithm may be the pyramid LK optical flow tracking algorithm: the image pyramid of the target image is acquired, and the optical flow of the corner points in the image pyramid is tracked by the pyramid LK algorithm to follow the movement of the corner points.
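Building the image pyramid mentioned above amounts to repeated downsampling. The 2×2-averaging sketch below is one common way to do it; the disclosure does not fix the downsampling method.

```python
def build_pyramid(image, levels=3):
    """image: 2D list of gray values. Each pyramid level halves the width and
    height of the previous one by averaging 2x2 blocks."""
    pyramid = [image]
    for _ in range(levels - 1):
        prev = pyramid[-1]
        h, w = len(prev) // 2, len(prev[0]) // 2
        if h == 0 or w == 0:
            break  # image too small to downsample further
        nxt = [[(prev[2 * y][2 * x] + prev[2 * y][2 * x + 1]
                 + prev[2 * y + 1][2 * x] + prev[2 * y + 1][2 * x + 1]) / 4.0
                for x in range(w)] for y in range(h)]
        pyramid.append(nxt)
    return pyramid
```

Pyramid LK then estimates flow at the coarsest level first and refines the estimate level by level, which is what lets it handle large displacements.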
This step may be implemented in the first processing unit of the electronic device, which may be a processing unit for vector operations, such as a dedicated vector operation processor, for example a GPU (Graphics Processing Unit). The tracking algorithm is relatively simple but involves a large number of pixel-level operations, so running step 101 on a vector operation processor can greatly improve computational efficiency. Of course, besides a vector operation processor, this step can also be run on a CPU (Central Processing Unit) or a DSP (Digital Signal Processor).
102. Detect and identify, by the second processing unit, the target object in the target image, and determine the image range corresponding to the target object.
There may be one or more target objects, such as human bodies, vehicles, or other obstacles. The target object can be identified by a preset recognition algorithm, such as a neural network algorithm; the specific recognition algorithm can be selected according to the actual situation.
In some embodiments, to determine the image range corresponding to the target object, the target object may first be marked by the recognition algorithm, and the rough outline of the marked object, or its relative position in the target image, may be framed with a detection box to determine the image range corresponding to the target object.
This step may be implemented in the second processing unit of the electronic device. The second processing unit is relatively independent of the first processing unit, so that step 102 and step 101 run in different processes. The second processing unit may be a general-purpose processor, for example a CPU (Central Processing Unit) or a DSP (Digital Signal Processor).
103. Determine, by the third processing unit, the target range of the target object according to the tracking data of the corner points and the image range of the target object.
The target range may be the range occupied by the target object in the image. For example, if the target object is a human body, the target range frames the maximum width and maximum height of the human body.
In some embodiments, the displacement of each corner point in the target image can be obtained from the tracking data of the corner points. By combining the detected image range of the target object with the displacements of the corner points, the corner points that fall within the image range can be selected using the image range as a frame, and the displacements of the target object's corner points within that range can then be obtained.
Then, the image range of the target object and the displacements of the corner points within that range are weighted to obtain the target range of the target object.
This step may be implemented in the third processing unit of the electronic device. The third processing unit is relatively independent of the second and first processing units, so that step 103 runs in a different process from steps 102 and 101, distributing the tasks to different cores to improve data processing efficiency. Of course, the third processing unit and the second processing unit may also be the same processing unit.
In the prior art, to obtain the corner points within the range of a target object, an image pyramid has to be constructed separately for each target object and corner recognition has to be combined with each object's image range, so the algorithm constructs the image pyramid of the target image many times. If a single thread identifies the target objects, then constructs the image pyramid of the target image and obtains the tracking data through it, the time complexity is O(N*log(M)*(M*n^2 + n^3)), where N is the number of detected target objects, M is the average size of the target objects, and n is generally 2. Because these factors multiply, the time spent constructing pyramids has a large impact on the overall time complexity when there are many targets in the image.
However, when steps 101-103 are executed on separate processing units, the image pyramid only needs to be constructed once for the whole target image, corner recognition is performed once on that pyramid, and the image range of each identified target object is used only to select the required corner points for tracking. The time complexity then becomes O(log(W)*(W*n^2 + n^3)) + O(W), where W is the size of the ROI region and n is generally 2. The time complexity is thus reduced and data processing efficiency is greatly improved.
It can be seen from the above that, by allocating corner tracking and image range recognition to different processing units and fusing the results to obtain the target range of the target object, the performance of a multi-core processor can be used effectively to improve data processing efficiency and better cope with complex scenes.
Referring to FIG. 2, the figure shows an implementation flow for obtaining corner tracking data according to an embodiment of the present application.
As shown in FIG. 2, acquiring multiple corner points of the target image and tracking their corresponding positions across multiple frames of the target image to obtain corner tracking data includes:
201. Acquire a preset region of the target image.
202. Construct an image pyramid of the preset region.
203. Determine multiple corner points from the image pyramid.
204. Track the corresponding positions of the corner points across multiple frames of the target image to obtain the corner tracking data.
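Steps 201-204 rely on an image pyramid of the preset region. A minimal sketch of pyramid construction by repeated 2x2 average pooling is given below; the patent does not specify the downsampling method, so the pooling scheme and nested-list image representation are assumptions of the example.

```python
def build_pyramid(image, levels=4):
    """Build an image pyramid by 2x2 average-pooling a grayscale
    image (a list of rows of pixel values) at each level."""
    pyramid = [image]
    for _ in range(levels - 1):
        prev = pyramid[-1]
        h, w = len(prev), len(prev[0])
        if h < 2 or w < 2:          # too small to downsample further
            break
        down = [[(prev[2 * r][2 * c] + prev[2 * r][2 * c + 1] +
                  prev[2 * r + 1][2 * c] + prev[2 * r + 1][2 * c + 1]) / 4.0
                 for c in range(w // 2)]
                for r in range(h // 2)]
        pyramid.append(down)
    return pyramid
```

Corner detection would then run on every level, and the coarse levels guide the tracker at the fine levels.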
The preset region may be a manually preset part of the target image, or the entire image, chosen to reduce redundant data in the target image and improve algorithm efficiency. For example, the preset region may be an image region two-thirds of the height/width located in the middle of the target image. Alternatively, the likely direction of motion of the host vehicle may be identified in the target image and the position of the preset region determined from that direction. Of course, beyond these preset approaches, the size and position of the region may also be set according to actual needs.
When constructing the image pyramid, a pyramid of four or more levels may be built for the target image, and corner points are extracted from the pyramid images by an algorithm to obtain their positions. The corresponding positions of these corners are then tracked across multiple frames of the target image to obtain the corner tracking data.
The tracking algorithm may be the fast9 corner algorithm, which provides fairly accurate tracking data. Of course, other tracking algorithms may also be used.
In some embodiments, tracking the corresponding positions of the corner points across multiple frames of the target image to obtain the corner tracking data includes:
performing forward pyramidal LK (Lucas-Kanade) tracking and backward pyramidal LK tracking on the corner points to obtain preliminary tracking data; and cross-validating the preliminary tracking data to obtain tracking data for those corners, among the multiple corner points, that satisfy a preset condition.
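The forward/backward cross-validation can be illustrated as below. The `max_error` threshold and the data layout are illustrative assumptions of the example, not values taken from the patent.

```python
def cross_validate(points, forward, backward, max_error=1.0):
    """Keep only corners whose backward-tracked position lands
    within max_error pixels of the original position.

    points, forward, backward are lists of (x, y): the original
    corners, their forward-tracked positions in the next frame, and
    the result of tracking those positions back to the first frame.
    """
    kept = []
    for p, f, b in zip(points, forward, backward):
        err = ((p[0] - b[0]) ** 2 + (p[1] - b[1]) ** 2) ** 0.5
        if err <= max_error:
            kept.append((p, f))    # corner passed the round-trip check
    return kept
```

A corner that drifts during forward tracking will generally not return to its starting point under backward tracking, so the round-trip error is a cheap quality filter.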
For example, if a corner point is lost for several consecutive frames, tracking is considered to have failed and the parameters associated with that corner are deleted.
The above approach greatly improves the accuracy of the corner tracking data: cross-validating the preliminary tracking data removes poorly tracked corners, further reducing wasted resources and improving data-processing efficiency.
In some embodiments, referring to FIG. 3, the figure shows an implementation flow for determining the target range according to an embodiment of the present application.
As shown in FIG. 3, determining the target range of the target object from the corner tracking data and the image range of the target object includes:
301. Determine the target corners corresponding to the target object from the corner tracking data and the image range of the target object.
By combining the tracking data of all corner points in the target image with the image range of the target object, the corners lying within that image range can be determined; these corners are defined as the target corners.
For example, given the image range A of a target object, if corner B and corner C of the target image lie within range A, then corners B and C can be taken as target corners. If there are image ranges for N target objects, the N image ranges can each be associated with the corners inside them.
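Associating corners with an object's image range reduces to a point-in-box test, sketched here under the assumption (made for the example) that ranges are axis-aligned (x, y, w, h) tuples:

```python
def select_target_corners(box, corners):
    """Return the corners that fall inside an image range given as
    (x, y, w, h) with (x, y) the top-left corner."""
    x, y, w, h = box
    return [(cx, cy) for (cx, cy) in corners
            if x <= cx <= x + w and y <= cy <= y + h]
```

Running this once per detected object over the shared corner list is what keeps the overall cost linear in the number of corners.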
302. Determine the target range of the target object from the tracking data of the target corners and the image range of the target object.
Once the target corners are obtained, the tracking of those corners together with the positional change of the image range can be used to determine the object's movement and shape changes fairly accurately (for example, a vehicle turning, or a person walking sideways), and thereby determine the target range of the target object.
Specifically, the target object can be framed by a tracking box representing the target range. The displacement of the tracking box can be obtained as the mean displacement between the corner coordinates in the next frame and those in the previous frame, and the change in the tracking box's size can be obtained from the mean inter-corner distance of the target corners in the next frame.
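The size change derived from inter-corner distances can be sketched as a ratio between frames. The function name and the use of the mean pairwise distance are illustrative assumptions; the patent only states that the mean inter-corner distance drives the box's size change.

```python
def estimate_scale(prev_corners, next_corners):
    """Estimate the tracking-box scale change as the ratio of mean
    pairwise corner distances between consecutive frames."""
    def mean_pairwise(pts):
        dists = [((a[0] - b[0]) ** 2 + (a[1] - b[1]) ** 2) ** 0.5
                 for i, a in enumerate(pts) for b in pts[i + 1:]]
        return sum(dists) / len(dists)
    return mean_pairwise(next_corners) / mean_pairwise(prev_corners)
```

A ratio above 1 means the corners are spreading apart (the object is approaching or growing in the image), so the tracking box is enlarged accordingly.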
For example, if image range A is the approximate position of a human body and corners B and C are feature points on the left and right hands respectively, the positional change of range A combined with the positional changes of corners B and C can be used, via the height/width of the tracking box, to infer the body's posture and movement; the length/width of the tracking box can then change as the posture and movement change.
By using the image range of the identified target object to frame-select the required corners as target corners and then tracking those corners, the device does not need to repeatedly perform image-range recognition followed by corner recognition within each range, which greatly reduces the time complexity of this embodiment and improves data-processing efficiency.
In some embodiments, to improve the accuracy of the target range, step 302 may include:
determining confidence values for the corner tracking range and for the image range of the target object; and determining the target range of the target object from those confidence values using preset weights.
The confidence values can be obtained by conventional algorithms. The corner tracking range is the approximate region that the corners occupy in the target image; this region may be framed by the tracking box.
Specifically, if the confidence of the corner tracking range is high and that of the object's image range is low, the target range can be determined mainly from the tracking range combined with the preset weights; if the confidence of the tracking range is low and that of the image range is high, the target range can be determined mainly from the image range combined with the preset weights.
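A minimal sketch of confidence-weighted fusion of the two ranges follows. Normalizing the two confidences into weights is one possible reading of the "preset weighting values" and is an assumption of this example.

```python
def weighted_range(track_box, det_box, track_conf, det_conf):
    """Fuse the corner-tracking box and the detector box, each given
    as (x, y, w, h), weighting each source by its normalized
    confidence."""
    total = track_conf + det_conf
    wt, wd = track_conf / total, det_conf / total
    return tuple(wt * t + wd * d for t, d in zip(track_box, det_box))
```

With `track_conf` much larger than `det_conf` the fused box follows the tracker, and vice versa, matching the behavior described above.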
Understandably, the specific weighting scheme can be chosen according to the actual situation.
Combining the tracking data of the target corners with the image range of the target object allows the required corners to be selected using the image-range frame, without frame-selecting each object and then performing costly corner recognition, so the embodiment runs in linear time and computational efficiency is improved.
Referring to FIG. 4, the figure shows an implementation flow for determining target corners according to an embodiment of the present application.
As shown in FIG. 4, determining the target corners corresponding to the target object from the corner tracking data and the image range of the target object includes:
401. Acquire a preset mapping relationship between the target image and the image range.
402. Determine, from the mapping relationship, multiple mapped pixels of the image range in the target image.
403. Determine, from the mapped pixels, the pixel range enclosed by the image range in the target image.
404. Take the corner points lying within the pixel range as the target corners.
The image range may be a parameter representing relative position, including the approximate outline of the target object. By establishing a mapping between this image range and pixel positions of the target image, the pixel positions onto which the object's outline or relative position maps can be determined.
Once the mapped pixels corresponding to the image range are obtained, the pixel range enclosed by the image range in the target image can be derived, so that the object's position in the target image is determined more precisely; the approximate outline of the object in the target image is then used to frame-select the corresponding corners as target corners, combining the image range with the corner tracking data to obtain tracking data for the corners inside the object's image range. The embodiment of FIG. 4 uses the mapping relationship to quickly determine the corners within the image range, so that good tracking of the target object is achieved with few resources and computational efficiency is improved.
In some embodiments, to further improve efficiency, before acquiring the preset mapping relationship between the target image and the image range, the method further includes:
constructing, by the first processing unit, the mapping relationship between pixels of the target image and the image range. The first processing unit is a vector processor; building the mapping with a vector processor exploits the advantages of parallel computation and greatly improves processing efficiency.
Referring to FIG. 5, the figure shows the structural framework of an electronic device provided by an embodiment of the present application.
As shown in FIG. 5, the electronic device 50 includes at least three processing units, wherein:
the first processing unit 51 is configured to acquire multiple corner points of the target image and track their corresponding positions across multiple frames of the target image to obtain corner tracking data;
the second processing unit 52 is configured to detect the target object in the target image and determine the image range corresponding to the target object; and
the third processing unit 53 is configured to determine the target range of the target object from the corner tracking data and the image range of the target object.
The electronic device 50 may be an in-vehicle device, such as an ADAS (Advanced Driver Assistance System) unit. The electronic device 50 may further include a memory, to which the processing units are electrically connected.
The processing units are the control center of the electronic device 50: they connect the various parts of the entire device through various interfaces and lines, and, by running or loading computer programs stored in the memory and calling data stored in the memory, execute the device's functions and process its data, thereby monitoring the electronic device 50 as a whole.
In some embodiments, the first processing unit 51 is a vector processor.
In this embodiment, the processing units in the electronic device 50 load instructions corresponding to the processes of one or more computer programs into the memory and run the stored programs according to the following steps, thereby implementing various functions, such as:
acquiring, by the first processing unit 51, multiple corner points of the target image and tracking their corresponding positions across multiple frames of the target image to obtain corner tracking data; detecting and identifying, by the second processing unit 52, the target object in the target image to determine the image range corresponding to the target object; and determining, by the third processing unit 53, the target range of the target object from the corner tracking data and the image range of the target object.
In some embodiments, a storage medium is further provided. The storage medium stores multiple instructions suitable for being loaded by a processing unit to execute any of the above target-tracking processing methods, for example:
acquiring, by the first processing unit 51, multiple corner points of the target image and tracking their corresponding positions across multiple frames of the target image to obtain corner tracking data; detecting and identifying, by the second processing unit 52, the target object in the target image to determine the image range corresponding to the target object; and determining, by the third processing unit 53, the target range of the target object from the corner tracking data and the image range of the target object.
A person of ordinary skill in the art can understand that all or part of the steps in the various methods of the above embodiments can be completed by instructing the relevant hardware through a program, which can be stored in a computer-readable storage medium. The storage medium may include a read-only memory (ROM), a random-access memory (RAM), a magnetic disk, an optical disc, or the like.
In the embodiments of the present application, the electronic device belongs to the same concept as the target-tracking processing method of the above embodiments, and any method step provided in the method embodiments can be run on the electronic device. For the specific implementation, see the method embodiments; any combination may be used to form optional embodiments of the present application, which are not repeated here.
The embodiments of the present invention have been described in detail above with reference to the drawings, but the present invention is not limited to these embodiments; within the scope of knowledge possessed by those of ordinary skill in the art, various changes can be made without departing from the spirit of the present invention.
Title of invention: Lane traffic status reminder method and device
TECHNICAL FIELD [0001] The present application relates to the field of traffic data processing, and in particular to a lane traffic status reminder method and device.
BACKGROUND [0002] As the number of cars grows, traffic accidents and violations are becoming more and more frequent, and the great majority of them occur at signalized intersections. Signal lights are therefore permanently installed at intersections to reduce traffic accidents.
[0003] For the driver's convenience, signal lights at intersections are generally placed high up or at an easily visible spot at the side of the road. However, if the driver is inattentive when the vehicle passes an intersection, or the line of sight is blocked because the vehicle is following a large truck, the driver often fails to receive the signal-light information in time, posing a serious threat to driving safety.
SUMMARY [0004] The present application provides a lane traffic status reminder method and device that can alert the driver to the current state of the signal lights.
[0005] The present application provides a lane traffic status reminder method, applied to an electronic device, including: acquiring a forward-view image of a vehicle and extracting image features from the forward-view image; identifying lane information and signal-light information in the forward-view image based on the image features; determining the vehicle's current traffic state from the lane information and the signal-light information; and generating reminder information according to the traffic state and presenting the reminder information.
[0006] Optionally, identifying lane information and signal-light information in the forward-view image based on the image features includes: acquiring a lane reference feature; comparing the image features with the lane reference feature to determine whether a target feature matching the lane reference feature exists among the image features, the target feature including a lane-direction feature; and if so, extracting the information corresponding to the target feature to obtain the lane information, the lane information including lane-direction information.
[0007] Optionally, the target feature further includes a lane-line feature, and comparing the image features with the lane reference feature to determine whether a target feature matching the lane reference feature exists among the image features includes: determining whether the lane-line feature exists among the image features; if it does, determining the lane region of the lane from the lane-line feature; and determining, within the lane region, whether the lane-direction feature exists among the image features.
[0008] Optionally, the lane-direction feature is related to the shape feature of the lane's driving arrow.
[0009] Optionally, the signal-light information includes the lane-direction type of the signal light and the corresponding signal type, and determining the vehicle's current traffic state from the lane information and the signal-light information includes: determining, from the lane-direction information, the direction in which the vehicle is about to travel; acquiring the signal type of the signal light corresponding to that direction; and determining the vehicle's current traffic state from the signal type, the traffic state including a passage-allowed state and a passage-prohibited state.
[0010] Optionally, the traffic state includes a normal driving state and an attention-reminder state, and determining the vehicle's current traffic state from the lane information and the signal-light information includes: if neither the lane information nor the signal-light information can be identified, the vehicle is currently in the driving state; if the lane information cannot be identified but the signal-light information exists, the vehicle is currently in the attention-reminder state; and if the lane information exists but the signal-light information cannot be identified, the vehicle is currently in the driving state.
[0011] Optionally, generating reminder information according to the traffic state and presenting the reminder information includes: acquiring voice information corresponding to the traffic state and playing it; acquiring text or graphic information corresponding to the traffic state and displaying it; or acquiring a level signal corresponding to the traffic state and switching a preset indicator light on or off via the level signal.
[0012] The present application further provides a lane traffic status reminder device, including a forward-view camera, a processing circuit electrically connected to the forward-view camera, and a reminder module electrically connected to the processing circuit, wherein: the forward-view camera is configured to acquire a forward-view image of the vehicle; the processing circuit is configured to extract image features from the forward-view image, identify lane information and signal-light information in the forward-view image based on the image features, determine the vehicle's current traffic state from the lane information and the signal-light information, and generate reminder information according to the traffic state; and the reminder module is configured to present the reminder information.
[0013] Optionally, the processing circuit is specifically configured to: acquire a lane reference feature; compare the image features with the lane reference feature to determine whether a target feature matching the lane reference feature exists among the image features, the target feature including a lane-line feature and a lane-direction feature; and if so, extract the information corresponding to the target feature to obtain the lane information, the lane information including lane-direction information.
[0014] Optionally, the reminder module includes one of a sounder, a display, or an indicator light; the sounder is configured to play voice information corresponding to the traffic state; the display is configured to display text or graphic information corresponding to the traffic state;
and the indicator light is configured to be switched on or off under control of a level signal from the reminder module, the level signal corresponding to the traffic state.
[0015] As can be seen from the above, image features are extracted from the forward-view image, lane information and signal-light information are identified from those features, the vehicle's current traffic state is determined from that information, and reminder information is generated and presented according to the traffic state. Using the forward-view image obtained by the device, the present application can intelligently recognize the situation at a signalized intersection and remind the driver whether the vehicle can pass safely at that moment, improving safety while driving.
BRIEF DESCRIPTION OF THE DRAWINGS [0016] FIG. 1 is an implementation flowchart of the lane traffic status reminder method provided by an embodiment of the present application.
[0017] FIG. 2 is an application-scenario diagram of the lane traffic status reminder method provided by an embodiment of the present application.
[0018] FIG. 3 is an implementation flowchart of image-feature recognition provided by an embodiment of the present application.
[0019] FIG. 4 is an implementation flowchart of obtaining lane information provided by an embodiment of the present application.
[0020] FIG. 5 is an implementation flowchart of determining the traffic state provided by an embodiment of the present application.
[0021] FIG. 6 is a schematic structural diagram of the traffic-state function provided by an embodiment of the present application.
DETAILED DESCRIPTION [0022] Preferred embodiments of the present application are described in detail below with reference to the drawings, so that the advantages and features of the present application are more readily understood by those skilled in the art and the scope of protection of the present application is more clearly defined.
[0023] Referring to FIG. 1, the figure shows the implementation flow of the lane traffic status reminder method provided by an embodiment of the present application.
[0024] The lane traffic status reminder method is applied to an electronic device, which may be an in-vehicle electronic device installed in a car. The in-vehicle electronic device may include a forward-view camera, a processing circuit, and a reminder module.
[0025] The forward-view camera may face directly ahead of the vehicle to acquire a forward-view image of the road ahead.
[0026] The processing circuit may analyze and process the captured image to judge the vehicle's traffic state from the forward-view image.
[0027] The reminder module may be a sounder, a display, an indicator light, or the like, and issues the corresponding reminder according to the traffic state.
[0028] Referring to FIG. 1, the figure shows the implementation flow of the lane traffic status reminder method provided by an embodiment of the present application.
[0029] As shown in FIG. 1, a lane traffic status reminder method is applied to an electronic device, which may be the electronic device described in the above embodiment. The method includes: 101. Acquire a forward-view image of the vehicle and extract image features from the forward-view image.
[0030] The forward-view image may be acquired by a forward-view camera mounted on the vehicle.
[0031] The image features may be features such as the shape, color, and position of each object in the image.
[0032] In some embodiments, the image features may be extracted by a preset image-processing algorithm. Specifically, the processing may include steps such as grayscale conversion, image filtering, edge enhancement, edge detection, and feature extraction on the forward-view image; the specific implementation depends on the actual situation and the algorithm chosen.
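The grayscale-conversion step of the preprocessing pipeline can be sketched with the common luminance weights; the nested-list image representation and weight values are assumptions made to keep the example self-contained, since the patent does not fix a particular formula.

```python
def to_grayscale(rgb_image):
    """Convert an RGB image (nested lists of (r, g, b) tuples) to
    grayscale using the common 0.299/0.587/0.114 luminance weights."""
    return [[0.299 * r + 0.587 * g + 0.114 * b
             for (r, g, b) in row]
            for row in rgb_image]
```

The filtered and edge-enhanced result of such a pipeline is what the later lane- and signal-light-recognition steps would consume.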
[0033] 102. Identify lane information and signal-light information in the forward-view image based on the image features.
[0034] The lane information may include lane-direction information, lane-line information, and the like, from which the lane position and the lane's direction of travel can be obtained.
[0035] The signal-light information may include the lane-direction type of the signal light and the corresponding signal type, from which the lane direction associated with the signal light and its corresponding signal type can be obtained.
[0036] Specifically, after the image features of the forward-view image are obtained, the lane and signal-light features can be recognized to obtain the lane information and signal-light information.
[0037] In some embodiments, the lane information and signal-light information may be obtained by a feature-recognition algorithm. For example, the outline features of a signal light can be recognized to determine whether an object is a signal light and where it is located; then, to obtain the signal-light information, the direction indicated by the signal light at that position and whether the signal is red, green, or yellow can be determined. All of this acquired information constitutes the signal-light information.
[0038] 103. Determine the vehicle's current traffic state from the lane information and the signal-light information.
[0039] The vehicle's current traffic state refers to whether a signal light is present while the vehicle is traveling and, if one is present, whether the conditions for passage are met, as determined from the signal-light information combined with the lane information.
[0040] In some embodiments, the traffic state needs to be judged from the direction of the current lane and the type of signal light corresponding to that direction.
[0041] Referring to FIG. 2, the figure shows an application scenario of the lane traffic status reminder method provided by an embodiment of the present application.
[0042] As shown in FIG. 2, the application scenario shows a forward-view image containing a lane 11 and a signal light 12. The lane 11 includes lane lines 111 and a driving arrow 112; the lane's direction is indicated by the driving arrow 112 painted on the road, for example the right-turn arrow 112 in the figure. The signal light 12 may cover multiple lane-direction types, for example the straight-ahead light for the forward direction and the right-turn light 121 for the right-turn direction shown in the figure.
[0043] After the lane information and signal-light information in the forward-view image are identified, the direction of lane 11 can first be determined to be a right turn, and the signal type of the right-turn light 121 (that is, the signal light controlling the right-turn lane) is then judged.
[0044] If the signal type of the right-turn light 121 is red at this moment, the vehicle's traffic state is the passage-prohibited state; if it is green, the vehicle's traffic state is the passage-allowed state.
[0045] Understandably, besides the above implementation, different judgments can be made according to the actual situation, for example when no right-turn light exists.
[0046] Once the traffic state is obtained, the corresponding reminder operation can be performed according to it.
[0047] 104. Generate reminder information according to the traffic state and present the reminder information.
[0048] The reminder information may be voice information related to the vehicle's traffic state, for example the voice prompt "Red light ahead, please wait" when the light ahead is red; or it may be text or graphics related to the traffic state, or a control instruction that switches an indicator light on or off. The specific form of the reminder information depends on the actual situation.
[0049] Presenting the reminder information may mean acquiring voice information corresponding to the traffic state and playing it; acquiring text or graphic information corresponding to the traffic state and displaying it; or acquiring a level signal corresponding to the traffic state and switching a preset indicator light on or off via that signal. Of course, the specific presentation can be designed as required.
[0050] Through the above presentation methods, the driver learns the current traffic state in time, making the reminder hard to miss.
[0051] As can be seen from the above, image features are extracted from the forward-view image, lane information and signal-light information are identified from those features, the vehicle's current traffic state is determined from that information, and reminder information is generated and presented according to the traffic state. Using the forward-view image obtained by the device, the present application can intelligently recognize the situation at a signalized intersection and remind the driver whether the vehicle can pass safely at that moment, improving safety while driving.
[0052] Referring to FIG. 3, the figure shows the implementation flow of image-feature recognition provided by an embodiment of the present application.
[0053] As shown in FIG. 3, identifying lane information and signal-light information in the forward-view image based on the image features includes: 201. Acquire a lane reference feature.
[0054] The lane reference feature may be a preset feature parameter, which may be stored in a feature database at a specific location.
[0055] 202. Compare the image features with the lane reference feature and determine whether a target feature matching the lane reference feature exists among the image features.
[0056] The target feature includes a lane-direction feature, which is related to the object or graphic associated with the lane's direction of travel.
[0057] In some embodiments, the lane-direction feature may be related to the shape feature of the lane's driving arrow. For example, the shape feature of the driving arrow in the forward-view image can be extracted and checked against a left-turn arrow, a straight-ahead arrow, or a right-turn arrow to determine the lane's direction of travel.
[0058] Of course, the driving arrow may be the arrow painted on the road for the lane, or the arrow indicated on a road sign; the present application does not limit the arrow's location.
[0059] 203. If so, extract the information corresponding to the target feature to obtain the lane information, the lane information including lane-direction information.
[0060] Extracting the information corresponding to the target feature may mean retrieving, from an information database related to the target feature, the information that has a mapping relationship with that feature.
[0061] For example, if the target feature is a lane feature and the database maps it to "right-turn lane", then "right-turn lane" is the information corresponding to the target feature. The "right-turn lane" information indicating the lane's direction of travel can then serve as the lane-direction information.
[0062] As can be seen, by acquiring the lane reference feature and comparing the image features with it, the lane information can be obtained quickly and accurately.
[0063] Referring to FIG. 4, the figure shows the implementation flow of obtaining lane information provided by an embodiment of the present application.
[0064] In general, a forward-view image contains a large number of image features, and processing them all in real time while the vehicle is moving requires a large amount of computation, which easily causes significant delay. To improve the recognition rate of lane features, reduce the computation involved in recognition, save computing time, and improve responsiveness, comparing the image features with the lane reference feature to determine whether a matching target feature exists includes: 301. Determine whether a lane-line feature exists among the image features.
[0065] A lane line is a line used to divide lanes and may be a line segment.
[0066] 302. If the lane-line feature exists, determine the lane region of the lane from the lane-line feature.
[0067] If the lane-line features corresponding to the lane lines are detected, the lane region bounded by the lane lines can be determined.
[0068] With reference to FIG. 2, two lane lines 111 are shown, and the region they enclose is the lane region.
[0069] 303. Determine, within the lane region, whether the lane-direction feature exists among the image features.
[0070] Once the lane region is determined, object features outside it can be ignored, that is, not recognized. This greatly reduces the number of features that must be recognized, cuts the computation involved in recognition, saves computing time, and improves responsiveness.
[0071] Referring to FIG. 5, the figure shows the implementation flow of determining the traffic state provided by an embodiment of the present application.
[0072] In some embodiments, the signal-light information includes the lane-direction type of the signal light and the corresponding signal type.
[0073] With reference to FIG. 2, determining the vehicle's current traffic state from the lane information and the signal-light information includes: 401. Determine, from the lane-direction information, the direction in which the vehicle is about to travel.
[0074] For example, if the obtained lane-direction information is "right-turn lane", the direction of travel of vehicles in that lane can be determined to be a right turn.
[0075] 402. Acquire the signal type of the signal light corresponding to the direction of travel.
[0076] The signal light corresponding to the direction of travel, for example the right-turn light 121, can be found in the signal-light information through a preset relationship, and whether the current signal type of the right-turn light 121 is red or green is then determined.
[0077] 403. Determine the vehicle's current traffic state from the signal type, the traffic state including a passage-allowed state and a passage-prohibited state.
[0078] If the light is currently red, the vehicle's current traffic state is the passage-prohibited state; if it is green, the current traffic state is the passage-allowed state.
[0079] As can be seen, using the obtained lane-direction information combined with the signal type of the signal light, the vehicle's current traffic state can be judged fairly accurately, giving the user an accurate traffic-state reminder and ensuring safe passage.
[0080] In some embodiments, the traffic state may further include a normal driving state and an attention-reminder state. Determining the vehicle's current traffic state from the lane information and the signal-light information includes: if neither the lane information nor the signal-light information can be identified, the vehicle is currently in the driving state; if the lane information cannot be identified but signal-light information exists, the vehicle is currently in the attention-reminder state; and if lane information exists but the signal-light information cannot be identified, the vehicle is currently in the driving state.
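The state table in paragraphs [0078]-[0080] can be expressed as a small decision function. The state names and the dict-based lookup of the signal by lane-direction type are illustrative assumptions of this sketch, not identifiers from the patent.

```python
def traffic_state(lane_direction, signals):
    """Decide the vehicle's traffic state.

    lane_direction: e.g. 'right_turn', or None if no lane information
    was recognized. signals: dict mapping lane-direction type to
    'red' or 'green'; empty if no signal light was recognized.
    """
    if lane_direction is None and not signals:
        return "driving"          # nothing recognized: normal driving
    if lane_direction is None:
        return "attention"        # signal seen but lane unknown
    if lane_direction not in signals:
        return "driving"          # lane known, no matching signal
    return ("allowed" if signals[lane_direction] == "green"
            else "prohibited")
```

The returned state would then drive the reminder module (voice prompt, display text, or indicator light) described below.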
[0081] Judging according to the different combinations of lane information and signal-light information obtained allows the driver to be given sufficient prompts in different driving environments, making the vehicle's passage safer.
[0082] Referring to FIG. 5, the figure shows the lane traffic status reminder device provided by an embodiment of the present application.
[0083] As shown in FIG. 5, the lane traffic status reminder device 5 includes a forward-view camera 51, a processing circuit 52 electrically connected to the forward-view camera 51, and a reminder module 53 electrically connected to the processing circuit 52, wherein the forward-view camera 51 is configured to acquire a forward-view image of the vehicle.
[0084] The forward-view camera 51 may be provided with a CCD (Charge-Coupled Device) image sensor or a CMOS (Complementary Metal-Oxide-Semiconductor) image sensor; the present application does not limit the specific type.
[0085] The processing circuit 52 is configured to extract image features from the forward-view image, identify lane information and signal-light information in the forward-view image based on the image features, determine the vehicle's current traffic state from the lane information and the signal-light information, and generate reminder information according to the traffic state.
[0086] The processing circuit 52 may include a processor, a memory, and corresponding circuit function modules, the processor being electrically connected to the memory.
[0087] Specifically, the memory may be used to store computer programs and data. The stored computer programs contain instructions executable by the processor, which, by calling the computer programs stored in the memory, can execute the lane traffic status reminder method described above.
[0088] In some embodiments, the processing circuit 52 is specifically configured to: acquire a lane reference feature; compare the image features with the lane reference feature to determine whether a target feature matching the lane reference feature exists among the image features, the target feature including a lane-line feature and a lane-direction feature; and if so, extract the information corresponding to the target feature to obtain the lane information, the lane information including lane-direction information.
[0089] The reminder module 53 is configured to present the reminder information.
[0090] In some embodiments, the reminder module may include one of a sounder, a display, or an indicator light.
[0091] The sounder is configured to play voice information corresponding to the traffic state. For example, if the light ahead is red, the sounder can play the voice message "Red light ahead, please wait".
[0092] The display is configured to display text or graphic information corresponding to the traffic state.
For example, if the light ahead is red, the text "Red light ahead, please wait" or a graphic conveying that meaning is displayed.
[0093] The indicator light is configured to be switched on or off under control of a level signal from the reminder module, the level signal corresponding to the traffic state. For example, three indicator lights corresponding to different signal lights may be provided; if the light ahead is red, the indicator corresponding to red is lit, and so on.
[0094] As can be seen from the above, the lane traffic status reminder device extracts image features from the forward-view image, identifies lane information and signal-light information from those features, determines the vehicle's current traffic state from that information, and generates and presents reminder information according to the traffic state. Using the forward-view image obtained by the device, the present application can intelligently recognize the situation at a signalized intersection and remind the driver whether the vehicle can pass safely at that moment, improving safety while driving.
[0095] A person of ordinary skill in the art can understand that all or part of the steps in the various methods of the above embodiments can be completed by instructing the relevant hardware through a program, which can be stored in a computer-readable storage medium. The storage medium may include a read-only memory (ROM), a random-access memory (RAM), a magnetic disk, an optical disc, or the like.
[0096] In the embodiments of the present application, the lane traffic status reminder device belongs to the same concept as the lane traffic status reminder method of the above embodiments, and any method step provided in the method embodiments can be run on the device. For the specific implementation, see the method embodiments; any combination may be used to form optional embodiments of the present application, which are not repeated here.
[0097] The embodiments of the present application have been described in detail above with reference to the drawings, but the present application is not limited to these embodiments; within the scope of knowledge possessed by those of ordinary skill in the art, various changes can be made without departing from the spirit of the present application.
TECHNICAL FIELD [0001] The present application relates to the field of traffic data processing, and in particular, to a method and device for reminding traffic lane status.
BACKGROUND [0002] With the increasing number of cars, traffic accidents and violations become more and more frequent, most of which are at the intersection of traffic lights. Therefore, signal lights are permanently installed at intersections to reduce traffic accidents.
[0003] In order to facilitate the driver's viewing, signal lights are generally installed at intersections at high places or on the side of the road where it is easier to observe. However, if the driver's attention is not concentrated when the vehicle passes by the intersection, or the vehicle's sight is blocked due to the vehicle following behind the big car, it will often cause the driver to fail to receive the signal light information in time, which will cause great driving safety Big threat.
[0004] Application content The present application provides a lane traffic state reminding method and device, which can remind the driver of the current signal lights.
[0005] The present application provides a lane passing state reminding method, which is applied to an electronic device and includes: acquiring a forward-looking image of a vehicle, extracting image features of the forward-looking image; identifying the forward-looking image based on the image features Lane information and signal light information in; determining the current traffic state of the vehicle according to the lane information and signal light information; generating reminder information according to the traffic state, and displaying the reminder information.
[0006] Optionally, identifying lane information and signal information in the forward-looking image based on the image features includes: acquiring a lane reference feature; comparing the image feature with the lane reference feature to determine Whether there is a target feature matching the lane reference feature among the image features, wherein the target feature includes a lane direction feature; if so, extracting information corresponding to the target feature to obtain the lane information, where The lane information includes lane direction information.
[0007] Optionally, the target feature further includes a lane line feature; the image feature is compared with the lane reference feature to determine whether there is a match between the image feature and the lane reference feature The target features include: judging whether the lane line feature exists in the image feature; if the lane line feature exists, determining the lane area of the lane based on the lane line feature; judging in the lane area Whether the lane direction feature exists among the image features.
[0008] Optionally, the lane direction feature is related to the shape feature of the driving arrow of the lane.
[0009] Optionally, the signal light information includes the lane direction type of the signal light and the corresponding signal type; determining the current traffic state of the vehicle according to the lane information and the signal light information includes: determining according to the lane direction information The direction of traffic that the vehicle is about to pass; acquiring the signal type of the signal light corresponding to the direction of traffic; determining the current traffic state of the vehicle according to the signal type, where the traffic state includes a traffic allowed state and a traffic prohibited state.
[0010] Optionally, the traffic state includes a normal driving state and an attention reminding state, and the current traffic state of the vehicle is determined according to the lane information and the signal light information, including: if none of the lane information can be identified and When the signal light information, the vehicle is currently in a driving state; if the lane information cannot be identified, and the signal light information is present, the vehicle is currently in a state of attention and reminder. If the lane information is present, and cannot When the signal light information is recognized, the vehicle is currently in a driving state.
[0011] Optionally, generating reminder information according to the traffic state and displaying the reminder information includes: acquiring voice information corresponding to the traffic state and playing the voice information; acquiring text or pattern information corresponding to the traffic state and displaying the text or pattern information; or acquiring a level signal corresponding to the traffic state and turning a preset indicator light on and off through the level signal.
[0012] The present application also provides a lane passing state reminding device, including a forward-looking camera, a processing circuit electrically connected to the forward-looking camera, and a reminder module electrically connected to the processing circuit, wherein: the forward-looking camera is used for acquiring a forward-looking image of the vehicle; the processing circuit is used for extracting image features of the forward-looking image, identifying lane information and signal light information in the forward-looking image based on the image features, determining the current traffic state of the vehicle according to the lane information and the signal light information, and generating reminder information according to the traffic state; and the reminder module is used for displaying the reminder information.
[0013] Optionally, the processing circuit is specifically configured to: obtain a lane reference feature; compare the image feature with the lane reference feature to determine whether there is a target feature matching the lane reference feature among the image features, where the target feature includes a lane line feature and a lane direction feature; and if so, extract information corresponding to the target feature to obtain the lane information, where the lane information includes lane direction information.
[0014] Optionally, the reminder module includes one of a sound generator, a display, or an indicator light; the sound generator is used to play voice information, and the voice information corresponds to the traffic state; the display is used to display text or pattern information, and the text or pattern information corresponds to the traffic state; and the indicator light is used to turn on and off under the control of a level signal from the reminder module, and the level signal corresponds to the traffic state.
[0015] As can be seen from the above, image features are extracted from the forward-looking image, lane information and signal light information are identified based on the image features, the current traffic state of the vehicle is determined based on the lane information and the signal light information, and reminder information is generated according to the traffic state and displayed. In this application, the device can intelligently recognize the scene at a signal light intersection from the acquired forward-looking image and remind the driver whether the vehicle can pass safely at that moment, thereby improving safety during driving.
BRIEF DESCRIPTION OF THE DRAWINGS [0016] FIG. 1 is a flowchart of an implementation of a lane traffic state reminding method provided by an embodiment of the present application.
[0017] FIG. 2 is an application scenario diagram of a lane passing state reminding method provided by an embodiment of the present application.
[0018] FIG. 3 is an implementation flowchart of image feature recognition provided by an embodiment of the present application.
[0019] FIG. 4 is a flowchart of obtaining lane information according to an embodiment of the present application.
[0020] FIG. 5 is a flowchart of an implementation of determining a passing state provided by an embodiment of the present application.
[0021] FIG. 6 is a schematic diagram of a functional structure of a lane passing state reminding device provided by an embodiment of the present application.
DETAILED DESCRIPTION [0022] The following describes the preferred embodiments of the present application in detail with reference to the accompanying drawings, so that the advantages and features of the present application can be more easily understood by those skilled in the art, thereby making the protection scope of the present application more clearly defined.
[0023] Please refer to FIG. 1, which shows an implementation process of a method for reminding a lane traffic state provided by an embodiment of the present application.
[0024] The lane passing state reminding method is applied to an electronic device, and the electronic device may be an on-board electronic device installed on a car. The vehicle-mounted electronic device may include a front-view camera, a processing circuit, and a reminder module.
[0025] The forward-looking camera may face directly ahead of the vehicle to acquire a forward-looking image of the area in front of the vehicle.
[0026] The processing circuit may analyze the forward-looking image so as to determine the traffic state of the vehicle based on the forward-looking image.
[0027] The reminder module may be a sounder, a display, or an indicator light, etc., to issue a corresponding reminder according to the traffic state.
[0028] Please refer to FIG. 1, which shows an implementation process of the lane traffic state reminding method provided by the embodiment of the present application.
[0029] As shown in FIG. 1, the lane passing state reminding method is applied to an electronic device, which may be the electronic device described in the above embodiment. The method includes: 101. Obtain a forward-looking image of the vehicle and extract image features of the forward-looking image.
[0030] The forward-looking image may be acquired by a forward-looking camera provided on the vehicle.
[0031] The image features may be the shape, color, position, and other features of each object in the image.
[0032] In some embodiments, the image features may be extracted by a preset image processing algorithm. Specifically, the image processing may include steps such as graying, image filtering, image edge enhancement, image edge detection, and feature extraction applied to the forward-looking image; the specific implementation may be determined according to the actual situation and the algorithm chosen.
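As a non-limiting illustration of these steps, the following sketch grays a tiny RGB image and applies a simple gradient-based edge check. The 0.299/0.587/0.114 weights are the common luma coefficients; the helper names and the threshold value are assumptions of this illustration, not the algorithm of the application:

```python
# Illustrative sketch only: graying and a simple edge check, standing in for
# the "graying ... image edge detection" steps described above. The helper
# names and the threshold value are assumptions, not the patented algorithm.

def to_gray(rgb_image):
    """Convert an RGB image (nested lists of (r, g, b) tuples) to grayscale."""
    return [[0.299 * r + 0.587 * g + 0.114 * b for (r, g, b) in row]
            for row in rgb_image]

def edge_strength(gray, x, y):
    """Sum of absolute horizontal and vertical differences at an interior pixel."""
    gx = gray[y][x + 1] - gray[y][x - 1]
    gy = gray[y + 1][x] - gray[y - 1][x]
    return abs(gx) + abs(gy)

def detect_edges(gray, threshold=30.0):
    """Return (x, y) coordinates of interior pixels whose gradient exceeds threshold."""
    h, w = len(gray), len(gray[0])
    return [(x, y) for y in range(1, h - 1) for x in range(1, w - 1)
            if edge_strength(gray, x, y) > threshold]

# Tiny 3x3 test image: black left column, white elsewhere.
img = [[(0, 0, 0), (255, 255, 255), (255, 255, 255)] for _ in range(3)]
gray = to_gray(img)
edges = detect_edges(gray)  # the single interior pixel sits on the black/white edge
```

A real implementation would of course operate on full camera frames and add the filtering and edge-enhancement stages named above.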
[0033] 102. Identify the lane information and the signal light information in the forward-looking image based on the image features.
[0034] The lane information may include lane direction information and lane line information, among others, so that the lane position and the lane travel direction can be obtained from the lane information.
[0035] The signal light information may include the lane direction type of the signal light and the corresponding signal type, so as to obtain the lane direction associated with the signal light and the corresponding signal type through the signal light information.
[0036] Specifically, after the image features of the forward-looking image are obtained, the features of the lane and the traffic light can be identified, and the lane information and the traffic light information obtained after the recognition can be obtained.
[0037] In some embodiments, the lane information and the signal light information may be obtained by a feature recognition algorithm. For example, the feature recognition algorithm identifies the shape and contour characteristics of the signal light, so as to determine whether an object is a signal light and to learn its specific position; then the lane direction indicated by the signal light and whether the signal is a red, green, or yellow light are determined. The information obtained in this way constitutes the signal light information.
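One conceivable way to decide whether a detected lamp is showing red, green, or yellow is a simple threshold test on its average color; the channel thresholds below are illustrative assumptions only, not values taken from the application:

```python
# Hedged illustration: classify the average color of a detected lamp region.
# The channel thresholds are assumptions chosen for illustration.

def classify_signal_color(r, g, b):
    """Return 'red', 'yellow', 'green', or 'unknown' for an average lamp color."""
    if r > 150 and g > 150 and b < 100:
        return "yellow"   # both red and green channels high
    if r > 150 and g < 100:
        return "red"
    if g > 150 and r < 100:
        return "green"
    return "unknown"
```

In practice the decision would be made on the pixel statistics of the lamp region located by the contour match.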
[0038] 103. Determine the current traffic state of the vehicle according to the lane information and the signal light information.
[0039] The current traffic state of the vehicle reflects whether there is a signal light along the vehicle's current path; if there is a signal light, the signal light information and the lane information are combined to determine whether the conditions for passing are met at that time.
[0040] In some embodiments, the traffic state needs to be determined according to the lane direction of the current lane and the corresponding signal light type of the lane direction.
[0041] Please refer to FIG. 2, which illustrates an application scenario of a lane traffic state reminding method provided by an embodiment of the present application.
[0042] As shown in FIG. 2, the application scenario in the figure shows a forward-looking image, which includes a lane 11 and a signal light 12. The lane 11 includes a lane line 111 and a driving arrow 112; the travel direction of the lane is indicated by the driving arrow 112 provided on the road, for example, the right turn arrow 112 in the figure. The signal light 12 may cover a plurality of lane direction types, such as the forward signal light for the forward direction shown in the figure and the right turn signal light 121 for the right-turn direction.
[0043] After identifying the lane information and the signal light information in the forward-looking image, the lane direction of the lane 11 can first be determined to be the right-turn direction, and the signal type can then be determined from the right turn signal light 121 (that is, the signal light used to control the right-turn lane).
[0044] If the signal type of the right turn signal light 121 is a red light at this time, the traffic state of the vehicle is the prohibited traffic state; if the signal type of the right turn signal light 121 is a green light at this time, the traffic state of the vehicle is the allowed traffic state.
[0045] It can be understood that, in addition to the above implementation, different judgments may be made according to actual conditions, for example, when there is no right turn signal light.
[0046] After the passing state is obtained, a corresponding reminder operation may be performed according to the passing state.
[0047] 104. Generate reminder information according to the passing state, and display the reminder information.
[0048] The reminder information may be voice information related to the passing state of the vehicle (for example, if there is a red light ahead, the voice prompt "Red light ahead, please wait" is played), text or graphics related to the passing state of the vehicle, or a control instruction that turns an indicator light on and off; the specific form of the reminder information may be determined according to actual conditions.
[0049] Displaying the reminder information may mean acquiring voice information corresponding to the passing state and playing it; acquiring text or pattern information corresponding to the passing state and displaying it; or acquiring a level signal corresponding to the passing state and using the level signal to turn a preset indicator light on and off. Of course, the specific display method can be designed as needed.
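The three display channels (voice, text or pattern, level signal) can be pictured as a lookup from traffic state to channel payload; the concrete message strings and the 0/1 level encoding below are assumptions for illustration:

```python
# Sketch of the three reminder channels described above. The message strings
# and the 0/1 level encoding are illustrative assumptions.

REMINDERS = {
    "prohibited": {"voice": "Red light ahead, please wait",
                   "text": "Red light ahead, please wait",
                   "level": 1},   # drive the warning indicator high
    "allowed": {"voice": "Green light, you may proceed",
                "text": "Green light, you may proceed",
                "level": 0},
}

def build_reminder(state, channel):
    """Return the payload to present on the given channel for a traffic state."""
    return REMINDERS[state][channel]
```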
[0050] Through the above display method, the driver can timely know the traffic state of the current traffic and road conditions, making it difficult for the driver to ignore the reminder information.
[0051] It can be seen from the above that image features are extracted from the forward-looking image, lane information and signal light information are identified based on the image features, the current traffic state of the vehicle is determined according to the lane information and the signal light information, and reminder information is generated according to the traffic state and displayed. In this application, the device can intelligently recognize the scene at a signal light intersection from the acquired forward-looking image and remind the driver whether the vehicle can pass safely at that moment, thereby improving safety during driving.
[0052] Please refer to FIG. 3, which shows an implementation process of image feature recognition provided by an embodiment of the present application.
[0053] As shown in FIG. 3, identifying the lane information and the signal light information in the forward-looking image based on the image features includes: 201. Acquire a lane reference feature.
[0054] The lane reference feature may be a preset feature parameter, and the feature parameter may be stored in a feature database at a specific location.
[0055] 202. Compare the image feature with the lane reference feature to determine whether there is a target feature that matches the lane reference feature in the image feature.
[0056] Wherein, the target feature includes a lane direction feature, and the lane direction feature is related to an object or graphic feature to which the forward direction of the lane belongs.
[0057] In some embodiments, the lane direction feature may be related to the shape feature of the driving arrow of the lane. For example, by extracting the shape feature of the driving arrow in the forward-looking image, and determining whether the shape feature is consistent with the left-turn arrow, the forward arrow, or the right-turn arrow, the direction of travel of the lane can be determined.
[0058] Of course, the driving arrow may be a driving arrow set on the road of the lane, or may be a driving arrow indicated on a street sign. The present application does not limit the location of the driving arrow.
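By way of illustration, comparing the shape of a binarized arrow patch against reference templates could look like the sketch below; the 5x5 templates are simplified stand-ins, not the actual lane reference features:

```python
# Toy template comparison for the driving-arrow shape feature. The 5x5
# templates are simplified stand-ins for real lane reference features.

FORWARD_ARROW = [
    "..#..",
    ".###.",
    "#####",
    "..#..",
    "..#..",
]
RIGHT_ARROW = [
    "..#..",
    "...#.",
    "#####",
    "...#.",
    "..#..",
]
TEMPLATES = {"forward": FORWARD_ARROW, "right": RIGHT_ARROW}

def similarity(a, b):
    """Number of cells on which two equally sized binary patches agree."""
    return sum(ca == cb for ra, rb in zip(a, b) for ca, cb in zip(ra, rb))

def classify_arrow(patch, templates=TEMPLATES):
    """Return the template name that best matches the observed arrow patch."""
    return max(templates, key=lambda name: similarity(patch, templates[name]))
```

A practical system would first normalize the extracted arrow region to the template size before comparison.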
[0059] 203. If yes, extract information corresponding to the target feature to obtain lane information, where the lane information includes lane direction information.
[0060] Extracting the information corresponding to the target feature may mean extracting, from an information database related to the target feature, the information having a mapping relationship with the target feature.
[0061] For example, if the target feature is a lane direction feature whose corresponding entry in the information database is "right-turn lane", then "right-turn lane" is the information corresponding to the target feature. This "right-turn lane" information, which indicates the forward direction of the lane, is the lane direction information.
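The mapping relationship between a matched target feature and its stored information can be pictured as a simple lookup table; the keys and values below are illustrative assumptions:

```python
# Illustrative stand-in for the information database: a mapping from a
# matched target feature to its lane information. Keys and values are assumed.

INFO_DATABASE = {
    "right_turn_arrow": "right-turn lane",
    "left_turn_arrow": "left-turn lane",
    "forward_arrow": "forward lane",
}

def extract_lane_info(target_feature_id):
    """Return the information mapped to a target feature, or None if absent."""
    return INFO_DATABASE.get(target_feature_id)
```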
[0062] It can be known from the above that by acquiring the lane reference feature and comparing the image feature with the lane reference feature, the lane information of the lane can be quickly and accurately obtained.
[0063] Please refer to FIG. 4, which shows an implementation process of obtaining lane information provided by an embodiment of the present application.
[0064] Under normal circumstances, the forward-looking image contains a large number of image features. If all image features had to be processed in real time while the vehicle moves forward, a large amount of computation would be required, resulting in a large delay. In order to improve the recognition rate of lane features, reduce the amount of computation in the recognition process, save computation time, and improve the reaction speed, comparing the image features with the lane reference features to determine whether there is a target feature matching the lane reference features among the image features includes: 301. Determine whether there is a lane line feature among the image features.
[0065] Wherein, the lane line is a line used for dividing the lane, and may be a line segment.
[0066] 302. If there is a lane line feature, the lane area of the lane is determined according to the lane line feature.
[0067] If the lane line feature corresponding to the lane line is detected, the lane area separated by the lane line can be determined.
[0068] With reference to FIG. 2, two lane lines 111 are shown in the figure, and the area enclosed by the lane line 111 is a lane area.
[0069] 303. In the lane area, determine whether there is a lane direction feature among the image features.
[0070] After the lane area of the lane is determined, object features outside the lane area can be ignored, that is, they are not recognized. This greatly reduces the number of features that must be recognized and the amount of computation in the recognition process, saving computation time and further improving the reaction speed.
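The computation-saving idea above (recognize only features inside the lane area) can be sketched as a simple containment filter; approximating the area between two lane lines by an axis-aligned bounding box is an assumption of this illustration:

```python
# Sketch of restricting recognition to the lane area. Approximating the area
# between two lane lines by an axis-aligned bounding box is an assumption.

def lane_area_from_lines(left_line_xs, right_line_xs, y_min, y_max):
    """Rough lane area (x0, y0, x1, y1) spanned by two detected lane lines."""
    return (min(left_line_xs), y_min, max(right_line_xs), y_max)

def features_in_area(features, area):
    """Keep only feature points that fall inside the lane area."""
    x0, y0, x1, y1 = area
    return [(x, y) for (x, y) in features if x0 <= x <= x1 and y0 <= y <= y1]

area = lane_area_from_lines([100, 120], [300, 320], 200, 480)
candidates = [(150, 300), (50, 300), (310, 450), (400, 100)]
inside = features_in_area(candidates, area)  # features outside the lane are dropped
```

Only the surviving feature points would then be passed to the lane direction recognition, which is what cuts the amount of computation.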
[0071] Please refer to FIG. 5, which shows an implementation process of determining a passing state provided by an embodiment of the present application.
[0072] In some embodiments, the signal light information includes the lane direction type of the signal light and the corresponding signal type.
[0073] With reference to FIG. 2, determining the current traffic state of the vehicle based on the lane information and the signal light information includes: 401. Determine the traffic direction in which the vehicle is about to travel according to the lane direction information.
[0074] For example, if the obtained lane direction information is "right-turn lane", it may be determined that the traffic direction of the vehicle in the lane is right-turn.
[0075] 402. Acquire a signal type of a signal light corresponding to a traffic direction.
[0076] The signal light corresponding to the traffic direction, for example the right turn signal light 121, can be found through the preset relationship in the signal light information, and it can then be determined whether the signal type of the right turn signal light 121 is a red light or a green light.
[0077] 403. Determine the current traffic state of the vehicle according to the signal type, where the traffic state includes a traffic allowed state and a traffic prohibited state.
[0078] If it is a red light at this time, the current traffic state of the vehicle is the prohibited traffic state; if it is a green light at this time, the current traffic state of the vehicle is the allowed traffic state.
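Steps 401-403 amount to a lookup-and-map: find the signal governing the lane direction, then map its signal type to a traffic state. A minimal sketch follows; treating a yellow light as prohibiting passage is an assumed policy of this illustration, not a statement of the application:

```python
# Minimal sketch of steps 401-403. signal_lights maps lane direction types to
# current signal types; treating yellow as prohibiting is an assumed policy.

def traffic_state(lane_direction, signal_lights):
    """Map the signal light governing lane_direction to a traffic state."""
    signal = signal_lights.get(lane_direction)
    if signal == "green":
        return "allowed"
    if signal in ("red", "yellow"):
        return "prohibited"
    return "unknown"   # no signal light found for this direction

lights = {"right": "red", "forward": "green"}
```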
[0079] As can be seen from the above, by using the obtained lane direction information in combination with the signal type of the signal light, the current traffic state of the vehicle can be judged more accurately, so that the user is reminded of the traffic state more accurately and the safety of the vehicle's passage is ensured.
[0080] In some embodiments, the traffic state may also include a normal driving state and an attention reminding state, and determining the current traffic state of the vehicle based on the lane information and the signal light information includes: if neither the lane information nor the signal light information can be identified, the vehicle is currently in the normal driving state; if the lane information cannot be identified but the signal light information is present, the vehicle is currently in the attention reminding state; and if the lane information is present but the signal light information cannot be identified, the vehicle is currently in the normal driving state.
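The cases above form a small decision table over which pieces of information were recognized; a direct transcription (the label for the "both recognized" case is an assumption — that case is handled by steps 401-403):

```python
# Direct transcription of the decision table above; the label returned for the
# "both recognized" case is an assumption (it is handled by steps 401-403).

def overall_state(lane_info_found, signal_info_found):
    if not lane_info_found and not signal_info_found:
        return "normal driving"
    if not lane_info_found and signal_info_found:
        return "attention reminder"
    if lane_info_found and not signal_info_found:
        return "normal driving"
    return "evaluate signal for lane"
```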
[0081] By judging according to the different lane information and signal light information obtained, the driver can be given sufficient prompts under different driving environments, thereby ensuring safer passage of the vehicle.
[0082] Please refer to FIG. 6, which shows a lane passing state reminding device provided by an embodiment of the present application.
[0083] As shown in FIG. 6, the lane passing state reminding device 5 includes a forward-looking camera 51, a processing circuit 52 electrically connected to the forward-looking camera 51, and a reminder module 53 electrically connected to the processing circuit 52, wherein: the forward-looking camera 51 is used to obtain a forward-looking image of the vehicle.
[0084] The forward-looking camera 51 may be provided with a CCD (Charge-Coupled Device) image sensor or a CMOS (Complementary Metal-Oxide-Semiconductor) image sensor; the specific type is not limited in this application.
[0085] The processing circuit 52 is used to extract image features of the forward-looking image, identify lane information and signal light information in the forward-looking image based on the image features, determine the current traffic state of the vehicle based on the lane information and the signal light information, and generate reminder information according to the traffic state.
[0086] The processing circuit 52 may include a processor, a memory, and corresponding circuit function modules; the processor is electrically connected to the memory.
[0087] Specifically, the memory may be used to store computer programs and data. The computer program stored in the memory contains instructions executable by the processor; by calling the computer program stored in the memory, the processor can execute the lane traffic state reminding method described above.
[0088] In some embodiments, the processing circuit 52 is specifically configured to: obtain a lane reference feature; compare the image feature with the lane reference feature, and determine whether there is a A target feature that matches the lane reference feature, where the target feature includes a lane line feature and a lane direction feature; if so, extract information corresponding to the target feature to obtain the lane information, where the lane information includes Lane direction information.
[0089] The reminder module 53 is used to display reminder information.
[0090] In some embodiments, the reminder module may include one of a sounder, a display, or an indicator light.
[0091] The sound generator is used for playing voice information, and the voice information corresponds to the passing state. For example, if the signal light ahead is a red light, the sound generator can play the voice message "Red light ahead, please wait".
[0092] The display is used to display text or pattern information, and the text or pattern information corresponds to the passing state. For example, if the signal light ahead is a red light, the text "Red light ahead, please wait" or a pattern representing that meaning is displayed.
[0093] The indicator light is used to turn on and off under the control of the level signal of the reminder module, and the level signal corresponds to the passing state. For example, there are three indicator lights corresponding to different signal lights. If the signal light in front is a red light, the indicator light corresponding to the red light is controlled to light up, and so on.
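The level-signal control of the three indicator lights in the example can be pictured as below; real hardware would write these levels to output pins, and the 1 = lit / 0 = off encoding is an assumption of this illustration:

```python
# Simulation of driving three indicator lights from level signals, one per
# signal type as in the example above. 1 = lit, 0 = off (assumed encoding);
# real hardware would write these levels to output pins.

INDICATORS = ("red", "yellow", "green")

def set_indicators(active_signal):
    """Return the level for each indicator given the current signal type."""
    return {name: int(name == active_signal) for name in INDICATORS}

levels = set_indicators("red")
```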
[0094] As can be seen from the above, the lane passing state reminding device extracts image features from the forward-looking image, identifies lane information and signal light information based on the image features, determines the current traffic state of the vehicle based on the lane information and the signal light information, and generates and displays reminder information according to the traffic state. In this application, the device can intelligently recognize the scene at a signal light intersection from the acquired forward-looking image and remind the driver whether the vehicle can pass safely at that moment, thereby improving safety during driving.
[0095] Persons of ordinary skill in the art may understand that all or part of the steps in the various methods of the above embodiments may be completed by a program instructing relevant hardware, and the program may be stored in a computer-readable storage medium. The storage medium may include: a read-only memory (ROM, Read-Only Memory), a random access memory (RAM, Random Access Memory), a magnetic disk, an optical disk, and the like.
[0096] In the embodiments of the present application, the lane traffic state reminding device and the lane traffic state reminding method of the above embodiments belong to the same concept. Any of the method steps provided in the lane traffic state reminding method embodiments can be run on the lane traffic state reminding device; the specific implementation process is described in detail in the method embodiments, and any combination of them may be used to form an optional embodiment of the present application, which will not be repeated here.
[0097] The embodiments of the present application have been described in detail above with reference to the drawings, but the present application is not limited to the above-mentioned embodiments; within the scope of knowledge possessed by those of ordinary skill in the art, various changes can also be made without departing from the purpose of the present application.

Claims (11)

  1. A target tracking processing method, applied to an electronic device including at least three processing units, wherein the method includes:
    acquiring, through a first processing unit, a plurality of corner points of a target image, and tracking the corresponding positions of the corner points in multiple frames of the target image to obtain tracking data of the corner points;
    detecting and identifying, through a second processing unit, a target object in the target image to determine an image range corresponding to the target object; and
    determining, through a third processing unit, a target range of the target object according to the tracking data of the corner points and the image range of the target object.
  2. The target tracking processing method according to claim 1, wherein acquiring the plurality of corner points of the target image and tracking the corresponding positions of the corner points in multiple frames of the target image to obtain the tracking data of the corner points includes:
    acquiring a preset area of the target image;
    constructing an image pyramid of the preset area;
    determining a plurality of corner points according to the image pyramid; and
    tracking the corresponding positions of the corner points in multiple frames of the target image to obtain the tracking data of the corner points.
  3. The target tracking processing method according to claim 2, wherein tracking the corresponding positions of the corner points in multiple frames of the target image to obtain the tracking data of the corner points includes:
    performing forward pyramid LK tracking and reverse pyramid LK tracking on the corner points respectively to obtain preliminary tracking data; and
    performing cross-validation on the preliminary tracking data to obtain tracking data of the corner points that satisfy a preset condition among the plurality of corner points.
  4. The target tracking processing method according to claim 1, wherein determining the target range of the target object according to the tracking data of the corner points and the image range of the target object includes:
    determining target corner points corresponding to the target object according to the tracking data of the corner points and the image range of the target object; and
    determining the target range of the target object according to the tracking data of the target corner points and the image range of the target object.
  5. The target tracking processing method according to claim 4, wherein the tracking data of the corner points includes a tracking range corresponding to each corner point in the corner point set; and
    determining the target range of the target object according to the tracking data of the target corner points and the image range of the target object includes:
    determining the confidence of the tracking range of the corner points and of the image range of the target object; and
    determining the target range of the target object according to the confidences of the tracking range and of the image range, weighted by preset weighting values.
  6. The target tracking processing method according to claim 4, wherein determining the target corner points corresponding to the target object according to the tracking data of the corner points and the image range of the target object includes:
    acquiring a preset mapping relationship between the target image and the image range;
    determining, according to the mapping relationship, a plurality of mapped pixels of the image range in the target image;
    determining, according to the mapped pixels, the pixel range enclosed by the image range in the target image; and
    taking the corner points located within the pixel range as the target corner points.
  7. The target tracking processing method according to claim 6, wherein before acquiring the preset mapping relationship between the target image and the image range, the method further includes:
    constructing, through the first processing unit, a mapping relationship between pixels in the target image and the image range.
  8. The target tracking processing method according to any one of claims 1-7, wherein the first processing unit is a vector operation processor.
  9. An electronic device, wherein the electronic device includes at least three processing units, wherein:
    the first processing unit is configured to acquire a plurality of corner points of a target image, and track the corresponding positions of the corner points in multiple frames of the target image to obtain tracking data of the corner points;
    the second processing unit is configured to detect a target object in the target image and determine an image range corresponding to the target object; and
    the third processing unit is configured to determine a target range of the target object according to the tracking data of the corner points and the image range of the target object.
  10. The electronic device according to claim 9, wherein the first processing unit is a vector operation processor.
    1. A lane passing state reminding method, applied to an electronic device, characterized by comprising: acquiring a forward-looking image of a vehicle and extracting image features of the forward-looking image; identifying lane information and signal light information in the forward-looking image based on the image features; determining the current traffic state of the vehicle according to the lane information and the signal light information; and generating reminder information according to the traffic state and displaying the reminder information.
    2. The lane traffic state reminding method according to claim 1, wherein identifying lane information and signal light information in the forward-looking image based on the image features includes: obtaining a lane reference feature; comparing the image features with the lane reference feature to determine whether there is a target feature matching the lane reference feature among the image features, wherein the target feature includes a lane direction feature; and if so, extracting the information corresponding to the target feature to obtain the lane information, wherein the lane information includes lane direction information.
    3. The lane traffic state reminding method according to claim 2, wherein the target feature further includes a lane line feature, and comparing the image features with the lane reference feature to determine whether there is a target feature matching the lane reference feature among the image features includes: determining whether the lane line feature exists among the image features; if the lane line feature exists, determining the lane area of the lane according to the lane line feature; and determining, within the lane area, whether the lane direction feature exists among the image features.
    4. The lane traffic state reminding method according to claim 3, wherein the lane direction feature is related to the shape feature of the driving arrow of the lane.
    5. The lane traffic state reminding method according to claim 2, wherein the signal light information includes the lane direction type of the signal light and the corresponding signal type, and determining the current traffic state of the vehicle according to the lane information and the signal light information includes: determining the travel direction in which the vehicle is about to pass according to the lane direction information; acquiring the signal type of the signal light corresponding to the travel direction; and determining the current traffic state of the vehicle according to the signal type, wherein the traffic state includes an allowed-to-pass state and a forbidden-to-pass state.
    6. The lane traffic state reminding method according to claim 2, wherein the traffic state includes a normal driving state and an attention reminder state, and determining the current traffic state of the vehicle according to the lane information and the signal light information includes: if neither the lane information nor the signal light information can be recognized, the vehicle is currently in the normal driving state;
    if the lane information cannot be recognized but the signal light information is present, the vehicle is currently in the attention reminder state; and if the lane information is present but the signal light information cannot be recognized, the vehicle is currently in the normal driving state.
    7. The lane traffic state reminding method according to any one of claims 1-6, wherein generating reminder information according to the traffic state and displaying the reminder information includes: acquiring voice information corresponding to the traffic state and playing the voice information; acquiring text or pattern information corresponding to the traffic state and displaying the text or pattern information; or acquiring a level signal corresponding to the traffic state and controlling, through the level signal, the turning on and off of a preset indicator light.
    8. A lane passing state reminding device, characterized by comprising a forward-looking camera, a processing circuit electrically connected to the forward-looking camera, and a reminding module electrically connected to the processing circuit, wherein: the forward-looking camera is used to obtain a forward-looking image of the vehicle; the processing circuit is used to extract image features of the forward-looking image, identify lane information and signal light information in the forward-looking image based on the image features, determine the current traffic state of the vehicle according to the lane information and the signal light information, and generate reminder information according to the traffic state; and the reminding module is used to display the reminder information.
    9. The lane traffic state reminding device according to claim 8, wherein the processing circuit is specifically used to: obtain a lane reference feature; compare the image features with the lane reference feature to determine whether there is a target feature matching the lane reference feature among the image features, wherein the target feature includes a lane line feature and a lane direction feature; and if so, extract the information corresponding to the target feature to obtain the lane information, wherein the lane information includes lane direction information.
    10. The lane passing state reminding device according to claim 8 or 9, wherein the reminding module includes one of a sound generator, a display, or an indicator light; the sound generator is used to play voice information corresponding to the traffic state; the display is used to display text or pattern information corresponding to the traffic state; and the indicator light is turned on and off under the control of a level signal of the reminding module, the level signal corresponding to the traffic state.
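The state logic of claims 5-7 above can be sketched in a few lines. This is an illustrative reading only: the state names, the dictionary mapping lane-direction types to signal types, and the reminder payload are assumptions for demonstration; the claims do not prescribe a concrete data model.

```python
# Traffic states named in claims 5 and 6 (labels are illustrative).
ALLOWED, FORBIDDEN = "allowed_to_pass", "forbidden_to_pass"
NORMAL, ATTENTION = "normal_driving", "attention_reminder"

def traffic_state(lane_direction, signals):
    """lane_direction: recognized travel direction of the ego lane
    (e.g. 'left', 'straight'), or None when lane recognition fails.
    signals: dict mapping lane-direction type -> signal type
    ('green'/'red'), or None when no signal light is recognized."""
    if lane_direction is None and signals is None:
        return NORMAL            # claim 6: neither recognized
    if lane_direction is None:
        return ATTENTION         # claim 6: signal present, lane unknown
    if signals is None:
        return NORMAL            # claim 6: lane known, no signal recognized
    # claim 5: look up the signal governing the recognized travel direction
    signal = signals.get(lane_direction)
    return ALLOWED if signal == "green" else FORBIDDEN

def make_reminder(state):
    """Claim 7: map the state onto the display modalities (voice, text or
    pattern, or a level signal driving an indicator light)."""
    return {
        "voice": f"traffic state: {state}",
        "text": state.replace("_", " "),
        "led_level": 1 if state in (FORBIDDEN, ATTENTION) else 0,
    }
```

For example, with a recognized left-turn lane and a red left-arrow signal, `traffic_state("left", {"left": "red"})` yields the forbidden-to-pass state, whose reminder drives the indicator's level signal high.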
PCT/CN2018/118619 2018-11-13 2019-01-17 Lane traffic status reminder method and device WO2020098004A1 (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN201811347025.3 2018-11-13
CN201811347025.3A CN109859509A (en) 2018-11-13 2018-11-13 Lane state based reminding method and equipment

Publications (1)

Publication Number Publication Date
WO2020098004A1 true WO2020098004A1 (en) 2020-05-22

Family

ID=66890031

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2018/118619 WO2020098004A1 (en) 2018-11-13 2019-01-17 Lane traffic status reminder method and device

Country Status (2)

Country Link
CN (1) CN109859509A (en)
WO (1) WO2020098004A1 (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114475429A (en) * 2022-02-21 2022-05-13 重庆长安汽车股份有限公司 Traffic light reminding method and system combining with driving intention of user and automobile

Families Citing this family (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109859509A (en) * 2018-11-13 2019-06-07 惠州市德赛西威汽车电子股份有限公司 Lane state based reminding method and equipment
CN110562266A (en) * 2019-08-28 2019-12-13 北京小马慧行科技有限公司 Vehicle running control method and device, storage medium and processor
CN110745141A (en) * 2019-10-28 2020-02-04 上海博泰悦臻网络技术服务有限公司 Driving assistance method and device
CN110827556A (en) * 2019-11-12 2020-02-21 北京小米移动软件有限公司 Indication state prompting method and device of traffic signal lamp and storage medium
CN111554108A (en) * 2020-04-30 2020-08-18 深圳市金溢科技股份有限公司 Traffic signal lamp display method, vehicle-mounted unit, road side unit and system
CN112001235A (en) * 2020-07-13 2020-11-27 浙江大华汽车技术有限公司 Vehicle traffic information generation method and device and computer equipment
CN111967368B (en) * 2020-08-12 2022-03-11 广州小鹏自动驾驶科技有限公司 Traffic light identification method and device
CN112327855A (en) * 2020-11-11 2021-02-05 东软睿驰汽车技术(沈阳)有限公司 Control method and device for automatic driving vehicle and electronic equipment
CN117690115A (en) * 2024-02-04 2024-03-12 杭州海康威视系统技术有限公司 Image processing method, device and equipment

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103236178A (en) * 2013-04-07 2013-08-07 江苏物联网研究发展中心 Signal lamp mode recognition reminding system and method
CN104217598A (en) * 2014-09-04 2014-12-17 苏州大学 Lane direction prompting device
CN105185140A (en) * 2015-09-30 2015-12-23 上海修源网络科技有限公司 Auxiliary driving method and system
US20170154527A1 (en) * 2015-11-30 2017-06-01 Denso Corporation Apparatus and method for driving assistance
CN107316485A (en) * 2017-07-07 2017-11-03 深圳中泰智丰物联网科技有限公司 Reminding method, suggestion device and the terminal device of road state
CN109859509A (en) * 2018-11-13 2019-06-07 惠州市德赛西威汽车电子股份有限公司 Lane state based reminding method and equipment

Family Cites Families (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
DE102009057553A1 (en) * 2009-12-09 2011-06-16 Conti Temic Microelectronic Gmbh A method for assisting the driver of a road-bound vehicle in the vehicle guidance
CN102117546B (en) * 2011-03-10 2013-05-01 上海交通大学 On-vehicle traffic light assisting device
CN105930791B (en) * 2016-04-19 2019-07-16 重庆邮电大学 The pavement marking recognition methods of multi-cam fusion based on DS evidence theory
CN107978165A (en) * 2017-12-12 2018-05-01 南京理工大学 Intersection identifier marking and signal lamp Intellisense method based on computer vision


Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114475429A (en) * 2022-02-21 2022-05-13 重庆长安汽车股份有限公司 Traffic light reminding method and system combining with driving intention of user and automobile
CN114475429B (en) * 2022-02-21 2024-03-22 重庆长安汽车股份有限公司 Traffic light reminding method and system combined with user driving intention and automobile

Also Published As

Publication number Publication date
CN109859509A (en) 2019-06-07

Similar Documents

Publication Publication Date Title
WO2020098004A1 (en) Lane traffic status reminder method and device
US11694430B2 (en) Brake light detection
US11897471B2 (en) Intersection detection and classification in autonomous machine applications
CN113168505B (en) Regression-based line detection for autonomous driving machines
CN110494863B (en) Determining drivable free space of an autonomous vehicle
CN111133447B (en) Method and system for object detection and detection confidence for autonomous driving
CN108571974B (en) Vehicle positioning using a camera
WO2020000251A1 (en) Method for identifying video involving violation at intersection based on coordinated relay of video cameras
CN103105174B (en) A kind of vehicle-mounted outdoor scene safety navigation method based on AR augmented reality
US8976040B2 (en) Intelligent driver assist system based on multimodal sensor fusion
CN113785302A (en) Intersection attitude detection in autonomous machine applications
JP5966640B2 (en) Abnormal driving detection device and program
CN112965504A (en) Remote confirmation method, device and equipment based on automatic driving and storage medium
CN114902295A (en) Three-dimensional intersection structure prediction for autonomous driving applications
JP2018142309A (en) Virtual roadway generating apparatus and method
CN112347829A (en) Determining lane allocation of objects in an environment using obstacle and lane detection
JP5774770B2 (en) Vehicle periphery monitoring device
WO2021227520A1 (en) Visual interface display method and apparatus, electronic device, and storage medium
CN109506664A (en) Device and method is provided using the guidance information of crossing recognition result
JP2024023319A (en) Emergency vehicle detection
CN114270294A (en) Gaze determination using glare as input
CN110386065A (en) Monitoring method, device, computer equipment and the storage medium of vehicle blind zone
CN115136148A (en) Projecting images captured using a fisheye lens for feature detection in autonomous machine applications
JPH01265400A (en) Recognizing device for vehicle sign
JP4752158B2 (en) Environment complexity calculation device, environment recognition degree estimation device and obstacle alarm device

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 18940089

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

32PN Ep: public notification in the ep bulletin as address of the addressee cannot be established

Free format text: NOTING OF LOSS OF RIGHTS PURSUANT TO RULE 112(1) EPC (EPO FORM 1205A DATED 200821)

122 Ep: pct application non-entry in european phase

Ref document number: 18940089

Country of ref document: EP

Kind code of ref document: A1