CN115713748A - Traffic light detection result processing method and device based on time sequence - Google Patents

Traffic light detection result processing method and device based on time sequence

Info

Publication number
CN115713748A
CN115713748A
Authority
CN
China
Prior art keywords
traffic light
detection result
image
detection
light state
Prior art date
Legal status
Pending
Application number
CN202211485122.5A
Other languages
Chinese (zh)
Inventor
高强 (Gao Qiang)
Current Assignee
Jiuzhi Suzhou Intelligent Technology Co ltd
Jiuzhizhixing Beijing Technology Co ltd
Original Assignee
Jiuzhi Suzhou Intelligent Technology Co ltd
Jiuzhizhixing Beijing Technology Co ltd
Priority date
Filing date
Publication date
Application filed by Jiuzhi Suzhou Intelligent Technology Co ltd, Jiuzhizhixing Beijing Technology Co ltd filed Critical Jiuzhi Suzhou Intelligent Technology Co ltd
Priority to CN202211485122.5A
Publication of CN115713748A

Landscapes

  • Traffic Control Systems (AREA)

Abstract

The invention discloses a traffic light detection result processing method and device based on time sequence, relating to the technical field of automatic driving. One embodiment of the method comprises: inputting multiple frames of images acquired by a vehicle into a target detection model to obtain a detection result for each frame; determining the projection points of the traffic lights in the images according to the positions of the traffic lights at the intersection where the vehicle is located; determining, with a preset matching algorithm, whether each projection point matches the detection result of the image in which it lies, and updating that detection result if it does not; and, for each frame, determining based on the time sequence relation of the traffic light transfer process whether the traffic light state in the detection result of the current frame needs to be corrected and, if so, correcting it according to the traffic light states in the detection results of the other frames. The embodiment solves the problem that a traffic light state corrected from a false detection result may fail to satisfy the time sequence relation.

Description

Traffic light detection result processing method and device based on time sequence
Technical Field
The invention relates to the technical field of automatic driving, in particular to a traffic light detection result processing method and device based on time sequence.
Background
The detection and identification of traffic lights is a very important part of an automatic driving system: the detected traffic light state helps the vehicle make correct driving decisions. However, detection results are affected by interference, and false detections occur.
For false detections, the prior art generally adopts maximum-value smoothing: it takes the detection results of several recent frames, counts the occurrences of each traffic light state among them, and selects the most frequent state as the smoothed state of the current frame. Because this method ignores the time sequence relation of traffic light state transitions, the smoothed state may still violate that relation.
In view of this, a traffic light detection result processing method based on time sequence is needed to solve the problem that the corrected traffic light state cannot satisfy the time sequence relation.
Disclosure of Invention
In view of this, embodiments of the present invention provide a method for processing a traffic light detection result based on a time sequence, which can correct a traffic light state detected by mistake based on a time sequence relationship in a traffic light transfer process.
In a first aspect, an embodiment of the present invention provides a traffic light detection result processing method based on a time sequence, including:
inputting a plurality of frames of images acquired by a vehicle into a pre-trained target detection model to obtain the detection result of each frame of image;
determining a projection point of the traffic light in the image according to the position of the traffic light at the intersection where the vehicle is located;
for each of the projection points: determining whether the projection point matches the detection result of the image in which it is located, and if not, updating the detection result of that image;
for each frame image containing the traffic light: and determining whether the traffic light state in the detection result of the current frame image needs to be corrected or not based on the time sequence relation in the traffic light state transfer process, and if so, correcting the traffic light state in the detection result of the current frame image according to the traffic light state in the detection results of other frame images.
In a second aspect, an embodiment of the present invention provides a traffic light detection result processing apparatus based on a time sequence, including:
the detection module is configured to input a plurality of frames of images acquired by the vehicle into a pre-trained target detection model to obtain a detection result of each frame of image;
the projection module is configured to determine a projection point of the traffic light in the image according to the position of the traffic light at the intersection where the vehicle is located;
a matching module configured to, for each of the projection points: determine whether the projection point matches the detection result of the image in which it is located, and if not, update the detection result of that image;
a correction module configured to, for each frame image containing the traffic light: and determining whether the traffic light state in the detection result of the current frame image needs to be corrected or not based on the time sequence relation in the traffic light state transfer process, and if so, correcting the traffic light state in the detection result of the current frame image according to the traffic light state in the detection results of other frame images.
In a third aspect, an embodiment of the present invention provides an electronic device, comprising a memory, a processor, and a program stored in the memory and executable on the processor; when the processor executes the program, the method according to any of the above embodiments is implemented.
In a fourth aspect, an embodiment of the present invention provides a computer-readable storage medium, on which a program is stored, and the program, when executed by a processor, implements the method according to any one of the embodiments.
One embodiment of the above invention has the following advantages or benefits. Multiple frames of images collected by the vehicle are input into a target detection model to obtain a detection result for each frame; the projection points of the traffic lights in the images are determined according to the positions of the traffic lights at the intersection where the vehicle is located; for each projection point, a preset matching algorithm determines whether it matches the detection result of the image in which it lies, and if not, that detection result is updated; and for each frame, the time sequence relation of the traffic light transfer process determines whether the traffic light state in the detection result of the current frame needs to be corrected, in which case it is corrected according to the traffic light states in the detection results of the other frames. The corrected traffic light state thus satisfies the time sequence relation of the traffic light transfer process, which solves the problem that a state corrected from a false detection result cannot meet the time sequence relation and improves the safety of automatic driving.
Further effects of the above optional implementations will be described below in connection with the embodiments.
Drawings
The drawings are included to provide a better understanding of the invention and are not to be construed as unduly limiting the invention. Wherein:
fig. 1 is a flowchart of a method for processing a traffic light detection result based on a time sequence according to an embodiment of the present invention;
fig. 2 is a schematic diagram of a traffic light detection result processing apparatus based on time sequence according to an embodiment of the present invention;
fig. 3 is a schematic block diagram of a computer system suitable for use in implementing a terminal device or server of an embodiment of the invention.
Detailed Description
Exemplary embodiments of the invention are described below with reference to the accompanying drawings, in which various details of embodiments of the invention are included to assist understanding, and which are to be considered as merely exemplary. Accordingly, those of ordinary skill in the art will recognize that various changes and modifications of the embodiments described herein can be made without departing from the scope and spirit of the invention. Also, descriptions of well-known functions and constructions are omitted in the following description for clarity and conciseness.
The detection and identification of traffic lights are important components of an automatic driving system, but interference often causes false detections in the detection results. The current correction method for false detections is the maximum-value smoothing algorithm, which does not consider the time sequence relation of the traffic light transfer process and therefore cannot effectively handle false detections in which several consecutive frames do not conform to the time sequence transitions.
In view of this, according to fig. 1, an embodiment of the present invention provides a method for processing a traffic light detection result based on a time sequence, including:
step 101, inputting a plurality of frames of images acquired by a vehicle into a pre-trained target detection model, and obtaining a detection result of each frame of image.
The images are acquired by vehicle sensors, which include cameras, lidars, and the like. A common general-purpose model such as YOLOv3, YOLOv5, or Faster R-CNN is selected as the target detection model, and the multiple frames of images are input into it to obtain a detection result for each frame, where the detection result includes the positions of the traffic lights and their state information.
And 102, determining a projection point of the traffic light in the image according to the position of the traffic light at the intersection where the vehicle is located.
Combining the topological information of the map, the position information of all traffic lights within a preset radius of the intersection where the vehicle is located is obtained from the map, and that position information is projected into the coordinate system of each frame of image to obtain the projection points.
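The projection step above can be sketched with a pinhole camera model; this is an illustrative sketch, not the patent's implementation, and the matrix names (`extrinsic`, `intrinsic`) and sample values are assumptions:

```python
import numpy as np

def project_to_image(points_world, extrinsic, intrinsic):
    """Project 3-D traffic-light positions (map/world frame) into pixel
    coordinates with a pinhole camera model.

    points_world: (N, 3) traffic-light centers from the map.
    extrinsic:    (4, 4) world-to-camera transform.
    intrinsic:    (3, 3) camera matrix K.
    Returns an (N, 2) array of projection points (u, v).
    """
    n = points_world.shape[0]
    homog = np.hstack([points_world, np.ones((n, 1))])  # homogeneous coords (N, 4)
    cam = (extrinsic @ homog.T)[:3]                     # camera-frame points (3, N)
    pix = intrinsic @ cam                               # unnormalized pixels (3, N)
    return (pix[:2] / pix[2]).T                         # divide by depth

# Toy example: identity extrinsics, focal length 1000, principal point (640, 360)
K = np.array([[1000.0, 0.0, 640.0],
              [0.0, 1000.0, 360.0],
              [0.0, 0.0, 1.0]])
T = np.eye(4)
lights = np.array([[1.0, 0.5, 10.0]])  # one light 10 m ahead of the camera
print(project_to_image(lights, T, K))  # -> [[740. 410.]]
```

Each resulting (u, v) point is then matched against the detection results of the corresponding frame.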
Step 103, for each projection point: and determining whether the detection result of the projection point is matched with the detection result of the image where the projection point is located, and updating the detection result of the image where the projection point is located when the detection result of the projection point is not matched with the detection result of the image where the projection point is located.
For each projection point, the projection point is matched against the detection result of its image according to a preset matching algorithm; for a projection point that finds no match in one frame, the detection result of that frame is updated using the detection results matched to the projection point in other frames. The preset matching algorithm can be the KM algorithm (Kuhn-Munkres algorithm), the Hungarian algorithm, or another matching algorithm; the embodiment of the invention prefers the KM algorithm.
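The KM/Hungarian matching can be illustrated with SciPy's `linear_sum_assignment`, which solves the same minimum-cost assignment problem; this is a sketch under the assumption that the cost is the pixel distance between projection points and detection-box centers (the distance values below are made up):

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

# Cost matrix: pixel distance between each projection point (rows)
# and each detection-box center (columns).
cost = np.array([[ 3.0, 40.0, 55.0],
                 [38.0,  4.0, 60.0],
                 [52.0, 47.0,  6.0]])

# Minimum-cost one-to-one matching of projection points to centers.
rows, cols = linear_sum_assignment(cost)
matches = [(int(r), int(c)) for r, c in zip(rows, cols)]
print(matches)  # -> [(0, 0), (1, 1), (2, 2)]
```

Pairs whose distance exceeds the set threshold would be discarded before (or after) the assignment, as described below.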
Step 104, for each frame image containing traffic lights: when the traffic light state needs to be corrected, correct the traffic light state in the detection result of the current frame image according to the traffic light states in the detection results of the other frame images.
And based on the time sequence relation in the traffic light state transfer process, if the traffic light state in the detection result of the current frame image needs to be corrected, the traffic light state in the detection result of the current frame image is corrected according to the traffic light state in the detection results of other frame images, so that the time sequence relation of the traffic light state transfer is met.
For example, suppose 5 frames of images are detected in total and the traffic light states at a certain position in their detection results are green, green, yellow, yellow, and green. If the green light in the detection result of the fifth frame does not meet the time sequence relation of traffic light state transitions, it needs to be corrected according to the states of the other frames: for example, the state of the current frame can be corrected from green to red, or the state of the fourth frame can be corrected from yellow to red.
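A minimal sketch of the timing check implied above, assuming a simple green → yellow → red → green cycle (real signal plans can differ, so the transition table is an assumption):

```python
# Legal transitions in an assumed light cycle: green -> yellow -> red -> green.
NEXT = {"green": "yellow", "yellow": "red", "red": "green"}

def violates_timing(prev_state, cur_state):
    """A transition is plausible if the state is unchanged or moves
    exactly one step along the cycle; anything else violates timing."""
    return cur_state != prev_state and NEXT.get(prev_state) != cur_state

history = ["green", "green", "yellow", "yellow", "green"]
# yellow -> green skips red, so the last frame needs correction.
print(violates_timing(history[-2], history[-1]))  # -> True
```

A frame flagged by such a check is the one corrected from the neighboring frames' states.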
In the embodiment of the invention, multiple frames of images collected by the vehicle are input into a target detection model to obtain a detection result for each frame; the projection points of the traffic lights in the images are determined according to the positions of the traffic lights at the intersection where the vehicle is located; whether each projection point matches the detection result of its image is determined according to a preset matching algorithm, and that detection result is updated if it does not; and for each frame, whether the traffic light state in the detection result of the current frame needs to be corrected is determined based on the time sequence relation of the traffic light transfer process, and if so, it is corrected according to the traffic light states in the detection results of the other frames. The corrected traffic light state satisfies the time sequence relation of the traffic light transfer process, which solves the problem that a state corrected from a false detection result cannot meet the time sequence relation and improves the safety of automatic driving.
In one embodiment of the present invention, the detection result includes: detecting frame information;
determining whether the projection point is matched with the detection result of the image where the projection point is located, including:
acquiring a central point of the detection frame according to the detection frame information;
and determining whether the projection point is matched with the central point or not according to a preset matching algorithm.
The preset matching algorithm can be the KM algorithm, the Hungarian algorithm, or another matching algorithm; the embodiment of the invention prefers the KM algorithm. The detection result of an image includes traffic light detection frame information; the center point of each detection frame is obtained from this information, and the KM algorithm determines whether each projection point matches a center point, yielding one-to-one matching pairs of projection points and center points.
From each matching pair, the correspondence between the position of a traffic light in the detection result and the position of a particular traffic light on the actual map can be obtained.
In the embodiment of the invention, the center point of the detection frame is obtained from the detection frame information in the detection result, one-to-one matching pairs of projection points and center points are obtained with the preset matching algorithm, and from these pairs the correspondence between each traffic light position in the detection result and a traffic light position on the actual map can be obtained.
In an embodiment of the present invention, determining whether the projection point matches the central point according to a preset matching algorithm includes:
and calculating the distance between the projection point and the central point, wherein if the distance is greater than a set threshold value, the projection point is not matched with the central point.
The first step in determining whether a projection point matches a center point under the preset matching algorithm is to calculate the distance between them: the Euclidean distance is computed from the coordinates of the projection point and the center point, and if it is greater than a set threshold, the projection point is too far from the center point and the two are considered unmatched. For example, with a set threshold of 20, a projection point and center point at distance 30, which is greater than 20, are considered unmatched.
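The distance gate can be sketched directly from the example (threshold 20, distance 30); the function name and tuple representation of points are illustrative:

```python
import math

def is_candidate(proj, center, threshold=20.0):
    """Keep a projection-point / center pair as a match candidate only
    if their Euclidean pixel distance is within the threshold."""
    dist = math.hypot(proj[0] - center[0], proj[1] - center[1])
    return dist <= threshold

# Distance is hypot(18, 24) = 30 > 20, so the pair is filtered out.
print(is_candidate((100, 100), (118, 124)))  # -> False
```

Filtering pairs this way before the assignment step reduces the work the matching algorithm has to do.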
In the embodiment of the invention, the distance between the projection point and the center point is calculated according to the preset matching algorithm, and if the distance is greater than the set threshold, the projection point is not matched with the center point. Filtering out pairs that do not meet the condition, by computing distances and comparing them with the threshold, improves the calculation speed of the matching algorithm.
In one embodiment of the invention, the method further comprises: if the distance is not larger than the set threshold value, generating a matching combination according to the projection point and the central point;
generating a plurality of matching schemes according to the matching combination;
screening out a target matching scheme with the least unmatched projection points from the plurality of matching schemes;
and determining a central point matched with the projection point and unmatched projection points according to the target matching scheme.
Using the preset matching algorithm, the distance between each projection point and each center point is calculated; whenever a distance is not greater than the set threshold, the projection point and center point have a matching relationship and form a matching combination, so computing all the distances yields multiple matching combinations. The matching combinations are then permuted and combined under the preset matching algorithm into multiple matching schemes; in each scheme, projection points and center points are matched one to one, i.e., one projection point can match only one center point. For each matching scheme, the number of unmatched projection points is counted, and the scheme with the fewest unmatched projection points is selected as the target matching scheme. From the target matching scheme, the center point matched to each projection point and the unmatched projection points are determined.
For example, there are 3 projection points A, B, and C and 3 center points a, b, and c, and the set threshold is 10. The distance between each projection point and each center point is calculated, and comparison with the threshold shows that the distances for A–a, A–b, B–a, and C–c are all less than 10, giving the matching combinations A–a, A–b, B–a, and C–c. Permuting and combining them yields matching scheme one: A matched with a and C matched with c (B unmatched); matching scheme two: A matched with b, B matched with a, and C matched with c; and matching scheme three: B matched with a and C matched with c (A unmatched). In the three schemes the numbers of unmatched projection points are 1, 0, and 1 respectively, so scheme two, with the fewest unmatched projection points, is selected as the target matching scheme; finally, the center points matched with projection points A, B, and C are determined to be b, a, and c respectively.
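One way to enumerate the schemes in a small example like this is brute force over permutations; this is an illustrative sketch (the candidate pairs follow the A/B/C example, and KM would solve the same problem without enumeration):

```python
from itertools import permutations

# Candidate pairs whose distance is under the threshold.
candidates = {("A", "a"), ("A", "b"), ("B", "a"), ("C", "c")}
projections, centers = ["A", "B", "C"], ["a", "b", "c"]

best, fewest_unmatched = None, len(projections) + 1
for perm in permutations(centers):
    # Keep only pairs that are actual candidates; the rest stay unmatched.
    scheme = [(p, c) for p, c in zip(projections, perm) if (p, c) in candidates]
    unmatched = len(projections) - len(scheme)
    if unmatched < fewest_unmatched:
        best, fewest_unmatched = scheme, unmatched

print(best, fewest_unmatched)  # -> [('A', 'b'), ('B', 'a'), ('C', 'c')] 0
```

The scheme with zero unmatched projection points (A–b, B–a, C–c) is the target matching scheme from the example.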
In the embodiment of the invention, projection points and center points are matched one to one according to the preset matching algorithm, so an optimal matching scheme can be obtained: the number of matched projection point/center point pairs in that scheme is the largest, i.e., the number of unmatched projection points is the smallest, and as many projection points as possible find a matching center point.
In one embodiment of the invention, the method further comprises:
acquiring steering information of the traffic light corresponding to the projection point according to the matching relation between the projection point and the central point;
and adding the steering information into the detection result of the image where the projection point is located.
After the one-to-one matching relation between projection points and center points is obtained, the steering information of the traffic light corresponding to a projection point is obtained from the projection point information, and that steering information is added to the detection result of the image containing the matched center point. A projection point is obtained by projecting a traffic light on the map into the coordinate system of the image, and the projection point information includes the traffic light's position on the map and its steering information. For example, if the projection point information indicates that the traffic light's steering is a left turn, "left turn" is added to the detection result of the image containing the center point matched to that projection point.
In the embodiment of the invention, adding the steering information of the traffic light corresponding to a projection point to the detection result of its image, via the matching relation between projection point and center point, enriches the traffic light detection result and makes it convenient for the vehicle to perform steering operations in combination with the traffic light state information.
In an embodiment of the present invention, updating the detection result of the image where the projection point is located includes:
determining whether other frame images have detection frames matched with the projection points, if so, predicting the detection frame of the current frame image where the projection points are located according to the detection frames of the other frame images, extracting the detection image according to the detection frame of the current frame image, converting the detection image into an HSV space, and determining the traffic light state in the current frame image according to the value of a pixel point in the detection image in the HSV space; and the detection frame and the traffic light state of the current frame image form a detection result of the updated current frame image.
If a projection point has no matching detection result in the current frame image, it is determined whether other frame images have a detection frame matched to it. If not, the projection point is ignored. If so, the position of the detection frame matched to the projection point in the current frame is predicted from the positions of the matched detection frames in the other frames; a detection image is extracted at that position, converted to HSV space using digital image processing, and the traffic light state in the current frame is determined from the values of its pixels in HSV space: for example, if the values of the h, s, and v channels fall within certain preset ranges, the state is determined to be red. The detection result of the current frame image is then updated with this detection frame and traffic light state.
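The HSV classification step can be sketched as follows; the hue, saturation, and value thresholds below are illustrative assumptions, not the patent's preset ranges, and the standard-library `colorsys` module stands in for a full image-processing pipeline:

```python
import colorsys

def light_state_from_rgb(r, g, b):
    """Classify a traffic-light crop by its dominant color in HSV space.
    All thresholds here are assumed values for illustration."""
    h, s, v = colorsys.rgb_to_hsv(r / 255.0, g / 255.0, b / 255.0)
    if v < 0.2 or s < 0.3:          # too dark or too gray: light is off ("black")
        return "black"
    hue = h * 360.0
    if hue < 20 or hue > 340:
        return "red"
    if 20 <= hue < 70:
        return "yellow"
    if 90 <= hue < 160:
        return "green"
    return "black"

print(light_state_from_rgb(230, 30, 30))  # -> red
print(light_state_from_rgb(40, 200, 60))  # -> green
```

In practice the classification would aggregate over all pixels of the extracted detection image rather than a single color.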
In the embodiment of the invention, if no detection frame matching the projection point can be found in the current frame image, the matching detection frame is determined from the positions of the detection frames matched to the projection point in other frame images, which prevents a projection point from failing to find a matching detection frame because of a missed detection by the algorithm.
In an embodiment of the present invention, determining whether a traffic light state in a detection result of a current frame image needs to be corrected based on a time sequence relationship in a traffic light state transition process, and if so, correcting the traffic light state in the detection result of the current frame image according to a traffic light state in a detection result of other frame images, including:
when the traffic light state in the detection result of the current frame image is black, counting the total number of the black traffic light states in the detection results of the other frame images;
and determining whether the total number is greater than a preset threshold value, and if not, taking the traffic light state in the detection result of the previous frame of image as the traffic light state of the current frame of image.
And if the traffic light state does not meet the time sequence relation, the traffic light state in the detection result of the current frame image is corrected according to the traffic light state in the detection results of other frame images.
If the traffic light state in the detection result of the current frame image is black, the total number of black traffic light states in the detection results of the other frame images is counted, and the current frame's state is corrected according to how that total compares with a preset threshold. For example, suppose 5 frames are detected in total and the preset threshold is 3: if the current frame's state is black and the number of black states in the other frames is 2, which is less than 3, the traffic light state of the previous frame is taken as the corrected state of the current frame. If instead the preset threshold were 1, the count of 2 would exceed it, and the current frame's state would remain black.
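The black-state rule can be sketched as a small function; the function name, the ordering assumption (the last entry of `states` is the previous frame), and the default threshold are illustrative:

```python
def correct_black(states, current, threshold=3):
    """If the current frame reads 'black', trust it only when enough of
    the other frames also read 'black'; otherwise fall back to the
    previous frame's state. `states` holds the other frames' states in
    time order, so states[-1] is the previous frame (an assumption)."""
    if current != "black":
        return current
    n_black = sum(1 for s in states if s == "black")
    if n_black > threshold:
        return "black"
    return states[-1]  # previous frame's state becomes the corrected state

# 5 frames total; the other 4 frames contain 2 black states (2 <= 3),
# so the previous frame's red is taken as the corrected state.
print(correct_black(["red", "black", "black", "red"], "black"))  # -> red
```

With the lower threshold of 1 from the example, the same counts would leave the current frame black.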
In the embodiment of the invention, whether the traffic light state in the detection result of the current frame image needs to be corrected is determined according to the time sequence relation of the traffic light state transfer process; if that state is black, it is corrected according to whether the number of black states in the other frames' detection results exceeds a preset threshold, so that the corrected state satisfies the time sequence relation of traffic light state transfer and the correction result is more accurate.
In an embodiment of the present invention, determining whether a traffic light state in a detection result of a current frame image needs to be corrected based on a timing relationship in a traffic light state transition process, and if so, correcting the traffic light state in the detection result of the current frame image according to a traffic light state in a detection result of another frame image, includes:
when the traffic light state in the detection result of the current frame image is one of red, green and yellow, adjusting the traffic light state in the detection results of the plurality of frame images into a sequence meeting a time sequence relation, and counting the adjustment times;
and determining the sequence with the minimum adjustment times as a target sequence, and taking the traffic light state of the last frame in the target sequence as the traffic light state of the current frame image.
If the traffic light state in the detection result of the current frame image does not meet the time sequence relation of the traffic light state transfer process and is one of red, green, and yellow, the states in individual frames are adjusted, in combination with the states in the other frames' detection results, to form sequences that satisfy the time sequence relation, and the number of adjustments for each is counted. Among all sequences satisfying the relation, the one with the fewest adjustments is determined as the target sequence, and the traffic light state of its last frame is taken as the corrected state of the current frame image.
For example, suppose there are 4 frames of images in total and the traffic light states in the detection results at a certain position are green, green, yellow and green. Given the known timing relationship of the traffic light state transition (green, then yellow, then red, then green), the green light of the current (last) frame does not satisfy the timing relationship, so the traffic light state of the current frame image is corrected. Sequences satisfying the timing relationship are obtained by adjusting the traffic light states of individual frames. Sequence one: green, green, yellow, red, requiring one adjustment; sequence two: green, yellow, red, green, requiring two adjustments. The sequence with the fewest adjustments, namely sequence one, is taken as the target sequence, and the traffic light state of the last frame in the target sequence is taken as the corrected traffic light state of the current frame image, that is, the corrected state is red.
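The minimum-adjustment search in this example can be sketched as a brute-force enumeration (illustrative only; the patent does not specify the enumeration strategy or how ties between equally cheap sequences are broken, and an exhaustive search may find minimal sequences other than the candidates discussed in the example):

```python
from itertools import product

# Timing relationship of the state transition: green -> yellow -> red -> green;
# a state may also persist across consecutive frames.
NEXT = {"green": "yellow", "yellow": "red", "red": "green"}

def satisfies_timing(seq):
    """A sequence satisfies the timing relationship if each state either
    repeats or moves to its successor in the cycle."""
    return all(b == a or b == NEXT[a] for a, b in zip(seq, seq[1:]))

def correct_current_state(detections):
    """Enumerate all red/yellow/green sequences of the same length, keep
    those satisfying the timing relationship, and pick one differing from
    the detections in the fewest positions.  Returns the corrected state
    of the last (current) frame and the number of adjustments."""
    best_seq, best_cost = None, len(detections) + 1
    for seq in product(NEXT, repeat=len(detections)):
        if satisfies_timing(seq):
            cost = sum(d != s for d, s in zip(detections, seq))
            if cost < best_cost:
                best_seq, best_cost = seq, cost
    return best_seq[-1], best_cost
```

For the detections green, green, yellow, green, one adjustment suffices; note that the all-green sequence is also reachable with one adjustment, so the exact state returned depends on the enumeration order and on whatever tie-breaking rule an implementation chooses.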
In this embodiment of the invention, the traffic light state in the detection result of the current frame image is corrected based on the timing relationship of the traffic light state transition, so that the corrected state satisfies that timing relationship. This solves the problem that a corrected false detection result may still violate the timing relationship, and improves the safety of automatic driving.
As shown in fig. 2, an embodiment of the present invention provides a traffic light detection result processing apparatus based on a time sequence, including:
the detection module 201 is configured to input a plurality of frames of images acquired by a vehicle into a pre-trained target detection model to obtain a detection result of each frame of image;
the projection module 202 is configured to determine a projection point of a traffic light in the image according to the position of the traffic light at the intersection where the vehicle is located;
a matching module 203 configured to, for each of the projection points: determining whether the detection results of the projection point and the image where the projection point is located are matched, if not, updating the detection result of the image where the projection point is located;
a correction module 204 configured to, for each frame image including the traffic light: and determining whether the traffic light state in the detection result of the current frame image needs to be corrected or not based on the time sequence relation in the traffic light state transfer process, and if so, correcting the traffic light state in the detection result of the current frame image according to the traffic light state in the detection results of other frame images.
In one embodiment of the present invention, the detection result includes: detection frame information;
the matching module 203 is configured to acquire a central point of the detection frame according to the detection frame information; and determining whether the projection point is matched with the central point or not according to a preset matching algorithm.
In an embodiment of the present invention, the matching module 203 is configured to calculate the distance between the projection point and the central point; if the distance is greater than a set threshold, the projection point does not match the central point.
In an embodiment of the present invention, the matching module 203 is further configured to generate a matching combination according to the projection point and the central point if the distance is not greater than the set threshold;
generating a plurality of matching schemes according to the matching combination;
screening out a target matching scheme with the least unmatched projection points from the plurality of matching schemes;
and determining a central point matched with the projection point and unmatched projection points according to the target matching scheme.
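The matching steps above can be sketched as follows (a brute-force sketch; the point coordinates, threshold value, and function names are assumptions, and a practical implementation would use an assignment algorithm such as the Hungarian method rather than enumerating permutations):

```python
import math
from itertools import permutations

def match_projections(proj_pts, centers, max_dist=50.0):
    """Pair traffic-light projection points with detection-box centers.

    A (projection point, center) pair is a candidate combination only if
    its Euclidean distance is within max_dist; among all one-to-one
    matching schemes built from the candidates, keep the scheme leaving
    the fewest projection points unmatched."""
    def dist(p, c):
        return math.hypot(p[0] - c[0], p[1] - c[1])

    # Each projection point may take a center index or stay unmatched (None).
    slots = list(range(len(centers))) + [None] * len(proj_pts)
    best, best_unmatched = {}, len(proj_pts) + 1
    for scheme in set(permutations(slots, len(proj_pts))):
        assignment, feasible = {}, True
        for i, j in enumerate(scheme):
            if j is None:
                continue
            if dist(proj_pts[i], centers[j]) > max_dist:
                feasible = False      # pair violates the distance threshold
                break
            assignment[i] = j
        if feasible and len(proj_pts) - len(assignment) < best_unmatched:
            best, best_unmatched = assignment, len(proj_pts) - len(assignment)
    return best, best_unmatched
```

The returned dictionary maps each matched projection-point index to a center index; projection points absent from it are the unmatched ones, whose detection results are then updated.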
In an embodiment of the present invention, the matching module 203 is further configured to obtain steering information of a traffic light corresponding to the projection point according to a matching relationship between the projection point and the central point;
and adding the steering information into the detection result of the image where the projection point is located.
In an embodiment of the present invention, the matching module 203 is configured to determine whether other frame images have a detection frame matching the projection point. If so, the detection frame of the current frame image where the projection point is located is predicted from the detection frames of the other frame images, a detection image is extracted according to the predicted detection frame, the detection image is converted into HSV space, and the traffic light state in the current frame image is determined from the values of the pixel points of the detection image in HSV space. The detection frame and the traffic light state of the current frame image form the updated detection result of the current frame image.
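The HSV-based state determination can be sketched as follows (illustrative only; the hue bands, saturation/value cut-offs, and pixel-count threshold are guesses rather than values from the patent, and the HSV values follow OpenCV's convention of H in [0, 180) with S and V in [0, 255]):

```python
def classify_light_state(hsv_pixels, min_lit=10):
    """Determine the traffic light state from the HSV values of the pixel
    points in a cropped detection image.  Bright, saturated pixels are
    binned into rough red/yellow/green hue bands; if too few pixels are
    lit at all, the state is reported as 'black' (light off)."""
    counts = {"red": 0, "yellow": 0, "green": 0}
    for h, s, v in hsv_pixels:
        if s <= 80 or v <= 120:          # dull pixel: housing or background
            continue
        if h < 10 or h > 170:            # red wraps around the hue circle
            counts["red"] += 1
        elif 15 <= h <= 35:
            counts["yellow"] += 1
        elif 45 <= h <= 90:
            counts["green"] += 1
    state = max(counts, key=counts.get)
    return state if counts[state] >= min_lit else "black"
```

In practice the pixel list would come from the detection image after a BGR-to-HSV conversion (e.g. OpenCV's `cvtColor`); here the function takes the converted pixels directly to stay self-contained.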
In an embodiment of the present invention, the modifying module 204 is configured to count a total number of black traffic light states in the detection results of the other frame images when the traffic light state in the detection result of the current frame image is black;
and determining whether the total number is greater than a preset threshold value, and if not, taking the traffic light state in the detection result of the previous frame of image as the traffic light state of the current frame of image.
In an embodiment of the present invention, the correction module 204 is configured to, when the traffic light state in the detection result of the current frame image is one of red, green and yellow, adjust the traffic light states in the detection results of the plurality of frame images into sequences satisfying the timing relationship and count the number of adjustments for each sequence;
and determine the sequence with the fewest adjustments as the target sequence, and take the traffic light state of the last frame in the target sequence as the traffic light state of the current frame image.
Referring now to FIG. 3, shown is a block diagram of a computer system 300 suitable for implementing a terminal device according to embodiments of the present invention. The terminal device shown in fig. 3 is only an example, and should not impose any limitation on the functions and scope of use of the embodiments of the present invention.
As shown in fig. 3, the computer system 300 includes a Central Processing Unit (CPU) 301 that can perform various appropriate actions and processes in accordance with a program stored in a Read Only Memory (ROM) 302 or a program loaded from a storage section 308 into a Random Access Memory (RAM) 303. The RAM 303 also stores various programs and data necessary for the operation of the system 300. The CPU 301, ROM 302, and RAM 303 are connected to one another via a bus 304. An input/output (I/O) interface 305 is also connected to the bus 304.
The following components are connected to the I/O interface 305: an input portion 306 including a keyboard, a mouse, and the like; an output section 307 including a display such as a Cathode Ray Tube (CRT) or a Liquid Crystal Display (LCD), and a speaker; a storage section 308 including a hard disk and the like; and a communication section 309 including a network interface card such as a LAN card, a modem, or the like. The communication section 309 performs communication processing via a network such as the internet. A drive 310 is also connected to the I/O interface 305 as needed. A removable medium 311 such as a magnetic disk, an optical disk, a magneto-optical disk, or a semiconductor memory is mounted on the drive 310 as necessary, so that a computer program read out therefrom is installed into the storage section 308 as necessary.
In particular, according to embodiments of the present disclosure, the processes described above with reference to the flow diagrams may be implemented as computer software programs. For example, embodiments of the present disclosure include a computer program product comprising a computer program embodied on a computer readable medium, the computer program comprising program code for performing the method illustrated in the flow chart. In such an embodiment, the computer program may be downloaded and installed from a network through the communication section 309, and/or installed from the removable medium 311. The above-described functions defined in the system of the present invention are executed when the computer program is executed by the Central Processing Unit (CPU) 301.
It should be noted that the computer readable medium shown in the present invention can be a computer readable signal medium or a computer readable storage medium or any combination of the two. A computer readable storage medium may be, for example, but not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any combination of the foregoing. More specific examples of the computer readable storage medium may include, but are not limited to: an electrical connection having one or more wires, a portable computer diskette, a hard disk, a Random Access Memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing. In the context of the present invention, a computer readable storage medium may be any tangible medium that can contain, or store a program for use by or in connection with an instruction execution system, apparatus, or device. In the present invention, however, a computer readable signal medium may include a propagated data signal with computer readable program code embodied therein, for example, in baseband or as part of a carrier wave. Such a propagated data signal may take many forms, including, but not limited to, electro-magnetic, optical, or any suitable combination thereof. A computer readable signal medium may be any computer readable medium that is not a computer readable storage medium and that can communicate, propagate, or transport a program for use by or in connection with an instruction execution system, apparatus, or device. Program code embodied on a computer readable medium may be transmitted using any appropriate medium, including but not limited to: wireless, wire, fiber optic cable, RF, etc., or any suitable combination of the foregoing.
The flowchart and block diagrams in the figures illustrate the architecture, functionality, and operation of possible implementations of systems, methods and computer program products according to various embodiments of the present invention. In this regard, each block in the flowchart or block diagrams may represent a module, segment, or portion of code, which comprises one or more executable instructions for implementing the specified logical function(s). It should also be noted that, in some alternative implementations, the functions noted in the block may occur out of the order noted in the figures. For example, two blocks shown in succession may, in fact, be executed substantially concurrently, or the blocks may sometimes be executed in the reverse order, depending upon the functionality involved. It will also be noted that each block of the block diagrams or flowchart illustration, and combinations of blocks in the block diagrams or flowchart illustration, can be implemented by special purpose hardware-based systems that perform the specified functions or acts, or combinations of special purpose hardware and computer instructions.
The modules described in the embodiments of the present invention may be implemented by software or hardware. The described modules may also be provided in a processor, which may be described as: a processor includes a detection module, a projection module, a matching module, and a correction module. The names of these modules do not in some cases constitute a limitation of the modules themselves; for example, the detection module may also be described as "a module that inputs a plurality of frames of images acquired by a vehicle into a target detection model".
The above-described embodiments should not be construed as limiting the scope of the invention. Those skilled in the art will appreciate that various modifications, combinations, sub-combinations, and substitutions can occur, depending on design requirements and other factors. Any modification, equivalent replacement, and improvement made within the spirit and principle of the present invention should be included in the protection scope of the present invention.

Claims (11)

1. A traffic light detection result processing method based on time sequence, characterized by comprising:
inputting a plurality of frames of images acquired by a vehicle into a pre-trained target detection model to obtain the detection result of each frame of image;
determining a projection point of the traffic light in the image according to the position of the traffic light at the intersection where the vehicle is located;
for each of the proxels: determining whether the detection results of the projection point and the image where the projection point is located are matched, if not, updating the detection result of the image where the projection point is located;
for each frame image containing the traffic light: and determining whether the traffic light state in the detection result of the current frame image needs to be corrected or not based on the time sequence relation in the traffic light state transfer process, and if so, correcting the traffic light state in the detection result of the current frame image according to the traffic light state in the detection results of other frame images.
2. The method of claim 1,
the detection result comprises: detection frame information;
determining whether the projection point is matched with the detection result of the image where the projection point is located, including:
acquiring a central point of the detection frame according to the detection frame information;
and determining whether the projection point is matched with the central point or not according to a preset matching algorithm.
3. The method of claim 2,
determining whether the projection point is matched with the central point according to a preset matching algorithm, wherein the step of determining whether the projection point is matched with the central point comprises the following steps:
and calculating the distance between the projection point and the central point, wherein if the distance is greater than a set threshold value, the projection point is not matched with the central point.
4. The method of claim 3, further comprising:
if the distance is not larger than the set threshold value, generating a matching combination according to the projection point and the central point;
generating a plurality of matching schemes according to the matching combination;
screening out a target matching scheme with the least unmatched projection points from the plurality of matching schemes;
and determining a central point matched with the projection point and unmatched projection points according to the target matching scheme.
5. The method of claim 4, further comprising:
acquiring steering information of the traffic light corresponding to the projection point according to the matching relation between the projection point and the central point;
and adding the steering information into the detection result of the image where the projection point is located.
6. The method of claim 1,
updating the detection result of the image where the projection point is located, including:
determining whether other frame images have detection frames matched with the projection points, if so, predicting the detection frame of the current frame image where the projection points are located according to the detection frames of the other frame images, extracting the detection image according to the detection frame of the current frame image, converting the detection image into an HSV space, and determining the traffic light state in the current frame image according to the value of pixel points in the detection image in the HSV space; and the detection frame and the traffic light state of the current frame image form the detection result of the updated current frame image.
7. The method of claim 1,
determining whether the traffic light state in the detection result of the current frame image needs to be corrected or not based on the time sequence relation in the traffic light state transfer process, if so, correcting the traffic light state in the detection result of the current frame image according to the traffic light state in the detection results of other frame images, and the method comprises the following steps:
when the traffic light state in the detection result of the current frame image is black, counting the total number of the black traffic light states in the detection results of the other frame images;
and determining whether the total number is greater than a preset threshold value, and if not, taking the traffic light state in the detection result of the previous frame of image as the traffic light state of the current frame of image.
8. The method of claim 1,
determining whether the traffic light state in the detection result of the current frame image needs to be corrected or not based on the time sequence relation in the traffic light state transfer process, if so, correcting the traffic light state in the detection result of the current frame image according to the traffic light state in the detection results of other frame images, and the method comprises the following steps:
when the traffic light state in the detection result of the current frame image is one of red, green and yellow, adjusting the traffic light state in the detection results of the plurality of frame images into a sequence meeting a time sequence relation, and counting the adjustment times;
and determining the sequence with the minimum adjusting times as a target sequence, and taking the traffic light state of the last frame in the target sequence as the traffic light state of the current frame image.
9. A traffic light detection result processing device based on time sequence, characterized by comprising:
the detection module is configured to input a plurality of frames of images acquired by a vehicle into a pre-trained target detection model to obtain a detection result of each frame of image;
the projection module is configured to determine a projection point of a traffic light in the image according to the position of the traffic light at the intersection where the vehicle is located;
a matching module configured to, for each of the proxels: determining whether the detection result of the projection point is matched with the detection result of the image where the projection point is located, and if not, updating the detection result of the image where the projection point is located;
a correction module configured to, for each frame image containing the traffic light: and determining whether the traffic light state in the detection result of the current frame image needs to be corrected or not based on the time sequence relation in the traffic light state transfer process, and if so, correcting the traffic light state in the detection result of the current frame image according to the traffic light state in the detection results of other frame images.
10. An electronic device, comprising:
one or more processors;
a storage device for storing one or more programs,
when executed by the one or more processors, cause the one or more processors to implement the method of any one of claims 1-8.
11. A computer-readable medium, on which a computer program is stored which, when being executed by a processor, carries out the method according to any one of claims 1-8.
CN202211485122.5A 2022-11-24 2022-11-24 Traffic light detection result processing method and device based on time sequence Pending CN115713748A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202211485122.5A CN115713748A (en) 2022-11-24 2022-11-24 Traffic light detection result processing method and device based on time sequence


Publications (1)

Publication Number Publication Date
CN115713748A true CN115713748A (en) 2023-02-24

Family

ID=85234547

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202211485122.5A Pending CN115713748A (en) 2022-11-24 2022-11-24 Traffic light detection result processing method and device based on time sequence

Country Status (1)

Country Link
CN (1) CN115713748A (en)


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
CB02 Change of applicant information

Country or region after: China

Address after: 100 meters west of Changhou Road, Changxingzhuang Village, Xiaotangshan Town, Changping District, Beijing 102211, 2nd Floor, Silk Road Style (Beijing) Hotel Management Service Co., Ltd. 821645

Applicant after: Beijing Feichi Era Technology Co.,Ltd.

Applicant after: Jiuzhi (Suzhou) Intelligent Technology Co.,Ltd.

Address before: 100 meters west of Changhou Road, Changxingzhuang Village, Xiaotangshan Town, Changping District, Beijing 102211, 2nd Floor, Silk Road Style (Beijing) Hotel Management Service Co., Ltd. 821645

Applicant before: Jiuzhizhixing (Beijing) Technology Co.,Ltd.

Country or region before: China

Applicant before: Jiuzhi (Suzhou) Intelligent Technology Co.,Ltd.
