CN110287828B - Signal lamp detection method and device and electronic equipment - Google Patents


Info

Publication number
CN110287828B
CN110287828B (application CN201910503231.7A)
Authority
CN
China
Prior art keywords
signal lamp
target
determining
image frame
outer frame
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201910503231.7A
Other languages
Chinese (zh)
Other versions
CN110287828A (en)
Inventor
刘审川
Current Assignee
Beijing Sankuai Online Technology Co Ltd
Original Assignee
Beijing Sankuai Online Technology Co Ltd
Priority date
Filing date
Publication date
Application filed by Beijing Sankuai Online Technology Co Ltd
Priority to CN201910503231.7A
Publication of CN110287828A
Application granted
Publication of CN110287828B

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00 Scenes; Scene-specific elements
    • G06V20/50 Context or environment of the image
    • G06V20/56 Context or environment of the image exterior to a vehicle by using sensors mounted on the vehicle
    • G06V20/58 Recognition of moving objects or obstacles, e.g. vehicles or pedestrians; Recognition of traffic objects, e.g. traffic signs, traffic lights or roads
    • G06V20/584 Recognition of moving objects or obstacles, e.g. vehicles or pedestrians; Recognition of traffic objects, e.g. traffic signs, traffic lights or roads, of vehicle lights or traffic lights

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Multimedia (AREA)
  • Theoretical Computer Science (AREA)
  • Image Analysis (AREA)
  • Studio Devices (AREA)
  • Length Measuring Devices By Optical Means (AREA)

Abstract

The application provides a signal lamp detection method, a signal lamp detection apparatus, and an electronic device. One embodiment of the method includes: acquiring captured image frames; in response to a target device detecting a signal lamp based on the captured image frames, determining the physical size of the signal lamp's outer frame; determining the target pixel size and the center point position of the outer frame in the current image frame; and determining a target area for the next image frame based on the physical size, the target pixel size, and the center point position, so that the signal lamp is detected within the target area of the next image frame. Because the next frame needs to be searched only within the target area, the detection range is narrowed and detection is more targeted, which reduces the false detection rate of signal lamp detection, reduces the amount of computation, and improves the efficiency of signal lamp detection.

Description

Signal lamp detection method and device and electronic equipment
Technical Field
The application relates to the technical field of intelligent driving, and in particular to a signal lamp detection method, a signal lamp detection apparatus, and an electronic device.
Background
At present, intelligent driving technology is developing rapidly, and signal lamp detection is becoming increasingly important. In the related art, full-image recognition is generally performed on the captured image frames, and the signal lamp is detected according to color and shape. However, full-image recognition offers poor targeting of the detection object, and the many interfering factors in the image frames make detection errors likely. This increases the false detection rate of signal lamp detection, increases the amount of computation, and reduces the efficiency of signal lamp detection.
Disclosure of Invention
In order to solve one of the above technical problems, the present application provides a method and an apparatus for detecting a signal lamp, and an electronic device.
According to a first aspect of the embodiments of the present application, there is provided a method for detecting a signal lamp, including:
acquiring an acquired image frame;
in response to a target device detecting a signal lamp based on the acquired image frames, determining a physical dimension of an outer frame of the signal lamp;
determining the size and the position of a central point of a target pixel of an outer frame of the signal lamp in a current image frame;
and determining a target area corresponding to the next image frame based on the physical size, the target pixel size and the central point position so as to detect the signal lamp in the target area of the next image frame.
Optionally, the determining the physical size of the outer frame of the signal lamp includes:
determining a first image frame and a second image frame; the difference between the first moment of acquiring the first image frame and the second moment of acquiring the second image frame is a preset duration;
and determining the physical size of the outer frame of the signal lamp according to the first image frame and the second image frame.
Optionally, the determining a physical size of an outer frame of the signal lamp according to the first image frame and the second image frame includes:
determining a running distance of the target device between the first time and the second time;
determining a first pixel size of an outer frame of the signal lamp in the first image frame and a second pixel size in the second image frame;
and determining the physical size of the outer frame of the signal lamp based on the running distance, the first pixel size and the second pixel size.
Optionally, the determining a target area corresponding to a next image frame based on the physical size, the target pixel size, and the center point position includes:
determining a camera focal length and a camera internal parameter matrix corresponding to a camera device for acquiring image frames;
determining a target distance between a center point of an outer frame of the current signal lamp and the camera equipment based on the physical size, the target pixel size and the camera focal length;
determining the height difference between the central point of the outer frame of the signal lamp and the camera equipment;
determining a target coordinate of a central point of an outer frame of the current signal lamp in a coordinate system of the camera equipment based on the height difference, the target distance, the central point position and the camera internal reference matrix;
determining the current target speed of the target equipment;
and determining a target area corresponding to the next image frame according to the target speed and the target coordinate.
Optionally, the determining a target area corresponding to a next image frame according to the target speed and the target coordinate includes:
determining a first component speed and a second component speed of the target speed; the first component speed and the second component speed are component speeds in two coordinate axis directions parallel to the ground under a coordinate system of the camera equipment;
multiplying the first component speed and the second component speed respectively by the inter-frame interval (the reciprocal of the frame rate) to obtain a first difference value and a second difference value;
correcting the target coordinates by using the first difference value and the second difference value to obtain an estimated position of a central point of an outer frame of the signal lamp in a next image frame;
and determining a preset range around the estimated position as the target area.
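The steps above can be sketched as follows. This is a minimal illustration, not the patent's implementation: it assumes a pinhole camera model, interprets the speed-times-frame-rate product as speed times the inter-frame time, and uses placeholder intrinsics and a 50-pixel margin:

```python
import numpy as np

def predict_target_area(p_cam, v_ground, dt, K, margin=50.0):
    """Predict the search region for the signal lamp in the next image frame.

    p_cam    : (3,) center of the lamp's outer frame in camera coordinates (m)
    v_ground : (vx, vz) vehicle speed components parallel to the ground (m/s)
    dt       : inter-frame time in seconds (reciprocal of the frame rate)
    K        : 3x3 camera intrinsic matrix
    margin   : half-size of the square search window, in pixels (assumed)
    """
    # As the vehicle moves, the static lamp appears to move the opposite way
    # in camera coordinates: correct the target coordinates by the two offsets.
    p_next = p_cam - np.array([v_ground[0] * dt, 0.0, v_ground[1] * dt])
    # Reproject the corrected 3-D point to pixel coordinates.
    u, v, w = K @ p_next
    u, v = u / w, v / w
    # The preset range around the estimated position is the target area.
    return (u - margin, v - margin, u + margin, v + margin)
```

With a vehicle approaching the lamp, the predicted box drifts toward the image edge and the lamp grows within it frame by frame.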
Optionally, the method further includes:
determining the ID and status data of each detected signal lamp;
constructing a signal lamp state sequence corresponding to each ID based on the IDs and the state data; the signal light state sequence is used for driving decisions.
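A minimal sketch of the bookkeeping described above; the ID and state labels are hypothetical, and recording only state transitions (rather than one entry per frame) is an assumption of this sketch:

```python
from collections import defaultdict

class SignalStateTracker:
    """Accumulate a per-lamp state sequence keyed by detection ID."""

    def __init__(self):
        self.sequences = defaultdict(list)

    def update(self, detections):
        """detections: iterable of (lamp_id, state) pairs, e.g. ("lamp_3", "red")."""
        for lamp_id, state in detections:
            seq = self.sequences[lamp_id]
            # Append only when the state actually changes, so the sequence
            # records transitions rather than one entry per frame.
            if not seq or seq[-1] != state:
                seq.append(state)
```

A driving-decision module could then read, say, `tracker.sequences["lamp_3"]` and, on seeing the transition red to green, decide to proceed.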
Optionally, the method further includes:
based on a preset sequence rule, checking the signal lamp state sequence corresponding to each ID to obtain a check result, the check result indicating erroneous signal lamp state sequences;
and correcting the erroneous signal lamp state sequences indicated by the check result.
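One way to realize the check-and-correct step, assuming a simple red, green, yellow, red cycle as the preset sequence rule (actual signal cycles vary by jurisdiction) and a deliberately naive correction strategy:

```python
# Hypothetical preset sequence rule: states cycle red -> green -> yellow -> red.
VALID_NEXT = {"red": "green", "green": "yellow", "yellow": "red"}

def check_sequence(seq):
    """Return the indices of transitions that violate the cycle rule."""
    return [i for i in range(1, len(seq)) if VALID_NEXT.get(seq[i - 1]) != seq[i]]

def correct_sequence(seq):
    """Replace each flagged state with the one the rule expects (a naive fix)."""
    fixed = list(seq)
    for i in check_sequence(fixed):
        fixed[i] = VALID_NEXT[fixed[i - 1]]
    return fixed
```

This catches the common failure mode where one frame's color classification is wrong (e.g. red followed directly by yellow) and snaps it back to the rule-consistent state.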
According to a second aspect of the embodiments of the present application, there is provided a detection apparatus for a signal lamp, including:
the acquisition module is used for acquiring the acquired image frames;
a first determination module, configured to determine a physical size of an outer frame of a signal lamp in response to a target device detecting the signal lamp based on the acquired image frames;
the second determination module is used for determining the target pixel size and the central point position of the outer frame of the signal lamp in the current image frame;
and the prediction module is used for determining a target area corresponding to the next image frame based on the physical size, the target pixel size and the central point position so as to detect the signal lamp in the target area of the next image frame.
According to a third aspect of embodiments herein, there is provided a computer-readable storage medium storing a computer program which, when executed by a processor, implements the method of any one of the above first aspects.
According to a fourth aspect of embodiments of the present application, there is provided an electronic device comprising a memory, a processor and a computer program stored on the memory and executable on the processor, the processor implementing the method of any one of the first aspect when executing the program.
The technical scheme provided by the embodiment of the application can have the following beneficial effects:
according to the signal lamp detection method and device provided by the embodiment of the application, the collected image frames are obtained, the signal lamp is detected by the target equipment based on the collected image frames, the physical size of the outer frame of the signal lamp is determined, the target pixel size and the center point position of the outer frame of the signal lamp in the current image frame are determined, and the target area corresponding to the next image frame is determined based on the physical size, the target pixel size and the center point position, so that the signal lamp is detected in the target area of the next image frame. Because the embodiment can determine the target area corresponding to the next image frame based on the physical size of the outer frame of the signal lamp, the target pixel size and the central point position of the outer frame of the signal lamp in the current image frame, when the next image frame is detected, the signal lamp can be detected from the target area, the detection range is reduced, the detection target is more targeted, the false detection rate of signal lamp detection is reduced, the calculated amount is reduced, and the efficiency of signal lamp detection is improved.
It is to be understood that both the foregoing general description and the following detailed description are exemplary and explanatory only and are not restrictive of the application.
Drawings
The accompanying drawings, which are incorporated in and constitute a part of this specification, illustrate embodiments consistent with the present application and together with the description, serve to explain the principles of the application.
FIG. 1A is a flow chart illustrating a method for signal light detection according to an exemplary embodiment of the present application;
FIG. 1B is a schematic view of a signal light housing according to an exemplary embodiment of the present application;
FIG. 2 is a flow chart illustrating another method of signal light detection according to an exemplary embodiment of the present application;
FIG. 3 is a flow chart illustrating another method of signal light detection according to an exemplary embodiment of the present application;
FIG. 4 is a flow chart illustrating another method of signal light detection according to an exemplary embodiment of the present application;
FIG. 5 is a block diagram of a signal light detection apparatus shown in accordance with an exemplary embodiment of the present application;
FIG. 6 is a block diagram of another signal light detection apparatus shown in accordance with an exemplary embodiment of the present application;
FIG. 7 is a block diagram of another signal light detection apparatus shown in accordance with an exemplary embodiment of the present application;
FIG. 8 is a block diagram of another signal light detection apparatus shown in accordance with an exemplary embodiment of the present application;
FIG. 9 is a block diagram of another signal light detection apparatus shown in accordance with an exemplary embodiment of the present application;
fig. 10 is a schematic structural diagram of an electronic device shown in the present application according to an exemplary embodiment.
Detailed Description
Reference will now be made in detail to the exemplary embodiments, examples of which are illustrated in the accompanying drawings. When the following description refers to the accompanying drawings, like numbers in different drawings represent the same or similar elements unless otherwise indicated. The embodiments described in the following exemplary embodiments do not represent all embodiments consistent with the present application. Rather, they are merely examples of apparatus and methods consistent with certain aspects of the present application, as detailed in the appended claims.
The terminology used herein is for the purpose of describing particular embodiments only and is not intended to be limiting of the application. As used in this application and the appended claims, the singular forms "a", "an", and "the" are intended to include the plural forms as well, unless the context clearly indicates otherwise. It should also be understood that the term "and/or" as used herein refers to and encompasses any and all possible combinations of one or more of the associated listed items.
It is to be understood that although the terms first, second, third, etc. may be used herein to describe various information, such information should not be limited by these terms. These terms are only used to distinguish one type of information from another. For example, first information may also be referred to as second information, and similarly, second information may also be referred to as first information, without departing from the scope of the present application. The word "if" as used herein may be interpreted as "upon" or "when" or "in response to determining", depending on the context.
As shown in fig. 1A, fig. 1A is a flowchart illustrating a method for detecting a signal lamp according to an exemplary embodiment, which may be applied to a target device, which is an intelligent driving device. As can be understood by those skilled in the art, the intelligent driving device can be an unmanned device, a manned device with an intelligent auxiliary function, and the like. Among other things, unmanned devices may include, but are not limited to, unmanned vehicles, unmanned robots, unmanned drones, unmanned boats, and the like. Manned devices with intelligent assistance may include, but are not limited to, semi-autonomous vehicles, semi-autonomous aircraft, and the like. The method comprises the following steps:
in step 101, acquired image frames are acquired.
In step 102, in response to the target device detecting a signal lamp based on the captured image frames, a physical dimension of an outer frame of the signal lamp is determined.
In the present embodiment, the signal lamp may be any signal indicator lamp whose state can change. The target device is the intelligent driving device that currently needs to detect signal lamps; it may be an unmanned device or a manned device with intelligent assistance functions, and the present application does not limit the specific type of target device. Since the state of the signal lamp changes continuously, the target device needs to detect the signal lamp continuously in order to obtain its latest state in time and make correct driving decisions in time.
In this embodiment, the target device is at least provided with a camera device for acquiring image frames, and the camera device can acquire the image frames of the environment around the target device in real time. The target device can acquire the image frames acquired by the camera device in real time, and performs image recognition on each image frame acquired by the camera device by adopting an image recognition technology to recognize whether the image frame has a signal lamp image or not so as to judge whether the signal lamp is detected or not. It is to be understood that any image recognition technique known in the art and that may occur in the future may be applied to the present application, and that the present application is not limited in its specific context.
In this embodiment, if the target apparatus detects a signal lamp based on an image frame captured by the image capturing apparatus, the physical size of the detected signal lamp's outer frame may be further determined. The outer frame of the signal lamp is the frame in which the signal lamp is mounted; as shown in fig. 1B, the outer frame 111 is the outer frame of the signal lamp. It should be noted that the physical size of the outer frame may be any one of the following: the actual width of the signal lamp outer frame, the actual length of the signal lamp outer frame, or the actual length of the outer frame's diagonal.
In one implementation, because signal lamps installed within the same area (e.g., the same street or the same administrative district) generally follow a common standard, the outer frames of signal lamps in the same area generally have uniform dimensions. The physical size of the signal lamp outer frame can therefore be stored in advance for each area; when the physical size needs to be determined, it can be looked up in the pre-stored data for the corresponding area according to the positioning information of the target device.
In another implementation, a first image frame and a second image frame may also be determined. The first image frame may be the first image frame in which the signal lamp is detected, or an image frame acquired after the signal lamp is detected; the second image frame may be an image frame acquired after the first image frame. The difference between the first moment of acquiring the first image frame and the second moment of acquiring the second image frame is a preset duration. Then, the physical size of the outer frame of the signal lamp is determined according to the first image frame and the second image frame.
It is to be understood that the physical dimensions of the outer frame of the signal lamp may be determined in any other reasonable manner, and the present application is not limited in this respect.
In step 103, the target pixel size and center point position of the outer frame of the signal lamp in the current image frame are determined.
In this embodiment, after the physical size of the outer frame of the signal lamp is determined, the current image frame is acquired; the current image frame is the image frame currently captured by the image pickup apparatus. Then, the outer frame of the signal lamp in the current image frame is identified using an image recognition technique, and the target pixel size and center point position of the outer frame in the current image frame are determined. The target pixel size is the size of the signal lamp outer frame in the current image frame, expressed in pixel units. The center point position is the pixel coordinate of the center point of the outer frame in the current image frame.
It should be noted that the target pixel size may be any one of the following: the pixel width of the signal lamp outer frame in the current image frame, the pixel length of the outer frame in the current image frame, or the pixel length of the outer frame's diagonal in the current image frame. The target pixel size must correspond to the determined physical size of the outer frame. For example, if the target pixel size is the pixel width of the outer frame in the current image frame, the physical size represents the actual width of the outer frame; if it is the pixel length, the physical size represents the actual length; and if it is the pixel diagonal, the physical size represents the actual diagonal length.
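As a concrete illustration, suppose the detector returns an axis-aligned bounding box for the outer frame; the target pixel size and center point then fall out directly. The box format and the `use` selector are assumptions of this sketch, not part of the patent:

```python
import math

def bbox_to_measurements(bbox, use="width"):
    """bbox = (x_min, y_min, x_max, y_max) of the detected outer frame, in pixels.

    Returns (target_pixel_size, (cx, cy)). The 'use' selector must match the
    dimension in which the physical size was determined (width, length, or
    diagonal), as required above.
    """
    x0, y0, x1, y1 = bbox
    w, h = x1 - x0, y1 - y0
    size = {"width": w, "length": h, "diagonal": math.hypot(w, h)}[use]
    center = ((x0 + x1) / 2.0, (y0 + y1) / 2.0)
    return size, center
```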
In step 104, a target area corresponding to the next image frame is determined based on the physical size, the target pixel size and the center point position, so as to detect a signal lamp in the target area of the next image frame.
In this embodiment, since the state of the signal lamp is constantly changed, after the target device detects the signal lamp, the state of the signal lamp needs to be continuously tracked and detected. Therefore, the target device needs to detect the signal light in each frame of image captured by the camera device. Specifically, after determining the target pixel size and the center point position of the outer frame of the signal lamp in the current image frame, a target area in which an image of the signal lamp may exist in the next image frame may be predicted based on the detected physical size of the outer frame of the signal lamp, the target pixel size and the center point position of the outer frame of the signal lamp in the current image frame to detect the signal lamp in the target area.
Specifically, the target area corresponding to the next image frame may be determined as follows: firstly, determining a camera focal length and a camera internal reference matrix corresponding to the camera equipment for collecting the image frame, and determining the target distance between the central point of the outer frame of the current signal lamp and the camera equipment based on the physical size, the target pixel size and the camera focal length. Then, the height difference between the center point of the outer frame of the signal lamp and the image pickup equipment is determined, and the target coordinate of the center point of the outer frame of the current signal lamp in the coordinate system of the image pickup equipment is determined based on the height difference, the target distance, the center point position and the camera internal reference matrix. And finally, determining the target speed of the current target equipment, and determining a target area corresponding to the next image frame according to the target speed and the target coordinate.
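A sketch of the back-projection step in the paragraph above. It assumes the target distance can be treated as depth along the optical axis and uses the known height difference only as a consistency check; both interpretations are assumptions, since the exact geometry is detailed later in the patent:

```python
import numpy as np

def backproject_center(u, v, depth, height_diff, K, tol=0.5):
    """Recover the lamp-frame center in the camera coordinate system.

    u, v        : pixel coordinates of the center in the current frame
    depth       : estimated camera-to-lamp distance (m), taken here as depth
    height_diff : known height of the lamp above the camera (m)
    K           : 3x3 camera intrinsic matrix
    """
    ray = np.linalg.inv(K) @ np.array([u, v, 1.0])  # normalized viewing ray
    p_cam = ray * depth                             # scale by the depth
    # Consistency check: with the image y-axis pointing down, a lamp above
    # the camera should have a y coordinate close to -height_diff.
    if abs(-p_cam[1] - height_diff) > tol:
        pass  # in practice one might re-estimate the depth from height_diff
    return p_cam
```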
In the signal lamp detection method provided by the above embodiment of the present application, captured image frames are acquired; in response to the target device detecting a signal lamp based on the captured image frames, the physical size of the signal lamp's outer frame is determined; the target pixel size and center point position of the outer frame in the current image frame are determined; and a target area for the next image frame is determined based on the physical size, the target pixel size, and the center point position, so that the signal lamp is detected within the target area of the next image frame. Because the target area for the next image frame can be determined from the physical size of the outer frame together with its target pixel size and center point position in the current frame, detection in the next frame can be restricted to the target area. This narrows the detection range and makes detection more targeted, thereby reducing the false detection rate of signal lamp detection, reducing the amount of computation, and improving the efficiency of signal lamp detection.
Fig. 2 is a flow chart illustrating another method for detecting a signal lamp according to an exemplary embodiment, which describes a process of determining a physical size of an outer frame of a signal lamp, and the method may be applied to a target device, which is an intelligent driving device, as shown in fig. 2, and includes the following steps:
in step 201, acquired image frames are acquired.
In step 202, a first image frame and a second image frame are determined in response to the target device detecting a signal light based on the acquired image frames.
In this embodiment, if the target device detects a signal lamp based on image frames captured by the image capturing device, the physical size of the detected signal lamp's outer frame may be further determined; specifically, the first image frame and the second image frame need to be determined first. The first image frame may be the first image frame in which the signal lamp is detected, or an image frame acquired after the signal lamp is detected; the second image frame may be an image frame acquired after the first image frame. The difference between the first moment of acquiring the first image frame and the second moment of acquiring the second image frame is a preset duration.
In step 203, the physical size of the outer frame of the signal lamp is determined according to the first image frame and the second image frame.
In this embodiment, the physical size of the outer frame of the signal lamp may be determined from the first image frame and the second image frame. Specifically, the running distance of the target device between the first time, at which the first image frame was captured, and the second time, at which the second image frame was captured, may be determined first; this is also the running distance of the image pickup apparatus mounted on the target device. For example, the running distance may be obtained by computing the average speed of the target device between the first and second times from data collected by a speed sensor installed on the target device, and multiplying that average speed by the preset duration separating the two times. As another example, positioning information of the target device between the first and second times may be used to obtain its movement track, and the running distance may be determined from the movement track.
Next, a pixel size of an outer frame of the signal light in the first image frame is determined as a first pixel size, and a pixel size of the outer frame of the signal light in the second image frame is determined as a second pixel size. And determining the physical size of the outer frame of the signal lamp based on the running distance of the target equipment between the first time and the second time, the first pixel size and the second pixel size. Specifically, the physical size of the signal lamp outer frame can be roughly estimated according to the following formula:
$$L_\alpha = \frac{D \cdot L_1 \cdot L_2}{F \cdot (L_2 - L_1)}$$
where Lα is the physical size of the signal lamp outer frame, D is the running distance of the target device between the first time and the second time, F is the focal length of the camera corresponding to the image pickup apparatus installed on the target device, L1 is the first pixel size of the signal lamp outer frame in the first image frame, and L2 is the second pixel size of the outer frame in the second image frame. It should be noted that if the physical size of the outer frame represents its actual width, the first and second pixel sizes both represent the pixel width of the outer frame; if it represents the actual length, they both represent the pixel length; and if it represents the actual diagonal, they both represent the pixel length of the outer frame's diagonal.
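Under the pinhole model this estimate follows from L1 = F·Lα/d1, L2 = F·Lα/d2, and D = d1 − d2. A direct sketch (the function and variable names are ours):

```python
def estimate_physical_size(D, F, L1, L2):
    """Estimate the physical size L_alpha of the signal lamp outer frame.

    D  : running distance of the target device between the two frames (m)
    F  : camera focal length, in pixels
    L1 : pixel size of the outer frame in the first (earlier) image frame
    L2 : pixel size of the outer frame in the second (later) image frame
    """
    if L2 <= L1:
        raise ValueError("the outer frame should appear larger after approaching")
    return D * L1 * L2 / (F * (L2 - L1))
```

For example, with F = 1000 px, a 10 m approach, and the frame growing from 20 px to 25 px, the estimated physical size is 1.0 m.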
In step 204, the target pixel size and center point position of the outer frame of the signal lamp in the current image frame are determined.
In step 205, a target area corresponding to the next image frame is determined based on the physical size, the target pixel size and the center point position, so as to detect a signal lamp in the target area of the next image frame.
It should be noted that, for the same steps as in the embodiment of fig. 1A, details are not repeated in the embodiment of fig. 2, and related contents may refer to the embodiment of fig. 1A.
According to the signal lamp detection method provided by the above embodiment of the application, captured image frames are acquired; in response to the target device detecting a signal lamp based on the captured image frames, a first image frame and a second image frame are determined; the physical size of the signal lamp's outer frame is determined from the first and second image frames; the target pixel size and center point position of the outer frame in the current image frame are determined; and a target area for the next image frame is determined based on the physical size, the target pixel size, and the center point position, so that the signal lamp is detected within the target area of the next image frame. Because the physical size of the outer frame is determined from the first and second image frames after the signal lamp has been detected in the captured frames, the resulting physical size is more accurate. This helps reduce the false detection rate of signal lamp detection and further improves the efficiency of signal lamp detection.
As shown in fig. 3, fig. 3 is a flowchart illustrating another method for detecting a signal lamp according to an exemplary embodiment, which describes in detail a process of determining a target area corresponding to a next image frame, and the method may be applied to a target device, which is an intelligent driving device, and includes the following steps:
in step 301, acquired image frames are acquired.
In step 302, in response to the target device detecting a signal lamp based on the captured image frames, a physical dimension of an outer frame of the signal lamp is determined.
In step 303, the target pixel size and center point position of the outer frame of the signal lamp in the current image frame are determined.
In step 304, a camera focal length and a camera internal parameter matrix corresponding to the image capturing device for capturing the image frame are determined.
In step 305, a target distance between a center point of the outer frame of the current signal lamp and the image pickup apparatus is determined based on the physical size, the target pixel size, and the camera focal length.
In this embodiment, the camera focal length and the camera internal parameter matrix corresponding to the image capturing device for capturing the image frame may be acquired from the pre-stored data. Then, based on the physical size of the signal lamp outer frame, the target pixel size of the signal lamp outer frame in the current image frame and the camera focal length corresponding to the camera device, the target distance between the center point of the current signal lamp outer frame and the camera device is determined.
Specifically, the target distance between the center point of the outer frame of the current signal lamp and the image pickup apparatus can be roughly estimated according to the following formula:
d = L × f / l

wherein L is the physical size of the outer frame of the signal lamp, f is the camera focal length (in pixels) corresponding to the image pickup apparatus mounted on the target device, l is the target pixel size of the outer frame of the signal lamp in the current image frame, and d is the target distance between the center point of the current outer frame of the signal lamp and the image pickup apparatus.
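This similar-triangles estimate can be sketched in a few lines (an illustrative sketch, not the patent's implementation; the function and parameter names are invented):

```python
def estimate_distance(physical_size_m: float, focal_px: float, pixel_size_px: float) -> float:
    """Rough pinhole-camera range estimate: d = L * f / l."""
    if pixel_size_px <= 0:
        raise ValueError("pixel size must be positive")
    return physical_size_m * focal_px / pixel_size_px
```

For example, a 0.9 m lamp housing imaged at 30 px through a 1000 px focal length sits roughly 30 m from the camera.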
In step 306, the difference in height between the center point of the outer frame of the signal lamp and the image pickup apparatus is determined.
In step 307, target coordinates of the center point of the outer frame of the current signal lamp in the coordinate system of the image pickup apparatus are determined based on the height difference, the target distance, the center point position, and the camera internal reference matrix.
In general, the height of a traffic signal lamp is relatively fixed, and the mounting height of the image pickup apparatus on the target device is also known. Therefore, the height difference between the center point of the outer frame of the signal lamp and the image pickup apparatus can be calculated in advance from these two heights and stored; when the height difference is needed, it can be retrieved directly from the pre-stored data.
In this embodiment, the target coordinates of the central point of the outer frame of the current signal lamp in the coordinate system of the image pickup apparatus may be determined based on the height difference, the target distance, the central point position, and the camera internal reference matrix. Specifically, if the target coordinates are (X, Y, Z), the target coordinates can be calculated by solving the following equation set:
Px - cx = fx · X / Y
Py - cy = fy · Z / Y
X² + Y² + Z² = d²

wherein X and Y are the unknowns of the equation set; Z is the height difference between the center point of the outer frame of the signal lamp and the image pickup apparatus mounted on the target device; d is the target distance between the center point of the current outer frame of the signal lamp and the image pickup apparatus; and fx and fy are parameters in the camera internal reference matrix corresponding to the image pickup apparatus, wherein the camera internal reference matrix is

⎡ fx  0   cx ⎤
⎢ 0   fy  cy ⎥
⎣ 0   0   1  ⎦

Px and Py are the coordinate values of the center point position of the outer frame of the signal lamp in the current image frame, i.e., the center point position is (Px, Py), and cx and cy are the principal point coordinates in the camera internal reference matrix.
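One way to recover the target coordinates from pinhole relations of this kind is to take the forward depth from the vertical equation and the lateral offset from the horizontal one, using the measured range only as a consistency check. This is a minimal sketch under assumed conventions (the Y axis as the optical axis, the Z axis vertical, an ideal undistorted camera; all names are illustrative):

```python
import math

def lamp_center_in_camera_frame(px, py, cx, cy, fx, fy, z_height, d):
    """Recover (X, Y, Z) of the lamp center from its pixel position.

    Uses the vertical pinhole equation to get the forward depth Y, then
    the horizontal one for X; the measured range d serves as a consistency
    check against X^2 + Y^2 + Z^2 = d^2.
    """
    y_depth = fy * z_height / (py - cy)   # from (Py - cy) = fy * Z / Y
    x = (px - cx) * y_depth / fx          # from (Px - cx) = fx * X / Y
    residual = abs(math.sqrt(x**2 + y_depth**2 + z_height**2) - d)
    return x, y_depth, z_height, residual
```

In practice the residual would flag a mismatch between the two-frame range estimate and the projected position, e.g. due to a wrong physical-size estimate.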
In step 308, a target speed of the current target device is determined.
In step 309, a target area corresponding to the next image frame is determined according to the target speed and the target coordinates.
In this embodiment, the current target speed of the target device may be obtained, for example, the current speed may be acquired as the target speed by using a speed sensor installed in the target device. And then, determining a target area corresponding to the next image frame according to the target speed and the target coordinates.
Specifically, first, a first partial velocity and a second partial velocity of the target velocity may be determined, the first partial velocity and the second partial velocity being respectively partial velocities in two coordinate axis directions parallel to the ground in the coordinate system of the image pickup apparatus. For example, two coordinate axes parallel to the ground in the coordinate system of the image pickup apparatus are an x-axis and a y-axis, respectively. The direction of the first component speed is the same as the direction of the x axis, and the direction of the second component speed is the same as the direction of the y axis. Alternatively, the direction of the second component velocity is the same as the x-axis direction, and the direction of the first component velocity is the same as the y-axis direction.
Then, the first component speed and the second component speed are each divided by the frame rate (i.e., multiplied by the inter-frame time) to obtain a first difference value and a second difference value, wherein the frame rate is the number of image frames collected by the image pickup apparatus per second. The target coordinates are corrected with the first difference value and the second difference value according to the movement direction of the target device to obtain an estimated position of the center point of the outer frame of the signal lamp in the next image frame, and a preset range around the estimated position is determined as the target area. The preset range may be any reasonable range, which is not limited in the present application.
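The per-frame correction described above can be sketched as follows (a hedged illustration: the sign convention for the component speeds and the half-width of the preset range are assumptions, not taken from the patent):

```python
def predict_target_area(x, y, vx, vy, frame_rate_hz, half_width_m=1.0):
    """Shift the lamp's ground-parallel camera-frame coordinates by one
    frame of ego-motion and return a square search region around the
    estimate.

    vx, vy are the ego-vehicle component speeds along the two
    ground-parallel camera axes; the lamp appears to move opposite to
    the vehicle's motion.
    """
    dt = 1.0 / frame_rate_hz
    est_x = x - vx * dt          # first difference value = vx * dt
    est_y = y - vy * dt          # second difference value = vy * dt
    return (est_x - half_width_m, est_x + half_width_m,
            est_y - half_width_m, est_y + half_width_m)
```

For a vehicle driving straight at the lamp at 10 m/s with a 10 Hz camera, the predicted center moves about 1 m closer per frame.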
It should be noted that, for the same steps as in the embodiment of fig. 1A and fig. 2, details are not repeated in the embodiment of fig. 3, and related contents may refer to the embodiment of fig. 1A and fig. 2.
In the signal lamp detection method provided by this embodiment of the application, the physical size of the outer frame of the signal lamp is determined; the target pixel size and the center point position of the outer frame in the current image frame are determined; the camera focal length and the camera internal reference matrix corresponding to the image pickup apparatus that collects the image frames are determined; the target distance between the center point of the current outer frame of the signal lamp and the image pickup apparatus is determined based on the physical size, the target pixel size and the camera focal length; the height difference between the center point of the outer frame and the image pickup apparatus is determined; the target coordinates of the center point of the current outer frame in the coordinate system of the image pickup apparatus are determined based on the height difference, the target distance, the center point position and the camera internal reference matrix; and the current target speed of the target device is determined, and the target area corresponding to the next image frame is determined according to the target speed and the target coordinates. In this way, the target area corresponding to the next image frame can be determined more accurately, which further reduces the false detection rate of signal lamp detection, reduces the amount of calculation, and improves signal lamp detection efficiency.
As shown in fig. 4, fig. 4 is a flowchart illustrating another signal lamp detection method according to an exemplary embodiment, which describes a process of constructing a signal lamp state sequence, and the method may be applied to a target device, which is a smart driving device, and includes the following steps:
in step 401, acquired image frames are acquired.
In step 402, in response to the target device detecting a signal lamp based on the captured image frames, a physical dimension of an outer frame of the signal lamp is determined.
In step 403, the target pixel size and center point position of the outer frame of the signal lamp in the current image frame are determined.
In step 404, a target area corresponding to the next image frame is determined based on the physical size, the target pixel size and the center point position, so as to detect a signal lamp in the target area of the next image frame.
In step 405, the ID and status data for each detected signal lamp is determined.
In this embodiment, the ID and status data of each detected signal lamp may be determined. The ID of a signal lamp is a unique identifier used to distinguish different signal lamps; different signal lamps detected during the same drive are assigned different IDs. Specifically, after a signal lamp is detected for the first time, a corresponding ID may be set for it, and the target area of the next image frame is continuously predicted as image frames continue to be acquired. If a signal lamp is detected in the target area of the next image frame, the ID of the detected signal lamp is unchanged; when a signal lamp is detected outside the target area of the next image frame, a new ID is set for the newly detected signal lamp.
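The ID-assignment rule just described might be sketched as follows (an illustrative sketch; the simple rectangle-containment test stands in for whatever association criterion the detector actually uses):

```python
import itertools

_next_id = itertools.count(1)   # fresh IDs for newly seen lamps

def assign_id(detection_xy, predicted_area, previous_id):
    """Keep the previous ID if the detection falls inside the predicted
    target area of the next frame; otherwise allocate a fresh ID."""
    x, y = detection_xy
    x_min, x_max, y_min, y_max = predicted_area
    if previous_id is not None and x_min <= x <= x_max and y_min <= y <= y_max:
        return previous_id
    return next(_next_id)
```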
In the present embodiment, the status data of the signal lights is used to provide indication information for driving, and the status data of the signal lights may include, but is not limited to, color data of the signal lights, indicator data, and the like.
In step 406, based on the ID and status data of each detected signal, a signal status sequence corresponding to each ID is constructed, and the signal status sequence is used for driving decision.
In this embodiment, for each detected signal lamp, a signal lamp state sequence corresponding to the ID may be constructed based on the ID and the state data of the signal lamp. The signal lamp state sequence is obtained by arranging the state data of the signal lamp according to the time sequence, and can represent the state change condition of the signal lamp. The signal lamp state sequence can be used for driving decision, and a driver or unmanned equipment can make and execute corresponding driving decision according to the state change condition represented by the signal lamp state sequence.
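Maintaining per-ID status sequences can be as simple as a keyed, time-ordered list (an illustrative sketch; the class and field names are assumptions, and status data is reduced to a color string):

```python
from collections import defaultdict

class LampStateTracker:
    """Append (timestamp, color) observations per signal-lamp ID, in order."""

    def __init__(self):
        self._sequences = defaultdict(list)

    def observe(self, lamp_id, timestamp, color):
        self._sequences[lamp_id].append((timestamp, color))

    def sequence(self, lamp_id):
        return list(self._sequences[lamp_id])
```

A driving-decision module would then consume each per-ID sequence rather than raw per-frame detections, so that a second lamp entering the view during a turn does not overwrite the state of the lamp being tracked.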
In the signal lamp detection method provided by the above embodiment of the application, the ID and status data of each detected signal lamp are determined, and a signal lamp status sequence corresponding to each ID is constructed based on them; the status sequence is used for driving decisions. Because this embodiment tracks each detected signal lamp by ID and maintains a status sequence per ID, wrong driving decisions can be avoided when the vehicle is turning and several signal lamps are detected at once, improving the accuracy of driving decisions.
It should be noted that although in the above embodiments, the operations of the methods of the present application were described in a particular order, this does not require or imply that these operations must be performed in that particular order, or that all of the illustrated operations must be performed, to achieve desirable results. Rather, the steps depicted in the flowcharts may change the order of execution. Additionally or alternatively, certain steps may be omitted, multiple steps combined into one step execution, and/or one step broken down into multiple step executions.
In some optional implementations, the method according to the embodiment shown in fig. 4 may further include: and based on a preset sequence rule, checking the signal lamp state sequence corresponding to each ID to obtain a check result, wherein the check result is used for indicating the signal lamp state sequence with errors, and correcting the signal lamp state sequence with errors indicated by the check result.
In this embodiment, the signal lamp status sequence needs to satisfy certain rules; for example, a yellow light must be followed by a red light, not a green light. Therefore, the sequence rules that a signal lamp status sequence must satisfy may be set in advance, and the status sequence corresponding to any ID may then be checked against those rules. If the obtained inspection result indicates that a status sequence is erroneous, the erroneous sequence is corrected so as to obtain a correct status sequence.
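A minimal check against such a rule could look like this (the allowed-transition table is an illustrative assumption; actual signal phases differ by jurisdiction):

```python
# Allowed next colors after each color; self-transitions model a lamp
# holding its state across consecutive frames.
ALLOWED = {
    "green": {"green", "yellow"},
    "yellow": {"yellow", "red"},
    "red": {"red", "green"},
}

def first_rule_violation(colors):
    """Return the index of the first illegal transition, or None."""
    for i in range(1, len(colors)):
        if colors[i] not in ALLOWED[colors[i - 1]]:
            return i
    return None
```

A violation index points at the observation to correct, e.g. a single misclassified frame inside an otherwise consistent sequence.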
The embodiment can check the signal lamp state sequence corresponding to each ID based on the preset sequence rule, and correct the error signal lamp state sequence indicated by the checked check result. Therefore, the accuracy of the signal lamp state sequence can be improved, so that wrong driving decisions are further avoided, and the accuracy of the driving decisions is improved.
Corresponding to the embodiment of the detection method of the signal lamp, the application also provides an embodiment of a detection device of the signal lamp.
As shown in fig. 5, fig. 5 is a block diagram of a signal lamp detection apparatus according to an exemplary embodiment of the present application, where the apparatus may include: an acquisition module 501, a first determination module 502, a second determination module 503, and a prediction module 504.
The obtaining module 501 is configured to obtain a collected image frame.
A first determining module 502 for determining a physical size of an outer frame of a signal lamp in response to the target device detecting the signal lamp based on the captured image frames.
And a second determining module 503, configured to determine a target pixel size and a center point position of the outer frame of the signal lamp in the current image frame.
And the prediction module 504 is configured to determine a target area corresponding to a next image frame based on the physical size, the target pixel size, and the center point position, so as to detect the signal lamp in the target area of the next image frame.
As shown in fig. 6, fig. 6 is a block diagram of another signal lamp detection apparatus shown in the present application according to an exemplary embodiment, where on the basis of the foregoing embodiment shown in fig. 5, the first determining module 502 may include: a first determination submodule 601 and a second determination submodule 602.
The first determining submodule 601 is configured to determine a first image frame and a second image frame, where a difference between a first time at which the first image frame is acquired and a second time at which the second image frame is acquired is a preset time duration.
The second determining submodule 602 is configured to determine a physical size of an outer frame of the signal lamp according to the first image frame and the second image frame.
In some optional embodiments, the second determining submodule 602 is configured to: determine the running distance of the target device between the first time and the second time; determine a first pixel size of the outer frame of the signal lamp in the first image frame and a second pixel size in the second image frame; and determine the physical size of the outer frame of the signal lamp based on the running distance, the first pixel size and the second pixel size.
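Under the pinhole model this two-frame measurement reduces to similar triangles: with l1 = f·L/d1, l2 = f·L/d2 and travel distance s = d1 - d2, the physical size is L = s·l1·l2 / (f·(l2 - l1)). A sketch of that relation (the variable names are illustrative, and the derivation assumes the vehicle drives roughly toward the lamp):

```python
def physical_size_from_two_frames(travel_m, l1_px, l2_px, focal_px):
    """Estimate the lamp outer-frame physical size from its pixel size in
    two frames separated by a known forward travel distance.

    l2_px must exceed l1_px: the lamp appears larger in the later,
    closer frame.
    """
    if l2_px <= l1_px:
        raise ValueError("the lamp must appear larger in the closer frame")
    return travel_m * l1_px * l2_px / (focal_px * (l2_px - l1_px))
```

For instance, a lamp growing from 20 px to 30 px over 15 m of travel with a 1000 px focal length comes out at 0.9 m.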
As shown in fig. 7, fig. 7 is a block diagram of another signal lamp detection apparatus according to an exemplary embodiment of the present application, where on the basis of the foregoing embodiment shown in fig. 5, the prediction module 504 may include: a parameter determination sub-module 701, a distance determination sub-module 702, a height determination sub-module 703, a coordinate determination sub-module 704, a velocity determination sub-module 705, and a prediction sub-module 706.
The parameter determining submodule 701 is configured to determine a camera focal length and a camera internal parameter matrix corresponding to the image capturing device for acquiring the image frame.
And a distance determining submodule 702, configured to determine a target distance between a center point of the outer frame of the current signal lamp and the image capturing apparatus based on the physical size, the target pixel size, and the camera focal length.
And a height determining submodule 703 for determining a height difference between a center point of the outer frame of the signal lamp and the image pickup apparatus.
And a coordinate determination submodule 704, configured to determine a target coordinate of the center point of the outer frame of the current signal lamp in a coordinate system of the image capturing apparatus based on the height difference, the target distance, the center point position, and the camera internal reference matrix.
A speed determination submodule 705 is used to determine the target speed of the current target device.
And the prediction sub-module 706 is configured to determine a target area corresponding to the next image frame according to the target speed and the target coordinates.
In further alternative embodiments, the prediction sub-module 706 is configured to: determine a first component speed and a second component speed of the target speed, the first component speed and the second component speed being the component speeds in the two coordinate axis directions parallel to the ground in the coordinate system of the image pickup apparatus; divide the first component speed and the second component speed by the frame rate to obtain a first difference value and a second difference value; correct the target coordinates with the first difference value and the second difference value to obtain an estimated position of the center point of the outer frame of the signal lamp in the next image frame; and determine a preset range around the estimated position as the target area.
As shown in fig. 8, fig. 8 is a block diagram of another signal lamp detection apparatus according to an exemplary embodiment of the present application, where the apparatus may further include, on the basis of the foregoing embodiment shown in fig. 5: a third determination module 505 and a construction module 506.
The third determining module 505 is configured to determine an ID and status data of each detected signal lamp.
And a constructing module 506, configured to construct a signal lamp state sequence corresponding to each ID based on the IDs and the state data, where the signal lamp state sequence is used for driving decision.
As shown in fig. 9, fig. 9 is a block diagram of another signal lamp detection apparatus according to an exemplary embodiment of the present application, where the apparatus may further include, on the basis of the foregoing embodiment shown in fig. 8: a verification module 507 and an error correction module 508.
The checking module 507 is configured to check the signal lamp state sequence corresponding to each ID based on a preset sequence rule to obtain a checking result, where the checking result is used to indicate an erroneous signal lamp state sequence.
And the error correction module 508 is used for correcting the erroneous signal lamp status sequence indicated by the inspection result.
It should be understood that the above-mentioned apparatus may be preset in the intelligent driving device, and may also be loaded into the intelligent driving device by downloading or the like. The corresponding module in the device can be matched with the module in the intelligent driving equipment to realize a signal lamp detection scheme.
For the device embodiments, since they substantially correspond to the method embodiments, reference may be made to the partial description of the method embodiments for relevant points. The above-described embodiments of the apparatus are merely illustrative, and the units described as separate parts may or may not be physically separate, and parts displayed as units may or may not be physical units, may be located in one place, or may be distributed on a plurality of network units. Some or all of the modules can be selected according to actual needs to achieve the purpose of the scheme of the application. One of ordinary skill in the art can understand and implement it without inventive effort.
An embodiment of the present application further provides a computer-readable storage medium, where a computer program is stored, and the computer program may be used to execute the method for detecting a signal lamp provided in any one of the embodiments of fig. 1A to 4.
Corresponding to the signal lamp detection method described above, the embodiment of the present application further provides a schematic structural diagram of an electronic device according to an exemplary embodiment of the present application, shown in fig. 10. Referring to fig. 10, at the hardware level, the electronic device includes a processor, an internal bus, a network interface, a memory, and a non-volatile memory, but may also include hardware required for other services. The processor reads the corresponding computer program from the nonvolatile memory into the memory and then runs the computer program to form the detection device of the signal lamp on the logic level. Of course, besides the software implementation, the present application does not exclude other implementations, such as logic devices or a combination of software and hardware, and the like, that is, the execution subject of the following processing flow is not limited to each logic unit, and may also be hardware or logic devices.
Other embodiments of the present application will be apparent to those skilled in the art from consideration of the specification and practice of the invention disclosed herein. This application is intended to cover any variations, uses, or adaptations of the invention following, in general, the principles of the application and including such departures from the present disclosure as come within known or customary practice within the art to which the invention pertains. It is intended that the specification and examples be considered as exemplary only, with a true scope and spirit of the application being indicated by the following claims.
It will be understood that the present application is not limited to the precise arrangements described above and shown in the drawings and that various modifications and changes may be made without departing from the scope thereof. The scope of the application is limited only by the appended claims.

Claims (9)

1. A method of detecting a signal lamp, the method comprising:
acquiring an acquired image frame;
in response to a target device detecting a signal lamp based on the acquired image frames, determining a physical dimension of an outer frame of the signal lamp;
determining the size and the position of a central point of a target pixel of an outer frame of the signal lamp in a current image frame;
determining a camera focal length and a camera internal parameter matrix corresponding to a camera device for acquiring image frames; determining a target distance between a center point of an outer frame of the current signal lamp and the camera equipment based on the physical size, the target pixel size and the camera focal length; determining the height difference between the central point of the outer frame of the signal lamp and the camera equipment; determining a target coordinate of a central point of an outer frame of the current signal lamp in a coordinate system of the camera equipment based on the height difference, the target distance, the central point position and the camera internal reference matrix; determining the current target speed of the target equipment; and determining a target area corresponding to the next image frame according to the target speed and the target coordinates so as to detect the signal lamp in the target area of the next image frame.
2. The method of claim 1, wherein said determining a physical dimension of an outer frame of said signal lamp comprises:
determining a first image frame and a second image frame; the difference between the first moment of acquiring the first image frame and the second moment of acquiring the second image frame is preset duration;
and determining the physical size of the outer frame of the signal lamp according to the first image frame and the second image frame.
3. The method of claim 2, wherein determining a physical size of an outer frame of the signal lamp from the first image frame and the second image frame comprises:
determining a running distance of the target device between the first time and the second time;
determining a first pixel size of an outer frame of the signal lamp in the first image frame and a second pixel size in the second image frame;
and determining the physical size of the outer frame of the signal lamp based on the running distance, the first pixel size and the second pixel size.
4. The method of claim 1, wherein determining a target region corresponding to a next image frame according to the target speed and the target coordinates comprises:
determining a first component speed and a second component speed of the target speed; the first component speed and the second component speed are component speeds in two coordinate axis directions parallel to the ground under a coordinate system of the camera equipment;
dividing the first component speed and the second component speed by a frame rate respectively to obtain a first difference value and a second difference value;
correcting the target coordinates by using the first difference value and the second difference value to obtain an estimated position of a central point of an outer frame of the signal lamp in a next image frame;
and determining a preset range around the estimated position as the target area.
5. The method according to any one of claims 1-4, further comprising:
determining the ID and status data of each detected signal lamp;
constructing a signal lamp state sequence corresponding to each ID based on the IDs and the state data; the signal light state sequence is used for driving decisions.
6. The method of claim 5, further comprising:
based on a preset sequence rule, checking a signal lamp state sequence corresponding to each ID to obtain a checking result; the inspection result is used for indicating an error signal lamp state sequence;
and correcting the error signal lamp state sequence indicated by the inspection result.
7. A signal light detection apparatus, comprising:
the acquisition module is used for acquiring the acquired image frames;
a first determination module, configured to determine a physical size of an outer frame of a signal lamp in response to a target device detecting the signal lamp based on the acquired image frames;
the second determination module is used for determining the target pixel size and the central point position of the outer frame of the signal lamp in the current image frame;
the prediction module is used for determining a camera focal length and a camera internal parameter matrix corresponding to the camera equipment for acquiring the image frame; determining a target distance between a center point of an outer frame of the current signal lamp and the camera equipment based on the physical size, the target pixel size and the camera focal length; determining the height difference between the central point of the outer frame of the signal lamp and the camera equipment; determining a target coordinate of a central point of an outer frame of the current signal lamp in a coordinate system of the camera equipment based on the height difference, the target distance, the central point position and the camera internal reference matrix; determining the current target speed of the target equipment; and determining a target area corresponding to the next image frame according to the target speed and the target coordinates so as to detect the signal lamp in the target area of the next image frame.
8. A computer-readable storage medium, characterized in that the storage medium stores a computer program which, when being executed by a processor, carries out the method of any of the preceding claims 1-6.
9. An electronic device comprising a memory, a processor and a computer program stored on the memory and executable on the processor, wherein the processor implements the method of any of claims 1-6 when executing the program.
CN201910503231.7A 2019-06-11 2019-06-11 Signal lamp detection method and device and electronic equipment Active CN110287828B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201910503231.7A CN110287828B (en) 2019-06-11 2019-06-11 Signal lamp detection method and device and electronic equipment


Publications (2)

Publication Number Publication Date
CN110287828A 2019-09-27
CN110287828B 2022-04-01

Also Published As

Publication number Publication date
CN110287828A (en) 2019-09-27

Similar Documents

Publication Publication Date Title
CN110287828B (en) Signal lamp detection method and device and electronic equipment
KR102483649B1 (en) Vehicle localization method and vehicle localization apparatus
JP6464783B2 (en) Object detection device
JP4702569B2 (en) Image processing apparatus for vehicle
US11157753B2 (en) Road line detection device and road line detection method
CN109949594A (en) Real-time traffic light recognition method
US10554951B2 (en) Method and apparatus for the autocalibration of a vehicle camera system
JP2014034251A (en) Vehicle traveling control device and method thereof
US10643466B2 (en) Vehicle search system, vehicle search method, and vehicle used therefor
CN102806913A (en) Novel lane line deviation detection method and device
CN105205459B (en) A kind of recognition methods of characteristics of image vertex type and device
CN111027381A (en) Method, device, equipment and storage medium for recognizing obstacle by monocular camera
CN106203381A (en) Obstacle detection method and device in a kind of driving
CN113435237B (en) Object state recognition device, recognition method, and computer-readable recording medium, and control device
JP2021128705A (en) Object state identification device
US11069049B2 (en) Division line detection device and division line detection method
JP7003972B2 (en) Distance estimation device, distance estimation method and computer program for distance estimation
AU2020257038A1 (en) Camera orientation estimation
JP2021081272A (en) Position estimating device and computer program for position estimation
CN111126154A (en) Method and device for identifying road surface element, unmanned equipment and storage medium
CN115249407B (en) Indicator light state identification method and device, electronic equipment, storage medium and product
CN112598314B (en) Method, device, equipment and medium for determining perception confidence of intelligent driving automobile
JP2018120303A (en) Object detection device
CN112797980B (en) Indoor unmanned vehicle guiding method and device and electronic equipment
US20220136859A1 (en) Apparatus and method for updating map

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant