CN113033464A - Signal lamp detection method, device, equipment and storage medium - Google Patents


Info

Publication number
CN113033464A
CN113033464A
Authority
CN
China
Prior art keywords
image
determining
signal
signal lamp
lamp
Prior art date
Legal status
Granted
Application number
CN202110385452.6A
Other languages
Chinese (zh)
Other versions
CN113033464B (en)
Inventor
刘博
Current Assignee
Apollo Zhilian Beijing Technology Co Ltd
Original Assignee
Apollo Zhilian Beijing Technology Co Ltd
Priority date
Filing date
Publication date
Application filed by Apollo Zhilian Beijing Technology Co Ltd
Priority to CN202110385452.6A
Publication of CN113033464A
Application granted
Publication of CN113033464B
Legal status: Active

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 20/00 Scenes; Scene-specific elements
    • G06V 20/50 Context or environment of the image
    • G06V 20/56 Context or environment of the image exterior to a vehicle by using sensors mounted on the vehicle
    • G06V 20/58 Recognition of moving objects or obstacles, e.g. vehicles or pedestrians; Recognition of traffic objects, e.g. traffic signs, traffic lights or roads
    • G06V 20/584 Recognition of moving objects or obstacles, e.g. vehicles or pedestrians; Recognition of traffic objects, e.g. traffic signs, traffic lights or roads of vehicle lights or traffic lights
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 20/00 Machine learning
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 10/00 Arrangements for image or video recognition or understanding
    • G06V 10/20 Image preprocessing
    • G06V 10/25 Determination of region of interest [ROI] or a volume of interest [VOI]

Abstract

The invention discloses a signal lamp detection method, device, equipment, and storage medium, relating to the fields of automatic driving, intelligent transportation, road information prediction, driving path planning, and the like, and in particular to the field of signal lamp detection. The specific implementation scheme is as follows: acquiring an image sequence collected for a signal lamp; determining a target differential image based on the image sequence, the target differential image being a differential image obtained based on images captured when the indication signal sent by the signal lamp is switched; and determining the reference position of the signal lamp in the image based on the determined target differential image. According to the scheme of the disclosure, the position of the signal lamp in each frame of image can be effectively detected.

Description

Signal lamp detection method, device, equipment and storage medium
Technical Field
The present disclosure relates to the fields of automatic driving, intelligent transportation, road information prediction, driving path planning, and the like, and in particular, to a signal lamp detection method, apparatus, device, and storage medium.
Background
In recent years, the Intelligent Transportation System (ITS) has emerged as a new technology: an advanced scientific means of realizing intelligent traffic management by comprehensively processing factors related to roads, traffic, people, the environment, and the like.
Intelligent traffic management based on signal lamps plays an important role in the field of intelligent transportation. Whether the position of a signal lamp in the pictures collected by an image acquisition device can be determined effectively influences, to a certain degree, the efficiency of intelligent traffic management. However, the relative position between the image acquisition device and the signal lamp is difficult to keep strictly unchanged, so the position of the signal lamp in the pictures acquired by the image acquisition device may change.
Disclosure of Invention
The present disclosure provides a method, apparatus, device, and storage medium for signal lamp detection.
According to an aspect of the present disclosure, there is provided a signal lamp detecting method including:
acquiring an image sequence acquired for the signal lamp;
determining a target difference image based on the image sequence; the target differential image is: a differential image obtained based on an image when the indication signal sent by the signal lamp is switched;
and determining the reference position of the signal lamp in the image based on the determined target differential image.
According to another aspect of the present disclosure, there is provided an indication signal identification method including:
acquiring a reference position, wherein the reference position is the position of a signal lamp in an image acquired by image acquisition equipment, and the reference position is obtained in advance by the signal lamp detection method in the first aspect;
acquiring an image acquired by image acquisition equipment;
determining the area of each frame of image corresponding to the signal lamp based on the acquired reference position;
identifying an indicating signal sent by the signal lamp based on the area of each frame of image corresponding to the signal lamp; wherein the indication signal comprises at least one of: light, image, graphics, text.
According to another aspect of the present disclosure, there is provided a road information display method, including:
acquiring an indicating signal sent by a signal lamp, wherein the indicating signal is obtained by the signal lamp detection method;
and generating road information based on the indication signal and the road section where the signal lamp is located, and displaying the road information.
According to another aspect of the present disclosure, there is provided a vehicle path planning method, including:
determining the current position of the vehicle;
acquiring indication signals sent by signal lamps whose positions satisfy a preset condition with respect to the current position of the vehicle, wherein the indication signals are obtained by the aforementioned signal lamp detection method;
and planning a path adopted by the vehicle when the vehicle runs in the future time based on the current position of the vehicle and the acquired indication signal.
According to another aspect of the present disclosure, there is provided a vehicle driving state planning method, including:
acquiring a path planned for the vehicle and indicating signals sent by signal lamps along the path; the indication signal is obtained by the signal lamp detection method;
determining a driving state of the vehicle at a next moment in time on the condition that the vehicle drives along the path based on the current position of the vehicle and the indication signal, wherein the driving state comprises at least one of the following: speed, direction of speed, acceleration, direction of acceleration.
According to another aspect of the present disclosure, there is provided an obstacle detection method including:
acquiring the position of an obstacle, the current motion state of the obstacle and an indication signal sent by a signal lamp corresponding to the obstacle as available information; the indication signal is obtained by the signal lamp detection method;
and predicting at least one of the position and the motion state of the obstacle at the next moment based on the available information.
According to another aspect of the present disclosure, there is provided an electronic device for implementing the method of any one of the preceding aspects.
According to another aspect of the present disclosure, there is provided an electronic device including:
at least one processor; and
a memory communicatively coupled to the at least one processor; wherein,
the memory stores instructions executable by the at least one processor to enable the at least one processor to perform the method of any one of the preceding aspects.
According to another aspect of the present disclosure, there is provided a non-transitory computer readable storage medium having stored thereon computer instructions for causing the computer to perform the method of any of the preceding aspects.
According to another aspect of the present disclosure, there is provided a computer program product comprising a computer program which, when executed by a processor, implements the method of any of the preceding aspects.
According to another aspect of the present disclosure, there is provided a roadside apparatus including the foregoing electronic apparatus.
According to another aspect of the present disclosure, a cloud control platform is provided, which includes the foregoing electronic device.
It should be understood that the statements in this section do not necessarily identify key or critical features of the embodiments of the present disclosure, nor do they limit the scope of the present disclosure. Other features of the present disclosure will become apparent from the following description.
Drawings
The drawings are included to provide a better understanding of the present solution and are not to be construed as limiting the present disclosure. Wherein:
fig. 1 is a scene schematic diagram of a signal light detection method according to an embodiment of the present disclosure;
FIG. 2 is another schematic view of a scene of a signal light detection method according to an embodiment of the present disclosure;
FIG. 3 is a schematic flow diagram of a signal light detection method according to an embodiment of the present disclosure;
FIG. 4 is a schematic diagram of a process for determining a position of a signal lamp in an image based on a sequence of images according to a signal lamp detection method of an embodiment of the present disclosure;
FIG. 5 is a schematic diagram of a process for determining a reference region according to a signal light detection method of an embodiment of the present disclosure;
FIG. 6 is a schematic diagram of a process for determining a target image based on an image sequence according to a signal light detection method of an embodiment of the present disclosure;
FIG. 7a is a schematic diagram of determining a position of a signal lamp in an image based on connected domains according to a signal lamp detection method of an embodiment of the present disclosure;
FIG. 7b is a schematic diagram of determining a position range of a signal lamp in an image based on connected domains according to a signal lamp detection method of an embodiment of the present disclosure;
FIG. 8 is a schematic diagram of a graph determined based on centroids in a signal lamp detection method according to an embodiment of the present disclosure;
FIG. 9 is a schematic diagram of a first electronic device according to an embodiment of the present disclosure;
FIG. 10 is a schematic flow chart diagram of an indication signal identification method according to an embodiment of the present disclosure;
FIG. 11 is a schematic flow chart diagram of a road information presentation method according to an embodiment of the disclosure;
FIG. 12 is a schematic flow chart diagram of a vehicle path planning method according to an embodiment of the present disclosure;
FIG. 13 is a schematic flow chart diagram of a vehicle driving state planning method according to an embodiment of the present disclosure;
fig. 14 is a schematic flow diagram of an obstacle detection method according to an embodiment of the present disclosure;
fig. 15 is a block diagram of an electronic device for implementing any one of the signal light detection method, the indication signal identification method, the road information display method, the vehicle path planning method, the vehicle driving state planning method, and the obstacle detection method according to the embodiment of the present disclosure.
Detailed Description
Exemplary embodiments of the present disclosure are described below with reference to the accompanying drawings, in which various details of the embodiments of the disclosure are included to assist understanding, and which are to be considered as merely exemplary. Accordingly, those of ordinary skill in the art will recognize that various changes and modifications of the embodiments described herein can be made without departing from the scope and spirit of the present disclosure. Also, descriptions of well-known functions and constructions are omitted in the following description for clarity and conciseness.
Before explaining the process in the present disclosure, a description will be given of a part of concepts and scenarios involved.
The signal light detection process in the present disclosure may be as shown in fig. 1. This process may be performed by a first electronic device, described below, which may be as shown in fig. 9. The first electronic device may be a physical device having a certain structure, or may be a virtual device implemented mainly based on a program.
The first electronic device may be applied to an electronic apparatus, so that the electronic apparatus may be used to perform the signal lamp detection process in the present disclosure.
In an optional embodiment of the present disclosure, the electronic device may include a communication interface with the image capturing device, so as to obtain the image sequence captured by the image capturing device.
In addition, in another alternative embodiment of the present disclosure, the image capturing device may be a component of an electronic device, and the electronic device may directly obtain the image sequence based on its own image capturing device and perform subsequent processing on the image sequence (for example, the processing may be processing on images, videos, data, and the like) to implement detection on the signal lamp.
The present disclosure does not limit the specific type of image capturing device, which may be a camera, a camcorder, an AI camera, or the like. Further, the image capturing device may also be a device used in conjunction with other sensors (e.g., infrared sensors, etc.). The present disclosure also does not specifically limit the type of data collected by the image capture device, which may be pictures, videos, and the like.
The electronic device in the present disclosure can be applied to roadside equipment, so that the roadside equipment can be used to perform the signal lamp detection process in the present disclosure.
In the present disclosure, the roadside apparatus may include a communication section and the like in addition to the aforementioned electronic apparatus. The present disclosure does not specifically limit the manner of cooperation between the electronic device and the communication component. In an alternative embodiment, the electronic device may be integrated with the communication component; in another alternative implementation of the present disclosure, the electronic device may be provided separately from the communication component.
The present disclosure further discloses a cloud control platform, which can perform data processing at a cloud end. The cloud control platform can comprise the electronic device in the disclosure. As can be seen, the cloud control platform can process data (e.g., an image sequence) acquired by the image acquisition device.
In an optional embodiment of the present disclosure, the cloud control platform may also be used to sense the state of various elements in the traffic environment that affect the vehicle's travel (e.g., static obstacles, dynamic obstacles, road signs, signal lights in the environment). The cloud control platform in the present disclosure may be part of the aforementioned intelligent transportation system.
It should be noted that, in different service scenarios, the cloud control platform may be referred to by different names; for example, it may also be called a vehicle-road cooperative management platform, an edge computing platform, a cloud computing platform, a central system, a cloud server, and the like.
In the present disclosure, a signal lamp may be a device that emits an indication signal into a traffic environment. The state of the signal lamp can be characterized by the indication signal it emits into the environment. The indication signal of the signal lamp may be a signal that conveys information by means of light, images, graphics, text, and the like. It can be seen that existing devices such as traffic lights can be used as the signal lamps in the present disclosure.
Further, the signal lamp may comprise at least one lamp head. Illustratively, as shown in FIG. 2, signal lamp 1 includes three lamp heads A, B and C. In the present disclosure, a lamp head may be a light-emitting unit on the signal lamp that emits an indication signal (e.g., a red signal or a green signal); in addition, in a scene where the signal lamp transmits the indication signal through a screen, a lamp head may also be a specific area on the screen used to show the indication signal.
It should be noted that the number of lamp heads included in the signal lamp is not limited in the present disclosure. In some scenarios, the signal lamp may include only one lamp head; in other scenarios, the signal lamp may include two or more lamp heads.
The image acquisition device in the present disclosure is at least used for capturing the indication signal sent by the signal lamp and generating an image accordingly. Illustratively, in the scene shown in fig. 2, the image acquisition device captures an image of the signal lamp and outputs image 1, in which the indication signal emitted by the signal lamp is visible. Thereafter, as shown in fig. 1, the image acquisition device sends the output image 1 to the first electronic device, so that the first electronic device can process image 1 to determine the position of the signal lamp in image 1 and/or in other images captured by the image acquisition device.
The present disclosure is not limited to a specific type of image capturing device, which may be a camera, a video camera, or the like. Further, the image capturing device may also be a device used in conjunction with other sensors (e.g., infrared sensors, etc.).
The images acquired by the image acquisition device are sequentially arranged according to the acquisition time to obtain a sequence comprising a plurality of images, namely the image sequence in the disclosure.
The present disclosure does not specifically limit the correspondence relationship between the image capturing device and the signal lamp. In one embodiment, the image capturing devices may correspond to the signal lamps one to one, i.e., a certain image capturing device is dedicated to image capturing for one signal lamp. In another embodiment, one image capturing apparatus may correspond to and image-capture a plurality of signal lamps. There may also be a case where one signal lamp corresponds to a plurality of image pickup apparatuses, that is, a plurality of image pickup apparatuses perform image pickup on a certain signal lamp.
Illustratively, in the scenario shown in fig. 1, the intersection is provided with four image capturing devices. The image acquisition equipment 1 acquires images of a signal lamp 1 and a signal lamp 2 in the intersection, and sends the acquired images to the first electronic device.
For convenience of description, the following description will be given taking detection of a certain signal lamp corresponding to a certain image capturing apparatus as an example. Optionally, if the signal lamp corresponding to an image capturing device is not unique, the process in the present disclosure may be executed for each signal lamp corresponding to the image capturing device.
In an alternative implementation scenario of the present disclosure, ideally the relative position of the image acquisition device and the signal lamp remains unchanged. As shown in fig. 2, the image acquisition device captures image 1 when it is at position 1, and the signal lamp is located near the upper right corner of image 1 (i.e., region 1 in fig. 2). If the image acquisition device always stays at position 1, the position of the signal lamp in the images it captures remains unchanged, i.e., the signal lamp is always located in region 1. In that case region 1 can be directly taken as the position of the signal lamp in the image, without any detection.
However, if the image acquisition device is adjusted to position 2 shown in fig. 2, its position and/or orientation changes, and the signal lamp is no longer in region 1 of image 2 captured by the device. If region 1 were still used as the position of the signal lamp in the image, other detections performed for the signal lamp (e.g., detection of channel values) would suffer serious errors.
It should be noted that the present disclosure does not particularly limit the reason why the position and/or posture of the image acquisition device changes. In an actual scene, the external environment (e.g., weather factors such as wind and rain) may cause the image acquisition device to deviate from the pose it had when installed; in addition, pose adjustments actively performed by the administrator of the image acquisition device may also cause such a deviation.
Further, the present disclosure also does not specifically limit how long the image acquisition device deviates from its installed pose. For example, the deviation may be temporary or intermittent, or it may be permanent.
To at least partially reduce the errors in signal lamp detection caused by a pose deviation of the image acquisition device, the present disclosure provides a signal lamp detection process that may include one or more of the following steps:
s300: and acquiring an image sequence acquired by the image acquisition equipment aiming at the signal lamp.
As can be seen from the foregoing, the image sequence in the present disclosure includes several frame images. In the example shown in fig. 4, the sequence of images includes image p1 through image pi+2.
In an alternative embodiment of the present disclosure, the first electronic device may sort all the images acquired by the image acquisition apparatus according to the acquisition time of the images, so as to obtain an image sequence. In yet another alternative embodiment of the present disclosure, the first electronic device may sort a part of the images acquired by the image acquisition apparatus according to the acquisition time of the images to obtain the image sequence.
Specifically, the process of obtaining the image sequence from a part of the images acquired by the image acquisition device may be: from the frames acquired by the image acquisition device, sequentially select frames whose acquisition times are separated by a preset duration, and arrange the selected frames in order of acquisition time to obtain the image sequence.
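For illustration only, a minimal Python sketch of this sampling step is given below; the frame container and its `timestamp` field are assumptions made for the example and are not part of the disclosed method.

```python
from dataclasses import dataclass
from typing import List

import numpy as np


@dataclass
class CapturedFrame:
    timestamp: float   # capture time in seconds (assumed field, not from the patent)
    image: np.ndarray  # H x W x 3 pixel array from the image acquisition device


def build_image_sequence(frames: List[CapturedFrame],
                         preset_duration: float = 1.0) -> List[CapturedFrame]:
    """Arrange frames by capture time and keep only frames whose capture
    times are at least `preset_duration` seconds apart."""
    ordered = sorted(frames, key=lambda f: f.timestamp)
    sequence: List[CapturedFrame] = []
    last_kept = None
    for frame in ordered:
        if last_kept is None or frame.timestamp - last_kept >= preset_duration:
            sequence.append(frame)
            last_kept = frame.timestamp
    return sequence
```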
In some scenarios, the number of images acquired by the image acquisition device may be large. Obtaining the image sequence from only a part of the acquired images (rather than all of them) helps reduce the amount of data processed by the first electronic device, and also reduces the errors introduced into signal lamp detection by images collected while the signal lamp exhibits abnormal phenomena such as flicker.
The preset duration can be determined according to actual service requirements. For example, the preset duration may be one second. In other alternative embodiments, the preset duration may be inversely related to the traffic flow of the road section where the signal lamp is located; that is, the more vehicles pass the road section per unit time, the shorter the preset duration, which helps the first electronic device detect the switching of the indication signal in a timely manner.
S302: a target differential image is determined based on the sequence of images.
The target differential image in this disclosure is a differential image obtained based on the images acquired by the image acquisition device when the indication signal sent by the signal lamp is switched.
In an alternative embodiment of the present disclosure, a differential image sequence may be obtained from the image sequence; each frame of differential image in the differential image sequence corresponds to two adjacent frames in the image sequence. As shown in fig. 4, every two adjacent images in the image sequence are differenced to obtain a corresponding differential image; for example, image p1 and image p2 are differenced to obtain differential image d1, and so on.
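For illustration only, a minimal sketch of the adjacent-frame differencing follows; using OpenCV's absolute difference is one plausible realization, since the patent does not prescribe a particular difference operator.

```python
import cv2


def difference_sequence(images):
    """Difference every pair of adjacent frames in the image sequence.
    `images` is a list of H x W x 3 uint8 arrays ordered by capture time;
    element i of the result corresponds to frames i and i+1."""
    return [cv2.absdiff(images[i], images[i + 1]) for i in range(len(images) - 1)]
```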
As can be seen from the principle of difference processing, if the channel values of the pixels at the same position (for example, the same coordinates in the image coordinate system) in two frames are the same (or substantially the same), the channel value of the resulting differential image at that position is zero (or close to zero); in that case both frames correspond to the state before, or both to the state after, a given switching. If the channel values at the same position differ greatly between the two frames, the channel value of the resulting differential image at that position is also large; in that case the two frames correspond respectively to before and after a given switching.
In the image sequence shown in fig. 4, the signal lamp does not switch between the acquisition times of image p1 and image p2; that is, region 1 in image p1 does not differ significantly from region 1 in image p2, so the differential image d1 obtained from images p1 and p2 shows no change at the position corresponding to the signal lamp. When image p2 is captured before the moment at which the indication signal of the signal lamp switches and image p3 is captured after it, the differential image d2 obtained from images p2 and p3 shows the change of the signal lamp at the corresponding position, and differential image d2 is a target differential image.
The present disclosure does not limit the number of target differential images determined from one image sequence. In the example shown in fig. 4, two target differential images are obtained, namely differential image d2 and differential image d3.
As can be seen, the difference processing performed on the images according to the present embodiment can directly recognize the change in the indication signal emitted from the signal lamp from the obtained difference image.
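For illustration only, a minimal sketch of selecting the target differential images from the differential image sequence follows; treating a differential image as a switching when a sufficient fraction of its pixels changed strongly is an assumption, and both thresholds are illustrative.

```python
def select_target_differences(diff_images, pixel_threshold=40, ratio_threshold=0.002):
    """Keep the differential images that plausibly correspond to a switching of
    the indication signal: those in which a sufficient fraction of pixels
    changed strongly in at least one channel."""
    targets = []
    for diff in diff_images:
        changed = diff.max(axis=2) > pixel_threshold  # strongest per-pixel channel change
        if changed.mean() > ratio_threshold:
            targets.append(diff)
    return targets
```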
Further, in another alternative embodiment of the present disclosure, the frames in the image sequence that correspond to a switching of the indication signal are determined first (for example, the switching moments may be determined from a switching period of the indication signal obtained in advance). Then, for each switching of the indication signal, difference processing is performed on the two frames before and after the switching to obtain a target differential image.
Therefore, the technical scheme of this embodiment only needs to perform difference processing on the images acquired when the indication signal is switched, which helps reduce the amount of computation required for the difference processing.
S304: Determine the reference position of the signal lamp in the images acquired by the image acquisition device based on the determined target differential images.
The two target differential images obtained by the above process each show, to different extents, the positions in the image of the lamp heads whose signals changed. As shown in fig. 4, the indication signals sent by lamp heads A and C change in differential image d2, so differential image d2 can show at least the positions of lamp heads A and C in the image. Likewise, differential image d3 can show at least the positions of lamp heads A and B in the image. Once the positions of the lamp heads in the image are determined, the position of the signal lamp in the image can be determined from them. The position of the signal lamp in the image may be obtained from the regions showing the lamp heads in the target differential images (for example, a circumscribed rectangle of the lamp-head regions in the target differential images may be used as the position of the signal lamp in the image), as shown by region 2 in fig. 4.
Therefore, with the signal lamp detection method provided by the present disclosure, the position of the signal lamp in each frame acquired by the image acquisition device is determined from the differential images obtained from the images acquired when the indication signal sent by the signal lamp switches. Because the difference between the two frames collected before and after a switching is obvious at the position of the signal lamp, the determined position of the signal lamp in the image is more accurate. In addition, the process in the present disclosure does not need to determine the difference between the actual pose of the image acquisition device and its historical installation pose; the position of the signal lamp in the image is output directly from the differential images, which helps reduce the computing resources consumed by the signal lamp detection process. Further, compared with an unchanged indication signal, a switched indication signal has a relatively obvious influence on the traffic environment.
It should be noted that the process in the present disclosure is described by taking a signal lamp with three lamp heads as an example; in fact, the signal lamp detection process provided by the present disclosure is also applicable to a signal lamp having one lamp head or any other number of lamp heads.
As shown in fig. 2 or fig. 4, region 1 (the portion corresponding to the signal lamp) occupies only part of the image and of the differential image. To further reduce the resources consumed in processing the regions of the image and the differential image other than region 1, and to reduce the interference of the image background (as shown in fig. 5) with subsequent processing, in an alternative embodiment of the present disclosure the image acquired when the image acquisition device is at the initial position may first be determined as the reference image. The reference image may be any frame in the acquired image sequence, or a frame acquired by the image acquisition device before the image sequence is collected.
Then, the signal lamp is recognized in the reference image, and a reference region is obtained from the reference position region corresponding to the signal lamp in the reference image. Specifically, the reference position region corresponding to the signal lamp in the reference image may be enlarged by a first predetermined size to obtain the reference region.
In the example shown in fig. 5, a region corresponding to the signal lamp (i.e., the reference position region) may be identified in the reference image and characterized by the coordinate range it occupies in the image coordinate system of the reference image. Then, the edges of the region are extended outward by a first predetermined size (for example, m pixels) to enlarge the region, and the enlarged region is used as the reference region. The obtained reference region can likewise be characterized by a coordinate range.
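For illustration only, a minimal sketch of this expansion step follows, with regions represented as (x, y, w, h) boxes in image coordinates; that representation is an assumption made for the example.

```python
def expand_region(x, y, w, h, margin, image_shape):
    """Grow the identified signal-lamp region (x, y, w, h) outward by `margin`
    pixels (the first predetermined size) on every side, clamped to the image
    bounds. Returns the reference region as (x, y, w, h)."""
    img_h, img_w = image_shape[:2]
    x0 = max(0, x - margin)
    y0 = max(0, y - margin)
    x1 = min(img_w, x + w + margin)
    y1 = min(img_h, y + h + margin)
    return x0, y0, x1 - x0, y1 - y0
```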
In this embodiment, the reference region is obtained by enlarging the range corresponding to the signal lamp in the reference image. In other embodiments of the present disclosure, the region of each lamp head of the signal lamp in the reference image may be identified, each such region may be expanded by the first predetermined size, and a circumscribed rectangle of the expanded regions may be used as the reference region. In addition, obtaining the reference region by drawing circumscribed figures with other numbers of sides (such as circumscribed triangles or circumscribed hexagons) around the expanded regions also falls within the protection scope of the present disclosure.
The first predetermined size may be an empirical value; it may also be positively correlated with the size of the region of the reference image corresponding to the signal lamp. Furthermore, the first predetermined size may also be positively correlated with the size of a lamp head in the image. For example, the diameter or radius of a lamp head in the image may be taken as the first predetermined size.
It can be seen that the reference area obtained by the process in the present disclosure can show the position of the signal lamp in the image captured by the image capturing device under the condition that the image capturing device is in the initial position. It should be noted that the initial position in the present disclosure may be determined according to actual requirements. For example, the installation position of the image capturing device may be used as an initial position, and the image captured by the image capturing device when the installation is completed is a reference image. In addition, the position of the image acquisition device at any time in the life cycle can be used as the initial position, and images acquired at other positions except the initial position can be compared with the reference image acquired at the initial position.
In an alternative embodiment, a manual labeling mode may be used to determine the region corresponding to the signal lamp in the reference image. In addition, an artificial intelligence model with a recognition function can be used for determining the area of the reference image corresponding to the signal lamp.
After the reference region is determined, for each frame in the image sequence, the portion of that frame corresponding to the reference region may be used as its sub-image, so that determining the target differential image based on the image sequence becomes determining it based on the sub-image sequence, as shown in fig. 6. Since the sub-image largely removes the parts of the image outside the reference region, i.e., largely removes the background, only the portion of each frame corresponding to the reference region needs to be processed subsequently, which effectively reduces the amount of data handled in the subsequent steps. Moreover, because the reference region obtained by the first-predetermined-size expansion is larger than the region originally corresponding to the signal lamp in the image, even if the pose of the image acquisition device has changed, most of the information about the signal lamp is still contained in the sub-image of each frame corresponding to the reference region.
It should be noted that, in the present disclosure, the sub-image is simply the part of the image used in the subsequent steps; when conditions allow, only the coordinate range of the sub-image within the image needs to be determined, and no separate image structure for the sub-image has to be generated. That is, a sub-image is a range within a frame, and the parts of that frame outside the range can simply be ignored in the subsequent steps. In other words, the sub-images shown in fig. 6 are schematic; a sub-image can be a purely abstract concept, and no graphic structure separate from the image needs to be generated for it.
However, generating a sub-image independent of the image by means of segmentation, matting, and the like in practical applications still falls within the protection scope of the present disclosure.
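For illustration only, a minimal sketch of restricting processing to the reference region follows; as noted above, the sub-image can simply be a view into the frame defined by the coordinate range.

```python
def sub_image(frame, reference_region):
    """Return the part of `frame` covered by the reference region (x, y, w, h).
    NumPy slicing returns a view, so no separate sub-image is materialized."""
    x, y, w, h = reference_region
    return frame[y:y + h, x:x + w]
```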
Then, for each switching of the indication signal sent by the signal lamp, the target differential image corresponding to that switching is determined from the differential image obtained from the sub-images corresponding to the switching. In the example shown in fig. 6, sub-image s2 and sub-image s3 correspond to the moments before and after a switching, so the differential image obtained from sub-images s2 and s3 is a target differential image. Sub-images s1 and s2 both correspond to before the switching, so the differential image obtained from them is not a target differential image.
In an optional embodiment of the present disclosure, the process of obtaining the position of the signal lamp in each frame from the target differential images may be: for each target differential image, determining the connected domains in that target differential image based on the channel value of each pixel; the connected domains are used for determining the reference position of the signal lamp in the images acquired by the image acquisition device. A connected domain corresponds to a region of the signal lamp with the same or similar channel values, which helps determine the position of a lamp head in the image.
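For illustration only, a minimal sketch of extracting connected domains from a target differential image follows; binarizing by a channel-value threshold before labeling is an assumption, and the threshold and 8-connectivity are illustrative choices.

```python
import cv2


def connected_domains(target_diff, pixel_threshold=40):
    """Binarize a target differential image by channel value and label its
    connected domains. Returns the label map plus per-component bounding-box
    statistics and centroids (the background label 0 is excluded)."""
    gray = cv2.cvtColor(target_diff, cv2.COLOR_BGR2GRAY)
    _, binary = cv2.threshold(gray, pixel_threshold, 255, cv2.THRESH_BINARY)
    num, labels, stats, centroids = cv2.connectedComponentsWithStats(binary, connectivity=8)
    return labels, stats[1:], centroids[1:]
```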
Fig. 7a shows connected domains A to E (for convenience of explanation, only the portion of target differential image 2 corresponding to the reference region is shown in fig. 7a). According to this embodiment, denoising, impurity removal, and screening can be performed on the determined connected domains, and the reference position of the signal lamp in the images acquired by the image acquisition device is then determined according to the connected domains corresponding to the lamp heads after the denoising and/or screening processing.
Existing methods for denoising an image and removing impurities are all applicable to the process in the present disclosure, where conditions allow.
Optionally, the denoising and impurity-removal process may be: performing an opening operation on the determined connected domains to denoise them; and/or judging whether the size of a connected domain exceeds a second predetermined size, so as to screen the connected domains corresponding to the lamp heads.
The opening operation can remove connected domains of small size (impurity points) in the target differential image, such as connected domain E shown in fig. 7a. Further, a connected domain larger than the second predetermined size of a lamp head may correspond to components such as the connecting mechanism or lamp holder of the signal lamp, such as connected domain D shown in fig. 7a. The connected domains A and C remaining after denoising and impurity removal correspond to lamp head A and lamp head C, respectively. Therefore, this step can accurately identify the connected domains of the image corresponding to the lamp heads.
The second predetermined size may be the maximum size of a lamp head shown in the image. For example, where the lamp head is circular, the second predetermined size is its diameter (diameter R as shown in fig. 7a); where the lamp head is rectangular, the second predetermined size is the length of its diagonal.
The second predetermined size may be determined in the aforementioned process of determining the region of the reference image corresponding to the signal lamp. In the example shown in fig. 5, the region of the reference image corresponding to the signal lamp can be determined, along with the number of lamp heads of the signal lamp. The length of the longer side of that region divided by the number of lamp heads can be taken as the second predetermined size. It can be seen that the second predetermined size of the lamp head obtained by this process is an approximation.
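For illustration only, a minimal sketch of the denoising and screening step follows, with the second predetermined size approximated as described above; the opening kernel size and the connectivity are illustrative assumptions.

```python
import cv2


def clean_connected_domains(binary, lamp_region_wh, num_lamp_heads):
    """Denoise the binary difference mask with an opening operation, then keep
    only connected domains no larger than the approximate maximum lamp-head
    size (longer side of the signal-lamp region divided by the lamp-head count)."""
    second_size = max(lamp_region_wh) / num_lamp_heads

    kernel = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (3, 3))
    opened = cv2.morphologyEx(binary, cv2.MORPH_OPEN, kernel)

    num, labels, stats, centroids = cv2.connectedComponentsWithStats(opened, connectivity=8)
    kept_stats, kept_centroids = [], []
    for i in range(1, num):  # label 0 is the background
        w = stats[i, cv2.CC_STAT_WIDTH]
        h = stats[i, cv2.CC_STAT_HEIGHT]
        if max(w, h) <= second_size:  # larger blobs: housing, connecting mechanism, etc.
            kept_stats.append(stats[i])
            kept_centroids.append(centroids[i])
    return kept_stats, kept_centroids
```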
Since the image acquisition device is likely to shift relative to the signal lamp in an actual scene, the size of the signal lamp in the image is not always constant. Through the above process, the size of the signal lamp in the reference image can be effectively determined, preventing the pose deviation of the image acquisition device from affecting the detection accuracy.
Thereafter, the reference position of the signal lamp in the images captured by the image acquisition device may be determined according to the connected domains corresponding to the lamp heads. Specifically, a circumscribed rectangle may be drawn around the connected domains corresponding to the lamp heads, and the position of that circumscribed rectangle in the image may be taken as the position of the signal lamp in the image. As shown in fig. 7a, the circumscribed rectangle of connected domain A and connected domain C may be used as the position of the signal lamp in the image.
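For illustration only, a minimal sketch of forming that circumscribed rectangle from the retained connected domains follows, reusing the per-component statistics from the previous sketch (an assumption of this example).

```python
import cv2


def lamp_position_from_domains(kept_stats):
    """Circumscribed rectangle over the connected domains that correspond to
    lamp heads, returned as (x, y, w, h) in image coordinates."""
    x0 = min(s[cv2.CC_STAT_LEFT] for s in kept_stats)
    y0 = min(s[cv2.CC_STAT_TOP] for s in kept_stats)
    x1 = max(s[cv2.CC_STAT_LEFT] + s[cv2.CC_STAT_WIDTH] for s in kept_stats)
    y1 = max(s[cv2.CC_STAT_TOP] + s[cv2.CC_STAT_HEIGHT] for s in kept_stats)
    return x0, y0, x1 - x0, y1 - y0
```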
In an actual scene, a signal lamp may have many lamp heads, and the lamp heads corresponding to the connected domains shown in a given target differential image may not include the lamp heads located at the ends of the signal lamp. As shown in fig. 7b, the signal lamp actually includes three lamp heads, but the indication signal sent by lamp head C (located at the end of the signal lamp) does not change, so lamp head C is not shown in the target differential image and its connected domain cannot be determined. In this case, because the distance between connected domain A' and connected domain B' is small, the relative error is enlarged, so the resulting position' of the signal lamp in the image may contain a certain error.
In an alternative embodiment of the present disclosure, to reduce this error, after the positions of the lamp heads in the image are obtained through the foregoing steps, the distance between any two lamp heads may be determined from their positions in the image.
If the spacing between two lamp heads indicates that there are other lamp heads between them (for example, the distance between their centroids is greater than a second threshold distance), the two lamp heads are determined to be available lamp heads. The reference position of the signal lamp in the images captured by the image acquisition device is then determined based on the positions of the available lamp heads in the image.
For the case where there is more than one lamp head and/or the signal lamp differs in size along its length and width directions, in an alternative example of the present disclosure, for each connected domain corresponding to a lamp head, the position of the centroid of that connected domain in the image may be determined, as shown in fig. 7a for the centroids of connected domains A and C. The position of a centroid in the image may be characterized by coordinates. Then, according to the positions of the centroids in the image, a connected relationship is determined to exist between any two centroids whose distance is smaller than a first threshold.
As shown in fig. 7a and 7b, if the distance between the centroid of connected domain A in target differential image 2 and the centroid of connected domain A' in target differential image 3 is smaller than the first threshold, a connected relationship exists between the two centroids, indicating that connected domain A and connected domain A' both correspond to lamp head A.
Meanwhile, the distance from the centroid of connected domain A in target differential image 2 to the centroid of connected domain B' in target differential image 3, and the distance from the centroid of connected domain A to the centroid of connected domain C in target differential image 2, are both greater than the first threshold. Therefore no connected relationship exists between the centroid of connected domain A and the centroid of connected domain B', nor between the centroid of connected domain A and the centroid of connected domain C, indicating that connected domains A and B' correspond to different lamp heads, as do connected domains A and C. In this way, the correspondence between connected domains and lamp heads can be determined accurately.
In the present disclosure, the centroid may be the geometric center of the connected domain.
Then, each centroid is taken as a node, and an edge is drawn between any two nodes whose distance is smaller than the first threshold (i.e., nodes whose centroids have a connected relationship), giving a graph as shown in fig. 8. The centroids corresponding to a group of nodes connected through edges in the graph are determined to be centroids corresponding to the same lamp head. In this way, which centroids correspond to which lamp heads can be determined quickly and accurately in graph form.
Then, for each lamp head, the positions in the image of the centroids corresponding to that lamp head can be averaged to obtain the position of the lamp head's centroid in the image. By synthesizing multiple centroid positions in this way, the error introduced when locating a lamp head can be effectively reduced, as sketched below.
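For illustration only, a minimal sketch of the graph-based grouping and the centroid averaging follows; representing the graph's connected groups with a union-find structure is an implementation choice, not something prescribed by the method.

```python
import itertools
import math


def group_centroids(centroids, first_threshold):
    """Treat every centroid (from all target differential images) as a node,
    connect nodes whose distance is below the first threshold, and return the
    connected groups of the resulting graph; each group is taken to belong to
    one lamp head."""
    parent = list(range(len(centroids)))

    def find(i):
        while parent[i] != i:
            parent[i] = parent[parent[i]]
            i = parent[i]
        return i

    for i, j in itertools.combinations(range(len(centroids)), 2):
        if math.dist(centroids[i], centroids[j]) < first_threshold:
            parent[find(i)] = find(j)  # drawing an edge merges the two groups

    groups = {}
    for i in range(len(centroids)):
        groups.setdefault(find(i), []).append(centroids[i])
    return list(groups.values())


def lamp_head_centroid(group):
    """Average the centroid positions belonging to one lamp head."""
    xs = [c[0] for c in group]
    ys = [c[1] for c in group]
    return sum(xs) / len(xs), sum(ys) / len(ys)
```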
Where the lamp heads have regular shapes (e.g., square, circular, hexagonal), the region corresponding to each lamp head in the image can be determined from the lamp-head size obtained in the previous step together with the position of the lamp head's centroid in the image. Optionally, after the regions of the lamp heads in the image are determined, a circumscribed rectangle of those regions may be drawn to obtain the position of the signal lamp in the image.
Because the process in the present disclosure involves concepts such as position and coordinates, each of them corresponds to an image coordinate system. After the position of an element (e.g., a centroid, a node, a connected domain, or the signal lamp) is determined in one type of picture (e.g., an image acquired by the image acquisition device, a target differential image, or the graph), its position in other types of pictures can be determined by calculation. For example, if a centroid can be located in a target differential image, its location in the corresponding graph can be determined. Thus, positions and coordinates in the present disclosure can be converted between different types of pictures without conflict.
Further, in the present disclosure, an element that has no area in the image, such as a centroid or a node, has a position that is a single coordinate point. For an element that has an area in the image, such as a connected domain or the signal lamp, its position may be the coordinate point of its centroid together with the size and shape of its outline; alternatively, its position may be the set of coordinate points of the individual pixels it covers in the image.
In some cases, the pose of the image acquisition device relative to the signal lamp changes over time. Target differential images obtained at historical moments far from the current moment may therefore adversely affect the determination of the position of the signal lamp in the image.
In an optional embodiment of the present disclosure, a specified number of connected domains whose source images have acquisition times closest to the current moment may be screened out from the connected domains corresponding to lamp heads, and the reference position of the signal lamp in the images acquired by the image acquisition device is determined from the screened connected domains. As shown in fig. 6, when the specified number is four, target differential image 2 and target differential image 3 each contain two connected domains corresponding to lamp heads, and those connected domains can be used in the subsequent steps.
Optionally, the specified number may be positively correlated with the number of lamp heads of the signal lamp.
In another optional embodiment of the present disclosure, target connected domains may be screened out from the connected domains corresponding to lamp heads. The target connected domains are: the specified number of connected domains whose source images were acquired closest to the current moment; or the connected domains for which the time from the acquisition of the source image to the current moment does not exceed a specified duration. The reference position of the signal lamp in the images acquired by the image acquisition device is then determined from the screened target connected domains.
Optionally, the specified duration may be positively correlated with the number of lamp heads of the signal lamp.
Therefore, the process in the present disclosure can effectively avoid errors in the signal lamp detection performed at the current moment that would otherwise be caused by images acquired relatively far from the current moment, while ensuring that the number of connected domains used in detection is not too small. A sketch of this screening step follows.
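For illustration only, a minimal sketch of the recency-based screening follows; pairing each connected domain with the acquisition time of its source image is an assumption made for the example.

```python
def screen_target_domains(domains, now, specified_count=None, specified_duration=None):
    """Each entry of `domains` is (capture_time, connected_domain). Keep either
    the `specified_count` domains whose source images are closest in time to
    `now`, or those acquired within `specified_duration` seconds of `now`.
    Exactly one of the two criteria is expected to be supplied."""
    if specified_count is not None:
        ordered = sorted(domains, key=lambda d: abs(now - d[0]))
        return ordered[:specified_count]
    return [d for d in domains if now - d[0] <= specified_duration]
```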
To further improve the accuracy of the determined position of the signal lamp, in an optional embodiment of the present disclosure, the reference position of the signal lamp in the image determined from the image sequence is taken as a candidate reference position; at least one other candidate reference position of the signal lamp in the image is determined based on other image sequences acquired for the signal lamp; and the target reference position of the signal lamp in the image is determined based on the candidate reference position and the at least one other candidate reference position.
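For illustration only, a minimal sketch of one plausible way to combine the candidate reference positions follows; averaging the box parameters is an assumption, since the method itself leaves the exact combination rule open.

```python
def fuse_candidate_positions(candidates):
    """Combine candidate reference positions (x, y, w, h) obtained from
    different image sequences into one target reference position by averaging
    the box parameters."""
    n = len(candidates)
    return tuple(sum(c[k] for c in candidates) / n for k in range(4))
```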
Thus, the target reference position of the signal lamp in the image is determined. As shown in fig. 4, the final result of the described process is the position of the signal lamp in each image acquired by the image acquisition device, i.e., region 2 in fig. 4. For ease of understanding, the output of fig. 4 may be viewed as a mask having the same shape and size as the image captured by the image acquisition device, with the pattern shown as region 2 placed on the mask. After the image acquisition device captures a frame, the mask may be overlaid on that frame; the part of the frame corresponding to region 2 is then the position of the signal lamp.
It should be noted that the "mask" is merely an exemplary illustration for ease of understanding; in actual use, no physical "mask" needs to exist, and the position of the signal lamp in the images captured by the image acquisition device is given by the coordinate range corresponding to region 2.
Based on the same idea, the present disclosure also provides a corresponding first electronic device, and the first electronic device can be used for signal lamp detection, as shown in fig. 9.
Fig. 9 is a schematic view of a first electronic device provided in the present disclosure, which specifically includes:
an image sequence acquisition module 900 configured to: acquiring an image sequence collected by the image acquisition device for the signal lamp;
a target differential image determination module 902 configured to: determining a target difference image based on the image sequence; the target differential image is a differential image obtained based on an image when an indication signal sent by a signal lamp is switched;
a position determination module 904 configured to: and determining the reference position of the signal lamp in the image based on the determined target differential image.
In an optional embodiment of the present disclosure, the target differential image determination module 902 is specifically configured to: carrying out difference processing on every two adjacent frames in the image sequence to obtain the differential images; and, based on the channel value of each pixel in each differential image, selecting from the differential images the one corresponding to the moment at which the indication signal switches, and determining the selected differential image as the target differential image.
In an optional embodiment of the present disclosure, the target difference image determining module 902 is specifically configured to: determining each frame image corresponding to the switching of the indication signal in the image sequence; and for each switching of the indication signal, carrying out difference processing on the two frame images before and after the switching to obtain a target difference image.
In an alternative embodiment of the present disclosure, determining the target difference image based on the image sequence may be converted into determining the target difference image based on the sub-image sequence. After the reference area is determined, regarding each frame of image in the image sequence, a portion of the frame of image corresponding to the reference area may be used as a sub-image of the frame of image, where the determining process of the preset reference area of the signal lamp in the image includes:
determining a reference image, wherein the reference image is an image acquired when an image acquisition device is at an initial position;
identifying a reference position area of the signal lamp from the reference image;
and expanding the identified reference position area of the signal lamp by a first preset size to obtain the reference area.
In an optional embodiment of the present disclosure, the first electronic device further comprises a reference region determining module 906 configured to: determining a reference image, wherein the reference image is an image acquired when an image acquisition device is at an initial position; identifying a reference position area of the signal lamp from the reference image; and expanding the identified reference position area of the signal lamp by a first preset size to obtain the reference area.
In an alternative embodiment of the present disclosure, the signal lamp comprises at least one lamp head. The position determination module 904 is specifically configured to: for each target differential image, determining each connected domain in the target differential image based on the channel value of each pixel in the target differential image; determining the connected domain corresponding to the lamp head from each connected domain of each target differential image;
and determining the position of the signal lamp in the image based on the determined connected domain corresponding to the lamp head.
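A minimal sketch of this connected-domain step might binarise a target differential image by its channel values and label the connected domains with OpenCV; the threshold value and the function name are illustrative assumptions.

```python
import cv2
import numpy as np

def connected_domains_of_difference(diff_image, channel_threshold=40):
    """Binarise a target differential image by its channel values and return
    the connected domains (label image plus per-component statistics)."""
    max_channel_change = diff_image.max(axis=2)
    mask = (max_channel_change > channel_threshold).astype(np.uint8) * 255
    num_labels, labels, stats, centroids = cv2.connectedComponentsWithStats(mask, connectivity=8)
    # label 0 is the background; components 1..num_labels-1 are candidate lamp-head regions
    return labels, stats[1:], centroids[1:]
```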
In an alternative embodiment of the present disclosure, the position determination module 904 is specifically configured to: performing a morphological opening operation on the determined connected domains corresponding to the lamp head for denoising; and/or judging whether the size of a connected domain corresponding to the lamp head exceeds a second preset size, so as to screen the connected domains corresponding to the lamp head; and determining the reference position of the signal lamp in the image acquired by the image acquisition equipment based on the connected domains corresponding to the lamp head after denoising and/or screening.
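One possible reading of the denoising and screening step is sketched below: a morphological opening followed by a bounding-box size check; the kernel size and the maximum size are illustrative assumptions, not values from the disclosure.

```python
import cv2

def denoise_and_screen(mask, max_size=(60, 60), kernel_size=3):
    """Apply a morphological opening to remove small noise, then drop any
    connected domain whose bounding box exceeds a preset size."""
    kernel = cv2.getStructuringElement(cv2.MORPH_RECT, (kernel_size, kernel_size))
    opened = cv2.morphologyEx(mask, cv2.MORPH_OPEN, kernel)
    num_labels, labels, stats, centroids = cv2.connectedComponentsWithStats(opened, connectivity=8)
    kept = []
    for label in range(1, num_labels):             # skip the background label 0
        w, h = stats[label, cv2.CC_STAT_WIDTH], stats[label, cv2.CC_STAT_HEIGHT]
        if w <= max_size[0] and h <= max_size[1]:  # too-large regions are unlikely to be a lamp head
            kept.append(tuple(centroids[label]))
    return opened, kept
```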
In an alternative embodiment of the present disclosure, the position determination module 904 is specifically configured to: screening out target connected domains from the connected domains corresponding to the lamp head, wherein the target connected domains are: a specified number of connected domains whose image acquisition times are closest to the current time; or connected domains whose image acquisition times are within a specified time length of the current time; and determining the reference position of the signal lamp in the image acquired by the image acquisition equipment based on the screened target connected domains.
In an alternative embodiment of the present disclosure, the position determination module 904 is specifically configured to: for each connected domain corresponding to the lamp head, determining the position of the centroid of the connected domain in the image; determining, from the positions of the centroids in the image, two centroids whose distance is smaller than a first threshold as centroids corresponding to the same lamp head; and determining the reference position of the signal lamp in the image acquired by the image acquisition equipment based on the centroids corresponding to each lamp head.
In an alternative embodiment of the present disclosure, the position determination module 904 is specifically configured to: taking each centroid as a node, and drawing an edge between every two nodes whose distance is smaller than the first threshold to obtain a graph; and determining the centroids corresponding to a group of nodes that are connected in sequence through edges in the graph as centroids corresponding to the same lamp head.
In an alternative embodiment of the present disclosure, the position determination module 904 is specifically configured to: for each lamp head, averaging the positions in the image of the centroids corresponding to the lamp head to obtain the position of the lamp head in the image; and determining the reference position of the signal lamp in the image acquired by the image acquisition equipment based on the position of each lamp head in the image.
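The graph-based grouping of centroids and the subsequent averaging described in the two preceding paragraphs could be sketched as follows; union-find is used here merely as one convenient way to collect nodes joined by chains of edges, and the helper names are hypothetical.

```python
import math
from collections import defaultdict

def group_centroids(centroids, distance_threshold):
    """Treat each centroid as a node, connect nodes closer than the threshold,
    and average each connected group into one lamp-head position."""
    parent = list(range(len(centroids)))

    def find(i):
        while parent[i] != i:
            parent[i] = parent[parent[i]]          # path halving
            i = parent[i]
        return i

    def union(i, j):
        parent[find(i)] = find(j)

    for i in range(len(centroids)):
        for j in range(i + 1, len(centroids)):
            if math.dist(centroids[i], centroids[j]) < distance_threshold:
                union(i, j)                        # an "edge" between the two nodes

    groups = defaultdict(list)
    for i, c in enumerate(centroids):
        groups[find(i)].append(c)

    # average the centroids of each group to obtain one position per lamp head
    return [tuple(sum(v) / len(pts) for v in zip(*pts)) for pts in groups.values()]
```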
In an alternative embodiment of the present disclosure, the signal lamp comprises at least three lamp heads. The position determination module 904 is specifically configured to: determining the distance between any two lamp heads based on the positions of the lamp heads in the image; if the distance between two lamp heads indicates that other lamp heads lie between them, determining the two lamp heads as available lamp heads; and determining the reference position of the signal lamp in the image captured by the image acquisition equipment based on the positions of the available lamp heads in the image.
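The availability check is described only qualitatively, so the sketch below adopts one hedged interpretation: assuming roughly even spacing between lamp heads, a pair whose distance is well above the smallest pairwise distance is taken to bracket another lamp head; the spacing ratio is an illustrative assumption.

```python
import math

def find_available_lampheads(head_positions, spacing_ratio=1.8):
    """Mark as available those lamp heads whose pairwise distance suggests
    another lamp head lies between them (assuming roughly even spacing)."""
    if len(head_positions) < 3:
        return []
    pair_dists = {}
    for i in range(len(head_positions)):
        for j in range(i + 1, len(head_positions)):
            pair_dists[(i, j)] = math.dist(head_positions[i], head_positions[j])
    unit = min(pair_dists.values())                # approximate spacing between neighbours
    available = set()
    for (i, j), d in pair_dists.items():
        if d > spacing_ratio * unit:               # far enough apart to bracket another head
            available.update((i, j))
    return [head_positions[k] for k in sorted(available)]
```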
In an optional embodiment of the present disclosure, the reference position of the signal lamp in the image determined for the sequence of images is taken as a candidate reference position;
determining at least one other candidate reference position of the signal lamp in the image based on at least one other image sequence acquired for the signal lamp;
and determining a target reference position of the signal lamp in the image based on the candidate reference position and the at least one other candidate reference position.
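A minimal way to fuse the candidate reference positions from several image sequences into one target reference position is a per-coordinate median, sketched below; the median is an illustrative choice, and any robust combination would fit the description.

```python
import numpy as np

def fuse_candidate_positions(candidate_boxes):
    """Combine candidate reference positions, each given as an (x, y, w, h)
    box, into one target reference position by a per-coordinate median."""
    boxes = np.asarray(candidate_boxes, dtype=float)   # shape (n_candidates, 4)
    return tuple(np.median(boxes, axis=0))
```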
The present disclosure still further provides a corresponding indication signal identification process, as shown in fig. 10. The indication signal identification process may include:
S1000: A reference position is acquired.
The reference position is the position of the signal lamp in an image acquired by the image acquisition device, and is obtained in advance through any one of the foregoing signal lamp detection processes.
S1002: acquiring an image acquired by image acquisition equipment;
S1004: And determining the area corresponding to the signal lamp in each frame of image based on the acquired reference position.
S1006: and identifying the indication signal sent by the signal lamp based on the area of each frame of image corresponding to the signal lamp.
Wherein the indication signal comprises at least one of: light, image, graphics, text.
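For the case where the indication signal is a light, the identification step could be sketched as cropping each frame to the reference position and counting pixels in rough HSV colour ranges; the colour set and HSV bounds are illustrative assumptions and do not cover image, graphic or text signals.

```python
import cv2
import numpy as np

def identify_indication_signal(frame, reference_region):
    """Crop the frame to the area given by the reference position and decide
    which colour is lit by counting pixels inside rough HSV ranges."""
    x, y, w, h = reference_region
    roi = frame[y:y + h, x:x + w]
    hsv = cv2.cvtColor(roi, cv2.COLOR_BGR2HSV)
    ranges = {
        "red":    [((0, 120, 120), (10, 255, 255)), ((170, 120, 120), (180, 255, 255))],
        "yellow": [((20, 120, 120), (35, 255, 255))],
        "green":  [((45, 120, 120), (90, 255, 255))],
    }
    counts = {}
    for colour, bounds in ranges.items():
        mask = sum(cv2.inRange(hsv, np.array(lo), np.array(hi)) // 255 for lo, hi in bounds)
        counts[colour] = int(np.count_nonzero(mask))
    return max(counts, key=counts.get)             # colour with the most lit pixels
```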
Based on the same idea as the above indication signal identification process provided in one or more embodiments of the present disclosure, the present disclosure also provides a corresponding indication signal identification apparatus. The indication signal identification apparatus may include:
a first acquisition module configured to: and acquiring the position of the signal lamp in each frame of image.
A region determination module configured to: and determining the area corresponding to the signal lamp in each frame of image.
An identification module configured to: and identifying the indication signal sent by the signal lamp based on the area of each frame of image corresponding to the signal lamp.
The present disclosure further provides a corresponding road information display process, as shown in fig. 11. The road information presentation process may be performed by a terminal. The terminal may include a vehicle machine of a vehicle (e.g., an intelligent vehicle such as an unmanned vehicle), a handheld terminal of a user (e.g., a mobile phone, a PAD, etc.), an information display terminal provided at the roadside (e.g., an electronic station board, etc.), and other terminals having an information display function.
The road information presentation process may include:
S1100: And acquiring an indicating signal sent by the signal lamp.
The indication signal is obtained by the aforementioned indication signal identification process.
S1102: and generating road information based on the indication signal and the road section where the signal lamp is located, and displaying the road information.
The road information can be represented by images, characters, sounds and the like.
Based on the same idea as the above road information display process provided in one or more embodiments of the present disclosure, the present disclosure further provides a corresponding road information display device. The road information presentation device may include:
a second acquisition module configured to: and acquiring an indicating signal sent by the signal lamp.
A presentation module configured to: and generating road information based on the indication signal and the road section where the signal lamp is located, and displaying the road information.
The present disclosure still further provides a corresponding vehicle path planning process, as shown in fig. 12. The vehicle path planning process may be performed by at least one of a vehicle machine of the vehicle and a server communicatively coupled to the vehicle. The vehicle path planning process may include:
S1200: The current position of the vehicle is determined.
The vehicle can be an intelligent vehicle such as an unmanned vehicle.
S1202: And acquiring the indication signal sent by each signal lamp that satisfies a preset condition with respect to the current position of the vehicle.
The indication signal is obtained by the aforementioned indication signal identification process.
In an alternative embodiment of the present disclosure, the signal lamp whose distance from the vehicle does not exceed the third threshold value may be determined as the signal lamp satisfying the preset condition.
In another alternative embodiment of the present disclosure, the signal lights located in at least a part of the section between the current position of the vehicle and the destination to which the vehicle is heading may be determined as the signal lights satisfying the preset condition.
S1204: And planning, based on the current position of the vehicle and the acquired indication signals, the path to be adopted by the vehicle for future driving.
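As one hedged illustration of S1204, the sketch below runs Dijkstra over a hypothetical road graph whose edge weights are travel times, adding a waiting penalty when the node being entered currently shows a red indication signal; the graph layout, the penalty value and the signal_state mapping are assumptions, not part of the disclosure.

```python
import heapq

def plan_path(graph, signal_state, start, goal, red_wait_penalty=30.0):
    """Shortest-time path over a road graph, penalising nodes whose signal
    lamp currently shows red.

    graph: {node: [(neighbour, travel_time), ...]}
    signal_state: {node: "red" | "green" | ...} for nodes that have a signal lamp
    """
    best = {start: 0.0}
    queue = [(0.0, start, [start])]
    while queue:
        cost, node, path = heapq.heappop(queue)
        if node == goal:
            return path, cost
        if cost > best.get(node, float("inf")):
            continue                                   # stale queue entry
        for neighbour, travel_time in graph.get(node, []):
            extra = red_wait_penalty if signal_state.get(neighbour) == "red" else 0.0
            new_cost = cost + travel_time + extra
            if new_cost < best.get(neighbour, float("inf")):
                best[neighbour] = new_cost
                heapq.heappush(queue, (new_cost, neighbour, path + [neighbour]))
    return None, float("inf")
```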
Based on the same idea as the vehicle path planning process provided by the above embodiments of the present disclosure, the present disclosure also provides a corresponding vehicle path planning device. The vehicle path planning apparatus may include:
a vehicle position determination module configured to: the current position of the vehicle is determined.
A third acquisition module configured to: and acquiring the indication signal sent by each signal lamp that satisfies a preset condition with respect to the current position of the vehicle.
A first planning module configured to: and planning a path adopted by the vehicle when the vehicle runs in the future time based on the current position of the vehicle and the acquired indication signal.
The present disclosure further provides a corresponding vehicle driving state planning process, as shown in fig. 13. The vehicle driving state planning process may be performed by at least one of a vehicle machine of the vehicle and a server communicatively connected to the vehicle. The vehicle driving state planning process may include:
S1300: A path planned for the vehicle is obtained.
The vehicle can be an intelligent vehicle such as an unmanned vehicle.
S1302: and acquiring the indication signals sent by the signal lamps along the path.
The indication signal sent by the signal lamp is obtained through the identification process of the indication signal.
Steps S1300 and S1302 may be performed in any order.
S1304: and determining the running state of the vehicle at the next moment under the condition that the vehicle runs along the path based on the current position of the vehicle and the indication signal.
Wherein the driving state may include at least one of: speed, direction of speed, acceleration, direction of acceleration.
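A minimal sketch of S1304 under simple kinematic assumptions is given below: brake towards a stop line showing red, otherwise accelerate gently up to a limit; every numeric parameter is an illustrative assumption.

```python
def next_driving_state(position, speed, distance_to_light, signal, dt=0.1,
                       comfortable_decel=2.5, cruise_accel=1.0, speed_limit=15.0):
    """Decide speed and acceleration for the next moment while driving along
    the planned path, given the indication signal of the light ahead."""
    if signal == "red" and distance_to_light is not None and distance_to_light > 0:
        # deceleration needed to stop exactly at the stop line: v^2 / (2 d)
        required_decel = speed ** 2 / (2 * distance_to_light)
        accel = -min(max(required_decel, 0.0), comfortable_decel)
    else:
        accel = cruise_accel if speed < speed_limit else 0.0
    new_speed = max(speed + accel * dt, 0.0)
    new_position = position + new_speed * dt
    return {"position": new_position, "speed": new_speed, "acceleration": accel}
```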
Based on the same idea as the vehicle driving state planning process provided by one or more embodiments of the present disclosure, the present disclosure also provides a corresponding vehicle driving state planning device. The vehicle driving state planning apparatus may include:
a fourth acquisition module configured to: a path planned for the vehicle is obtained.
A fifth obtaining module configured to: and acquiring the indication signals sent by the signal lamps along the path.
A second planning module configured to: and determining the running state of the vehicle at the next moment under the condition that the vehicle runs along the path based on the current position of the vehicle and the indication signal.
The present disclosure still further provides a corresponding obstacle detection process, as shown in fig. 14. The obstacle detection process may be performed by at least one of a vehicle machine of the vehicle and a server communicatively connected to the roadside sensing unit. The obstacle detection process may include:
S1400: And acquiring the position of the obstacle, the current motion state of the obstacle and an indicating signal sent by a signal lamp corresponding to the obstacle as available information.
Wherein the indication signal emitted by the signal lamp is obtained through the aforementioned indication signal identification process.
In an alternative embodiment of the present disclosure, the signal light corresponding to the obstacle may be a signal light whose distance from the obstacle does not exceed a fourth threshold.
In another alternative embodiment of the present disclosure, the signal light corresponding to the obstacle may be a signal light located in the moving direction of the obstacle.
In the case where there are a plurality of obstacles in the environment, the process in the present disclosure may be separately implemented for each obstacle.
S1402: and predicting at least one of the position and the motion state of the obstacle at the next moment based on the available information.
The movement state of the obstacle may include at least one of: speed, direction of speed, acceleration, direction of acceleration.
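One hedged sketch of the prediction step S1402 is a constant-velocity extrapolation that damps the velocity when the signal lamp corresponding to the obstacle shows red; the 2-D state layout and the damping factor are illustrative assumptions.

```python
def predict_obstacle(position, velocity, signal_ahead, dt=0.1):
    """Predict the obstacle's position and velocity at the next moment from
    its current motion state and the indication signal it is facing."""
    px, py = position
    vx, vy = velocity
    if signal_ahead == "red":
        vx, vy = 0.5 * vx, 0.5 * vy                # assume the obstacle is braking for the red light
    return (px + vx * dt, py + vy * dt), (vx, vy)
```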
Based on the same idea as the above obstacle detection process provided in one or more embodiments of the present disclosure, the present disclosure also provides a corresponding obstacle detection device. The obstacle detection device may include:
a sixth acquisition module configured to: and acquiring the position of the obstacle, the current motion state of the obstacle and an indicating signal sent by a signal lamp corresponding to the obstacle as available information.
A prediction module configured to: and predicting at least one of the position and the motion state of the obstacle at the next moment based on the available information.
According to embodiments of the present disclosure, the present disclosure also provides an electronic device, a readable storage medium, and a computer program product.
FIG. 15 shows a schematic block diagram of an example electronic device 1500 that may be used to implement embodiments of the present disclosure. Electronic devices are intended to represent various forms of digital computers, such as laptops, desktops, workstations, personal digital assistants, servers, blade servers, mainframes, and other appropriate computers. The electronic device may also represent various forms of mobile devices, such as personal digital processing, cellular phones, smart phones, wearable devices, and other similar computing devices. The components shown herein, their connections and relationships, and their functions, are meant to be examples only, and are not meant to limit implementations of the disclosure described and/or claimed herein.
As shown in fig. 15, the device 1500 includes a computing unit 1501, which can perform various appropriate actions and processes in accordance with a computer program stored in a Read Only Memory (ROM) 1502 or a computer program loaded from a storage unit 1508 into a Random Access Memory (RAM) 1503. In the RAM 1503, various programs and data necessary for the operation of the device 1500 can also be stored. The computing unit 1501, the ROM 1502, and the RAM 1503 are connected to each other by a bus 1504. An input/output (I/O) interface 1505 is also connected to the bus 1504.
A number of components in the device 1500 are connected to the I/O interface 1505, including: an input unit 1506 such as a keyboard, a mouse, and the like; an output unit 1507 such as various types of displays, speakers, and the like; a storage unit 1508 such as a magnetic disk, an optical disk, and the like; and a communication unit 1509 such as a network card, a modem, a wireless communication transceiver, and the like. The communication unit 1509 allows the device 1500 to exchange information/data with other devices via a computer network such as the internet and/or various telecommunication networks.
The computing unit 1501 may be various general and/or special purpose processing components having processing and computing capabilities. Some examples of the computing unit 1501 include, but are not limited to, a Central Processing Unit (CPU), a Graphics Processing Unit (GPU), various dedicated Artificial Intelligence (AI) computing chips, various computing units running machine learning model algorithms, a Digital Signal Processor (DSP), and any suitable processor, controller, microcontroller, or the like. The computing unit 1501 performs the respective methods and processes described above, such as any of the aforementioned methods. For example, in some embodiments, any of the aforementioned methods may be implemented as a computer software program tangibly embodied in a machine-readable medium, such as the storage unit 1508. In some embodiments, part or all of the computer program may be loaded and/or installed onto the device 1500 via the ROM 1502 and/or the communication unit 1509. When the computer program is loaded into the RAM 1503 and executed by the computing unit 1501, one or more steps of any of the aforementioned methods may be performed. Alternatively, in other embodiments, the computing unit 1501 may be configured to perform any of the aforementioned methods in any other suitable manner (e.g., by way of firmware).
Various implementations of the systems and techniques described above may be implemented in digital electronic circuitry, integrated circuitry, Field Programmable Gate Arrays (FPGAs), Application Specific Integrated Circuits (ASICs), Application Specific Standard Products (ASSPs), systems on a chip (SOCs), Complex Programmable Logic Devices (CPLDs), computer hardware, firmware, software, and/or combinations thereof. These various embodiments may include: implementation in one or more computer programs that are executable and/or interpretable on a programmable system including at least one programmable processor, which may be special or general purpose, receiving data and instructions from, and transmitting data and instructions to, a storage system, at least one input device, and at least one output device.
Program code for implementing the methods of the present disclosure may be written in any combination of one or more programming languages. These program codes may be provided to a processor or controller of a general purpose computer, special purpose computer, or other programmable data processing apparatus, such that the program codes, when executed by the processor or controller, cause the functions/operations specified in the flowchart and/or block diagram to be performed. The program code may execute entirely on the machine, partly on the machine, as a stand-alone software package partly on the machine and partly on a remote machine or entirely on the remote machine or server.
In the context of this disclosure, a machine-readable medium may be a tangible medium that can contain, or store a program for use by or in connection with an instruction execution system, apparatus, or device. The machine-readable medium may be a machine-readable signal medium or a machine-readable storage medium. A machine-readable medium may include, but is not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any suitable combination of the foregoing. More specific examples of a machine-readable storage medium would include an electrical connection based on one or more wires, a portable computer diskette, a hard disk, a Random Access Memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing.
To provide for interaction with a user, the systems and techniques described here can be implemented on a computer having: a display device (e.g., a CRT (cathode ray tube) or LCD (liquid crystal display) monitor) for displaying information to a user; and a keyboard and a pointing device (e.g., a mouse or a trackball) by which a user can provide input to the computer. Other kinds of devices may also be used to provide for interaction with a user; for example, feedback provided to the user can be any form of sensory feedback (e.g., visual feedback, auditory feedback, or tactile feedback); and input from the user may be received in any form, including acoustic, speech, or tactile input.
The systems and techniques described here can be implemented in a computing system that includes a back-end component (e.g., as a data server), or that includes a middleware component (e.g., an application server), or that includes a front-end component (e.g., a user computer having a graphical user interface or a web browser through which a user can interact with an implementation of the systems and techniques described here), or any combination of such back-end, middleware, or front-end components. The components of the system can be interconnected by any form or medium of digital data communication (e.g., a communication network). Examples of communication networks include: local Area Networks (LANs), Wide Area Networks (WANs), and the Internet.
The computer system may include clients and servers. A client and server are generally remote from each other and typically interact through a communication network. The relationship of client and server arises by virtue of computer programs running on the respective computers and having a client-server relationship to each other.
It should be understood that various forms of the flows shown above may be used, with steps reordered, added, or deleted. For example, the steps described in the present disclosure may be executed in parallel, sequentially, or in different orders, as long as the desired results of the technical solutions disclosed in the present disclosure can be achieved, and the present disclosure is not limited herein.
The above detailed description should not be construed as limiting the scope of the disclosure. It should be understood by those skilled in the art that various modifications, combinations, sub-combinations and substitutions may be made in accordance with design requirements and other factors. Any modification, equivalent replacement, and improvement made within the spirit and principle of the present disclosure should be included in the scope of protection of the present disclosure.

Claims (24)

1. A signal light detection method, comprising:
acquiring an image sequence collected aiming at a signal lamp;
determining a target difference image based on the image sequence; the target differential image is a differential image obtained on the basis of an image when an indicating signal sent by a signal lamp is switched;
and determining the reference position of the signal lamp in the image based on the determined target differential image.
2. The method of claim 1, wherein determining a target difference image based on the sequence of images comprises:
carrying out difference processing on any two adjacent frames of images in the image sequence to obtain each difference image;
based on the channel value of each pixel in each differential image, selecting from the differential images each differential image corresponding to switching of the indication signal, and determining the selected differential images as target differential images.
3. The method of claim 1, wherein determining a target difference image based on the sequence of images comprises:
determining each frame image corresponding to the switching of the indication signal in the image sequence;
and for each switching of the indication signal, carrying out difference processing on the two frame images before and after the switching to obtain a target difference image.
4. The method of claim 1, wherein determining a target difference image based on the sequence of images comprises:
determining a preset reference area of the signal lamp in the images of the image sequence;
and determining a target differential image based on the determined preset reference area of the images in the image sequence.
5. The method of claim 4, wherein the determining of the preset reference area of the signal lamp in the image comprises:
determining a reference image, wherein the reference image is an image acquired when an image acquisition device is at an initial position;
identifying a reference position area of the signal lamp from the reference image;
and expanding the identified reference position area of the signal lamp by a first preset size to obtain the reference area.
6. The method of claim 1, wherein the method further comprises:
taking the reference position of the signal lamp in the image determined for the image sequence as a candidate reference position;
determining at least one other candidate reference position of the signal lamp in the image based on the other image sequence acquired for the signal lamp;
determining a target reference position of the signal lamp in the image based on the candidate reference position and the at least one other candidate reference position.
7. The method of any of claims 1 to 6, wherein the signal lamp comprises at least one lamp head; the determining the reference position of the signal lamp in the image based on the determined target differential image comprises:
for each target differential image, determining each connected domain in the target differential image based on the channel value of each pixel in the target differential image;
determining the connected domain corresponding to the lamp head from each connected domain of each target differential image;
and determining the position of the signal lamp in the image based on the determined connected domain corresponding to the lamp head.
8. The method of claim 7, wherein determining the position of the signal lamp in the image based on the determined connected domain corresponding to the lamp head comprises:
performing a morphological opening operation on the determined connected domains corresponding to the lamp head for denoising; and/or judging whether the size of the connected domain corresponding to the lamp head exceeds a second preset size, so as to screen the connected domains corresponding to the lamp head;
and determining the reference position of the signal lamp in the image based on the connected domain corresponding to the lamp head after the corresponding denoising and/or screening processing.
9. The method of claim 7, wherein determining the position of the signal lamp in the image based on the determined connected domain corresponding to the lamp head comprises:
screening out target connected domains from the connected domains corresponding to the lamp head; wherein the target connected domains are: a specified number of connected domains whose image acquisition times are closest to the current time; or connected domains whose image acquisition times are within a specified time length of the current time;
and determining the reference position of the signal lamp in the image based on the screened target connected domain.
10. The method of claim 7, wherein determining the position of the signal lamp in the image based on the determined connected domain corresponding to the lamp head comprises:
for each connected domain corresponding to the lamp head, determining the position of the centroid of the connected domain in the image;
determining, from the positions of the centroids in the image, two centroids whose distance is smaller than a first threshold as centroids corresponding to the same lamp head;
and determining the reference position of the signal lamp in the image acquired by the image acquisition equipment based on the centroids corresponding to each lamp head.
11. The method of claim 10, wherein determining two centroids whose distance is smaller than the first threshold as centroids corresponding to the same lamp head comprises:
taking each centroid as a node, and drawing an edge between two nodes whose distance is smaller than the first threshold to obtain a graph;
and determining the centroids corresponding to a group of nodes that are connected in sequence through edges in the graph as centroids corresponding to the same lamp head.
12. The method of claim 10, wherein determining the reference position of the signal lamp in the image based on the centroids corresponding to each lamp head comprises:
for each lamp head, averaging the positions in the image of the centroids corresponding to the lamp head to obtain the position of the lamp head in the image;
and determining the reference position of the signal lamp in the image based on the position of each lamp head in the image.
13. The method of claim 12, wherein the signal lamp comprises at least three lamp heads; determining the reference position of the signal lamp in the image based on the position of each lamp head in the image comprises:
determining the distance between any two lamp heads based on the positions of the lamp heads in the image;
if the distance between two lamp heads indicates that other lamp heads lie between them, determining the two lamp heads as available lamp heads;
and determining the reference position of the signal lamp in the image based on the positions of the available lamp heads in the image.
14. An indication signal identification method, comprising:
acquiring a reference position, wherein the reference position is the position of a signal lamp in an image acquired by an image acquisition device, and the reference position is obtained in advance by the method of any one of claims 1 to 13;
acquiring an image acquired by image acquisition equipment;
determining the area of each frame of image corresponding to the signal lamp based on the acquired reference position;
identifying an indicating signal sent by the signal lamp based on the area of each frame of image corresponding to the signal lamp; wherein the indication signal comprises at least one of: light, image, graphics, text.
15. A road information display method comprises the following steps:
acquiring an indication signal emitted by a signal lamp, wherein the indication signal is obtained by the method of claim 14;
and generating road information based on the indication signal and the road section where the signal lamp is located, and displaying the road information.
16. A vehicle path planning method, comprising:
determining the current position of the vehicle;
acquiring an indication signal sent by each signal lamp meeting a preset condition with the current position of the vehicle, wherein the indication signal is obtained by the method of claim 14;
and planning a path adopted by the vehicle when the vehicle runs in the future time based on the current position of the vehicle and the acquired indication signal.
17. A vehicle driving state planning method, comprising:
acquiring a path planned for the vehicle and indicating signals sent by signal lamps along the path; the indication signal is obtained by the method of claim 14;
determining a driving state of the vehicle at a next moment in time on the condition that the vehicle drives along the path based on the current position of the vehicle and the indication signal, wherein the driving state comprises at least one of the following: speed, direction of speed, acceleration, direction of acceleration.
18. An obstacle detection method comprising:
acquiring the position of an obstacle, the current motion state of the obstacle and an indication signal sent by a signal lamp corresponding to the obstacle as available information; the indication signal is obtained by the method of claim 14;
and predicting at least one of the position and the motion state of the obstacle at the next moment based on the available information.
19. An electronic device for implementing the method of any one of claims 1 to 18.
20. An electronic device, comprising:
at least one processor; and
a memory communicatively connected to the at least one processor; wherein,
the memory stores instructions executable by the at least one processor to enable the at least one processor to perform the method of any one of claims 1-18.
21. A non-transitory computer readable storage medium having stored thereon computer instructions for causing the computer to perform the method of any one of claims 1-18.
22. A computer program product comprising a computer program which, when executed by a processor, implements the method according to any one of claims 1-18.
23. A roadside apparatus comprising the electronic apparatus of claim 20.
24. A cloud controlled platform comprising the electronic device of claim 20.
CN202110385452.6A 2021-04-10 2021-04-10 Signal lamp detection method, device, equipment and storage medium Active CN113033464B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110385452.6A CN113033464B (en) 2021-04-10 2021-04-10 Signal lamp detection method, device, equipment and storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202110385452.6A CN113033464B (en) 2021-04-10 2021-04-10 Signal lamp detection method, device, equipment and storage medium

Publications (2)

Publication Number Publication Date
CN113033464A true CN113033464A (en) 2021-06-25
CN113033464B CN113033464B (en) 2023-11-21

Family

ID=76456277

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110385452.6A Active CN113033464B (en) 2021-04-10 2021-04-10 Signal lamp detection method, device, equipment and storage medium

Country Status (1)

Country Link
CN (1) CN113033464B (en)

Patent Citations (17)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20120106799A1 (en) * 2009-07-03 2012-05-03 Shenzhen Taishan Online Technology Co., Ltd. Target detection method and apparatus and image acquisition device
US20170206427A1 (en) * 2015-01-21 2017-07-20 Sportstech LLC Efficient, High-Resolution System and Method to Detect Traffic Lights
JP2017187589A (en) * 2016-04-05 2017-10-12 キヤノン株式会社 Focus adjustment device, method for the same, imaging apparatus, program, and storage medium
CN106598356A (en) * 2016-11-24 2017-04-26 北方工业大学 Method, device and system for detecting positioning point of input signal of infrared emission source
CN108804983A (en) * 2017-05-03 2018-11-13 腾讯科技(深圳)有限公司 Traffic signal light condition recognition methods, device, vehicle-mounted control terminal and motor vehicle
CN109145678A (en) * 2017-06-15 2019-01-04 杭州海康威视数字技术股份有限公司 Signal lamp detection method and device and computer equipment and readable storage medium storing program for executing
US20200353932A1 (en) * 2018-06-29 2020-11-12 Beijing Sensetime Technology Development Co., Ltd. Traffic light detection method and apparatus, intelligent driving method and apparatus, vehicle, and electronic device
CN109389838A (en) * 2018-11-26 2019-02-26 爱驰汽车有限公司 Unmanned crossing paths planning method, system, equipment and storage medium
CN110287828A (en) * 2019-06-11 2019-09-27 北京三快在线科技有限公司 Detection method, device and the electronic equipment of signal lamp
CN111428663A (en) * 2020-03-30 2020-07-17 北京百度网讯科技有限公司 Traffic light state identification method and device, electronic equipment and storage medium
CN112084905A (en) * 2020-08-27 2020-12-15 深圳市森国科科技股份有限公司 Traffic light state identification method, system, equipment and storage medium
CN111950536A (en) * 2020-09-23 2020-11-17 北京百度网讯科技有限公司 Signal lamp image processing method and device, computer system and road side equipment
CN112101272A (en) * 2020-09-23 2020-12-18 北京百度网讯科技有限公司 Traffic light detection method and device, computer storage medium and road side equipment
CN112131414A (en) * 2020-09-23 2020-12-25 北京百度网讯科技有限公司 Signal lamp image labeling method and device, electronic equipment and road side equipment
CN112164221A (en) * 2020-09-23 2021-01-01 北京百度网讯科技有限公司 Image data mining method, device and equipment and road side equipment
CN112180285A (en) * 2020-09-23 2021-01-05 北京百度网讯科技有限公司 Method and device for identifying faults of traffic signal lamp, navigation system and road side equipment
CN112396116A (en) * 2020-11-24 2021-02-23 武汉三江中电科技有限责任公司 Thunder and lightning detection method and device, computer equipment and readable medium

Non-Patent Citations (4)

* Cited by examiner, † Cited by third party
Title
GUO MU 等: "Traffic light detection and recognition for autonomous vehicles", 《THE JOURNAL OF CHINA UNIVERSITIES OF POSTS AND TELECOMMUNICATIONS》, vol. 22, no. 1, pages 50 - 56 *
MOISES DIAZ-CABRERA 等: "Robust real-time traffic light detection and distance estimation using a single camera", 《EXPERT SYSTEMS WITH APPLICATIONS》, vol. 42, no. 8, pages 3911 - 3923, XP029221101, DOI: 10.1016/j.eswa.2014.12.037 *
叶尔江・哈力木;曼苏乐;张秀彬;: "Research on intelligent control algorithm for traffic signal lights" (交通信号灯智能控制算法研究), 微型电脑应用, no. 06, pages 46 - 48 *
吴则平;: "Intelligent traffic management control system based on digital image processing" (基于数字图像处理的智能交通管理控制系统), 通讯世界, no. 08, pages 316 - 317 *

Also Published As

Publication number Publication date
CN113033464B (en) 2023-11-21

Similar Documents

Publication Publication Date Title
US8184859B2 (en) Road marking recognition apparatus and method
Tae-Hyun et al. Detection of traffic lights for vision-based car navigation system
US10916019B2 (en) Moving object detection in image frames based on optical flow maps
WO2013186662A1 (en) Multi-cue object detection and analysis
KR102253989B1 (en) object tracking method for CCTV video by use of Deep Learning object detector
US9152865B2 (en) Dynamic zone stabilization and motion compensation in a traffic management apparatus and system
JP2021119462A (en) Traffic light image processing method, device, computer system, and roadside device
CN112200131A (en) Vision-based vehicle collision detection method, intelligent terminal and storage medium
CN109284801B (en) Traffic indicator lamp state identification method and device, electronic equipment and storage medium
AU2019100914A4 (en) Method for identifying an intersection violation video based on camera cooperative relay
CN111178119A (en) Intersection state detection method and device, electronic equipment and vehicle
CN112700410A (en) Signal lamp position determination method, signal lamp position determination device, storage medium, program, and road side device
CN113286081B (en) Target identification method, device, equipment and medium for airport panoramic video
CN113936458B (en) Method, device, equipment and medium for judging congestion of expressway
CN113011323A (en) Method for acquiring traffic state, related device, road side equipment and cloud control platform
CN111046746A (en) License plate detection method and device
CN112528795A (en) Signal lamp color identification method and device and road side equipment
CN112434657A (en) Drift carrier detection method, device, program, and computer-readable medium
CN113733086A (en) Robot traveling method, device, equipment and storage medium
CN104616277B (en) Pedestrian's localization method and its device in video structural description
CN113033464B (en) Signal lamp detection method, device, equipment and storage medium
CN107564031A (en) Urban transportation scene foreground target detection method based on feedback background extracting
CN114565906A (en) Obstacle detection method, obstacle detection device, electronic device, and storage medium
CN112991446A (en) Image stabilization method and device, road side equipment and cloud control platform
CN113408409A (en) Traffic signal lamp identification method and equipment, cloud control platform and vehicle-road cooperative system

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant