CN112991290A - Image stabilization method and device, road side equipment and cloud control platform - Google Patents


Info

Publication number
CN112991290A
CN112991290A (application CN202110259595.2A; granted as CN112991290B)
Authority
CN
China
Prior art keywords
image
centroid
signal lamp
night image
night
Prior art date
Legal status
Granted
Application number
CN202110259595.2A
Other languages
Chinese (zh)
Other versions
CN112991290B (en)
Inventor
刘博 (Liu Bo)
Current Assignee
Apollo Zhilian Beijing Technology Co Ltd
Original Assignee
Beijing Baidu Netcom Science and Technology Co Ltd
Priority date
Filing date
Publication date
Application filed by Beijing Baidu Netcom Science and Technology Co Ltd
Priority to CN202110259595.2A
Publication of CN112991290A
Application granted granted Critical
Publication of CN112991290B (grant)
Legal status: Active

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/0002Inspection of images, e.g. flaw detection
    • G06T5/90
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/10Segmentation; Edge detection
    • G06T7/11Region-based segmentation
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/10Image acquisition modality
    • G06T2207/10024Color image
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/20Special algorithmic details
    • G06T2207/20092Interactive image processing based on input by user
    • G06T2207/20104Interactive definition of region of interest [ROI]

Abstract

The application discloses an image stabilization method and apparatus, a roadside device, and a cloud control platform, relating to artificial intelligence fields such as intelligent transportation and computer vision. One embodiment of the method comprises: for a current night image in a night image sequence, segmenting a signal lamp region of interest (ROI) from the current night image based on the signal lamp position of a template image; binarizing the grayscale image corresponding to the signal lamp ROI to obtain a binarized image; calculating and saving the centroid position of each white region in the binarized image; in response to the number of frames of night images with saved centroid positions being equal to a first preset value, dividing the saved centroid positions into a plurality of sets based on the distances between them; and calculating the average of the centroid positions in at least one set to obtain the signal lamp position of the current night image. This embodiment improves the stability of the image stabilization effect.

Description

Image stabilization method and device, road side equipment and cloud control platform
Technical Field
The embodiments of the application relate to the field of computer technology, in particular to artificial intelligence fields such as intelligent transportation and computer vision, and specifically to an image stabilization method and apparatus, a roadside device, and a cloud control platform.
Background
Roadside perception is the analysis of data captured by sensor devices installed at the roadside, for example a camera mounted on a signal light pole or a monitoring pole. Roadside perception must identify the color of the signal lamp. Because the position of the signal lamp relative to the camera remains fixed, a frame of the signal lamp captured by the camera is collected in advance and the signal lamp position is labeled manually. During actual detection, the image is cropped at the pre-labeled signal lamp position and the crop is identified directly. However, if the signal lamp or the camera moves slightly, the relative position between the camera and the signal lamp changes. An image stabilization method is therefore required to correct the resulting positional offset.
Currently, a commonly used image stabilization method is offset correction based on feature point matching. Specifically, for a certain frame of a history image, the signal lamp position (x1, y1, x2, y2) in that image is determined, and the feature points corresponding to it (such as SIFT or HOG features) are extracted. For each subsequent frame, (x1, y1, x2, y2) is expanded outward by a margin, feature points are extracted with the same method, and feature point matching is performed. The offset of the signal lamp position in the current image relative to the history image is then computed from the matched feature points.
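For context, once feature points are matched, the offset can be estimated from the matched pairs. The sketch below uses a simple mean displacement; the function name and data layout are illustrative assumptions, and a real pipeline would typically use a robust fit such as RANSAC rather than a plain mean:

```python
def mean_offset(matches):
    """Estimate the translation of the current image relative to the
    history image from matched feature-point pairs
    [((x_hist, y_hist), (x_cur, y_cur)), ...] as the mean displacement."""
    dxs = [cur[0] - hist[0] for hist, cur in matches]
    dys = [cur[1] - hist[1] for hist, cur in matches]
    return (sum(dxs) / len(dxs), sum(dys) / len(dys))
```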
Disclosure of Invention
The embodiment of the application provides an image stabilization method and device, road side equipment and a cloud control platform.
In a first aspect, an embodiment of the present application provides an image stabilization method, including: for a current night image in a night image sequence, segmenting a signal lamp region of interest (ROI) from the current night image based on the signal lamp position of a template image; binarizing the grayscale image corresponding to the signal lamp ROI to obtain a binarized image; calculating and saving the centroid position of each white region in the binarized image; in response to the number of frames of night images with saved centroid positions being equal to a first preset value, dividing the saved centroid positions into a plurality of sets based on the distances between them; and calculating the average of the centroid positions in at least one set to obtain the signal lamp position of the current night image.
In a second aspect, an embodiment of the present application provides an image stabilization apparatus, including: a segmentation module configured to, for a current night image in the night image sequence, segment a signal lamp region of interest (ROI) from the current night image based on the signal lamp position of a template image; a binarization module configured to binarize the grayscale image corresponding to the signal lamp ROI to obtain a binarized image; a first calculation module configured to calculate and save the centroid position of each white region in the binarized image; a dividing module configured to, in response to the number of frames of night images with saved centroid positions being equal to a first preset value, divide the saved centroid positions into a plurality of sets based on the distances between them; and a second calculation module configured to calculate the average of the centroid positions in at least one set to obtain the signal lamp position of the current night image.
In a third aspect, an embodiment of the present application provides an electronic device, including: at least one processor; and a memory communicatively coupled to the at least one processor; wherein the memory stores instructions executable by the at least one processor to enable the at least one processor to perform the method as described in any one of the implementations of the first aspect.
In a fourth aspect, embodiments of the present application propose a non-transitory computer-readable storage medium storing computer instructions for causing a computer to perform the method as described in any one of the implementations of the first aspect.
In a fifth aspect, the present application provides a computer program product, which includes a computer program that, when executed by a processor, implements the method as described in any implementation manner of the first aspect.
In a sixth aspect, embodiments of the present application provide a roadside apparatus including the electronic apparatus according to the third aspect.
In a seventh aspect, an embodiment of the present application provides a cloud control platform, including the electronic device according to the third aspect.
It should be understood that the statements in this section do not necessarily identify key or critical features of the embodiments of the present application, nor do they limit the scope of the present application. Other features of the present application will become apparent from the following description.
Drawings
Other features, objects and advantages of the present application will become more apparent upon reading of the following detailed description of non-limiting embodiments thereof, made with reference to the accompanying drawings. The drawings are included to provide a better understanding of the present solution and are not intended to limit the present application. Wherein:
FIG. 1 is an exemplary system architecture diagram in which the present application may be applied;
FIG. 2 is a flow diagram of one embodiment of an image stabilization method according to the present application;
FIG. 3 is a flow diagram of yet another embodiment of an image stabilization method according to the present application;
FIG. 4 is a schematic structural diagram of one embodiment of an image stabilization device according to the present application;
fig. 5 is a block diagram of an electronic device for implementing an image stabilization method according to an embodiment of the present application.
Detailed Description
The following description of exemplary embodiments of the present application, taken in conjunction with the accompanying drawings, includes various details of the embodiments to aid understanding, and these details are to be considered exemplary only. Accordingly, those of ordinary skill in the art will recognize that various changes and modifications of the embodiments described herein can be made without departing from the scope and spirit of the present application. Likewise, descriptions of well-known functions and constructions are omitted in the following for clarity and conciseness.
It should be noted that the embodiments and features of the embodiments in the present application may be combined with each other without conflict. The present application will be described in detail below with reference to the embodiments with reference to the attached drawings.
Fig. 1 shows an exemplary system architecture 100 to which embodiments of the image stabilization method or image stabilization apparatus of the present application may be applied.
As shown in fig. 1, the system architecture 100 may include a camera 101, a network 102, and a server 103. Network 102 is the medium used to provide a communication link between camera 101 and server 103. Network 102 may include various connection types, such as wired, wireless communication links, or fiber optic cables, to name a few.
The camera 101 may interact with a server 103 via a network 102 to receive or send messages or the like.
The camera 101 is generally a camera installed at the roadside that can capture images of a signal lamp, such as a camera mounted on a signal light pole or monitoring pole at an intersection. A camera mounted on such a pole may move slightly due to thermal expansion and contraction of the pole, subsidence of the ground, loosening of the camera mount, and the like, so that the relative position between the camera and the signal lamp changes. The present application aims to provide an image stabilization method to correct the resulting positional offset.
The server 103 may provide various services. For example, the server 103 may perform processing such as analysis on data such as a night image sequence acquired from the camera 101 to generate a processing result (for example, a signal position of a current night image).
The server may be hardware or software. When the server is hardware, it may be implemented as a distributed server cluster formed by multiple servers, or may be implemented as a single server. When the server is software, it may be implemented as multiple pieces of software or software modules (e.g., multiple pieces of software or software modules used to provide distributed services), or as a single piece of software or software module. And is not particularly limited herein.
Furthermore, the server 103 may also be replaced by a roadside device (e.g., a roadside computing device RSCU), a cloud control platform, or the like.
It should be noted that the image stabilizing method provided in the embodiment of the present application is generally executed by a server, a roadside device, a cloud control platform, and the like, and accordingly, the image stabilizing apparatus is generally disposed in the server, the roadside device, the cloud control platform, and the like.
It should be understood that the number of cameras, networks, and servers in fig. 1 is merely illustrative. There may be any number of cameras, networks, and servers, as desired for implementation.
With continued reference to FIG. 2, a flow 200 of one embodiment of an image stabilization method according to the present application is shown. The image stabilizing method comprises the following steps:
step 201, for a current night image in the night image sequence, segmenting a signal lamp region of interest ROI from the current night image based on a signal lamp position of the template image.
In this embodiment, the execution subject of the image stabilization method (for example, the server 103 shown in fig. 1) may receive, in real time, the night image sequence acquired by a roadside-mounted camera (for example, the camera 101 shown in fig. 1). For the current night image in the night image sequence, the execution subject may segment a signal lamp region of interest (ROI) from it based on the signal lamp position of the template image.
The signal lamps are usually road traffic signal lamps, installed at crossroads, T-junctions, and similar intersections and controlled by a road traffic signal controller to guide vehicles and pedestrians safely and in order. At present there are mainly two types of lamp head: one is a round, disk-shaped lamp head; the other is a scissor-shaped lamp head. During normal night operation, both types appear as a light range that is brightest at the center and dims radially outward. Although the two types differ in brightness and in how far the light radiates, the centroid of the light range coincides with the centroid of the lamp head itself, so the signal lamp position can be obtained by obtaining the centroid of each lamp head's light range.
Typically, the camera is positioned at the roadside where it can capture images of the signal lamp, for example on a signal light pole or monitoring pole erected at an intersection. The camera collects images periodically and sends them to the execution subject in real time. Ordered by acquisition time, the images collected by the camera form an image sequence, and the night images among them form the night image sequence. The execution subject processes the night images in order of acquisition time.
For the current night image in the night image sequence, the execution subject may first determine the rectangular frame in the current night image corresponding to the signal lamp position of the template image, and then segment the signal lamp ROI from the current night image based on that rectangular frame. The template image may be an image of the signal lamp collected by the camera in advance, annotated with the signal lamp position and the number of lamp heads. To avoid noise and overexposure and ensure annotation accuracy, the template image may be an image collected by the camera in the daytime. The signal lamp position may be coordinates of the signal lamp in a preset coordinate system, for example the coordinates of the upper-left and lower-right corner points of the signal lamp's bounding box in the pixel coordinate system. The pixel coordinate system is the coordinate system of the displayed image, usually with its origin at the upper-left corner, x increasing to the right, and y increasing downward.
In general, because the signal lamp or camera may have moved slightly and changed their relative position, the rectangular frame determined on the current night image from the template image's signal lamp position may not fully contain the signal lamp. So that the signal lamp ROI contains the signal lamp completely, the rectangular frame is expanded outward by several pixels, and the expanded frame is taken as the signal lamp ROI. Because the direction of movement of the signal lamp or camera is not fixed, the frame must be expanded on all sides, for example by h pixels up and down and w pixels left and right, where h and w are positive integers.
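As an illustrative sketch (the function name and the clipping to image bounds are assumptions, not taken from the application), the outward expansion of the rectangular frame might be implemented as:

```python
def expand_roi(box, h, w, img_h, img_w):
    """Expand an (x1, y1, x2, y2) box by w pixels left/right and
    h pixels up/down, clipping to the image bounds."""
    x1, y1, x2, y2 = box
    return (max(0, x1 - w), max(0, y1 - h),
            min(img_w - 1, x2 + w), min(img_h - 1, y2 + h))
```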
And 202, carrying out binarization on the gray level image corresponding to the signal lamp ROI to obtain a binarized image.
In this embodiment, the executing body may perform binarization on the grayscale image corresponding to the signal lamp ROI to obtain a binarized image.
Generally, the image captured by the camera is an RGB (red green blue) image, so the signal lamp ROI segmented from the current night image is also an RGB image. Converting the RGB values of the signal lamp ROI to grayscale values yields the grayscale image corresponding to the ROI. Setting each grayscale value to 0 or 255 according to an appropriate threshold (e.g., 200) yields the binarized image, which contains only black and white; each white region corresponds to the light range of one lamp head of the signal lamp.
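A minimal sketch of this step, assuming the standard luminance weights for RGB-to-grayscale conversion (the application does not specify a conversion formula):

```python
import numpy as np

def binarize_roi(roi_rgb, threshold=200):
    """Convert an RGB ROI (H, W, 3) to grayscale with the usual
    luminance weights, then binarize: pixels above the threshold
    become 255 (white), the rest 0 (black)."""
    gray = (0.299 * roi_rgb[..., 0]
            + 0.587 * roi_rgb[..., 1]
            + 0.114 * roi_rgb[..., 2])
    return np.where(gray > threshold, 255, 0).astype(np.uint8)
```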
The selected threshold should satisfy at least one of the following conditions: binarization with the threshold clearly separates bright regions from dark ones, and binarization with the threshold eliminates the halo of the signal lamp.
In some optional implementations of this embodiment, the binarized threshold may be determined by:
firstly, for any frame of night image in the night image sequence, the gray level image corresponding to the signal lamp ROI is uniformly divided into a plurality of sub-regions.
For example, for a transverse signal lamp, the gray image of the transverse signal lamp can be longitudinally and uniformly divided into a plurality of sub-areas; for a longitudinal signal lamp, the gray image of the longitudinal signal lamp can be transversely and uniformly divided into a plurality of sub-areas. Wherein the number of sub-areas may be, for example, 3.
Thereafter, the average luminance of each sub-region is calculated.
That is, the average gradation value of each sub-region is calculated.
Then, a sub-region having the largest average brightness is selected from the plurality of sub-regions.
That is, the sub-region having the largest average gray value is selected.
And finally, determining a binarization threshold value based on the brightness distribution of the sub-region with the maximum average brightness.
Generally, based on the luminance distribution of the sub-region with the largest average luminance, the left boundary point of the interval in which most of the pixel points of the sub-region are distributed may be determined, and the binarized threshold may be obtained by multiplying the left boundary point by a numerical value (e.g., 0.8) smaller than 1.
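The steps above can be sketched as follows for a horizontal signal lamp. The function name is an assumption, and the "left boundary of the interval holding most pixels" is approximated here by a high percentile of the brightest slice; both the percentile and the 0.8 factor are illustrative choices, not prescribed by the application:

```python
import numpy as np

def adaptive_threshold(gray, n_regions=3, scale=0.8):
    """Split the grayscale ROI of a horizontal signal lamp into
    n_regions vertical slices, pick the slice with the largest mean
    brightness, and derive the binarization threshold from its
    brightness distribution (high percentile times a factor < 1)."""
    slices = np.array_split(gray, n_regions, axis=1)
    brightest = max(slices, key=lambda s: s.mean())
    left_boundary = np.percentile(brightest, 90)
    return scale * left_boundary
```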
And step 203, calculating and storing the centroid position of each white area in the binary image.
In this embodiment, the execution subject may calculate and store the centroid position of each white region in the binarized image.
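A minimal sketch of this step, using breadth-first search for connected-component labeling (the application does not prescribe a particular labeling algorithm, and the 4-connectivity here is an assumption):

```python
import numpy as np
from collections import deque

def white_region_centroids(binary):
    """Find 4-connected white (255) regions in a binarized image and
    return the centroid (row, col) of each region as the mean of its
    pixel coordinates."""
    h, w = binary.shape
    seen = np.zeros((h, w), dtype=bool)
    centroids = []
    for r in range(h):
        for c in range(w):
            if binary[r, c] == 255 and not seen[r, c]:
                # BFS over one white region, accumulating pixel coords
                queue, pixels = deque([(r, c)]), []
                seen[r, c] = True
                while queue:
                    y, x = queue.popleft()
                    pixels.append((y, x))
                    for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                        ny, nx = y + dy, x + dx
                        if (0 <= ny < h and 0 <= nx < w
                                and binary[ny, nx] == 255
                                and not seen[ny, nx]):
                            seen[ny, nx] = True
                            queue.append((ny, nx))
                ys, xs = zip(*pixels)
                centroids.append((sum(ys) / len(ys), sum(xs) / len(xs)))
    return centroids
```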
And 204, in response to the number of frames of the night images storing the centroid positions being equal to a first preset value, dividing the stored centroid positions into a plurality of sets based on the distance between the stored centroid positions.
In this embodiment, the execution subject may determine the relationship between the number of frames of night images with saved centroid positions and a first preset value (e.g., 500). If that number of frames equals the first preset value, the execution subject may calculate the distance between each pair of saved centroid positions. If the distance is smaller than a preset distance threshold, the pair of centroids most likely corresponds to the same lamp head of the signal lamp, and the two centroid positions are added to the same set; otherwise, they most likely correspond to different lamp heads and are added to different sets. In this way, the saved centroid positions are divided into a plurality of sets.
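One way to realize this pairwise-distance grouping is union-find over all centroid pairs. This is a sketch under the assumption that centroids are (x, y) tuples; the application only requires that centroids closer than the threshold end up in the same set:

```python
def cluster_centroids(points, dist_threshold):
    """Group centroid positions: two centroids whose Euclidean distance
    is below dist_threshold are taken to belong to the same lamp head
    and end up in the same set (union-find over all pairs)."""
    parent = list(range(len(points)))

    def find(i):
        # find root with path halving
        while parent[i] != i:
            parent[i] = parent[parent[i]]
            i = parent[i]
        return i

    for i in range(len(points)):
        for j in range(i + 1, len(points)):
            dx = points[i][0] - points[j][0]
            dy = points[i][1] - points[j][1]
            if (dx * dx + dy * dy) ** 0.5 < dist_threshold:
                parent[find(i)] = find(j)

    sets = {}
    for i, p in enumerate(points):
        sets.setdefault(find(i), []).append(p)
    return list(sets.values())
```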
Generally, the number of frames of night images with saved centroid positions must accumulate to a certain amount to ensure that the calculated signal lamp position of the current night image is sufficiently accurate.
Step 205, calculating an average value of the centroid positions in at least one set to obtain the signal lamp position of the current night image.
In this embodiment, the executing entity may calculate an average value of the centroid positions in at least one set, and obtain the signal light position of the current night image.
Typically, the number of sets equals the number of lamp heads of the signal lamp, one set per lamp head. Calculating the average of the centroid positions in a set yields the position of one lamp head. The frame formed by the positions of all the lamp heads is the signal lamp position.
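As an illustrative sketch (the function name and the (x1, y1, x2, y2) bounding-box representation are assumptions, not taken from the application), averaging each set and assembling the head positions into a single frame might look like:

```python
def signal_lamp_position(sets):
    """Average the centroids in each set to get one lamp-head position,
    then take the bounding box over all heads as (x1, y1, x2, y2)."""
    heads = [(sum(p[0] for p in s) / len(s), sum(p[1] for p in s) / len(s))
             for s in sets]
    xs = [h[0] for h in heads]
    ys = [h[1] for h in heads]
    return (min(xs), min(ys), max(xs), max(ys))
```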
According to the image stabilization method provided by this embodiment of the application, first, for the current night image in the night image sequence, a signal lamp region of interest (ROI) is segmented from the current night image based on the signal lamp position of the template image; the grayscale image corresponding to the signal lamp ROI is then binarized to obtain a binarized image; the centroid position of each white region in the binarized image is calculated and saved; then, in response to the number of frames of night images with saved centroid positions being equal to a first preset value, the saved centroid positions are divided into a plurality of sets based on the distances between them; finally, the average of the centroid positions in at least one set is calculated to obtain the signal lamp position of the current night image. The method determines the signal lamp position of the current night image from the offset of the centroids of the normally operating lamp heads relative to the signal lamp position of the template image. No image features need to be extracted, which reduces the consumption of computing resources and shortens the time spent on image stabilization. The method is also largely unaffected by illumination from vehicle headlights, which improves the stability of the image stabilization effect.
With further reference to fig. 3, fig. 3 shows a flow 300 of yet another embodiment of an image stabilization method according to the present application. The image stabilizing method comprises the following steps:
step 301, for the current night image in the night image sequence, segmenting the signal lamp interesting region ROI from the current night image based on the signal lamp position of the template image.
And 302, binarizing the gray level image corresponding to the signal lamp ROI to obtain a binarized image.
And step 303, calculating and storing the centroid position of each white area in the binary image.
In this embodiment, the specific operations of steps 301-303 are substantially the same as those of steps 201-203 in the embodiment shown in fig. 2, and are not described again here.
Step 304, comparing the number of frames of night images with saved centroid positions against a first preset value.
In this embodiment, the execution subject of the image stabilization method (e.g., the server 103 shown in fig. 1) may determine the relationship between the number of frames of night images with saved centroid positions and a first preset value (e.g., 500). If that number of frames equals the first preset value, continue to step 305; if it is smaller than the first preset value, return to step 301 and obtain the next night image from the night image sequence to continue calculating centroid positions; if it is larger than the first preset value, jump to step 315.
Generally, the number of frames of night images with saved centroid positions must accumulate to a certain amount to ensure that the calculated signal lamp position of the current night image is sufficiently accurate.
Step 305, dividing the saved centroid positions into a plurality of sets based on the distance between the saved centroid positions.
In this embodiment, the execution subject may calculate the distance between each pair of saved centroid positions. If the distance is smaller than a preset distance threshold, the pair of centroids most likely corresponds to the same lamp head of the signal lamp, and the two centroid positions are added to the same set; otherwise, they most likely correspond to different lamp heads and are added to different sets. In this way, the saved centroid positions are divided into a plurality of sets.
Step 306, judging whether the number of sets is smaller than the number of lamp heads of the signal lamp.
In this embodiment, the execution subject may judge whether the number of sets is smaller than the number of lamp heads of the signal lamp. If the number of sets is not smaller than the number of lamp heads, continue to step 307; if it is smaller, jump to step 313.
Step 307, traversing the sets and selecting a combination of sets whose count equals the number of lamp heads of the signal lamp.
In this embodiment, if the number of sets is not smaller than the number of lamp heads of the signal lamp, the execution subject may traverse the sets and select as many sets as there are lamp heads. For example, if there are 5 sets and the signal lamp has 4 lamp heads, 4 of the 5 sets are selected.
Step 308, calculate the mean of the centroid positions in the selected set.
In this embodiment, the execution subject may calculate an average of the centroid positions in each of the selected sets.
Step 309, judging whether the frame formed by the averages of the centroid positions in the selected sets matches the signal lamp position of the template image.
In this embodiment, the execution subject may connect the points corresponding to the averages of the centroid positions in the selected sets clockwise or counterclockwise to obtain the corresponding frame. If this frame matches the rectangular frame corresponding to the signal lamp position of the template image, continue to step 310; otherwise, jump to step 311.
Generally, if the frame formed by the averages of the centroid positions in the selected sets and the rectangular frame corresponding to the signal lamp position of the template image satisfy certain matching conditions, they are considered to match. The matching conditions may include, but are not limited to: similar areas, similar shapes, a degree of overlap above a certain value, and the like.
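The "degree of overlap above a certain value" condition can be made concrete with intersection-over-union (IoU). The function names and the 0.5 threshold below are illustrative choices, not prescribed by the application:

```python
def iou(box_a, box_b):
    """Intersection-over-union of two (x1, y1, x2, y2) boxes."""
    ix1, iy1 = max(box_a[0], box_b[0]), max(box_a[1], box_b[1])
    ix2, iy2 = min(box_a[2], box_b[2]), min(box_a[3], box_b[3])
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    return inter / (area_a + area_b - inter) if inter else 0.0

def frames_match(candidate, template, min_iou=0.5):
    """One possible realization of the overlap matching condition."""
    return iou(candidate, template) >= min_iou
```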
Step 310, determining the averages of the centroid positions in the selected sets as the signal lamp position of the current night image.
In this embodiment, if the frame formed by the averages of the centroid positions in the selected sets matches the signal lamp position of the template image, the execution subject may determine those averages as the signal lamp position of the current night image, with the averages serving as the corner points of the frame corresponding to the signal lamp position.
Typically, the number of sets equals the number of lamp heads of the signal lamp, one set per lamp head. If the number of sets is larger than the number of lamp heads, interference such as image overexposure has occurred. Selecting, by traversing the sets, the combination that matches the signal lamp position of the template image eliminates such interference and improves the accuracy of the calculated signal lamp position.
Step 311, whether the set is traversed is finished.
In this embodiment, if the frame formed by the average value of the centroid positions in the selected set does not match the signal light position of the template image, the executing entity may determine whether the set is traversed. If not, returning to the step 307, and continuing to perform set traversal; if the traversal is completed and the position of the signal light of the template image is not matched, the step 312 is continuously executed.
In step 312, the first predetermined value is incremented.
In this embodiment, if the set traversal is completed and the set traversal does not match the signal lamp position of the template image, or the set number is smaller than the number of the lamp caps of the signal lamp, the executing entity may increase the first preset value (for example, the first preset value is increased by 100, and is changed from 500 to 600), and return to step 301, and obtain the next night image from the night image sequence and continue to calculate the centroid position.
When the set traversal finishes without a set matching the signal lamp position of the template image, or when the number of sets is smaller than the number of lamp caps of the signal lamp, the number of frames of night images with stored centroid positions has not yet accumulated sufficiently. Increasing the first preset value and continuing to accumulate improves the accuracy of the calculated signal lamp position.
Step 313: determine whether the number of frames of night images with stored centroid positions is smaller than a second preset value.
In this embodiment, if the number of sets is smaller than the number of lamp caps of the signal lamp, the executing entity may determine whether the number of frames of night images with stored centroid positions is smaller than a second preset value (e.g., 1000). If so, the flow jumps to step 312 to increase the first preset value and acquire the next night image from the night image sequence to continue calculating centroid positions. If not, the signal lamp is considered to have a faulty lamp cap, a signal lamp fault is reported, and step 314 is executed.
Step 314: calculate the average of the centroid positions in each set, and determine the signal lamp position of the current night image.
In this embodiment, if the number of sets is smaller than the number of lamp caps of the signal lamp and the number of frames of night images with stored centroid positions is not smaller than the second preset value, the executing entity may calculate the average of the centroid positions in each set to determine the signal lamp position of the current night image.
In general, for aesthetic reasons, a signal lamp has an even number of lamp caps, and the lamp caps are arranged symmetrically. Therefore, when a few lamp caps are faulty, the positions of the faulty lamp caps can be estimated from this symmetry, yielding the signal lamp position. For example, for a signal lamp composed of 4 lamp caps with one faulty, the points corresponding to the averages of the centroid positions in the 3 remaining sets give the positions of the 3 normally working lamp caps. The rectangular frame whose corner points are these 3 positions (with the fourth corner inferred by symmetry) is the signal lamp position.
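For the four-lamp-cap example, the missing corner can be recovered from the fact that a rectangle's diagonals share a midpoint. A minimal sketch (the function name and the use of the largest pairwise distance to identify the known diagonal are assumptions):

```python
import math

def estimate_missing_corner(known):
    """Given 3 corners of a rectangle, estimate the 4th.
    The two known corners that are farthest apart lie on the same
    diagonal; the missing corner is diagonal to the remaining point,
    so the diagonals' shared midpoint gives its coordinates."""
    a, b, c = known
    pairs = [((a, b), c), ((a, c), b), ((b, c), a)]
    (p, q), r = max(pairs, key=lambda t: math.dist(*t[0]))
    return (p[0] + q[0] - r[0], p[1] + q[1] - r[1])
```

The four recovered corner points then form the rectangular frame taken as the signal lamp position.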
Step 315: delete the centroid positions of the earliest stored night image.
In this embodiment, if the number of frames of night images with stored centroid positions is greater than the first preset value, the executing entity may delete the centroid positions of the earliest stored night image and return to step 305.
When too many frames of night images with centroid positions have accumulated, deleting the centroid positions of the earliest stored night image avoids interference from stale night images and improves the accuracy of the calculated signal lamp position.
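This capped accumulation with oldest-frame deletion behaves like a sliding window, which in Python can be sketched with a bounded deque (the preset value of 500 and the per-frame record format are assumptions):

```python
from collections import deque

def make_centroid_window(first_preset=500):
    """Bounded buffer of per-frame centroid lists: once full, appending
    a new frame automatically discards the earliest stored frame."""
    return deque(maxlen=first_preset)
```

Appending the current frame's centroids and then checking `len(window) == window.maxlen` corresponds to the condition that the number of frames with stored centroid positions has reached the first preset value.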
In the case where the stored centroid positions have already been divided into sets, the executing entity may determine whether a centroid position of the current night image belongs to one of them. If so, the centroid position is added to that set, and the earliest stored centroid position in that set is deleted; if not, a new set is created, and the centroid position of the current night image is added to it. In this way, once the sets have been divided, the set to which a centroid position of the current night image belongs can be determined quickly. A centroid position is considered to belong to a set if the difference between it and the average of the centroid positions in that set is less than a certain value.
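The membership test described here — comparing a centroid to each set's mean — might be sketched as follows (the Euclidean metric and the 10-pixel threshold are assumptions; the embodiment says only "less than a certain value"):

```python
import math

def assign_to_set(centroid, sets, threshold=10.0):
    """Return the index of the first set whose centroid mean lies within
    `threshold` of `centroid`, or None if a new set should be created."""
    for i, s in enumerate(sets):
        mean_x = sum(p[0] for p in s) / len(s)
        mean_y = sum(p[1] for p in s) / len(s)
        if math.hypot(centroid[0] - mean_x, centroid[1] - mean_y) < threshold:
            return i
    return None
```

A return value of `None` triggers creation of a new set holding only the current centroid position.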
In some optional implementations of this embodiment, if the number of times the new set has been updated equals the first preset value, the flow returns to step 307 and the set traversal is performed again. If a signal lamp position can be determined from a newly selected set, that set is used to calculate the signal lamp position; otherwise, the original sets are used. This makes it possible to detect whether the signal lamp or the camera has moved again.
As can be seen from fig. 3, compared with the embodiment corresponding to fig. 2, the image stabilization method in this embodiment highlights the step of determining the signal lamp position of the current night image based on the centroid positions in the sets. The scheme described in this embodiment therefore waits until the number of frames of night images with stored centroid positions has accumulated to a certain value, ensuring that the calculated signal lamp position of the current night image is more accurate. When the number of sets is larger than the number of lamp caps of the signal lamp, selecting the set matching the signal lamp position of the template image by traversing the sets eliminates interference such as image overexposure and improves the accuracy of the calculated signal lamp position. When the set traversal finishes without a matching set, or when the number of sets is smaller than the number of lamp caps, increasing the first preset value and continuing to accumulate likewise improves accuracy. When the signal lamp has a faulty lamp cap, a signal lamp fault is reported, and the position of the faulty lamp cap is predicted from the symmetry among the lamp caps to obtain the signal lamp position. Finally, when too many frames of night images with centroid positions have accumulated, deleting the centroid positions of the earliest stored night image avoids interference from stale night images and further improves the accuracy of the calculated signal lamp position.
With further reference to fig. 4, as an implementation of the methods shown in the above figures, the present application provides an embodiment of an image stabilization apparatus, which corresponds to the embodiment of the method shown in fig. 2, and which is particularly applicable to various electronic devices.
As shown in fig. 4, the image stabilization apparatus 400 of the present embodiment may include: a segmentation module 401, a binarization module 402, a first calculation module 403, a division module 404 and a second calculation module 405. The segmentation module 401 is configured to, for a current night image in a night image sequence, segment a signal lamp region of interest (ROI) from the current night image based on the signal lamp position of a template image; the binarization module 402 is configured to binarize the grayscale image corresponding to the signal lamp ROI to obtain a binarized image; the first calculation module 403 is configured to calculate and save the centroid position of each white region in the binarized image; the division module 404 is configured to divide the saved centroid positions into a plurality of sets based on the distances between them, in response to the number of frames of night images with saved centroid positions being equal to a first preset value; the second calculation module 405 is configured to calculate the average of the centroid positions in at least one set to obtain the signal lamp position of the current night image.
In the present embodiment, for the specific processing performed by the segmentation module 401, the binarization module 402, the first calculation module 403, the division module 404, and the second calculation module 405 of the image stabilization apparatus 400, and the technical effects thereof, reference may be made to the related descriptions of steps 201 to 205 in the embodiment corresponding to fig. 2, which are not repeated here.
In some optional implementations of this embodiment, the segmentation module 401 is further configured to: in the current night image, determining a rectangular frame corresponding to the signal lamp position of the template image; the rectangular frame is expanded outward by a plurality of pixels, and the expanded rectangular frame is determined as a signal light ROI.
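A sketch of that expansion, with clamping to the image bounds added as a defensive assumption (the 10-pixel margin is illustrative; the embodiment says only "a plurality of pixels"):

```python
def expand_roi(box, img_w, img_h, margin=10):
    """Expand a rectangle (x_min, y_min, x_max, y_max) outward by
    `margin` pixels on every side, clamped to the image bounds."""
    x0, y0, x1, y1 = box
    return (max(0, x0 - margin), max(0, y0 - margin),
            min(img_w - 1, x1 + margin), min(img_h - 1, y1 + margin))
```

The expanded rectangle is then taken as the signal lamp ROI, giving the subsequent centroid calculation some slack against small camera shifts.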
In some optional implementations of this embodiment, the second calculating module 405 includes: the selecting submodule is configured to respond to the number of the sets not smaller than the number of the lamp caps of the signal lamps, perform set traversal, and select a set corresponding to the number of the lamp caps of the signal lamps; a calculation submodule configured to calculate an average of the centroid positions in the selected set; and the first determining submodule is configured to determine the average value of the centroid positions in the selected set as the signal lamp position of the current night image if a frame formed by the average values of the centroid positions in the selected set is matched with the signal lamp position of the template image.
In some optional implementations of this embodiment, the second calculating module 405 further includes: and the traversal submodule is configured to continue set traversal if a frame formed by the average value of the centroid positions in the selected set is not matched with the signal lamp positions of the template images.
In some optional implementations of this embodiment, the second calculating module 405 further includes: and the first increasing submodule is configured to increase a first preset value if the set is traversed and is not matched with the signal lamp position of the template image, and acquire the next night image from the night image sequence to continue to calculate the centroid position.
In some optional implementations of this embodiment, the second calculating module 405 further includes: a second increasing submodule configured to increase the first preset value in response to the number of sets being less than the number of lamp caps of the signal lamp and the number of frames of night images with saved centroid positions being less than a second preset value, and to acquire the next night image from the night image sequence to continue calculating centroid positions.
In some optional implementations of this embodiment, the second calculating module 405 further includes: a second determination submodule configured to, in response to the number of sets being less than the number of lamp caps of the signal lamp and the number of frames of night images with saved centroid positions being not less than the second preset value, calculate the average of the centroid positions in each set and determine the signal lamp position of the current night image.
In some optional implementations of this embodiment, the image stabilization device 400 further includes: a deleting module configured to delete the centroid position of the oldest stored night image in response to the number of frames of the night image storing the centroid position being greater than a first preset value.
In some optional implementations of this embodiment, the deletion module is further configured to: if the centroid position of the current night image belongs to the plurality of sets, adding the centroid position of the current night image into the set to which the current night image belongs, and deleting the centroid position stored earliest in the set to which the current night image belongs; and if the centroid position of the current night image does not belong to the plurality of sets, creating a new set, and adding the centroid position of the current night image into the new set.
In some optional implementations of this embodiment, the image stabilization device 400 further includes: and the traversing module is configured to perform set traversal again if the number of times of updating the new set is equal to a first preset value.
There is also provided, in accordance with an embodiment of the present application, an electronic device, a readable storage medium, and a computer program product.
FIG. 5 illustrates a schematic block diagram of an example electronic device 500 that can be used to implement embodiments of the present application. Electronic devices are intended to represent various forms of digital computers, such as laptops, desktops, workstations, personal digital assistants, servers, blade servers, mainframes, and other appropriate computers. The electronic device may also represent various forms of mobile devices, such as personal digital assistants, cellular phones, smart phones, wearable devices, and other similar computing devices. The components shown herein, their connections and relationships, and their functions, are meant to be examples only, and are not meant to limit implementations of the present application that are described and/or claimed herein.
As shown in fig. 5, the device 500 comprises a computing unit 501 which may perform various appropriate actions and processes in accordance with a computer program stored in a Read Only Memory (ROM) 502 or a computer program loaded from a storage unit 508 into a Random Access Memory (RAM) 503. In the RAM 503, various programs and data required for the operation of the device 500 can also be stored. The computing unit 501, the ROM 502, and the RAM 503 are connected to each other by a bus 504. An input/output (I/O) interface 505 is also connected to bus 504.
A number of components in the device 500 are connected to the I/O interface 505, including: an input unit 506 such as a keyboard, a mouse, or the like; an output unit 507 such as various types of displays, speakers, and the like; a storage unit 508, such as a magnetic disk, optical disk, or the like; and a communication unit 509 such as a network card, modem, wireless communication transceiver, etc. The communication unit 509 allows the device 500 to exchange information/data with other devices through a computer network such as the internet and/or various telecommunication networks.
The computing unit 501 may be a variety of general-purpose and/or special-purpose processing components having processing and computing capabilities. Some examples of the computing unit 501 include, but are not limited to, a Central Processing Unit (CPU), a Graphics Processing Unit (GPU), various dedicated Artificial Intelligence (AI) computing chips, various computing units running machine learning model algorithms, a Digital Signal Processor (DSP), and any suitable processor, controller, microcontroller, and so forth. The computing unit 501 performs the respective methods and processes described above, such as the image stabilization method. For example, in some embodiments, the image stabilization method may be implemented as a computer software program tangibly embodied in a machine-readable medium, such as storage unit 508. In some embodiments, part or all of the computer program may be loaded and/or installed onto the device 500 via the ROM 502 and/or the communication unit 509. When the computer program is loaded into the RAM 503 and executed by the computing unit 501, one or more steps of the image stabilization method described above may be performed. Alternatively, in other embodiments, the computing unit 501 may be configured to perform the image stabilization method by any other suitable means (e.g., by means of firmware).
Various implementations of the systems and techniques described above may be implemented in digital electronic circuitry, integrated circuitry, Field Programmable Gate Arrays (FPGAs), Application Specific Integrated Circuits (ASICs), Application Specific Standard Products (ASSPs), systems on chip (SOCs), Complex Programmable Logic Devices (CPLDs), computer hardware, firmware, software, and/or combinations thereof. These various embodiments may include: implemented in one or more computer programs that are executable and/or interpretable on a programmable system including at least one programmable processor, which may be special or general purpose, receiving data and instructions from, and transmitting data and instructions to, a storage system, at least one input device, and at least one output device.
Program code for implementing the methods of the present application may be written in any combination of one or more programming languages. These program codes may be provided to a processor or controller of a general purpose computer, special purpose computer, or other programmable data processing apparatus, such that the program codes, when executed by the processor or controller, cause the functions/operations specified in the flowchart and/or block diagram to be performed. The program code may execute entirely on the machine, partly on the machine, as a stand-alone software package partly on the machine and partly on a remote machine or entirely on the remote machine or server.
In the context of this application, a machine-readable medium may be a tangible medium that can contain, or store a program for use by or in connection with an instruction execution system, apparatus, or device. The machine-readable medium may be a machine-readable signal medium or a machine-readable storage medium. A machine-readable medium may include, but is not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any suitable combination of the foregoing. More specific examples of a machine-readable storage medium would include an electrical connection based on one or more wires, a portable computer diskette, a hard disk, a Random Access Memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing.
To provide for interaction with a user, the systems and techniques described here can be implemented on a computer having: a display device (e.g., a CRT (cathode ray tube) or LCD (liquid crystal display) monitor) for displaying information to a user; and a keyboard and a pointing device (e.g., a mouse or a trackball) by which a user can provide input to the computer. Other kinds of devices may also be used to provide for interaction with a user; for example, feedback provided to the user can be any form of sensory feedback (e.g., visual feedback, auditory feedback, or tactile feedback); and input from the user may be received in any form, including acoustic, speech, or tactile input.
The systems and techniques described here can be implemented in a computing system that includes a back-end component (e.g., as a data server), or that includes a middleware component (e.g., an application server), or that includes a front-end component (e.g., a user computer having a graphical user interface or a web browser through which a user can interact with an implementation of the systems and techniques described here), or any combination of such back-end, middleware, or front-end components. The components of the system can be interconnected by any form or medium of digital data communication (e.g., a communication network). Examples of communication networks include: local Area Networks (LANs), Wide Area Networks (WANs), and the Internet.
The computer system may include clients and servers. A client and server are generally remote from each other and typically interact through a communication network. The relationship of client and server arises by virtue of computer programs running on the respective computers and having a client-server relationship to each other.
According to an embodiment of the present application, a roadside device is also provided. The roadside device may include the electronic device shown in fig. 5. Optionally, the roadside device may include a communication component and the like in addition to the electronic device, and the electronic device may be integrated with the communication component or may be separately provided. The electronic device may acquire data, such as pictures and videos, from a sensing device (e.g., a roadside camera) for image and video processing and data computation. Optionally, the electronic device itself may also have a sensing data acquisition function and a communication function — for example, an AI camera — and may directly perform image and video processing and data computation based on the acquired sensing data.
According to an embodiment of the present application, a cloud control platform is also provided. The cloud control platform may include the electronic device shown in fig. 5. Optionally, the cloud control platform performs processing at the cloud end, and the electronic device included in the cloud control platform may acquire data, such as pictures and videos, from a sensing device (such as a roadside camera), so as to perform image and video processing and data computation. The cloud control platform may also be called a vehicle-road cooperative management platform, an edge computing platform, a cloud computing platform, a central system, a cloud server, and the like.
It should be understood that various forms of the flows shown above may be used, with steps reordered, added, or deleted. For example, the steps described in the present application may be executed in parallel, sequentially, or in different orders, as long as the desired results of the technical solutions disclosed in the present application can be achieved, and the present application is not limited herein.
The above-described embodiments should not be construed as limiting the scope of the present application. It should be understood by those skilled in the art that various modifications, combinations, sub-combinations and substitutions may be made in accordance with design requirements and other factors. Any modification, equivalent replacement, and improvement made within the spirit and principle of the present application shall be included in the protection scope of the present application.

Claims (25)

1. An image stabilization method, comprising:
for a current night image in a night image sequence, segmenting a signal lamp region of interest ROI from the current night image based on a signal lamp position of a template image;
binarizing the gray level image corresponding to the signal lamp ROI to obtain a binarized image;
calculating and storing the centroid position of each white area in the binary image;
in response to the number of frames of the nighttime images at which the centroid positions are saved being equal to a first preset value, dividing the saved centroid positions into a plurality of sets based on the distance between the saved centroid positions;
and calculating the average value of the centroid positions in at least one set to obtain the signal lamp position of the current night image.
2. The method of claim 1, wherein the segmenting a signal light region of interest, ROI, from the current night image based on the signal light position of the template image comprises:
determining a rectangular frame corresponding to the signal lamp position of the template image in the current night image;
and expanding the rectangular frame outwards by a plurality of pixels, and determining the expanded rectangular frame as the signal lamp ROI.
3. The method of claim 1, wherein said calculating an average of centroid positions in at least one set resulting in a signal light position for the current night image comprises:
responding to the condition that the number of the sets is not less than the number of the lamp holders of the signal lamps, performing set traversal, and selecting the set corresponding to the number of the lamp holders of the signal lamps;
calculating the average value of the centroid positions in the selected set;
and if a frame formed by the average value of the centroid positions in the selected set is matched with the signal lamp position of the template image, determining the average value of the centroid positions in the selected set as the signal lamp position of the current night image.
4. The method of claim 3, wherein said calculating an average of centroid positions in at least one set resulting in a signal light position for the current night image further comprises:
and if the frame formed by the average value of the centroid positions in the selected set is not matched with the signal lamp position of the template image, continuing set traversal.
5. The method of claim 4, wherein said calculating an average of centroid positions in at least one set resulting in a signal light position for the current night image further comprises:
and if the set traversal is finished and the set traversal is not matched with the signal lamp position of the template image, increasing the first preset numerical value, and acquiring the next night image from the night image sequence to continue calculating the centroid position.
6. The method of claim 1, wherein said calculating an average of centroid positions in at least one set resulting in a signal light position for the current night image further comprises:
and in response to the condition that the number of the sets is smaller than the number of lamp caps of the signal lamp and the number of the frames of the night images of the centroid position is smaller than a second preset value, increasing the first preset value, and acquiring the next night image from the night image sequence to continue calculating the centroid position.
7. The method of claim 6, wherein said calculating an average of centroid positions in at least one set resulting in a signal light position for the current night image further comprises:
and in response to the condition that the number of the sets is smaller than the number of lamp caps of the signal lamp and the number of the night images with the mass center positions is not smaller than the second preset value, calculating the average value of the mass center positions in each set, and determining the position of the signal lamp of the current night image.
8. The method of claim 1, wherein the method further comprises:
and deleting the center of mass position of the oldest stored night image in response to the number of frames of the night image in which the center of mass position is stored being greater than the first preset value.
9. The method of claim 8, wherein the deleting the centroid position of the oldest saved night image comprises:
if the centroid position of the current night image belongs to the plurality of sets, adding the centroid position of the current night image into the set to which the current night image belongs, and deleting the centroid position stored earliest in the set to which the current night image belongs;
and if the centroid position of the current night image does not belong to the plurality of sets, creating a new set, and adding the centroid position of the current night image into the new set.
10. The method of claim 9, wherein the method further comprises:
and if the number of times of updating the new set is equal to the first preset value, the set traversal is carried out again.
11. An image stabilization device, comprising:
a segmentation module configured to segment a signal light region of interest ROI from a current night image in a sequence of night images based on a signal light position of a template image;
the binarization module is configured to binarize the gray level image corresponding to the signal lamp ROI to obtain a binarized image;
a first calculation module configured to calculate and save a centroid position of each white region in the binarized image;
a dividing module configured to divide the saved centroid positions into a plurality of sets based on a distance between the saved centroid positions in response to a number of frames of the night images of the saved centroid positions being equal to a first preset value;
a second calculation module configured to calculate an average of the centroid positions in at least one set, resulting in a signal light position of the current night image.
12. The apparatus of claim 11, wherein the segmentation module is further configured to:
determining a rectangular frame corresponding to the signal lamp position of the template image in the current night image;
and expanding the rectangular frame outwards by a plurality of pixels, and determining the expanded rectangular frame as the signal lamp ROI.
13. The apparatus of claim 11, wherein the second computing module comprises:
the selecting submodule is configured to respond to the condition that the number of the sets is not smaller than the number of the lamp caps of the signal lamps, perform set traversal, and select the set corresponding to the number of the lamp caps of the signal lamps;
a calculation submodule configured to calculate an average of the centroid positions in the selected set;
a first determining submodule configured to determine the mean value of the centroid positions in the selected set as the signal light position of the current night image if a frame composed of the mean value of the centroid positions in the selected set matches the signal light position of the template image.
14. The apparatus of claim 13, wherein the second computing module further comprises:
and the traversal submodule is configured to continue set traversal if a frame formed by the average value of the centroid positions in the selected set is not matched with the signal lamp positions of the template images.
15. The apparatus of claim 14, wherein the second computing module further comprises:
and the first increasing submodule is configured to increase the first preset numerical value and acquire the next night image from the night image sequence to continue to calculate the centroid position if the set is traversed and the set is not matched with the signal lamp position of the template image.
16. The apparatus of claim 11, wherein the second computing module further comprises:
a second increasing submodule configured to increase the first preset value in response to the number of sets being less than the number of lamp caps of the signal lamp and the number of frames of the night image storing the centroid position being less than a second preset value, and to acquire a next frame of night image from the sequence of night images to continue calculating the centroid position.
17. The apparatus of claim 16, wherein the second computing module further comprises:
a second determination submodule configured to calculate an average value of the centroid positions in each set and determine a signal lamp position of the current night image in response to the number of sets being less than the number of lamp caps of the signal lamp and the number of frames of the night image storing the centroid positions being not less than the second preset value.
18. The apparatus of claim 11, wherein the apparatus further comprises:
a deletion module configured to delete the centroid position of the oldest stored night image in response to the number of frames of the night image storing the centroid position being greater than the first preset value.
19. The apparatus of claim 18, wherein the deletion module is further configured to:
if the centroid position of the current night image belongs to one of the plurality of sets, add the centroid position of the current night image to the set it belongs to, and delete the earliest-stored centroid position in that set; and
if the centroid position of the current night image does not belong to any of the plurality of sets, create a new set and add the centroid position of the current night image to the new set.
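Claims 18 and 19 together describe a sliding-window update: each new centroid either joins an existing set, evicting that set's earliest-stored entry once the window is full, or starts a new set. A hedged Python sketch follows; the membership radius and window length are invented parameters, and the patent does not specify how set membership is tested.

```python
from collections import deque

def update_centroid_sets(sets, centroid, radius=3.0, window=30):
    """Append `centroid` to the set it belongs to (a bounded deque that
    drops its earliest-stored entry when full), or create a new set."""
    for s in sets:
        ref = s[-1]  # compare against the set's most recent centroid
        if abs(centroid[0] - ref[0]) <= radius and abs(centroid[1] - ref[1]) <= radius:
            s.append(centroid)  # deque with maxlen evicts the oldest automatically
            return sets
    sets.append(deque([centroid], maxlen=window))
    return sets
```

Using a `deque` with `maxlen` gives the claim-18/19 eviction behavior for free: once a set holds `window` centroids, each append discards the earliest-stored one.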
20. The apparatus of claim 19, wherein the apparatus further comprises:
a traversing module configured to perform the set traversal again if the number of times the new set has been updated equals the first preset value.
21. An electronic device, comprising:
at least one processor; and
a memory communicatively coupled to the at least one processor; wherein
the memory stores instructions executable by the at least one processor to enable the at least one processor to perform the method of any one of claims 1-10.
22. A non-transitory computer readable storage medium storing computer instructions for causing a computer to perform the method of any one of claims 1-10.
23. A computer program product comprising a computer program which, when executed by a processor, implements the method according to any one of claims 1-10.
24. A roadside device comprising the electronic device of claim 21.
25. A cloud control platform comprising the electronic device of claim 21.
CN202110259595.2A 2021-03-10 2021-03-10 Image stabilizing method and device, road side equipment and cloud control platform Active CN112991290B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110259595.2A CN112991290B (en) 2021-03-10 2021-03-10 Image stabilizing method and device, road side equipment and cloud control platform

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202110259595.2A CN112991290B (en) 2021-03-10 2021-03-10 Image stabilizing method and device, road side equipment and cloud control platform

Publications (2)

Publication Number Publication Date
CN112991290A true CN112991290A (en) 2021-06-18
CN112991290B CN112991290B (en) 2023-12-05

Family

ID=76334798

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110259595.2A Active CN112991290B (en) 2021-03-10 2021-03-10 Image stabilizing method and device, road side equipment and cloud control platform

Country Status (1)

Country Link
CN (1) CN112991290B (en)

Citations (29)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
EP0963103A2 (en) * 1998-06-01 1999-12-08 Canon Kabushiki Kaisha Image reading apparatus, and dimming control method and line sensor layout method therefor
WO2006023863A1 (en) * 2004-08-23 2006-03-02 Intergraph Software Technologies Company Real-time image stabilization
CN101098465A (en) * 2007-07-20 2008-01-02 Harbin Engineering University Moving object detection and tracking method in video surveillance
CN101609504A (en) * 2009-07-21 2009-12-23 Huazhong University of Science and Technology Method for detecting, distinguishing and locating infrared imagery sea-surface targets
US20100111392A1 (en) * 2008-11-03 2010-05-06 Gerardo Hermosillo Valadez System and method for automatically classifying regions-of-interest
CN103854278A (en) * 2012-12-06 2014-06-11 Wuyi University Printed circuit board image registration method based on shape context of centroids of connected regions
US20140233853A1 (en) * 2013-02-19 2014-08-21 Research In Motion Limited Method and system for generating shallow depth of field effect
CN104851288A (en) * 2015-04-16 2015-08-19 Ningbo Institute of Information Technology Application, Chinese Academy of Sciences Traffic light positioning method
US20150332097A1 (en) * 2014-05-15 2015-11-19 Xerox Corporation Short-time stopping detection from red light camera videos
CN106991707A (en) * 2017-05-27 2017-07-28 Zhejiang Uniview Technologies Co., Ltd. Traffic light image enhancement method and device based on day-and-night imaging features
US20180068451A1 (en) * 2016-09-08 2018-03-08 Qualcomm Incorporated Systems and methods for creating a cinemagraph
CN107786788A (en) * 2016-08-31 2018-03-09 Baidu Online Network Technology (Beijing) Co., Ltd. Intelligent traffic light system and image processing method thereof
JP2018040861A (en) * 2016-09-06 2018-03-15 Canon Inc. Image blur correction device, lens barrel, and imaging device
CN108416798A (en) * 2018-03-05 2018-08-17 Shandong University Vehicle distance estimation method based on optical flow
CN109285188A (en) * 2017-07-21 2019-01-29 Baidu Online Network Technology (Beijing) Co., Ltd. Method and apparatus for generating location information of a target object
US20190080186A1 (en) * 2017-09-12 2019-03-14 Baidu Online Network Technology (Beijing) Co., Ltd. Traffic light state recognizing method and apparatus, computer device and readable medium
CN109949594A (en) * 2019-04-29 2019-06-28 Beijing Idriverplus Technology Co., Ltd. Real-time traffic light recognition method
CN110084833A (en) * 2019-04-25 2019-08-02 Beijing Institute of Computer Technology and Application Infrared moving target detection method based on adaptive neighborhood judgment
CN110717438A (en) * 2019-10-08 2020-01-21 Neusoft Reach Automotive Technology (Shenyang) Co., Ltd. Traffic signal lamp identification method and device
US20200097739A1 (en) * 2018-09-26 2020-03-26 Toyota Jidosha Kabushiki Kaisha Object detection device and object detection method
US20200134333A1 (en) * 2018-10-31 2020-04-30 Cognizant Technology Solutions India Pvt. Ltd. Traffic light recognition system and method
CN111292531A (en) * 2020-02-06 2020-06-16 Beijing Baidu Netcom Science and Technology Co., Ltd. Traffic signal lamp tracking method, device, equipment and storage medium
CN111428663A (en) * 2020-03-30 2020-07-17 Beijing Baidu Netcom Science and Technology Co., Ltd. Traffic light state identification method and device, electronic equipment and storage medium
CN111462225A (en) * 2020-03-31 2020-07-28 University of Electronic Science and Technology of China Centroid identification and positioning method for infrared light spot images
CN111695546A (en) * 2020-06-28 2020-09-22 Beijing Jingdong Qianshi Technology Co., Ltd. Traffic signal lamp identification method and device for unmanned vehicles
CN111723650A (en) * 2020-05-09 2020-09-29 South China Normal University Night vehicle detection method, device, equipment and storage medium
CN111767851A (en) * 2020-06-29 2020-10-13 Beijing Baidu Netcom Science and Technology Co., Ltd. Method and device for monitoring emergencies, electronic equipment and medium
CN111931726A (en) * 2020-09-23 2020-11-13 Beijing Baidu Netcom Science and Technology Co., Ltd. Traffic light detection method and device, computer storage medium and roadside equipment
CN112288767A (en) * 2020-11-04 2021-01-29 Chengdu Huanrong Photoelectric Technology Co., Ltd. Automatic detection and tracking method based on target-adaptive projection

Non-Patent Citations (3)

* Cited by examiner, † Cited by third party
Title
ZAMANI MD SANI ET AL.: "Real time traffic light detection and interpretation using circle centroid", IJITEE, vol. 8, no. 12, pages 456-469 *
XIAO HONGTAO: "A new region-of-interest vehicle localization method", Manufacturing Automation, no. 15 *
JIN LISHENG; CHENG LEI; CHENG BO: "Nighttime forward vehicle detection based on millimeter-wave radar and machine vision", Journal of Automotive Safety and Energy, no. 02 *

Also Published As

Publication number Publication date
CN112991290B (en) 2023-12-05

Similar Documents

Publication Publication Date Title
US11700457B2 (en) Flicker mitigation via image signal processing
CN104282011A (en) Method and device for detecting interference stripes in video images
CN111275036A (en) Target detection method, target detection device, electronic equipment and computer-readable storage medium
US20240013453A1 (en) Image generation method and apparatus, and storage medium
CN112700410A (en) Signal lamp position determination method, signal lamp position determination device, storage medium, program, and road side device
CN103729828A (en) Video rain removing method
CN111062331A (en) Mosaic detection method and device for image, electronic equipment and storage medium
JP6413318B2 (en) Vehicle detection device, system, and program
CN109658441B (en) Foreground detection method and device based on depth information
CN111311500A (en) Method and device for carrying out color restoration on image
US11792514B2 (en) Method and apparatus for stabilizing image, roadside device and cloud control platform
CN112991290B (en) Image stabilizing method and device, road side equipment and cloud control platform
JP6413319B2 (en) Vehicle detection device, system, and program
EP4080479A2 (en) Method for identifying traffic light, device, cloud control platform and vehicle-road coordination system
CN110135224B (en) Method and system for extracting foreground target of surveillance video, storage medium and terminal
CN110557622A (en) Depth information acquisition method and device based on structured light, equipment and medium
CN116310889A (en) Unmanned aerial vehicle environment perception data processing method, control terminal and storage medium
US11961239B2 (en) Method and device for marking image position of sub-pixel of display screen, and storage medium
CN114494680A (en) Accumulated water detection method, device, equipment and storage medium
CN114511862A (en) Form identification method and device and electronic equipment
CN114333345B (en) Early warning method, device, storage medium and program product for shielding parking space
CN112700657B (en) Method and device for generating detection information, road side equipment and cloud control platform
TWI830553B (en) Method for detecting wear of vehicle windows and related devices
CN113947762A (en) Traffic light color identification method, device and equipment and road side computing equipment
CN116311132A (en) Deceleration strip identification method, deceleration strip identification device, deceleration strip identification equipment and storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
TA01 Transfer of patent application right

Effective date of registration: 20211013

Address after: 100176 101, floor 1, building 1, yard 7, Ruihe West 2nd Road, Beijing Economic and Technological Development Zone, Daxing District, Beijing

Applicant after: Apollo Zhilian (Beijing) Technology Co.,Ltd.

Address before: 2 / F, baidu building, 10 Shangdi 10th Street, Haidian District, Beijing 100085

Applicant before: BEIJING BAIDU NETCOM SCIENCE AND TECHNOLOGY Co.,Ltd.

GR01 Patent grant