Traffic light color drawing method, device, equipment and storage medium

Publication number: CN116823661A
Application number: CN202310791393.1A
Authority: CN (China)
Legal status: Pending
Prior art keywords: image frame, color, processed, pixel, traffic light
Other languages: Chinese (zh)
Inventors: 胡倩, 李瑮, 章勇, 汪磊, 王诗韵
Original and current assignee: Suzhou Keda Technology Co Ltd
Application filed by Suzhou Keda Technology Co Ltd

Landscapes

  • Image Analysis (AREA)
  • Traffic Control Systems (AREA)

Abstract

The embodiment of the application discloses a traffic light color drawing method, device, equipment and storage medium, comprising the following steps: acquiring a to-be-processed image frame set corresponding to a to-be-processed video stream, performing color recognition conversion on each to-be-processed image frame in the set, and generating an auxiliary image frame set corresponding to the to-be-processed image frame set, where the lighting pixels in each auxiliary image frame are green pixels or preset non-green pixels; determining each to-be-processed image frame and its corresponding auxiliary image frame as one to-be-processed image frame group according to the correspondence, inputting each group into a pre-trained traffic light color drawing model, and determining a preliminary color drawing image frame set; and performing color restoration on each preliminary color drawing image frame according to the pixel category information in each preliminary color drawing image frame and the traffic light position information in the corresponding to-be-processed image frame. Confusion between red and yellow lights is thereby reduced, and the accuracy of traffic light color drawing is improved.

Description

Traffic light color drawing method, device, equipment and storage medium
Technical Field
The present application relates to the field of computer vision recognition technology, and in particular, to a traffic light color drawing method, apparatus, device and storage medium.
Background
With the rapid development of science and technology, deep-learning-based platforms such as electronic-police (traffic enforcement) and intelligent transportation systems are increasingly applied in daily life, and traffic light color, as an important basis for judging the traffic light state, is widely used in intelligent transportation.
With the development of deep learning in computer vision, neural networks achieve better recognition results than the traditional approach of manually extracting features and performing pattern recognition, so deep learning has also been widely applied to traffic light color drawing. Existing methods usually recognize the color and position of traffic lights through a neural network model trained on multiple templates, or directly recognize red, green and yellow lights through a neural network.
However, templates cannot cover all conditions that traffic lights may present, and multi-template training requires a massive number of templates and therefore consumes a great deal of computing resources. Moreover, scenes and lighting in real environments are complex: in real-time images captured by checkpoint electronic-police and intelligent transportation systems at dusk, a yellow light and a red light appear very similar in color, so even recognition by a neural network model still has a considerable probability of error, which in turn affects the working accuracy of checkpoint electronic-police and intelligent transportation systems.
Disclosure of Invention
The application provides a traffic light color drawing method, device, equipment and storage medium, which reduce the dependence of the neural network model used for traffic light color recognition on the scene, reduce confusion between red lights and yellow lights, and enable traffic light color drawing with high recognition accuracy under different scenes, different time periods and different illumination conditions.
In a first aspect, an embodiment of the present application provides a traffic light color drawing method, including:
acquiring a to-be-processed image frame set corresponding to a to-be-processed video stream, performing color recognition conversion on each to-be-processed image frame in the to-be-processed image frame set, and generating an auxiliary image frame set corresponding to the to-be-processed image frame set; wherein, the lighting pixels in each auxiliary image frame in the auxiliary image frame set are green pixels or preset non-green pixels;
determining each image frame to be processed and the corresponding auxiliary image frame as a group of image frames to be processed according to the corresponding relation, respectively inputting the groups of image frames to be processed into a pre-trained traffic light color drawing model, and determining a primary color drawing image frame set;
and carrying out color restoration on each preliminary color drawing image frame according to the pixel category information in each preliminary color drawing image frame and the traffic light position information in the image frame to be processed corresponding to each preliminary color drawing image frame, and generating a color drawing image frame set corresponding to the image frame set to be processed.
In a second aspect, an embodiment of the present application further provides a traffic light color drawing apparatus, including:
the image frame acquisition module is used for acquiring a to-be-processed image frame set corresponding to the to-be-processed video stream, performing color recognition conversion on each to-be-processed image frame in the to-be-processed image frame set, and generating an auxiliary image frame set corresponding to the to-be-processed image frame set; wherein, the lighting pixels in each auxiliary image frame in the auxiliary image frame set are green pixels or preset non-green pixels;
the primary color drawing module is used for determining each image frame to be processed and the corresponding auxiliary image frame as one image frame group to be processed according to the corresponding relation, inputting each image frame group to be processed into a pre-trained traffic light color drawing model respectively, and determining a primary color drawing image frame set;
and the color drawing image generation module is used for carrying out color restoration on each preliminary color drawing image frame according to the pixel category information in each preliminary color drawing image frame and the traffic light position information in the image frame to be processed corresponding to each preliminary color drawing image frame, and generating a color drawing image frame set corresponding to the image frame set to be processed.
In a third aspect, an embodiment of the present application further provides a traffic light color drawing device, including:
At least one processor; and
a memory communicatively coupled to the at least one processor; wherein
the memory stores a computer program executable by the at least one processor to enable the at least one processor to perform the traffic light color drawing method provided by the embodiments of the present application.
In a fourth aspect, embodiments of the present application further provide a storage medium containing computer-executable instructions that, when executed by a computer processor, are used to perform the traffic light color drawing method provided by the embodiments of the present application.
The embodiments of the application provide a traffic light color drawing method, apparatus, device and storage medium. A to-be-processed image frame set corresponding to a to-be-processed video stream is acquired, color recognition conversion is performed on each image frame to be processed in the set, and an auxiliary image frame set corresponding to the image frame set to be processed is generated, where the lighting pixels in each auxiliary image frame are green pixels or preset non-green pixels. Each image frame to be processed and its corresponding auxiliary image frame are determined as one image frame group to be processed according to the correspondence, each group is input into a pre-trained traffic light color drawing model, and a preliminary color drawing image frame set is determined. Color restoration is then carried out on each preliminary color drawing image frame according to the pixel category information in each preliminary color drawing image frame and the traffic light position information in the corresponding image frame to be processed, and a color drawing image frame set corresponding to the image frame set to be processed is generated. With this technical solution, the image frame set to be processed is preprocessed so that the lighting part of the traffic light in each image frame to be processed is converted, according to the actually detected color, into green or a preset non-green color; that is, the easily confused yellow-light and red-light pixels are both converted into the same preset non-green (for example red) pixels, so that the lighting colors shown in the resulting auxiliary image frame set are limited to red and green, which are easy to distinguish, and the traffic light color can be accurately recognized by the subsequent model. The image frames to be processed and their corresponding auxiliary image frames are then input in groups into the pre-trained traffic light color drawing model; within each group the two images are compared with each other to determine the pixel category information of every pixel, and the image with the pixel category information determined is taken as a preliminary color drawing image frame. The lighting color of the traffic light can be determined from the pixel category information in each preliminary color drawing image frame, and the lighting color in each preliminary color drawing image frame can be restored according to the lighting logic of the lights within the traffic light, so that the color drawing image frame set finally corresponding to the image frame set to be processed is obtained.
Because the pre-trained traffic light color drawing model only needs to distinguish the clearly different preset non-green color and green color, it is little affected by the scene in which the traffic light is located; different templates do not have to be prepared for different scenes during training, which reduces the amount of data required for training, avoids traffic light color drawing errors caused by insufficient scene templates, and reduces the amount of computation during use. After accurate classification, the lighting colors of the traffic lights in the preliminary color drawing image frames are restored according to the lighting logic of the lights within the traffic light, so that yellow lights can be accurately restored and the accuracy of traffic light color drawing is improved.
It should be understood that the description in this section is not intended to identify key or critical features of the embodiments of the application or to delineate the scope of the application. Other features of the present application will become apparent from the description that follows.
Drawings
In order to more clearly illustrate the technical solutions of the embodiments of the present application, the drawings required for the description of the embodiments will be briefly described below, and it is apparent that the drawings in the following description are only some embodiments of the present application, and other drawings may be obtained according to these drawings without inventive effort for a person skilled in the art.
FIG. 1 is a flowchart of a traffic light color drawing method according to an embodiment of the present application;
FIG. 2 is a flowchart of a traffic light color drawing method according to an embodiment of the present application;
FIG. 3 is a schematic structural diagram of a traffic light color drawing apparatus according to an embodiment of the present application;
FIG. 4 is a schematic structural diagram of a traffic light color drawing device according to an embodiment of the present application.
Detailed Description
In order that those skilled in the art will better understand the present application, a technical solution in the embodiments of the present application will be clearly and completely described below with reference to the accompanying drawings in which it is apparent that the described embodiments are only some embodiments of the present application, not all embodiments. All other embodiments, which can be made by those skilled in the art based on the embodiments of the present application without making any inventive effort, shall fall within the scope of the present application.
It should be noted that the terms "first," "second," and the like in the description and the claims of the present application and the above figures are used for distinguishing between similar objects and not necessarily for describing a particular sequential or chronological order. It is to be understood that the data so used may be interchanged where appropriate such that the embodiments of the application described herein may be implemented in sequences other than those illustrated or otherwise described herein. Furthermore, the terms "comprises," "comprising," and "having," and any variations thereof, are intended to cover a non-exclusive inclusion, such that a process, method, system, article, or apparatus that comprises a list of steps or elements is not necessarily limited to those steps or elements expressly listed but may include other steps or elements not expressly listed or inherent to such process, method, article, or apparatus.
Fig. 1 is a flowchart of a traffic light color drawing method according to an embodiment of the present application. The method may be performed by a traffic light color drawing apparatus, which may be implemented in software and/or hardware and may be configured in a traffic light color drawing device used when platforms such as checkpoint electronic-police and intelligent transportation systems collect video containing traffic lights. Optionally, the traffic light color drawing device may be an electronic device such as a notebook computer, a desktop computer or a smart tablet, and the embodiments of the present application are not limited in this respect.
As shown in fig. 1, the traffic light color drawing method provided by the embodiment of the application specifically includes the following steps:
s101, acquiring a to-be-processed image frame set corresponding to a to-be-processed video stream, performing color recognition conversion on each to-be-processed image frame in the to-be-processed image frame set, and generating an auxiliary image frame set corresponding to the to-be-processed image frame set.
The lighting pixels in each auxiliary image frame in the auxiliary image frame set are green pixels or preset non-green pixels.
In this embodiment, the video stream to be processed may be understood as video data collected by the video capture devices of intelligent platforms such as checkpoint electronic-police and intelligent transportation systems. In general, the video stream to be processed is video captured by devices arranged at an intersection to determine whether vehicles at the intersection commit traffic violations; since violations include running a red light, the traffic lights at the intersection are usually captured in the video stream to be processed. The image frame set to be processed may be understood as all video frames in the video stream to be processed, or as a set of video frames obtained by frame extraction from the video stream to be processed; each video frame in the set may be regarded as an image frame to be processed, which contains traffic lights and for which the lighting condition of the traffic lights needs to be divided and the lit lights need to be colored. It can be understood that the video stream to be processed may be a video stream from a historical time period or a video stream obtained in real time; for a real-time processing scene, the image frame to be processed at the current moment is the image frame in the set that currently requires color recognition conversion, that is, the image frame to be processed obtained in real time at the current moment is converted in real time to obtain the corresponding auxiliary image frame.
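Purely as an illustration of the frame extraction step mentioned above (this sketch is not part of the original disclosure), the following Python/OpenCV snippet reads a video stream and performs simple frame extraction; the sampling interval frame_step is an assumed parameter.

```python
import cv2

def extract_frames(video_path: str, frame_step: int = 1):
    """Read a video stream and return a list of image frames to be processed.

    frame_step is an assumed sampling interval: 1 keeps every frame,
    larger values perform simple frame extraction (decimation).
    """
    capture = cv2.VideoCapture(video_path)
    frames = []
    index = 0
    while True:
        ok, frame = capture.read()
        if not ok:
            break
        if index % frame_step == 0:
            frames.append(frame)  # BGR image, one image frame to be processed
        index += 1
    capture.release()
    return frames
```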
In this embodiment, the auxiliary image frame is specifically understood as a to-be-processed image frame after the completion of the color conversion. The lighting pixels are specifically understood as pixels corresponding to the traffic light lighting portions in the image frame to be processed and the auxiliary image frame.
Specifically, when the video stream to be processed is obtained, each video frame in the video stream is extracted according to actual requirements to obtain the image frame set to be processed corresponding to the video stream. For each image frame to be processed in the set, when a traffic light is lit, the lit part can be considered to be in a state that is relatively easy to distinguish within the image frame, so the pixels that belong to the traffic light and are lit in the image frame to be processed can be determined as the lighting pixels, and the colors of the lighting pixels can be recognized. Because a yellow light and a red light, when lit, are easily affected by the environment and lighting and are difficult to distinguish, while a green light when lit is easier to recognize than a red or yellow light, the embodiment of the application only distinguishes whether a lighting pixel is displayed as green when performing color recognition on the lighting pixels. When the display is not green, the lighting pixels are uniformly converted into a preset non-green color that differs clearly from both green and the background colors; otherwise, the colors of the lighting pixels are kept unchanged or converted into a uniform green pixel value. The image obtained after the color conversion of the lighting pixels is determined as the auxiliary image frame corresponding to the image frame to be processed, that is, an auxiliary image frame is obtained in which the lighting pixels of the traffic light area can only appear as green pixels or preset non-green pixels. After the auxiliary image frames corresponding one-to-one to the image frames to be processed are obtained, the auxiliary image frame set formed by these auxiliary image frames can be considered to have the same correspondence with the image frame set to be processed.
Optionally, the HSV color of each pixel in the image frame to be processed may be recognized to determine the color of each lighting pixel. When the lighting pixels are judged to be green, each lighting pixel may be assigned the value [0, 255, 0]; when the lighting pixels are judged to be non-green, each lighting pixel may be assigned the value [255, 0, 0] (this assignment is only an example, and other non-green pixel values may be used), wherein the judgment of green or non-green may be made according to preset HSV color reference ranges.
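By way of a hedged illustration of this color recognition conversion, the following Python/OpenCV sketch classifies lighting pixels as green or non-green and reassigns them; the HSV bounds stand in for the reference ranges mentioned above and, together with the lit_mask input (assumed to come from the target recognition described later), are assumptions rather than values taken from the patent.

```python
import cv2
import numpy as np

# Placeholder HSV range for "green"; the patent refers to preset reference
# ranges, so these bounds are assumptions.
GREEN_LOWER = np.array([35, 43, 46])
GREEN_UPPER = np.array([77, 255, 255])

def build_auxiliary_frame(frame_bgr: np.ndarray, lit_mask: np.ndarray) -> np.ndarray:
    """Reassign lighting pixels to pure green or a preset non-green (red) value.

    frame_bgr: image frame to be processed (BGR).
    lit_mask:  boolean mask of lighting pixels (from target recognition).
    """
    hsv = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2HSV)
    green = cv2.inRange(hsv, GREEN_LOWER, GREEN_UPPER).astype(bool)

    aux = frame_bgr.copy()
    aux[lit_mask & green] = (0, 255, 0)    # green lighting pixels -> [0, 255, 0]
    aux[lit_mask & ~green] = (0, 0, 255)   # non-green -> preset red ([255, 0, 0] in RGB order)
    return aux
```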
in the embodiment of the application, through carrying out color recognition conversion on each image frame to be processed in the image frame set to be processed, the traffic light part is divided into green which is easy to distinguish and a preset non-green, so that the degree of distinguishing the display colors of the traffic light under different light rays of different scenes is improved, the subsequent preliminary color drawing of the traffic light is conveniently carried out on the image frames to be processed, and the accuracy of the preliminary color drawing is improved.
S102, determining each image frame to be processed and the corresponding auxiliary image frame as a group of image frames to be processed according to the corresponding relation, respectively inputting the groups of image frames to be processed into a pre-trained traffic light color drawing model, and determining a primary color drawing image frame set.
In this embodiment, the traffic light color drawing model may be specifically understood as a neural network model in which feature extraction, classification, and other processes are performed on an image combination input thereto, and confidence corresponding to each pixel in the image combination on each category into which the model can be divided is output. Alternatively, the categories of traffic light color model classification in embodiments of the present application may include background, red light, and green light. The preliminary color image frame may be specifically understood as an image frame including pixel type information corresponding to each pixel in the image frame group to be processed, where the size of the preliminary color image frame is the same as that of the image frame to be processed and the auxiliary image frame, and the pixel type information corresponding to each pixel in the preliminary color image frame is the pixel type information of the pixels at the same position in the image frame to be processed and the auxiliary image frame.
Specifically, according to the corresponding relation between the image frames to be processed and the auxiliary image frames, the two image frames to be processed and the auxiliary image frames which are correspondingly related are taken as one image frame group to be processed, namely the number of the image frame group to be processed is the same as the number of the image frames to be processed in the image frame set to be processed. And respectively inputting each image frame group to be processed into a pre-trained traffic light color drawing model, carrying out feature extraction and category division corresponding to each other on two image frames in the image frame group to be processed by using the traffic light color drawing model, outputting the confidence level of each pixel in the image frame group to be processed on each category which can be divided by the traffic light color drawing model, further determining pixel category information corresponding to each pixel according to the confidence level, determining the image frames formed by arranging each pixel containing the pixel category information according to the corresponding positions as preliminary color drawing image frames, and constructing a preliminary color drawing image frame set according to the corresponding relation between each image frame group to be processed and the image frame set to be processed to obtain a preliminary color drawing image frame set corresponding to the image frame set to be processed.
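As a minimal sketch, assuming the model accepts the two frames of a group concatenated along the channel dimension (the patent does not fix the input format), the grouping and inference loop could look as follows; make_frame_group, draw_preliminary_frames and model are illustrative names.

```python
import numpy as np

def make_frame_group(frame_to_process: np.ndarray, auxiliary_frame: np.ndarray) -> np.ndarray:
    """Stack an image frame to be processed with its auxiliary frame as one model input.

    Channel-wise concatenation (H x W x 6) is an assumed input format; the
    patent only requires that the two frames be processed as one group.
    """
    assert frame_to_process.shape == auxiliary_frame.shape
    return np.concatenate([frame_to_process, auxiliary_frame], axis=-1)

def draw_preliminary_frames(frame_pairs, model):
    """Run each image frame group through the pre-trained color drawing model."""
    preliminary_frames = []
    for frame, aux in frame_pairs:
        group = make_frame_group(frame, aux)
        preliminary_frames.append(model(group))  # per-pixel class confidences
    return preliminary_frames
```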
And S103, carrying out color reduction on each preliminary color drawing image frame according to the pixel category information in each preliminary color drawing image frame and the traffic light position information in the image frame to be processed corresponding to each preliminary color drawing image frame, and generating a color drawing image frame set corresponding to the image frame set to be processed.
In this embodiment, the pixel category information is the category, among the categories into which the traffic light color drawing model can classify, of the subject to which a pixel belongs in the image frame to be processed. In the embodiment of the application, the pixel category information may be one of background, red light and green light. The traffic light position information may be understood as the image area corresponding to the traffic light in the image frame to be processed, or as the set of pixels whose subject in the image frame to be processed is the traffic light. A color drawing image frame is understood to be an image frame in which the lit color of the traffic light has been restored.
Specifically, for each preliminary color drawing image frame in the preliminary color drawing image frame set, target analysis is performed on the image frame to be processed corresponding to that preliminary color drawing image frame, and the pixels corresponding to the traffic light position in the preliminary color drawing image frame are determined. Because each pixel in the preliminary color drawing image frame contains pixel category information, the lighting color of the traffic light before color restoration can be determined from the pixel category information of the pixels corresponding to the traffic light position. The lighting of the traffic light follows the order green -> yellow -> red -> green, both yellow and red are identified as red-light pixel category information by the preprocessing, and the image frames to be processed are temporally continuous; therefore, when the pixel category information is determined to be red light, the lighting color actually displayed by the traffic light in the current preliminary color drawing image frame can be judged from the restoration results of the preliminary color drawing image frames before it, combined with the logical order of traffic light lighting, and color restoration of the preliminary color drawing image frame is completed according to the judgment result, so that the color drawing image frame corresponding to the preliminary color drawing image frame is obtained. After all preliminary color drawing image frames are processed, the obtained color drawing image frames are assembled into a set in the same order as the image frames to be processed in the image frame set to be processed, and the color drawing image frame set corresponding to the image frame set to be processed can be generated.
According to the technical solution of this embodiment, the image frame set to be processed corresponding to the video stream to be processed is preprocessed, so that the lighting part of the traffic light in each image frame to be processed is converted, according to the actually detected color, into a preset non-green color or green; the lighting colors shown in the resulting auxiliary image frame set are therefore limited to two easily distinguished pixel colors. The image frames to be processed and their corresponding auxiliary image frames are then input in groups into the pre-trained traffic light color drawing model; within each group the images are compared with each other to determine the pixel category information of every pixel, and the image with the pixel category information determined is taken as a preliminary color drawing image frame. The lighting color of the traffic light and the position information of the lighting pixels can be determined from the pixel category information in each preliminary color drawing image frame, the lighting color in each preliminary color drawing image frame can be restored according to the lighting logic of the lights within the traffic light, and the color drawing image frame set finally corresponding to the image frame set to be processed is obtained. Because the pre-trained traffic light color drawing model only needs to distinguish the clearly different preset non-green color and green color, it is little affected by the scene in which the traffic light is located; different templates do not have to be prepared for different scenes during training, which reduces the amount of data required for training, avoids traffic light color drawing errors caused by insufficient scene templates, and reduces the amount of computation during use. After accurate classification, the lighting colors of the traffic lights in the preliminary color drawing image frames are restored according to the lighting logic of the lights within the traffic light, so that yellow lights can be accurately restored and the accuracy of traffic light color drawing is improved.
Fig. 2 is a flowchart of a traffic light color drawing method provided by an embodiment of the present application. The technical solution of this embodiment is further optimized on the basis of the above alternative solutions. Lighting pixels are obtained by performing target recognition on the image frame to be processed; color recognition is then performed on the lighting pixels, and the lighting pixels are reassigned according to the color recognition result, so that the pixels corresponding to the lit part of the traffic light are converted into easily distinguished preset non-green pixels or green pixels, and the corresponding auxiliary image frame is obtained. The image frame to be processed and its corresponding auxiliary image frame then form an image frame group to be processed, which is input into a traffic light color drawing model containing at least a feature extraction layer and a classification layer for multi-scale feature extraction and classification; an output image with the same size as the images in the image frame group to be processed is produced, each pixel of which carries at least one candidate pixel category with a class confidence, and the preliminary color drawing image frame is obtained after the pixel category information of each pixel in the output image is determined according to the confidences. Since the frames in the preliminary color drawing image frame set are arranged in the same order as in the image frame set to be processed, their time order is the same, and the lighting of the traffic light also follows the corresponding temporal logic. Therefore, each preliminary color drawing image frame can be processed in turn as the current image frame: whether the traffic light is lit, and the lighting color when it is lit, are determined from the pixel category information of the traffic light pixel set in the current image frame; and when the traffic light is determined to be lit, the color actually displayed by the traffic light in the current image frame is determined from the lighting color, the yellow light accumulated frame number and green light accumulated frame number before the current image frame, and the traffic light position information, thereby realizing color restoration of the current image frame. Reassigning the lighting color of the traffic light in the image frame to be processed to the preset non-green color or green ensures clear pixel classification for every pixel in the preliminary color drawing image frame; color restoration of each preliminary color drawing image frame according to the color change logic of the traffic light, the yellow light accumulated frame number, the green light accumulated frame number and the traffic light position information then ensures accurate restoration of the yellow light, and avoids the confusion of yellow and red lights caused by insufficient model templates or environmental factors when the traffic light color is recognized directly.
As shown in fig. 2, the traffic light color drawing method provided by the embodiment of the application specifically includes the following steps:
s201, acquiring a set of image frames to be processed corresponding to the video stream to be processed, aiming at each image frame to be processed in the set of image frames to be processed, carrying out target identification on the image frames to be processed, and determining lighting pixels in the image frames to be processed.
Specifically, when the video stream to be processed is obtained, each video frame in the video stream is extracted according to actual requirements to obtain the image frame set to be processed corresponding to the video stream. Each image frame to be processed in the set can be regarded as an image containing a traffic light. Since a lit traffic light differs obviously from its surroundings and the lit area of a traffic light is generally an area with a fixed shape, target recognition can be carried out on the image frame to be processed according to the hue, saturation, brightness and shape of the lit area, the lit area in the image frame to be processed is determined, and the pixels corresponding to the lit area are determined as the lighting pixels.
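A possible sketch of this target recognition step is shown below; the saturation/brightness thresholds and the minimum blob area are illustrative assumptions, not values from the disclosure.

```python
import cv2
import numpy as np

def find_lighting_pixels(frame_bgr: np.ndarray,
                         min_area: int = 20,
                         sat_thresh: int = 100,
                         val_thresh: int = 180) -> np.ndarray:
    """Return a boolean mask of lighting pixels.

    A lit traffic light is assumed to appear as a small, bright, saturated
    blob; the thresholds and minimum contour area are assumptions.
    """
    hsv = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2HSV)
    bright = cv2.inRange(hsv, (0, sat_thresh, val_thresh), (179, 255, 255))

    mask = np.zeros(bright.shape, dtype=np.uint8)
    contours, _ = cv2.findContours(bright, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    for contour in contours:
        if cv2.contourArea(contour) >= min_area:   # discard tiny reflections
            cv2.drawContours(mask, [contour], -1, 255, thickness=-1)
    return mask.astype(bool)
```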
S202, performing color recognition on the lighting pixels, and reassigning the lighting pixels to green pixels or preset non-green pixels according to a color recognition result.
Specifically, by recognizing the HSV color of each lighting pixel, a lighting pixel can be considered green when its hue, saturation and brightness are within certain ranges; however, the color actually displayed may be green of different depths, so in order to facilitate the classification by the subsequent traffic light color drawing model, the color of each lighting pixel can be reassigned so that the lighting pixels are displayed with the same, easily distinguished color. When the HSV color of each lighting pixel falls within the green range, the lighting pixels are uniformly reassigned as green pixels; when the HSV color of each lighting pixel does not fall within the green range, the light corresponding to the lighting pixels can be considered a yellow light or a red light, and each lighting pixel is uniformly reassigned as a preset non-green pixel that differs clearly from green and from the background colors, that is, the yellow light and the red light are treated together as the preset non-green color for subsequent recognition. In an alternative embodiment, each lighting pixel that does not fall within the green range may be uniformly reassigned as a red pixel, that is, the yellow light and the red light are treated as red lights for subsequent identification.
S203, determining the image frames to be processed after pixel reassignment as auxiliary image frames corresponding to the image frames to be processed, and determining a set formed by the auxiliary image frames as an auxiliary image frame set corresponding to the image frame set to be processed.
Specifically, the image frames to be processed after pixel reassignment are determined to be auxiliary image frames corresponding to the image frames to be processed, and the auxiliary image frames form a set according to the same arrangement sequence as the image frames to be processed, so that an auxiliary image frame set corresponding to the image frame set to be processed can be obtained.
S204, determining each image frame to be processed and the corresponding auxiliary image frame as a group of image frames to be processed according to the corresponding relation.
S205, for each image frame group to be processed, inputting the image frame group to be processed into a feature extraction layer in a pre-trained traffic light color drawing model for multi-scale feature extraction, and determining an intermediate feature map that corresponds to the image frame group to be processed and has the same size.
In this embodiment, the feature extraction layer can be understood as a collection of neural network layers for multi-scale feature extraction and fusion of the images input into it. Alternatively, the feature extraction layer may include a convolution layer, a batch normalization layer (Batch Normalization, BN) and a pooling layer.
Specifically, each image frame group to be processed is input into the pre-trained traffic light color drawing model for processing. Taking one image frame group to be processed as an example, when the group is input into the traffic light color drawing model, the feature extraction layer first performs image feature extraction on it: feature maps of different scales are obtained by downsampling the image frame group to be processed, upsampling and result merging are then carried out in sequence, and the extracted multi-scale feature maps are fused to obtain an intermediate feature map that corresponds to the image frame group to be processed and has the same size.
For example, the feature map after four downsampling may be upsampled once, the obtained result may be combined with the result of downsampling three times, then upsampled once, the upsampled result may be combined with the result of downsampling twice, then upsampled once, the upsampled result may be combined with the result of downsampling once, and then upsampled once may be performed to obtain the feature map consistent with the size of the input image.
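The following PyTorch-style sketch mirrors the encoder-decoder described above (four downsamplings, then repeated upsampling with result merging); the channel widths, the use of concatenation for merging and nearest-neighbour upsampling are assumptions.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

def conv_bn_relu(in_ch, out_ch):
    return nn.Sequential(
        nn.Conv2d(in_ch, out_ch, 3, padding=1),
        nn.BatchNorm2d(out_ch),
        nn.ReLU(inplace=True),
    )

class FeatureExtractor(nn.Module):
    """Four downsamplings, then upsampling with result merging (assumed widths)."""
    def __init__(self, in_ch=6):  # 6 channels assumed for a concatenated frame group
        super().__init__()
        self.c1, self.c2 = conv_bn_relu(in_ch, 32), conv_bn_relu(32, 64)
        self.c3, self.c4 = conv_bn_relu(64, 128), conv_bn_relu(128, 256)
        self.m3, self.m2 = conv_bn_relu(256 + 128, 128), conv_bn_relu(128 + 64, 64)
        self.m1 = conv_bn_relu(64 + 32, 32)
        self.out = conv_bn_relu(32, 32)
        self.pool = nn.MaxPool2d(2)

    def forward(self, x):
        d1 = self.pool(self.c1(x))                    # downsampled once
        d2 = self.pool(self.c2(d1))                   # twice
        d3 = self.pool(self.c3(d2))                   # three times
        d4 = self.pool(self.c4(d3))                   # four times
        u = F.interpolate(d4, size=d3.shape[-2:])     # upsample once
        u = self.m3(torch.cat([u, d3], dim=1))        # merge with the 3x result
        u = F.interpolate(u, size=d2.shape[-2:])
        u = self.m2(torch.cat([u, d2], dim=1))        # merge with the 2x result
        u = F.interpolate(u, size=d1.shape[-2:])
        u = self.m1(torch.cat([u, d1], dim=1))        # merge with the 1x result
        u = F.interpolate(u, size=x.shape[-2:])       # back to the input size
        return self.out(u)  # intermediate feature map, same size as the input frames
```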
Optionally, the residual structure of the feature extraction layer is a convolution residual structure and an identity mapping residual structure.
Specifically, in the embodiment of the application, the residual structure of the feature extraction layer is set to include a 1*1 convolution residual structure and an identity mapping residual structure. Because the residual structure has multiple branches, multiple gradient flow paths are added to the feature extraction network, and all network layers can be converted into 3*3 convolutions through node fusion, which facilitates the deployment and acceleration of the model. A 1*1 convolution is equivalent to a special 3*3 convolution whose kernel is padded with zeros, and the identity mapping is a special 1*1 convolution whose kernel is the identity matrix, so the coexisting convolution residual structure and identity mapping residual structure can easily be fused in the model inference stage, realizing the integration of the model.
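A small NumPy sketch of this re-parameterization idea is given below; it shows only the kernel algebra (a 1*1 kernel padded into a 3*3 kernel, and an identity mapping written as a 3*3 kernel), while batch-normalization folding and other deployment details are omitted and the function names are illustrative.

```python
import numpy as np

def pad_1x1_to_3x3(kernel_1x1: np.ndarray) -> np.ndarray:
    """Embed a 1x1 kernel (out_ch, in_ch, 1, 1) in the centre of a 3x3 kernel."""
    out_ch, in_ch = kernel_1x1.shape[:2]
    k = np.zeros((out_ch, in_ch, 3, 3), dtype=kernel_1x1.dtype)
    k[:, :, 1, 1] = kernel_1x1[:, :, 0, 0]
    return k

def identity_as_3x3(channels: int) -> np.ndarray:
    """Identity mapping written as a 3x3 kernel whose 1x1 centre is the identity matrix."""
    k = np.zeros((channels, channels, 3, 3), dtype=np.float32)
    for c in range(channels):
        k[c, c, 1, 1] = 1.0
    return k

def fuse_branches(k3x3: np.ndarray, k1x1: np.ndarray, channels: int) -> np.ndarray:
    """Merge the 3x3, 1x1 and identity branches into one equivalent 3x3 kernel."""
    return k3x3 + pad_1x1_to_3x3(k1x1) + identity_as_3x3(channels)
```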
S206, inputting the intermediate feature map to a classification layer in the traffic light color model for classification, and determining at least one candidate pixel class corresponding to each pixel in the output image and the class confidence of the candidate pixel class.
In this embodiment, the classification layer may be understood as a neural network layer that classifies the feature map input into it and outputs the probability of each category. A candidate pixel category is a pixel category whose probability for a pixel in the output image is non-zero after the classification layer classifies the intermediate feature map; for example, if for one pixel the classification layer outputs a background probability of 80%, a red light probability of 20% and a green light probability of 0%, then background and red light are the candidate pixel categories of that pixel. Class confidence can be understood as the likelihood value that a candidate pixel category is the true category of the pixel.
Specifically, the intermediate feature map is used as the input of the classification layer in the traffic light color drawing model. The classification layer classifies each pixel in the intermediate feature map, determines the probability of each of the pixel categories obtained in pre-training, takes the pixel categories with non-zero probability as the candidate pixel categories of that pixel, determines the class confidence of each candidate pixel category, and takes the image containing the candidate pixel categories and class confidences of every pixel as the output image of the traffic light color drawing model.
S207, determining the candidate pixel category with the highest category confidence as pixel category information of the pixels, and determining an output image with the determined pixel category information as a preliminary color image frame corresponding to the image frame group to be processed.
Specifically, for each pixel in the output image, the class confidences of its candidate pixel categories are compared, the candidate pixel category with the highest class confidence is determined as the pixel category information of that pixel, and the output image in which every pixel contains only one piece of pixel category information after this confirmation is completed is determined as the preliminary color drawing image frame corresponding to the image frame group to be processed.
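In code form, assuming the classification layer outputs a per-pixel confidence map with an assumed class order of background, red light and green light, this selection reduces to an argmax over the class dimension.

```python
import numpy as np

CLASS_NAMES = ("background", "red_light", "green_light")  # assumed class order

def to_preliminary_frame(class_confidences: np.ndarray) -> np.ndarray:
    """Pick, for every pixel, the candidate class with the highest confidence.

    class_confidences: array of shape (H, W, num_classes) output by the
    classification layer; the returned map holds one class index per pixel.
    """
    return np.argmax(class_confidences, axis=-1)
```

Applying this to every output image of the model yields the preliminary color drawing image frames described in S207.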
S208, determining a set of preliminary color image frames corresponding to each image frame group to be processed as a preliminary color image frame set.
Specifically, the preliminary color drawing image frames are assembled into a set in the same order as the image frame groups to be processed, so that the preliminary color drawing image frame set corresponding to the image frame set to be processed is obtained; that is, the arrangement order of the preliminary color drawing image frames in the set is the same as the acquisition order of the image frames to be processed in the video stream to be processed, which facilitates the subsequent color restoration of the traffic light lighting condition in the preliminary color drawing image frames according to the lighting order of the traffic light.
S209, each preliminary color image frame is used as a current image frame, and a traffic light pixel set in the current image frame is determined according to traffic light position information in the image frame to be processed corresponding to the current image frame.
Specifically, each preliminary color image frame is sequentially processed as a current image frame according to the arrangement sequence of each preliminary color image frame in the preliminary color image frame set. Since the size of the current image frame is consistent with that of the image frame to be processed, the positions of pixels in the current image frame and the positions of pixels in the image frame to be processed have a one-to-one correspondence, so that the traffic light position information corresponding to the traffic light in the image frame to be processed, namely the pixel set corresponding to the traffic light position in the image frame to be processed, can be determined by carrying out target identification on the image frame to be processed corresponding to the current image frame, and the traffic light pixel set in the current image frame can be determined reversely. In an alternative manner, the current image frame may also be a preliminary color image frame corresponding to the image frame to be processed acquired in real time.
S210, determining a lighting pixel set in the current image frame and the lighting color of the lighting pixel set according to the pixel category information corresponding to each traffic light pixel in the traffic light pixel set. If the lighting pixel set exists, S211 is executed; otherwise, S212 is executed.
Specifically, since the lighting area of the traffic light is located within the pixel range corresponding to the traffic light, whether the traffic light is lighted, the lighting position and the color of the lighting position can be determined by distinguishing the pixel type information of each traffic light pixel in the traffic light pixel set. Because the pixel type information in the embodiment of the application can include a background, a red light and a green light, when the traffic light is not lighted, the pixel type information of each traffic light pixel should be the background type, at this time, the lighting pixel can be considered to be absent in the traffic light pixel, that is, the lighting pixel set in the current image frame can be considered to be absent, at this time, S212 is executed; when the traffic light is on, a part of the corresponding pixel type information in each traffic light pixel is necessarily red light type or green light type, at this time, each traffic light pixel with the pixel type information being red light type or green light type is determined as an on pixel, a set of each on pixel is determined as an on pixel set, a color in the pixel type information corresponding to the on pixel is determined as an on color, at this time, S211 may be executed to restore the color actually exhibited by the traffic light in the current image frame.
Optionally, the number of traffic light pixels in the traffic light pixel set may be counted, the number of lighting pixels corresponding to different pixel type information in the lighting pixel set may be determined, the ratio of the number of lighting pixels in the number of traffic light pixels may be determined, and if the ratio exceeds a preset ratio threshold, the color of the corresponding pixel type information may be determined as the lighting color of the lighting pixel set.
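A hedged sketch of this counting step is given below; the class indices and the ratio threshold are illustrative assumptions.

```python
import numpy as np

BACKGROUND, RED_LIGHT, GREEN_LIGHT = 0, 1, 2   # assumed class indices

def lighting_color(pixel_classes: np.ndarray,
                   traffic_light_mask: np.ndarray,
                   ratio_threshold: float = 0.1):
    """Decide whether the traffic light is lit and, if so, its lighting color.

    pixel_classes:      per-pixel class map of the preliminary color drawing frame.
    traffic_light_mask: boolean mask from the traffic light position information.
    ratio_threshold:    assumed minimum share of lit pixels within the light region.
    """
    region = pixel_classes[traffic_light_mask]
    total = region.size
    for class_id, color in ((RED_LIGHT, "preset_non_green"), (GREEN_LIGHT, "green")):
        lit = np.count_nonzero(region == class_id)
        if total and lit / total > ratio_threshold:
            return color
    return None   # no lighting pixel set: the traffic light is treated as unlit
```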
S211, performing color restoration on the current image frame according to the lighting color, the yellow light accumulated frame number, the green light accumulated frame number, the traffic light position information and the image frame to be processed corresponding to the current image frame.
In this embodiment, the number of yellow light accumulated frames may be specifically understood as the number of frames in which the traffic light that is lighted in the plurality of image frames to be processed and is adjacent to the current image frame is continuously identified as the yellow light, and it may be understood that when the traffic light is identified as the red light or the green light, the number of yellow light accumulated frames may be set to zero, that is, the number of yellow light accumulated frames corresponding to the current image frame may be used to indicate whether the moment adjacent to the moment corresponding to the current image frame is the yellow light, and the duration information of the yellow light is presented. The number of green light accumulated frames can be specifically understood as the number of frames in which the traffic light that is lighted in the plurality of image frames to be processed adjacent to the current image frame is continuously recognized as a green light.
Specifically, whether the lighting pixel set in the current image frame may be displaying a yellow light can be determined from the lighting color. When it is determined that the current image frame may be displaying a yellow light, whether the lighting pixel set actually displays a red light or a yellow light is further distinguished according to the yellow light accumulated frame number, the green light accumulated frame number, the traffic light position information and the lighting logic of the traffic light; the lighting pixels are then color-restored according to the traffic light color that the lighting pixel set is determined to display, and the pixels other than the lighting pixel set in the current image frame are restored according to the image frame to be processed corresponding to the current image frame.
Optionally, the embodiment of the present application provides a specific implementation manner of performing color reduction on a current image frame according to a lighting color, a yellow light accumulated frame number, a green light accumulated frame number, traffic light position information and a to-be-processed image frame corresponding to the current image frame, which may be divided into the following four cases:
1) If the lighting color is green, the color of the lighting pixel set is kept unchanged, pixels except the lighting pixel set in the current image frame are restored to the color of the image frame to be processed corresponding to the current image frame, the color drawing image frame corresponding to the current image frame is obtained, the accumulated yellow light frame number is set to be zero, and the accumulated green light frame number is increased by one.
Specifically, if the lighting color of the lighting pixel set is recognized as green, the lighting traffic light in the current image frame can be directly considered as green light, the lighting range is the range where the lighting pixel set is located, and each pixel in the lighting pixel set is reassigned to be green pixel during processing, so that the color of the lighting pixel set can be kept unchanged. Meanwhile, each pixel except the lighting pixel set in the current image frame can be understood as a background pixel, namely, a pixel which does not need to be subjected to color modification, so that the part of background pixels can be filled with the color of the corresponding pixel in the image frame to be processed corresponding to the current image frame, the expression form of the part of background pixels is the same as that of the image frame to be processed, and further the color drawing image frame corresponding to the current image frame can be obtained. At this time, the traffic lights in the current image frame are lighted to be green lights, the accumulated frame number of the yellow lights is set to be zero, and the accumulated frame number of the green lights is increased by one, so that the preliminary color-drawing image frame after the current image frame is convenient for traffic light color restoration.
2) If the lighting color is a preset non-green color and the green light accumulated frame number is greater than zero, the color of the lighting pixel set is reduced to yellow, the pixels except the lighting pixel set in the current image frame are reduced to the color of the image frame to be processed corresponding to the current image frame, the color description image frame corresponding to the current image frame is obtained, the yellow light accumulated frame number is increased by one, and the green light accumulated frame number is set to zero.
Specifically, if the lighting color of the lighting pixel set is recognized as the preset non-green color and the green light accumulated frame number is greater than zero, the traffic light lit in the previous image frame to be processed adjacent to the current image frame can be considered a green light, and the current image frame is the first frame after the green light; based on the color change logic of the traffic light, the light lit after green is yellow, so the traffic light lit in the current image frame can be considered a yellow light, the lit range is the range where the lighting pixel set is located, and at this moment each pixel in the lighting pixel set can be reassigned as a yellow pixel. Meanwhile, each pixel other than the lighting pixel set in the current image frame can be understood as a background pixel, that is, a pixel whose color does not need to be modified, so these background pixels can be filled with the colors of the corresponding pixels in the image frame to be processed corresponding to the current image frame, their appearance is the same as that of the image frame to be processed, and the color drawing image frame corresponding to the current image frame is obtained. Because the traffic light lit in the current image frame is a yellow light, the current image frame can be considered the first frame in which the yellow light is lit; the green light accumulated frame number is set to zero and the yellow light accumulated frame number is increased by one, so as to facilitate traffic light color restoration for the preliminary color drawing image frames after the current image frame.
3) And if the lighting color is a preset non-green color and the accumulated number of frames of the yellow light is equal to one, restoring the color of the lighting pixel set to yellow, restoring the pixels except the lighting pixel set in the current image frame to the color of the image frame to be processed corresponding to the current image frame, obtaining a color drawing image frame corresponding to the current image frame, adding one to the accumulated number of frames of the yellow light, and determining initial yellow light position information according to the previous image frame of the current image frame and the previous traffic light position information.
In this embodiment, the last traffic light position information can be specifically understood as the position range occupied by the traffic light in the previous color image frame corresponding to the current image frame. The initial yellow light position information can be specifically understood as the relative position information of the yellow light in the first frame of yellow light image within the whole range of the traffic light in one traffic light lighting cycle.
Specifically, if the lighting color of the lighting pixel set is recognized as the preset non-green color and the yellow light accumulated frame number is equal to one, the traffic light lit in the current image frame can be considered a yellow light, the lit range is the range where the lighting pixel set is located, and the color drawing image frame at the previous moment corresponding to the current image frame is the first frame in which the yellow light is lit in the current traffic light lighting cycle; at this moment each pixel in the lighting pixel set can be reassigned as a yellow pixel. Meanwhile, each pixel other than the lighting pixel set in the current image frame can be understood as a background pixel, that is, a pixel whose color does not need to be modified, so these background pixels can be filled with the colors of the corresponding pixels in the image frame to be processed corresponding to the current image frame, their appearance is the same as that of the image frame to be processed, and the color drawing image frame corresponding to the current image frame is obtained. Because the traffic light lit in the current image frame is a yellow light and the current image frame is the second frame in which the yellow light is lit, the yellow light accumulated frame number is increased by one, and the initial yellow light position information is determined according to the previous image frame of the current image frame and the previous traffic light position information.
4) If the lighting color is a preset non-green color and the accumulated frame number of the yellow lamps is greater than one, determining the current lamp position information according to the lighting pixel set and the traffic lamp position information, and performing color restoration on the current image frame according to the current lamp position information, the initial yellow lamp position information and the image frame to be processed corresponding to the current image frame.
Specifically, if the lighting color of the lighting pixel set is identified as the preset non-green color and the accumulated yellow light frame number is greater than one, the traffic light lighted in the current image frame may be either a yellow light or a red light. At this time, the relative positional relationship between the lighted light and the traffic light as a whole in the current image frame can be determined according to the lighting pixel set and the traffic light position information, and taken as the current light position information. Because the yellow light and the red light occupy different relative positions within the traffic light, there is a certain distance between them; the current light position information can therefore be compared with the initial yellow light position information to determine a position offset ratio, and the position offset ratio indicates whether the lighted light in the current image frame is at the position where the yellow light of the traffic light should be lighted. The current image frame can then be color-restored according to this judgment result and the image frame to be processed corresponding to the current image frame.
Optionally, the color reduction of the current image frame according to the current light position information, the initial yellow light position information and the image frame to be processed corresponding to the current image frame can be divided into the following two cases (an illustrative sketch follows case b) below):
a) And if the position offset ratio is smaller than the preset offset ratio threshold, restoring the colors of the lighting pixel set to yellow, restoring the pixels except the lighting pixel set in the current image frame to the colors of the image frame to be processed corresponding to the current image frame, obtaining the color drawing image frame corresponding to the current image frame, and adding one to the accumulated yellow light frame number.
In this embodiment, the preset offset ratio threshold can be specifically understood as the offset ratio between the red light and the yellow light that is predetermined according to the distribution positions of the lights of each color within the traffic light.
Specifically, when the position offset ratio is smaller than the preset offset ratio threshold, the deviation between the position of the lighted light in the current image frame and the position of the yellow light of the traffic light is small, so the traffic light lighted in the current image frame can be regarded as a yellow light, the lighting range is the range where the lighting pixel set is located, and each pixel in the lighting pixel set can be reassigned as a yellow pixel. Meanwhile, each pixel in the current image frame other than the lighting pixel set can be understood as a background pixel, that is, a pixel whose color does not need to be modified, so these background pixels can be filled with the colors of the corresponding pixels in the image frame to be processed corresponding to the current image frame, keeping their appearance the same as in the image frame to be processed, and the color drawing image frame corresponding to the current image frame is thus obtained. Because the traffic light lighted in the current image frame is a yellow light, the accumulated yellow light frame number is increased by one, which facilitates the traffic light color reduction of the preliminary color drawing image frames after the current image frame.
b) And if the position offset ratio is greater than or equal to a preset offset ratio threshold, restoring the lighting color to red, restoring pixels except the lighting pixel set in the current image frame to the color of the image frame to be processed corresponding to the current image frame, obtaining a color drawing image frame corresponding to the current image frame, and setting the accumulated yellow light frame number to zero.
Specifically, when the position offset ratio is greater than or equal to the preset offset ratio threshold, the deviation between the position of the lighted light in the current image frame and the position of the yellow light of the traffic light can be considered large, so the traffic light lighted in the current image frame can be considered a red light, the lighting range is the range where the lighting pixel set is located, and each pixel in the lighting pixel set can be reassigned as a red pixel. Meanwhile, each pixel in the current image frame other than the lighting pixel set can be understood as a background pixel, that is, a pixel whose color does not need to be modified, so these background pixels can be filled with the colors of the corresponding pixels in the image frame to be processed corresponding to the current image frame, keeping their appearance the same as in the image frame to be processed, and the color drawing image frame corresponding to the current image frame is thus obtained. Since the traffic light lighted in the current image frame is a red light, the accumulated yellow light frame number is set to zero; the accumulated green light frame number is already zero and does not need to be reset, which facilitates the traffic light color reduction of the preliminary color drawing image frames after the current image frame.
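The decision between cases a) and b) can be sketched as follows. This is an illustration only, assuming the relative-position representation from the earlier sketch; the threshold value and the use of a simple L1 distance as the position offset ratio are assumptions, not values taken from the embodiment.

```python
def classify_non_green_light(curr_pos, init_yellow_pos, offset_ratio_threshold=0.25):
    """Decide yellow vs. red once the accumulated yellow light frame number exceeds one.

    curr_pos / init_yellow_pos -- relative positions as returned by relative_light_position()
    offset_ratio_threshold     -- preset offset ratio threshold (illustrative value)
    """
    # position offset ratio: how far the lit centre has drifted from the position
    # recorded for the first yellow-light frame of this lighting cycle
    offset_ratio = abs(curr_pos[0] - init_yellow_pos[0]) + abs(curr_pos[1] - init_yellow_pos[1])
    if offset_ratio < offset_ratio_threshold:
        return "yellow"  # case a): still at the yellow lamp position, yellow counter += 1
    return "red"         # case b): far from the yellow lamp position, yellow counter reset to 0
```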
S212, restoring each pixel in the current image frame to the color of the image frame to be processed corresponding to the current image frame, obtaining a color drawing image frame corresponding to the current image frame, adding one to the accumulated yellow light frame number, and setting the accumulated green light frame number to zero.
Specifically, during the operation of a traffic light, there are moments in the yellow-light blinking phase when no light in the traffic light is lighted. Therefore, when it is determined that no lighting pixel set exists in the current image frame, the current image frame is regarded as corresponding to the yellow-light blinking situation. At this time, every pixel in the current image frame can be understood as a background pixel, that is, a pixel whose color does not need to be modified, so the pixels in the current image frame can be filled with the colors of the corresponding pixels in the image frame to be processed corresponding to the current image frame, and the color drawing image frame corresponding to the current image frame is obtained. Because the scene corresponding to the current image frame is one in which the traffic light is in its yellow-light phase, the accumulated yellow light frame number is increased by one; and because it cannot be determined whether the current image frame is the first frame after a green light, the accumulated green light frame number is set to zero, so that no erroneous accumulated green light frame number occurs when the preliminary color drawing image frames after the current image frame undergo traffic light color reduction.
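The fill operation that recurs in each of the cases above, namely copying background pixels from the frame to be processed and overwriting the lighting pixel set with the restored color, can be sketched with a simple mask copy. The function name and the BGR color values are assumptions for illustration only:

```python
from typing import Optional
import numpy as np

YELLOW_BGR = (0, 255, 255)  # illustrative pixel value used for a restored yellow light

def compose_color_frame(original_frame: np.ndarray,
                        lit_mask: Optional[np.ndarray],
                        lit_color=None) -> np.ndarray:
    """Background pixels are copied from the frame to be processed; if a lighting
    pixel set exists, those pixels are overwritten with the restored color."""
    out = original_frame.copy()      # every pixel starts as a background pixel
    if lit_mask is not None and lit_color is not None:
        out[lit_mask] = lit_color    # e.g. YELLOW_BGR for a restored yellow light
    return out
```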
Further, before acquiring the set of image frames to be processed corresponding to the video stream to be processed, training of the traffic light color drawing model needs to be completed. The training may specifically include the following steps:
1) At least one historical image frame is acquired, and color recognition and conversion are performed on the historical image frame, so that the lighted pixels in the historical image frame are reassigned as green pixels or preset non-green pixels, and an auxiliary historical image frame corresponding to the historical image frame is generated.
In this embodiment, a historical image frame can be specifically understood as an image captured by the imaging device at a historical time, obtained from a checkpoint electric-police or intelligent transportation platform. An auxiliary historical image frame can be specifically understood as a historical image frame after the color conversion has been completed.
It can be understood that, in order to ensure that the traffic light color drawing model obtained by training achieves the same result in practical application as during training, the historical image frames should be processed in the same way as the image frames to be processed when constructing the training samples, so the process of performing color recognition and conversion on the historical image frames is not described again in this step.
2) And acquiring pixel type information of each pixel in the auxiliary historical image frame, and constructing a training sample set by taking the pixel type information as labels of the historical image frame and the auxiliary historical image frame.
Wherein the pixel class information includes a red light class, a green light class, and a background class.
Specifically, for the auxiliary historical image frames obtained after conversion, in order to ensure training accuracy, whether each pixel belongs to the red light category, the green light category or the background category can be marked manually, yielding the pixel category information corresponding to each pixel in the auxiliary historical image frame. Meanwhile, the traffic light color drawing model to be trained is expected to output the pixel category information corresponding to each pixel of an input image, so the pixel category information can be understood as the label in the model training process. Therefore, the pixel category information corresponding to each pixel can be used as the label of the historical image frame and the auxiliary historical image frame, and the historical image frame, the auxiliary historical image frame and the label are combined into one training sample.
It can be understood that, to improve the training effect of the traffic light color drawing model, a plurality of training samples can be constructed for model training. Each training sample is constructed in the same way as in steps 1) and 2) above, and the set of generated training samples is the training sample set.
In the embodiment of the application, the labels in the training sample set only include the red light category, the green light category and the background category, so the traffic light color drawing model only needs to be trained to classify these three clearly distinguishable categories, which improves the accuracy of the trained model. Moreover, a corresponding historical image frame and auxiliary historical image frame are trained together as a group, which enriches the features extracted during training and makes the training result better suited to pixel category classification. A hypothetical illustration of one such training sample follows.
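As a non-limiting illustration (the class indices, the channel-stacking of the two frames and all names below are assumptions, not part of the described embodiment), one training sample could be assembled like this:

```python
import numpy as np

# hypothetical class indices for the three label categories
BACKGROUND, RED_LIGHT, GREEN_LIGHT = 0, 1, 2

def build_training_sample(history_frame: np.ndarray,
                          auxiliary_frame: np.ndarray,
                          label_map: np.ndarray) -> dict:
    """history_frame / auxiliary_frame: HxWx3 uint8 images forming one input group;
    label_map: HxW integer array of manually annotated pixel category information."""
    assert history_frame.shape == auxiliary_frame.shape
    assert label_map.shape == history_frame.shape[:2]
    return {
        "input": np.concatenate([history_frame, auxiliary_frame], axis=-1),  # HxWx6
        "label": label_map,
    }
```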
3) And inputting the training sample set into an initial traffic light color drawing model, and training the initial traffic light color drawing model based on the constructed loss function until a preset convergence condition is met to obtain the traffic light color drawing model.
In this embodiment, the initial traffic light color drawing model is specifically understood as a traffic light color drawing model whose weights have not yet been adjusted; its architecture is completely consistent with that of the traffic light color drawing model. The preset convergence condition is specifically understood as the condition, set according to actual requirements, for determining that the traffic light color drawing model has converged, that is, for determining when to stop training. Optionally, the preset convergence condition may include reaching a maximum number of iterations or the loss function value falling below a set loss function threshold, which is not limited in the embodiment of the present application.
Specifically, a historical image frame and its auxiliary historical image frame in the training sample set are input into the initial traffic light color drawing model as a group of input information, a loss function is constructed based on the error between the output of the initial traffic light color drawing model and the label in the training sample set, and the weights of the neural network layers in the initial traffic light color drawing model are adjusted based on the loss function until the preset convergence condition is met, thereby obtaining a traffic light color drawing model that can be put into use.
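A minimal training-loop sketch under stated assumptions: PyTorch is used, the model maps the stacked frame pair to per-pixel logits over the three categories, and a pixel-wise cross-entropy loss stands in for the "constructed loss function", whose exact form the embodiment does not specify. All names and hyperparameter values are illustrative.

```python
import torch
import torch.nn as nn

def train_color_model(model, loader, max_epochs=50, loss_threshold=1e-3, lr=1e-4):
    """model: maps an Nx6xHxW tensor (frame + auxiliary frame) to NxCxHxW logits
    over the three pixel categories; loader yields (input, label) batches."""
    optimizer = torch.optim.Adam(model.parameters(), lr=lr)
    criterion = nn.CrossEntropyLoss()        # pixel-wise classification loss (assumed)
    for _ in range(max_epochs):              # convergence: maximum number of iterations ...
        running = 0.0
        for x, y in loader:                  # x: float tensor, y: NxHxW long tensor
            optimizer.zero_grad()
            loss = criterion(model(x), y)
            loss.backward()
            optimizer.step()
            running += loss.item()
        if running / max(len(loader), 1) < loss_threshold:  # ... or loss below threshold
            break
    return model
```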
According to the technical scheme of this embodiment, target recognition is performed on the image frames to be processed to obtain the lighting pixels; the lighting pixels are then reassigned based on their color recognition results, so that the pixels corresponding to the lighted part of the traffic light are converted into easily distinguishable non-green or green pixels, and the corresponding auxiliary image frames are obtained. Each image frame to be processed and its corresponding auxiliary image frame form a group of image frames to be processed, and each group is input into the traffic light color drawing model for processing, which outputs an image of the same size as the images in the group; each pixel in the output image has at least one candidate pixel category with a class confidence, and by processing each pixel according to the confidence, a preliminary color drawing image frame with definite pixel category information for every pixel is obtained. Since the frames in the preliminary color drawing image frame set are arranged in the same order as in the image frame set to be processed, they share the same time sequence, and the lighting of the traffic light follows the corresponding temporal logic. Therefore, each preliminary color drawing image frame can be processed in turn as the current image frame: whether the traffic light is lighted and, if so, its lighting color are determined according to the pixel category information corresponding to the traffic light pixel set in the current image frame, and in the lighted case the color that the actual traffic light should display in the current image frame is determined according to the lighting color, the accumulated yellow light frame number and accumulated green light frame number before the current image frame, and the traffic light position information, thereby realizing the color reduction of the current image frame. Reassigning the lighting color of the traffic light in the image frame to be processed to the preset non-green color or green ensures a clear pixel classification for every pixel in the preliminary color drawing image frame; performing color reduction on each preliminary color drawing image frame according to the color transition logic of the traffic light, the accumulated yellow light frame number, the accumulated green light frame number and the traffic light position information then ensures accurate restoration of the yellow light, and avoids the confusion between yellow and red lights that arises when traffic light colors are recognized directly with insufficient model templates or under adverse environmental conditions.
Fig. 3 is a schematic structural diagram of a traffic light color device according to an embodiment of the present application, and as shown in fig. 3, the traffic light color device includes an image frame acquisition module 31, a preliminary color drawing module 32 and a color drawing image generation module 33.
The image frame acquisition module 31 is configured to acquire a set of image frames to be processed corresponding to the video stream to be processed, and perform color recognition conversion on each image frame to be processed in the set of image frames to be processed, so as to generate an auxiliary image frame set corresponding to the set of image frames to be processed; wherein, the lighting pixels in each auxiliary image frame in the auxiliary image frame set are green pixels or preset non-green pixels; the preliminary color drawing module 32 is configured to determine each image frame to be processed and the corresponding auxiliary image frame as a group of image frames to be processed according to the corresponding relationship, and input each group of image frames to be processed into a pre-trained traffic light color drawing model to determine a preliminary color drawing image frame set; the color image generating module 33 is configured to perform color reduction on each preliminary color image frame according to the pixel category information in each preliminary color image frame and the traffic light position information in the image frame to be processed corresponding to each preliminary color image frame, and generate a color image frame set corresponding to the image frame set to be processed.
According to the technical scheme of this embodiment, the set of image frames to be processed corresponding to the acquired video stream to be processed is preprocessed, so that the lighted part of the traffic light in each image frame to be processed is converted into the preset non-green color or green according to the actually detected color; that is, the easily confused yellow-light and red-light pixels are both converted into red pixels, so the lighting colors shown by the traffic lights in the resulting auxiliary image frame set are the easily distinguishable red and green, which allows the subsequent model to recognize the traffic light colors accurately. The image frames to be processed and their corresponding auxiliary image frames are then input in groups into the pre-trained traffic light color drawing model; the images in each group are compared with each other to determine the pixel category information corresponding to each pixel, and the images whose pixel category information has been determined are taken as the preliminary color drawing image frames. The lighting color of the traffic light can be determined based on the pixel category information in each preliminary color drawing image frame, and the lighting color of the traffic light in the preliminary color drawing image frames can be restored according to the lighting logic of the lights within the traffic light, yielding the color drawing image frame set that finally corresponds to the image frame set to be processed. Because the pre-trained traffic light color drawing model only needs to distinguish red from green, which are clearly distinguishable, it is less affected by the scene in which the traffic light is located, different templates do not need to be provided for different scenes during training, the amount of data required for training is reduced, traffic light color drawing errors caused by insufficient scene templates are avoided, and the amount of computation during use is reduced. After accurate classification, the lighting colors of the traffic lights in the preliminary color drawing image frames are restored according to the lighting logic of the lights within the traffic light, so that the yellow light can be restored accurately and the accuracy of traffic light color drawing is improved.
Optionally, the image frame acquisition module 31 includes:
the lighting pixel determining unit is used for acquiring a to-be-processed image frame set corresponding to the to-be-processed video stream, carrying out target identification on each to-be-processed image frame in the to-be-processed image frame set, and determining lighting pixels in the to-be-processed image frame;
the pixel reassigning unit is used for carrying out color recognition on the lighting pixels and reassigning the lighting pixels into green pixels or preset non-green pixels according to a color recognition result;
and the auxiliary frame set determining unit is used for determining the to-be-processed image frames with the pixels reassigned as auxiliary image frames corresponding to the to-be-processed image frames, and determining a set formed by the auxiliary image frames as an auxiliary image frame set corresponding to the to-be-processed image frame set.
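One possible realization of the color recognition and reassignment carried out by these units is sketched below. It is an illustration only: OpenCV is assumed for the color-space conversion, the HSV thresholds are illustrative, and red is assumed as the preset non-green color.

```python
import cv2
import numpy as np

def to_auxiliary_frame(frame_bgr: np.ndarray, lit_mask: np.ndarray) -> np.ndarray:
    """Reassign the lighting pixels to pure green or to the preset non-green color
    (red is assumed here) based on a simple HSV hue check."""
    hsv = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2HSV)
    hue, sat = hsv[..., 0], hsv[..., 1]
    greenish = (hue > 35) & (hue < 90) & (sat > 60)   # rough green range on OpenCV's 0-179 hue scale
    aux = frame_bgr.copy()
    aux[lit_mask & greenish] = (0, 255, 0)    # lit green pixels stay green
    aux[lit_mask & ~greenish] = (0, 0, 255)   # lit red or yellow pixels become the preset non-green (red)
    return aux
```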
Optionally, the preliminary color drawing module 32 includes:
an image frame group determining unit, configured to determine each image frame to be processed and the corresponding auxiliary image frame as a group of image frames to be processed according to the corresponding relationship;
the feature extraction unit is used for inputting the image frame group to be processed into a feature extraction layer in the pre-trained traffic light color model for multi-scale feature extraction aiming at the image frame group to be processed, and determining an intermediate feature image which corresponds to the image frame group to be processed and has the same size; the residual structure of the feature extraction layer is a convolution residual structure and an identity mapping residual structure;
The classification unit is used for inputting the intermediate feature image into a classification layer in the traffic light color model to perform classification, and determining at least one candidate pixel class corresponding to each pixel in the output image and the class confidence of the candidate pixel class;
a preliminary color drawing frame determining unit for determining a candidate pixel category with the greatest category confidence as pixel category information of the pixel, and determining an output image for which the determination of the pixel category information is completed as a preliminary color drawing image frame corresponding to the image frame group to be processed;
and the preliminary color drawing set determining unit is used for determining a set of preliminary color drawing image frames corresponding to each image frame group to be processed as a preliminary color drawing image frame set.
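The step of keeping, for each pixel, the candidate pixel category with the greatest category confidence reduces to a per-pixel argmax. A minimal sketch, assuming the confidences are laid out as a CxHxW array (the layout and function name are assumptions):

```python
import numpy as np

def pixel_category_map(class_confidence: np.ndarray) -> np.ndarray:
    """class_confidence: CxHxW array of category confidences for the candidate
    pixel classes; returns the HxW pixel category information (argmax per pixel)."""
    return np.argmax(class_confidence, axis=0)
```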
Optionally, the color image generating module 33 includes:
the traffic light pixel determining unit is used for respectively taking each preliminary color image frame as a current image frame and determining a traffic light pixel set in the current image frame according to traffic light position information in the image frame to be processed corresponding to the current image frame;
a lighting pixel determining unit, configured to determine a lighting pixel set in a current image frame and a lighting color of the lighting pixel set according to pixel category information corresponding to each traffic light pixel in the traffic light pixel set;
The image color restoration unit is used for restoring each pixel in the current image frame to the color of the image frame to be processed corresponding to the current image frame if the lighted pixel set does not exist, obtaining a color drawing image frame corresponding to the current image frame, adding one to the accumulated yellow light frame number, and setting the accumulated green light frame number to zero; otherwise, performing color restoration on the current image frame according to the lighting color, the yellow light accumulated frame number, the green light accumulated frame number, the traffic light position information and the image frame to be processed corresponding to the current image frame.
Optionally, the image color reduction unit is specifically configured to:
if the lighting color is green, keeping the color of the lighting pixel set unchanged, restoring pixels except the lighting pixel set in the current image frame to the color of the image frame to be processed corresponding to the current image frame to obtain a color drawing image frame corresponding to the current image frame, setting the accumulated yellow light frame number to zero, and adding one to the accumulated green light frame number;
if the lighting color is a preset non-green color and the green light accumulated frame number is greater than zero, the color of the lighting pixel set is reduced to yellow, the pixels except the lighting pixel set in the current image frame are reduced to the color of the image frame to be processed corresponding to the current image frame, a color drawing image frame corresponding to the current image frame is obtained, the yellow light accumulated frame number is increased by one, and the green light accumulated frame number is set to zero;
If the lighting color is a preset non-green color and the accumulated number of frames of the yellow lights is equal to one, the color of the lighting pixel set is reduced to yellow, the pixels except the lighting pixel set in the current image frame are reduced to the color of the image frame to be processed corresponding to the current image frame, a color drawing image frame corresponding to the current image frame is obtained, the accumulated number of frames of the yellow lights is increased by one, and initial yellow light position information is determined according to the last image frame of the current image frame and the last traffic light position information;
if the lighting color is a preset non-green color and the accumulated frame number of the yellow lamps is greater than one, determining the current lamp position information according to the lighting pixel set and the traffic lamp position information, and performing color restoration on the current image frame according to the current lamp position information, the initial yellow lamp position information and the image frame to be processed corresponding to the current image frame.
Optionally, performing color restoration on the current image frame according to the current lamp position information, the initial yellow lamp position information and the image frame to be processed corresponding to the current image frame, including:
determining a position offset ratio according to the current lamp position information and the initial yellow lamp position information;
if the position offset ratio is smaller than the preset offset ratio threshold, the colors of the lighted pixel sets are reduced to yellow, the pixels except the lighted pixel sets in the current image frame are reduced to the colors of the image frames to be processed corresponding to the current image frame, the color-drawing image frame corresponding to the current image frame is obtained, and the accumulated number of yellow lamps is increased by one;
And if the position offset ratio is greater than or equal to a preset offset ratio threshold, restoring the lighting color to red, restoring pixels except the lighting pixel set in the current image frame to the color of the image frame to be processed corresponding to the current image frame, obtaining a color drawing image frame corresponding to the current image frame, and setting the accumulated yellow light frame number to zero.
Optionally, before acquiring the set of image frames to be processed corresponding to the video stream to be processed, the method further includes:
acquiring at least one historical image frame, and carrying out color recognition and conversion on the historical image frame so as to enable the lighted pixels in the historical image frame to be reassigned to green pixels or preset non-green pixels, and generating an auxiliary historical image frame corresponding to the historical image frame;
acquiring pixel type information of each pixel in the auxiliary historical image frame, and constructing a training sample set by taking the pixel type information as labels of the historical image frame and the auxiliary historical image frame; the pixel category information comprises a red light category, a green light category and a background category;
and inputting the training sample set into an initial traffic light color drawing model, and training the initial traffic light color drawing model based on the constructed loss function until a preset convergence condition is met to obtain the traffic light color drawing model.
The traffic light color drawing device provided by the embodiment of the application can execute the traffic light color drawing method provided by any embodiment of the application, and has the corresponding functional modules and beneficial effects of the execution method.
Fig. 4 is a schematic structural diagram of a traffic light color drawing device according to an embodiment of the present application. The traffic light color drawing device 40 may be an electronic device representing various forms of digital computers, such as laptops, desktops, workstations, personal digital assistants, servers, blade servers, mainframes, and other appropriate computers. Electronic devices may also represent various forms of mobile devices, such as personal digital assistants, cellular telephones, smartphones, wearable devices (e.g., helmets, glasses, watches, etc.), and other similar computing devices. The components shown herein, their connections and relationships, and their functions, are meant to be exemplary only, and are not meant to limit implementations of the application described and/or claimed herein.
As shown in fig. 4, the traffic light color drawing device 40 includes at least one processor 41, and a memory, such as a Read Only Memory (ROM) 42, a Random Access Memory (RAM) 43, etc., communicatively connected to the at least one processor 41, in which the memory stores a computer program executable by the at least one processor, and the processor 41 may perform various suitable actions and processes according to the computer program stored in the Read Only Memory (ROM) 42 or the computer program loaded from the storage unit 48 into the Random Access Memory (RAM) 43. In the RAM 43, various programs and data required for the operation of the traffic light coloring apparatus 40 may also be stored. The processor 41, the ROM 42 and the RAM 43 are connected to each other via a bus 44. An input/output (I/O) interface 45 is also connected to bus 44.
The various components in the traffic light color device 40 are connected to an I/O interface 45, including: an input unit 46 such as a keyboard, a mouse, etc.; an output unit 47 such as various types of displays, speakers, and the like; a storage unit 48 such as a magnetic disk, an optical disk, or the like; and a communication unit 49 such as a network card, modem, wireless communication transceiver, etc. The communication unit 49 allows the traffic light color device 40 to exchange information/data with other devices through a computer network such as the internet and/or various telecommunication networks.
The processor 41 may be various general and/or special purpose processing components with processing and computing capabilities. Some examples of processor 41 include, but are not limited to, a Central Processing Unit (CPU), a Graphics Processing Unit (GPU), various specialized Artificial Intelligence (AI) computing chips, various processors running machine learning model algorithms, digital Signal Processors (DSPs), and any suitable processor, controller, microcontroller, etc. The processor 41 performs the various methods and processes described above, such as the traffic light color method.
In some embodiments, the traffic light color method may be implemented as a computer program tangibly embodied on a computer-readable storage medium, such as the storage unit 48. In some embodiments, part or all of the computer program may be loaded and/or installed onto the traffic light coloring device 40 via the ROM 42 and/or the communication unit 49. When the computer program is loaded into RAM 43 and executed by processor 41, one or more steps of the traffic light shading method described above may be performed. Alternatively, in other embodiments, the processor 41 may be configured to perform the traffic light shading method in any other suitable manner (e.g., by means of firmware).
Various implementations of the systems and techniques described herein may be implemented in digital electronic circuitry, integrated circuit systems, Field Programmable Gate Arrays (FPGAs), Application Specific Integrated Circuits (ASICs), Application Specific Standard Products (ASSPs), Systems On Chip (SOCs), Complex Programmable Logic Devices (CPLDs), computer hardware, firmware, software, and/or combinations thereof. These various embodiments may include being implemented in one or more computer programs, and the one or more computer programs may be executed and/or interpreted on a programmable system including at least one programmable processor, which may be a special purpose or general-purpose programmable processor, that may receive data and instructions from, and transmit data and instructions to, a storage system, at least one input device, and at least one output device.
A computer program for carrying out methods of the present application may be written in any combination of one or more programming languages. These computer programs may be provided to a processor of a general purpose computer, special purpose computer, or other programmable data processing apparatus, such that the computer programs, when executed by the processor, cause the functions/acts specified in the flowchart and/or block diagram block or blocks to be implemented. The computer program may execute entirely on the machine, partly on the machine, as a stand-alone software package, partly on the machine and partly on a remote machine or entirely on the remote machine or server.
In the context of the present application, a computer-readable storage medium may be a tangible medium that can contain, or store a computer program for use by or in connection with an instruction execution system, apparatus, or device. The computer readable storage medium may include, but is not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any suitable combination of the foregoing. Alternatively, the computer readable storage medium may be a machine readable signal medium. More specific examples of a machine-readable storage medium would include an electrical connection based on one or more wires, a portable computer diskette, a hard disk, a Random Access Memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing.
To provide for interaction with a user, the systems and techniques described here can be implemented on an electronic device having: a display device (e.g., a CRT (cathode ray tube) or LCD (liquid crystal display) monitor) for displaying information to a user; and a keyboard and a pointing device (e.g., a mouse or a trackball) through which a user can provide input to the electronic device. Other kinds of devices may also be used to provide for interaction with a user; for example, feedback provided to the user may be any form of sensory feedback (e.g., visual feedback, auditory feedback, or tactile feedback); and input from the user may be received in any form, including acoustic input, speech input, or tactile input.
The systems and techniques described here can be implemented in a computing system that includes a background component (e.g., as a data server), or that includes a middleware component (e.g., an application server), or that includes a front-end component (e.g., a user computer having a graphical user interface or a web browser through which a user can interact with an implementation of the systems and techniques described here), or any combination of such background, middleware, or front-end components. The components of the system can be interconnected by any form or medium of digital data communication (e.g., a communication network). Examples of communication networks include: local Area Networks (LANs), wide Area Networks (WANs), blockchain networks, and the internet.
The computing system may include clients and servers. The client and server are typically remote from each other and typically interact through a communication network. The relationship of client and server arises by virtue of computer programs running on the respective computers and having a client-server relationship to each other. The server can be a cloud server, also called a cloud computing server or cloud host, which is a host product in a cloud computing service system and overcomes the defects of high management difficulty and weak service expansibility of traditional physical hosts and VPS (Virtual Private Server) services.
It should be appreciated that various forms of the flows shown above may be used to reorder, add, or delete steps. For example, the steps described in the present application may be performed in parallel, sequentially, or in a different order, so long as the desired results of the technical solution of the present application are achieved, and the present application is not limited herein.
The above embodiments do not limit the scope of the present application. It will be apparent to those skilled in the art that various modifications, combinations, sub-combinations and alternatives are possible, depending on design requirements and other factors. Any modifications, equivalent substitutions and improvements made within the spirit and principles of the present application should be included in the scope of the present application.

Claims (10)

1. A method of color rendering a traffic light, comprising:
acquiring a to-be-processed image frame set corresponding to a to-be-processed video stream, performing color recognition conversion on each to-be-processed image frame in the to-be-processed image frame set, and generating an auxiliary image frame set corresponding to the to-be-processed image frame set; wherein, the lighting pixels in each auxiliary image frame in the auxiliary image frame set are green pixels or preset non-green pixels;
Determining each image frame to be processed and the corresponding auxiliary image frame as a group of image frames to be processed according to the corresponding relation, respectively inputting each group of image frames to be processed into a pre-trained traffic light color drawing model, and determining a primary color drawing image frame set;
and carrying out color reduction on each preliminary color drawing image frame according to pixel category information in each preliminary color drawing image frame and traffic light position information in the image frame to be processed corresponding to each preliminary color drawing image frame, and generating a color drawing image frame set corresponding to the image frame set to be processed.
2. The method of claim 1, wherein performing color recognition conversion on each image frame to be processed in the set of image frames to be processed to generate a set of auxiliary image frames corresponding to the set of image frames to be processed, comprises:
performing target identification on the image frames to be processed for each image frame to be processed in the image frame set to be processed, and determining lighting pixels in the image frames to be processed;
performing color recognition on the lighting pixels, and reassigning the lighting pixels to green pixels or preset non-green pixels according to a color recognition result;
And determining the image frames to be processed after pixel reassignment as auxiliary image frames corresponding to the image frames to be processed, and determining a set formed by the auxiliary image frames as an auxiliary image frame set corresponding to the image frame set to be processed.
3. The method of claim 1, wherein said inputting each of said sets of image frames to be processed into a pre-trained traffic light color model, respectively, determines a preliminary set of color image frames, comprising:
inputting the image frame group to be processed into a feature extraction layer in a pre-trained traffic light color model for multi-scale feature extraction aiming at the image frame group to be processed, and determining an intermediate feature image which corresponds to the image frame group to be processed and has the same size;
inputting the intermediate feature map to a classification layer in the traffic light color model for classification, and determining at least one candidate pixel class corresponding to each pixel in an output image and class confidence of the candidate pixel class;
determining the candidate pixel category with the maximum category confidence as pixel category information of the pixels, and determining the output image with the determined pixel category information as a preliminary color drawing image frame corresponding to the image frame group to be processed;
Determining a set of preliminary color image frames corresponding to each image frame group to be processed as a preliminary color image frame set;
the residual structure of the feature extraction layer is a convolution residual structure and an identity mapping residual structure.
4. The method of claim 1, wherein performing color reduction on each preliminary color image frame based on pixel class information in each preliminary color image frame and traffic light position information in an image frame to be processed corresponding to each preliminary color image frame, comprises:
each preliminary color image frame is respectively used as a current image frame, and a traffic light pixel set in the current image frame is determined according to traffic light position information in the image frame to be processed corresponding to the current image frame;
determining a lighting pixel set in the current image frame and a lighting color of the lighting pixel set according to pixel category information corresponding to each traffic light pixel in the traffic light pixel set;
if the lighting pixel set does not exist, restoring each pixel in the current image frame to the color of the image frame to be processed corresponding to the current image frame to obtain a color drawing image frame corresponding to the current image frame, adding one to the accumulated yellow light frame number, and setting the accumulated green light frame number to zero; otherwise, performing color restoration on the current image frame according to the lighting color, the yellow light accumulated frame number, the green light accumulated frame number, the traffic light position information and the image frame to be processed corresponding to the current image frame.
5. The method of claim 4, wherein the performing color restoration on the current image frame according to the lighting color, the yellow light accumulated frame number, the green light accumulated frame number, the traffic light position information, and the image frame to be processed corresponding to the current image frame comprises:
if the lighting color is green, keeping the color of the lighting pixel set unchanged, reducing pixels except the lighting pixel set in the current image frame to the color of the image frame to be processed corresponding to the current image frame, obtaining a color drawing image frame corresponding to the current image frame, setting the accumulated yellow light frame number to zero, and adding one to the accumulated green light frame number;
if the lighting color is a preset non-green color and the green light accumulated frame number is greater than zero, restoring the color of the lighting pixel set to yellow, restoring the pixels except the lighting pixel set in the current image frame to the color of the image frame to be processed corresponding to the current image frame, obtaining a color drawing image frame corresponding to the current image frame, adding one to the yellow light accumulated frame number, and setting the green light accumulated frame number to zero;
If the lighting color is a preset non-green color and the accumulated number of frames of the yellow light is equal to one, restoring the color of the lighting pixel set to yellow, restoring the pixels except the lighting pixel set in the current image frame to the color of the image frame to be processed corresponding to the current image frame, obtaining a color drawing image frame corresponding to the current image frame, adding one to the accumulated number of frames of the yellow light, and determining initial yellow light position information according to the last image frame of the current image frame and the last traffic light position information;
if the lighting color is a preset non-green color and the accumulated number of frames of the yellow light is greater than one, determining current light position information according to the lighting pixel set and the traffic light position information, and performing color reduction on the current image frame according to the current light position information, the initial yellow light position information and the image frame to be processed corresponding to the current image frame.
6. The method of claim 5, wherein the performing color reduction on the current image frame according to the current light position information, the initial yellow light position information, and the image frame to be processed corresponding to the current image frame comprises:
Determining a position offset ratio according to the current lamp position information and the initial yellow lamp position information;
if the position deviation ratio is smaller than a preset deviation ratio threshold value, the colors of the lighting pixel sets are reduced to yellow, the pixels except the lighting pixel sets in the current image frame are reduced to the colors of the image frames to be processed corresponding to the current image frame, a color drawing image frame corresponding to the current image frame is obtained, and the accumulated number of yellow lamps is increased by one;
and if the position deviation ratio is greater than or equal to the preset deviation ratio threshold, restoring the lighting color to red, restoring pixels except the lighting pixel set in the current image frame to the color of the image frame to be processed corresponding to the current image frame, obtaining a color drawing image frame corresponding to the current image frame, and setting the accumulated yellow light frame number to zero.
7. The method according to any one of claims 1-6, further comprising, prior to said obtaining a set of image frames to be processed corresponding to a video stream to be processed:
acquiring at least one historical image frame, and carrying out color recognition and conversion on the historical image frame so as to enable the lighting pixels in the historical image frame to be reassigned to green pixels or preset non-green pixels, and generating an auxiliary historical image frame corresponding to the historical image frame;
Acquiring pixel type information of each pixel in the auxiliary historical image frame, and constructing a training sample set by taking each pixel type information as labels of the historical image frame and the auxiliary historical image frame; the pixel category information comprises a red light category, a green light category and a background category;
and inputting the training sample set into an initial traffic light color drawing model, and training the initial traffic light color drawing model based on the constructed loss function until a preset convergence condition is met to obtain the traffic light color drawing model.
8. A traffic light color drawing device, comprising:
the image frame acquisition module is used for acquiring a to-be-processed image frame set corresponding to a to-be-processed video stream, performing color recognition conversion on each to-be-processed image frame in the to-be-processed image frame set, and generating an auxiliary image frame set corresponding to the to-be-processed image frame set; wherein, the lighting pixels in each auxiliary image frame in the auxiliary image frame set are green pixels or preset non-green pixels;
the preliminary color drawing module is used for determining each image frame to be processed and the corresponding auxiliary image frame as a group of image frames to be processed according to the corresponding relation, inputting each group of image frames to be processed into a pre-trained traffic light color drawing model respectively, and determining a preliminary color drawing image frame set;
And the color image generation module is used for carrying out color reduction on each preliminary color image frame according to the pixel category information in each preliminary color image frame and the traffic light position information in the image frame to be processed corresponding to each preliminary color image frame, and generating a color image frame set corresponding to the image frame set to be processed.
9. A traffic light color drawing apparatus, comprising:
at least one processor; and
a memory communicatively coupled to the at least one processor; wherein,
the memory stores a computer program executable by the at least one processor to enable the at least one processor to perform the traffic light shading method of any one of claims 1-7.
10. A storage medium containing computer-executable instructions, which when executed by a computer processor are for performing the traffic light shading method according to any one of claims 1-7.
CN202310791393.1A 2023-06-30 2023-06-30 Traffic light color drawing method, device, equipment and storage medium Pending CN116823661A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202310791393.1A CN116823661A (en) 2023-06-30 2023-06-30 Traffic light color drawing method, device, equipment and storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202310791393.1A CN116823661A (en) 2023-06-30 2023-06-30 Traffic light color drawing method, device, equipment and storage medium

Publications (1)

Publication Number Publication Date
CN116823661A true CN116823661A (en) 2023-09-29

Family

ID=88125508

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202310791393.1A Pending CN116823661A (en) 2023-06-30 2023-06-30 Traffic light color drawing method, device, equipment and storage medium

Country Status (1)

Country Link
CN (1) CN116823661A (en)


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination