CN114022820A - Intelligent beacon light quality detection method based on machine vision - Google Patents

Intelligent beacon light quality detection method based on machine vision

Info

Publication number
CN114022820A
CN114022820A (application CN202111317591.1A)
Authority
CN
China
Prior art keywords
detection
beacon light
target
network
tracking
Prior art date
Legal status
Pending
Application number
CN202111317591.1A
Other languages
Chinese (zh)
Inventor
刘庆
张临强
王凌燕
郑建华
刘娟秀
孙小鹏
倪永强
邓皓
袁兴
冯冬梅
孙洋
张恒泉
叶昊斌
Current Assignee
Yantai Navigation Mark Of Beihai Navigation Support Center Of Ministry Of Transport
University of Electronic Science and Technology of China
Original Assignee
Yantai Navigation Mark Of Beihai Navigation Support Center Of Ministry Of Transport
University of Electronic Science and Technology of China
Priority date
Filing date
Publication date
Application filed by Yantai Navigation Mark Of Beihai Navigation Support Center Of Ministry Of Transport, University of Electronic Science and Technology of China filed Critical Yantai Navigation Mark Of Beihai Navigation Support Center Of Ministry Of Transport
Priority to CN202111317591.1A priority Critical patent/CN114022820A/en
Publication of CN114022820A publication Critical patent/CN114022820A/en
Pending legal-status Critical Current

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/04Architecture, e.g. interconnection topology
    • G06N3/045Combinations of networks
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/08Learning methods

Abstract

The invention discloses a machine-vision-based intelligent detection method for beacon light quality. Image frames extracted from beacon-light video first undergo complex-environment simulation based on gamma correction and median filtering; target detection is then performed on the full image data with the YOLOv4 deep learning algorithm, retrained via transfer learning so that a network model with accurate detection is obtained from a small amount of data; next, the target region is tracked in real time by a twin neural network model trained with the SiamFC deep learning method, with automatic correction and judgment at regular intervals during tracking; finally, the corresponding beacon-light signal is extracted for color acquisition and mapping, the times of its color changes are recorded, and the corresponding period, frequency and light-intensity characteristics are calculated. The method meets the working requirements of on-site detection of remote beacon-light equipment, acquisition of the corresponding light-quality parameters, and result analysis.

Description

Intelligent beacon light quality detection method based on machine vision
Technical Field
The invention relates to the technical fields of digital image processing, machine learning and deep learning, and in particular to a machine-vision-based intelligent beacon-light quality detection method that combines the color space of a video image with an identification-and-tracking network model to perform portable, dynamic, automatic, real-time tracking detection of beacon light quality in complex environments.
Background
Navigation marks are very important aids that help guide and position vessels, mark obstructions and indicate warnings. At present, beacon-light detection mainly relies on laboratory testing: workers must go to sea at regular intervals and bring the beacon light back to an onshore laboratory for testing to judge whether it has problems. Because the beacon light is bulky and its disassembly is complicated, this approach demands a controlled detection environment, involves a complex detection procedure, and cannot provide portable outdoor detection. Beacon-light detection equipment is used outdoors at sea, where weather conditions are complex and changeable, so the influence of natural environments such as rain, strong wind, dense fog and night must be considered. Meanwhile, the video background of the beacon light is complex, often containing sea-surface debris, shore buildings and ships, which greatly interferes with detection. Therefore, identifying and tracking the target object only in clear pictures is not enough to give the algorithm strong adaptability.
At present, commonly used beacon-light identification methods include a Yolov3-tiny-based identification method and a CCD-detector-spectrum-based light-quality analysis method.
The basic idea of the Yolov3-tiny-based beacon-light identification method is as follows: the collected beacon-light images are made into a data set in VOC2007 format, with the label files stored in the Annotations folder and the corresponding pictures in the JPEGImages folder; prior bounding regions for target detection are manually marked on the images with LabelImg; the data are divided into a training set and a test set at a ratio of 9:1 and then put into the Yolo v3-tiny network structure for training, so that the required beacon light can be judged and identified, after which on/off judgment and light-quality identification are carried out. Its disadvantages are: the amount of training data is small, with only 70 beacon-light images collected in total; and only two known light characteristics can be detected for the time being, namely lamp A flashing on and off every 1 s and lamp B every 3 s, so that in the end only a visual beacon-light monitoring effect is achieved.
The basic idea of the CCD-detector-spectrum-based light-quality analysis method is as follows: an intelligent detector comprises an intelligent analysis unit, a signal detection unit, a spectrum analysis unit and other modules; a small coupling mirror and a telescope correspond to short-range and long-range detection requirements respectively; a crossed Czerny-Turner optical path structure is adopted, and spectrum analysis is performed on the acquired flash signal in combination with a linear-array CCD; light of different colors is input through optical fibers, converted into electrical signals, and output as corresponding spectral data through A/D conversion for subsequent analysis and calculation; a light-quality database is established in the identification module, a background identification algorithm is designed for the signal-identification process, background data are approximated by a proportional method and extracted by a wave-crest method, and the template is identified and detected in high/low-level mode. Its disadvantages are: the method only provides a photoelectric-conversion acquisition module for light quality; the complete beacon-light flashing signal is split into two paths in a certain proportion by a light-splitting unit, one path entering an electronic observation unit and the other entering the spectrum analysis unit through optical fiber for detection; the need to identify, acquire and track offshore beacon lights in different environments is not addressed by a specific design.
Disclosure of Invention
The invention aims to overcome the defects of the prior art and provide a machine-vision-based intelligent detection method for beacon light quality.
The technical scheme provided by the invention is as follows: a machine-vision-based intelligent detection method for beacon light quality, characterized by comprising the following steps:
a, acquiring beacon-light video data, performing video-frame extraction on the video data, and applying image environment simulation and image-blurring preprocessing to it;
b, constructing a target detection and identification network, and putting the navigation mark lamp video data set in the step a for target detection and identification network training;
c, constructing a target tracking network, carrying out video tracking on the navigation mark lamp training target accurately identified in the step b, and correcting in real time;
d, constructing a color detection network and a frequency and period detection network of the intelligent lamp quality, and measuring and analyzing the color, frequency, period and light intensity detection of the beacon light;
and e, integrating the intelligent lamp quality network analysis results, combining light intensity information, outputting data and finishing the detection process.
Preferably, the video extraction in step a: reviewing videos of the beacon light shot on site under different light characteristics, and selecting representative beacon-light video frames whose background environments differ markedly;
the image environment simulation: gamma correction is adopted to adjust the gray values of the image so as to simulate abnormal weather conditions such as fog and night;
the image blurring: median filtering is adopted to blur the image, simulating the influence of inaccurate camera focus or other shooting factors.
Preferably, the target detection and identification network in step b: a YOLOv4 deep learning network is selected for its higher detection speed on the premise of guaranteed detection accuracy;
training the target detection and identification network comprises: direct learning and transfer learning; retraining on a large number of samples from other data sets alleviates the small-sample problem of the original data set, and the beacon-light region is framed by the identification algorithm.
Preferably, the target tracking network in step c: the SiamFC-3s algorithm is adopted for its good real-time tracking performance and high processing speed under deep learning;
training the target video tracking comprises: the target video tracking network realizes subsequent tracking of the predicted target through a twin neural network obtained by training;
the twin neural network takes as input a template image z of dimensions [127, 127, 3] and a larger candidate-region image x of dimensions [255, 255, 3]; both are processed by the same transformation function φ, and the position of the target on the candidate image is predicted from the final full-convolution result;
the real-time correction comprises: in the process of tracking the beacon-light video target, a detection model is introduced, judgment and correction are carried out once every hundred frames, and whether the beacon-light target goes out of or back into the frame is analyzed in real time.
Preferably, constructing the color detection network in step d comprises: chromaticity extraction mapping and color analysis; specifically, a statistical mean of the color information of image feature points is extracted from the beacon-light ROI obtained through target detection and tracking, and the color of the beacon light in the actual scene is obtained by calculating the average brightness of the 3 channels and referring to chromaticity coordinates;
constructing the frequency and period detection network comprises: recording and calculating the lighting period and lighting frequency, specifically recording the time of each color change while counting the number of color changes within one on/off period;
the light intensity detection: using the luminous-intensity formula I = E·L²/cos θ together with a laser range finder that records the distance from the light-source center of the beacon light under test to the detector receiving surface at that moment, the luminous intensity information of the beacon light at that moment is obtained.
The invention has the beneficial effects that: centered on high-definition imaging, accurate algorithms and efficient processing, and using key technologies such as deep-learning tracking of the target object and image-based judgment of light quality, the method can overcome the influence of background light in the field, obtain a clear beacon-light image, analyze the target in the image, accurately extract its relative position, accurately identify and judge the rhythm and color of the beacon light, and measure its light intensity; it has strong portability and operability.
Drawings
FIG. 1 is a block diagram of an intelligent light quality detection algorithm of the present invention;
FIG. 2 is a loss value transformation diagram of direct learning and transfer learning provided in an embodiment of the present invention;
FIG. 3 is a diagram of a twin neural network provided in an embodiment of the present invention;
FIG. 4 is a flowchart of a target tracking detection calibration provided in an embodiment of the present invention;
fig. 5 is a flow chart of color identification detection and cycle frequency calculation provided in the embodiment of the present invention.
Detailed Description
The present invention will be described in detail below with reference to the accompanying drawings.
As shown in fig. 1, the machine-vision-based intelligent detection method for beacon light quality takes an intelligent light-quality detection algorithm as its core and integrates an anti-interference algorithm for complex environments, a dynamic real-time autonomous tracking algorithm and an intelligent light-quality detection algorithm;
the anti-interference algorithm under the complex environment is mainly characterized in that the collected scene video of the beacon light is primarily analyzed and sorted, image frames meeting the conditions are screened out, and the image frames are subjected to image processing to simulate the complex environment which possibly appears on the sea surface;
the dynamic real-time autonomous tracking algorithm combines target identification and target tracking, detects in real time and outputs a video image of an accurate framing beacon light image;
the intelligent light quality detection algorithm determines the color of the light by recording the lighting and extinguishing rhythm of the navigation mark lamp light and the form of parameter values so as to calculate and analyze the frequency, the period and the light intensity characteristics.
The main detection flow of the machine-vision-based intelligent detection method for beacon light quality comprises the following steps:
1. collecting beacon-light video data, performing video-frame extraction on the video data, and applying image environment simulation and image-blurring preprocessing to it;
1.1, video extraction: reviewing videos of the beacon light shot on site under different light characteristics, and selecting representative beacon-light video frames whose background environments differ markedly;
1.2, image environment simulation: gamma correction is adopted to adjust the gray values of the image, so that the gray values of the output image and the input image follow a power-law (exponential) relation; the gray values are first normalized to real numbers distributed between 0 and 1; pre-compensation then computes, for each normalized pixel value I, the conversion value f(I) = I^γ; inverse normalization finally maps the pre-compensated real numbers back to integers between 0 and 255, yielding output images that simulate abnormal weather conditions such as fog and night;
1.3, image blurring: median filtering is adopted to blur the image; by setting the gray value of each pixel of the input image to the median of the gray values of all pixels within a neighborhood window of that point, the influence of inaccurate camera focus or other shooting factors is simulated;
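The gamma-correction and median-filtering preprocessing of steps 1.2 and 1.3 can be sketched as follows (a minimal NumPy illustration; function names and parameter values are illustrative, not taken from the patent):

```python
import numpy as np

def gamma_correct(img, gamma=2.2):
    """Power-law transform: normalize to [0, 1], pre-compensate with I**gamma,
    then map back to [0, 255]. gamma > 1 darkens the image (simulating night);
    a small gamma brightens it (simulating hazy, washed-out scenes)."""
    norm = img.astype(np.float32) / 255.0
    return np.clip(np.power(norm, gamma) * 255.0, 0, 255).astype(np.uint8)

def median_filter(img, k=3):
    """Set each pixel to the median of its k x k neighborhood (edge-padded),
    simulating defocus blur while suppressing impulse noise."""
    pad = k // 2
    padded = np.pad(img, pad, mode="edge")
    out = np.empty_like(img)
    for i in range(img.shape[0]):
        for j in range(img.shape[1]):
            out[i, j] = np.median(padded[i:i + k, j:j + k])
    return out
```

Applying `median_filter(gamma_correct(frame))` to a grayscale frame yields a darkened, softened image of the kind the detector is trained to withstand.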
2. constructing a target identification network and using the beacon-light video data set from step 1 for target detection and identification network training:
2.1, constructing a target detection and identification network based on YOLOv4: having decided to solve the target detection problem with deep learning, and starting from the detection speed and detection effect of different algorithms on the same target, Table 1 compares several currently common deep learning networks, including RCNN, RetinaNet, YOLOv4 and others;
[Table 1: detection-speed and accuracy comparison of candidate detection networks — table image not reproduced]
finally, the YOLOv4 deep learning network is selected for its higher detection speed on the premise of guaranteed detection accuracy;
the yolo v4 deep learning network extracts appearance characteristics such as the shape, color and size of the beacon light, so that the rapid detection of the video, the image and the real-time dynamic image of the beacon light is realized, and meanwhile, the accurate classification of different types of beacon lights is realized;
2.2, training the target detection and identification network: direct learning versus transfer learning: a large number of samples from other data sets are retrained. Compared with direct learning, transfer learning better optimizes the prediction of the network model by drawing on training results from other samples. As shown in fig. 2, when network training reaches 500-600 rounds, the loss value of transfer learning is obviously lower; the smaller the loss value, the more accurate the network's training result, which effectively mitigates the small-sample problem of the original data set, and the beacon-light region is framed by the identification algorithm. The target detection network based on the YOLOv4 framework and the transfer-learning idea can accurately frame the positions of beacon lights against different backgrounds;
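The effect of reusing pretrained features and retraining only the task head — the core of the transfer-learning setup above — can be illustrated with a toy NumPy example (everything here is a schematic stand-in, not the patent's YOLOv4 pipeline):

```python
import numpy as np

rng = np.random.default_rng(0)

# Stand-in for a backbone pretrained on a large external dataset:
# a frozen random projection followed by tanh.
W_backbone = rng.normal(size=(8, 4))

def features(x):
    return np.tanh(x @ W_backbone)  # frozen: never updated below

def train_head(X, y, lr=0.5, epochs=200):
    """Fine-tune only the classification head (logistic regression) on the
    frozen features, mirroring how transfer learning reuses the backbone
    and retrains the task-specific layers on a small labeled set."""
    w = np.zeros(W_backbone.shape[1])
    F = features(X)
    for _ in range(epochs):
        p = 1.0 / (1.0 + np.exp(-(F @ w)))   # sigmoid prediction
        w -= lr * F.T @ (p - y) / len(y)     # gradient step on the head only
    return w
```

With few labeled samples, updating only `w` converges quickly because the frozen features already carry structure learned elsewhere — the same reason the transfer-learned detector reaches a lower loss here.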
3. constructing a target tracking network, carrying out video tracking on the accurate navigation mark lamp target identified in the step 2, and correcting in real time;
3.1, constructing a target tracking network based on SiamFC: compared with deep-learning approaches, traditional target tracking algorithms are constrained either by substandard tracking quality or by tracking times and speeds that do not meet real-time requirements. As shown in Table 2, the KCF algorithm, which has the highest tracking speed, does not track well, while the TLD algorithm, which tracks well, needs up to 2 s per frame and cannot meet the requirements;
[Table 2: tracking quality and speed of traditional tracking algorithms — table image not reproduced]
therefore, after a joint comparison of tracking accuracy, overlap rate and tracking speed among algorithms in the target tracking field, as shown in Table 3, the SiamFC-3s algorithm is adopted for its better real-time tracking performance and higher processing speed under deep learning;
[Table 3: accuracy, overlap rate and speed comparison of tracking algorithms — table image not reproduced]
3.2, training a target video tracking network: the target tracking network realizes the subsequent tracking of the predicted target through the twin neural network obtained by training;
the twin neural network results model is shown in FIG. 3, with inputs comprising a dimension [127, 3 ]]With a larger candidate area image x having dimensions [255,255,3 ]]Both using the same transformation function
Figure 254513DEST_PATH_IMAGE008
Respectively, to generate a corresponding one [6, 128 ]]And one [22, 128 ]]The features are put into a formula:
Figure DEST_PATH_IMAGE009
and mixed by using a function g, and the obtained [17,17,1] characteristic response graph is output through full convolution. Searching a point with the highest response value in the response map, wherein the point is a corresponding area in the candidate image x, namely the position of the prediction target;
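The cross-correlation that produces the response map can be sketched for a single feature channel (the real SiamFC correlates 128-channel feature maps and adds a bias term; this minimal NumPy sketch omits both):

```python
import numpy as np

def cross_correlate(template, search):
    """Slide the template feature map over the larger search feature map and
    take the inner product at every offset, producing the response map."""
    th, tw = template.shape
    out = np.empty((search.shape[0] - th + 1, search.shape[1] - tw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(template * search[i:i + th, j:j + tw])
    return out

def locate(template, search):
    """Predicted target offset = arg-max of the response map."""
    resp = cross_correlate(template, search)
    return np.unravel_index(np.argmax(resp), resp.shape)
```

For the [6, 6, 128] and [22, 22, 128] embeddings described above, the analogous multi-channel correlation yields exactly a 17 × 17 response map (22 − 6 + 1 = 17).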
3.3, real-time correction: in the process of tracking the target in the beacon-light video, a detection model is introduced, judgment and correction are carried out once every hundred frames, and whether the beacon-light target has gone out of or come back into the frame is analyzed in real time. The real-time correction function is combined with image preprocessing, target detection and target tracking; the whole algorithm flow is shown in fig. 4. For each beacon-light image frame obtained from the network camera, the framed region of the dynamic target is first acquired by the target detection and target tracking modules; every hundred frames of the dynamic video, the real-time correction detection model judges and corrects the beacon-light image; if tracking is lost, five consecutive video frames are detected again; if a correct position can be obtained, the updated position information is returned to the target tracking module to re-frame the corresponding region; otherwise, the video image frame is returned to the target detection module so that the target is identified and detected anew;
when target tracking is lost, adding the real-time correction detection model allows tracking of the beacon-light target to be restored in time;
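The detect-track-correct loop of fig. 4 can be sketched as follows; `detect` and `track` are placeholders for the YOLOv4 detector and the SiamFC tracker (the 100-frame interval and the fall-back to detection follow the description above; everything else is illustrative):

```python
def run_tracker(frames, detect, track, correct_every=100):
    """detect(frame) -> box or None; track(frame, box) -> box or None.
    Returns the per-frame bounding box (None while the target is unlocated)."""
    box = None
    results = []
    for idx, frame in enumerate(frames):
        if box is None:
            box = detect(frame)              # (re)acquire via target detection
        else:
            box = track(frame, box)          # normal tracking step
            if box is not None and idx % correct_every == 0:
                # Periodic correction: trust the detector's verdict; a None
                # verdict sends the next frame back to the detection branch.
                box = detect(frame)
        results.append(box)
    return results
```

The key design point is that correction reuses the detector itself, so a drifting tracker is snapped back to a verified position instead of accumulating error.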
4. constructing the intelligent light-quality color detection network and frequency and period detection network, and measuring and analyzing the color, frequency, period and light intensity of the beacon light;
4.1, constructing the color detection network: chromaticity extraction mapping and color analysis: specifically, a statistical mean of the color information of image feature points is extracted from the beacon-light ROI obtained through target detection and tracking, and the color of the beacon light in the actual scene is obtained by calculating the average brightness of the 3 channels and referring to chromaticity coordinates;
the color information is mainly mapped through an RGB color model; for beacon-light quality, the color information needs to be mapped to the corresponding region specified in GB 12708-; for yellow, for example, the lower threshold is set to [15, 230, 230] and the upper threshold to [35, 255, 255]; the extracted color information is analyzed to judge the luminous color of the beacon light;
the color analysis uses the chromaticity coordinates of the corresponding beacon-light colors, such as red and white, to judge whether the luminous quality of the beacon light lies within the normal chromaticity range;
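A minimal sketch of the threshold-based color judgment follows; only the yellow range comes from the text above — the red range and the use of HSV-style triples are illustrative assumptions:

```python
import numpy as np

# (lower, upper) threshold triples per color; yellow matches the text above.
COLOR_RANGES = {
    "yellow": (np.array([15, 230, 230]), np.array([35, 255, 255])),
    "red":    (np.array([0, 200, 200]),  np.array([10, 255, 255])),  # assumed
}

def classify_color(roi):
    """Classify a beacon-light ROI by the statistical mean of its pixels:
    compute the per-channel mean, then test it against each color range."""
    mean = roi.reshape(-1, 3).mean(axis=0)
    for name, (lo, hi) in COLOR_RANGES.items():
        if np.all(mean >= lo) and np.all(mean <= hi):
            return name
    return "off"  # no range matched: treat as unlit / unknown
```

Averaging before thresholding, rather than thresholding per pixel, makes the judgment robust to a few saturated or noisy pixels inside the ROI.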
4.2, constructing the frequency and period detection network: recording and calculating the lighting period and lighting frequency: the time of each color change is recorded while counting the number of color changes within one on/off period;
the flow chart of the period and frequency detection algorithm is shown in fig. 5: the beacon-light ROI obtained by the target identification and tracking algorithm undergoes color detection and color identification in turn; for the beacon-light color obtained at the first judgment, the time a at that moment is recorded; if the luminous color of the beacon light subsequently changes, the current time b of the change is recorded; these two steps loop in turn, returning to color detection and color identification, and the analysis continues until recording over one full period is complete;
the lighting period and frequency are calculated mainly by recording the flashing ON time and the unlit OFF time of the beacon light, as in the white-light case of Table 4; knowing the lighting rhythm of the beacon light over a certain time span, the corresponding period and frequency are analyzed and calculated;
[Table 4: white-light ON/OFF timing example — table image not reproduced]
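Given the recorded toggle timestamps (each time a or b above marks one ON↔OFF change), the period and frequency follow directly; a sketch:

```python
def period_and_frequency(change_times):
    """change_times: monotonically increasing timestamps (s) at which the
    light toggled between ON and OFF. One full period spans two consecutive
    toggles (one ON interval plus one OFF interval); averaging over all
    complete cycles makes the estimate robust to timing jitter."""
    if len(change_times) < 3:
        raise ValueError("need at least one full cycle (three toggles)")
    cycles = [change_times[i + 2] - change_times[i]
              for i in range(len(change_times) - 2)]
    period = sum(cycles) / len(cycles)
    return period, 1.0 / period
```

For a light that is ON for 1 s and OFF for 3 s, toggles fall at 0, 1, 4, 5, 8, … s, giving a 4 s period and a 0.25 Hz frequency.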
4.3, light intensity detection: using the luminous-intensity formula

I = E·L² / cos θ

together with a laser range finder that records the distance from the light-source center of the beacon light under test to the detector receiving surface at that moment, the luminous intensity information of the beacon light at that moment is obtained;
in the luminous-intensity formula, E is the illuminance on the receiving surface of the illuminometer, I is the luminous intensity of the beacon light under test, L is the distance from the light-source center of the beacon light under test to the receiving surface of the illuminometer, and θ is the angle between the light beam in the measured direction of the beacon light and the normal of the receiving surface of the illuminometer;
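The rearranged inverse-square cosine law used above can be written directly in code (a sketch; units follow the text: lux, meters, candela):

```python
import math

def luminous_intensity(E, L, theta_deg=0.0):
    """I = E * L**2 / cos(theta): recover the luminous intensity I (cd) from
    the illuminance E (lx) measured at distance L (m), with theta the angle
    between the beam and the normal of the receiving surface."""
    return E * L ** 2 / math.cos(math.radians(theta_deg))
```

At E = 2 lx, L = 10 m and normal incidence this gives I = 200 cd; tilting the receiving surface to 60° doubles the intensity inferred from the same reading.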
5. integrating the intelligent light-quality network analysis results, combining the light-intensity information, outputting the data and completing the detection process.

Claims (6)

1. A machine-vision-based intelligent detection method for beacon light quality, characterized by comprising the following steps:
a, acquiring beacon-light video data, performing video-frame extraction on the video data, and applying image environment simulation and image-blurring preprocessing to it;
b, constructing a target detection and identification network, and putting the navigation mark lamp video data set in the step a for target detection and identification network training;
c, constructing a target tracking network, carrying out video tracking on the navigation mark lamp training target accurately identified in the step b, and correcting in real time;
d, constructing a color detection network and a frequency and period detection network of the intelligent lamp quality, and measuring and analyzing the color, frequency, period and light intensity detection of the beacon light;
and e, integrating the intelligent lamp quality network analysis results, combining light intensity information, outputting data and finishing the detection process.
2. The machine-vision-based intelligent detection method for beacon light quality as claimed in claim 1, wherein the video extraction in step a is: reviewing videos of the beacon light shot on site under different light characteristics, and selecting representative beacon-light video frames whose background environments differ markedly;
the image environment simulation: gamma correction is adopted to adjust the gray values of the image so as to simulate abnormal weather conditions such as fog and night;
the image blurring: median filtering is adopted to blur the image, simulating the influence of inaccurate camera focus or other shooting factors.
3. The machine-vision-based intelligent detection method for beacon light quality as claimed in claim 1, wherein the target detection and identification network in step b comprises: a YOLOv4 deep learning network selected for its higher detection speed on the premise of guaranteed detection accuracy;
training the target detection and identification network comprises: direct learning and transfer learning; retraining on a large number of samples from other data sets alleviates the small-sample problem of the original data set, and the beacon-light region is framed by the identification algorithm.
4. The machine-vision-based intelligent detection method for beacon light quality as claimed in claim 1, wherein the target tracking network in step c: the SiamFC-3s algorithm is adopted for its good real-time tracking performance and high processing speed under deep learning;
training the target video tracking network comprises: the target video tracking network realizes subsequent tracking of the predicted target through a twin neural network obtained by training;
the real-time correction comprises: in the process of tracking the beacon-light video target, a detection model is introduced, judgment and correction are carried out once every hundred frames, and whether the beacon-light target goes out of or back into the frame is analyzed in real time.
5. The machine-vision-based intelligent detection method for beacon light quality as claimed in claim 1, wherein constructing the color detection network in step d comprises: chromaticity extraction mapping and color analysis; specifically, a statistical mean of the color information of image feature points is extracted from the beacon-light ROI obtained through target detection and tracking, and the color of the beacon light in the actual scene is obtained by calculating the average brightness of the 3 channels and referring to chromaticity coordinates;
constructing the frequency and period detection network comprises: recording and calculating the lighting period and lighting frequency, specifically recording the time of each color change while counting the number of color changes within one on/off period;
the light intensity detection: using the luminous-intensity formula I = E·L²/cos θ together with a laser range finder that records the distance from the light-source center of the beacon light under test to the detector receiving surface at that moment, the luminous intensity information of the beacon light at that moment is obtained.
6. The intelligent detection method for lamplight quality of beacon lights based on machine vision as claimed in claim 4, wherein the twin neural network takes as input a template image z of dimensions [127, 127, 3] and a larger candidate area image x of dimensions [255, 255, 3]; both are processed by the same transformation function φ, and the position of the target on the candidate image is predicted from the final full-convolution result.
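The shared-transform-plus-full-convolution prediction can be illustrated with a naive cross-correlation sketch in the SiamFC style. This is a simplification: in a real twin network the transformation function is a learned convolutional backbone applied to both inputs, which is omitted here, and the feature maps are treated as single-channel arrays.

```python
import numpy as np

def cross_correlate(feat_z, feat_x):
    """Slide the template embedding feat_z over the search embedding feat_x
    (valid positions only), producing a score map whose peak marks the
    best match -- the 'full convolution result' of the twin network."""
    zh, zw = feat_z.shape
    xh, xw = feat_x.shape
    score = np.empty((xh - zh + 1, xw - zw + 1))
    for i in range(score.shape[0]):
        for j in range(score.shape[1]):
            score[i, j] = np.sum(feat_z * feat_x[i:i + zh, j:j + zw])
    return score

def locate_target(feat_z, feat_x):
    """Predicted target position = argmax of the score map."""
    score = cross_correlate(feat_z, feat_x)
    return np.unravel_index(np.argmax(score), score.shape)
```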
CN202111317591.1A 2021-11-09 2021-11-09 Intelligent beacon light quality detection method based on machine vision Pending CN114022820A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202111317591.1A CN114022820A (en) 2021-11-09 2021-11-09 Intelligent beacon light quality detection method based on machine vision

Publications (1)

Publication Number Publication Date
CN114022820A true CN114022820A (en) 2022-02-08

Family ID: 80062308

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202111317591.1A Pending CN114022820A (en) 2021-11-09 2021-11-09 Intelligent beacon light quality detection method based on machine vision

Country Status (1)

Country Link
CN (1) CN114022820A (en)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN115902685A (en) * 2022-11-14 2023-04-04 集美大学 Optical test system special for navigation mark lamp
CN115902685B (en) * 2022-11-14 2023-07-18 集美大学 Optical test system special for navigation mark lamp

Similar Documents

Publication Publication Date Title
US20160260306A1 (en) Method and device for automated early detection of forest fires by means of optical detection of smoke clouds
CN110378865A (en) A kind of greasy weather visibility intelligence hierarchical identification method and system under complex background
CN104168478B (en) Based on the video image color cast detection method of Lab space and relevance function
CN103208126A (en) Method for monitoring moving object in natural environment
CN101115131A (en) Pixel space relativity based image syncretizing effect real-time estimating method and apparatus
CN112361990B (en) Laser pattern extraction method and device, laser measurement equipment and system
CN109741307A (en) Veiling glare detection method, veiling glare detection device and the veiling glare detection system of camera module
CN109120919A (en) A kind of automatic analysis system and method for the evaluation and test of picture quality subjectivity
CN113327255A (en) Power transmission line inspection image processing method based on YOLOv3 detection, positioning and cutting and fine-tune
Zhang et al. Application research of YOLO v2 combined with color identification
CN107862333A (en) A kind of method of the judgment object combustion zone under complex environment
CN110610485A (en) Ultra-high voltage transmission line channel hidden danger early warning method based on SSIM algorithm
CN112613438A (en) Portable online citrus yield measuring instrument
CN114445330A (en) Method and system for detecting appearance defects of components
CN112927233A (en) Marine laser radar and video combined target capturing method
CN114051093B (en) Portable navigation mark lamp field detection system based on image processing technology
CN114022820A (en) Intelligent beacon light quality detection method based on machine vision
CN110120073B (en) Method for guiding recovery of unmanned ship based on lamp beacon visual signal
CN111263044A (en) Underwater shooting and image processing device and method
CN104182972B (en) Ball firing automatic scoring round target system and method under a kind of field complex illumination
CN107016343A (en) A kind of traffic lights method for quickly identifying based on Bel's format-pattern
CN108182679B (en) Haze detection method and device based on photos
CN111612797B (en) Rice image information processing system
CN109900358A (en) A kind of Sky Types identifying system and method based on image luminance information
CN115187568A (en) Power switch cabinet state detection method and system

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination