LU508565B1 - Intelligent image enhancement and real-time visual recognition system - Google Patents

Intelligent image enhancement and real-time visual recognition system

Info

Publication number
LU508565B1
LU508565B1
Authority
LU
Luxembourg
Prior art keywords
image
module
real
visual recognition
information
Prior art date
Application number
LU508565A
Other languages
German (de)
Inventor
Yanpeng Zhang
Original Assignee
Univ Harbin Science & Tech
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Univ Harbin Science & Tech filed Critical Univ Harbin Science & Tech
Priority to LU508565A priority Critical patent/LU508565B1/en
Application granted granted Critical
Publication of LU508565B1 publication Critical patent/LU508565B1/en

Classifications

    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N7/00Television systems
    • H04N7/18Closed-circuit television [CCTV] systems, i.e. systems in which the video signal is not broadcast
    • H04N7/188Capturing isolated or intermittent images triggered by the occurrence of a predetermined event, e.g. an object reaching a predetermined position
    • GPHYSICS
    • G06COMPUTING OR CALCULATING; COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/20Image preprocessing
    • GPHYSICS
    • G06COMPUTING OR CALCULATING; COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/70Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V10/82Arrangements for image or video recognition or understanding using pattern recognition or machine learning using neural networks

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Evolutionary Computation (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Artificial Intelligence (AREA)
  • Health & Medical Sciences (AREA)
  • Computing Systems (AREA)
  • Databases & Information Systems (AREA)
  • General Health & Medical Sciences (AREA)
  • Medical Informatics (AREA)
  • Software Systems (AREA)
  • Signal Processing (AREA)
  • Image Analysis (AREA)

Abstract

The present invention provides an intelligent image enhancement and real-time visual recognition system. An image acquisition module acquires images of a monitoring area, an image enhancement module corrects the acquired images to improve their clarity, a central processing unit preprocesses the acquired images, and a real-time visual recognition module analyzes targets in the acquired images and transmits results to the central processing unit. The present invention has low delay and high recognition accuracy rate, and is able to accurately enhance the original images.

Description

INTELLIGENT IMAGE ENHANCEMENT AND REAL-TIME
VISUAL RECOGNITION SYSTEM
TECHNICAL FIELD
The present invention relates to the technical field of video image data processing, and in particular to an intelligent image enhancement and real-time visual recognition system.
BACKGROUND
In today's social environment, image surveillance systems are widely used on various occasions. These vision systems monitor a designated area day and night and store the surveillance records on readable media. When a user needs to query whether a specific object appeared in the designated area during a certain time period, all the surveillance records stored for that area and period must be retrieved and reviewed frame by frame through manual methods to analyze the situation of the specific object.
When retrieving existing surveillance images, the complexity of the image information imposes a large workload on the system, making it impossible to accurately extract the effective information in the surveillance images, which in turn makes it impossible to promptly select the corresponding surveillance images when retrieval is required. In an environment with multiple different active areas, in order to ensure the rapid flow of materials and personnel, it is usually not advisable to set up manual security checks at each entrance, exit, or active area, so as not to affect logistics efficiency and normal personnel communication. Existing image enhancement methods include histogram equalization, contrast stretching, Gamma correction, and histogram specification; in addition, a CNN (convolutional neural network) can also be used for image enhancement. However, the above methods enhance the image uniformly in all regions, that is, the enhancement effect is the same in the feature regions and the other regions of the image,
which cannot well reflect the feature regions of the original image; the image enhancement effect is poor, and real-time monitoring cannot be achieved.
Intelligent systems can be used to enhance images, set up multi-camera modules for real-time monitoring of multiple areas, and improve both color and the range of clarity enhancement, enhancing the display effects of videos and images.
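The conventional, region-uniform enhancement methods named in the background above can be illustrated with a minimal sketch. The helper names `gamma_correct` and `hist_equalize` are illustrative, not part of the patent; both apply one global transform to every region of the image, which is exactly the limitation the invention addresses:

```python
import numpy as np

def gamma_correct(gray, gamma=0.5):
    """Power-law (Gamma) correction: normalize to [0, 1], raise to
    the given power, and rescale back to 8-bit gray values."""
    g = (gray.astype(np.float64) / 255.0) ** gamma
    return (g * 255.0).astype(np.uint8)

def hist_equalize(gray):
    """Histogram equalization: map gray levels through the normalized
    cumulative histogram so that levels spread over the full range."""
    hist = np.bincount(gray.ravel(), minlength=256)
    cdf = hist.cumsum().astype(np.float64)
    cdf = (cdf - cdf.min()) / max(cdf.max() - cdf.min(), 1.0) * 255.0
    return cdf[gray].astype(np.uint8)
```

Both transforms are applied identically at every pixel, regardless of whether it lies in a feature region, which is why such methods cannot emphasize the feature regions of the original image.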
SUMMARY
The examples of the present invention provide an intelligent image enhancement and real-time visual recognition system, thereby solving the problems in the related art: the system enhances images, improves the range of clarity enhancement, and enhances the display effects of videos and images.
According to one aspect of the examples of the present invention, an intelligent image enhancement and real-time visual recognition system is provided, including: an image acquisition module, an image enhancement module, a central processing unit, a real-time visual recognition module, a data storage module and a camera calibration module; the image acquisition module using a plurality of camera modules to collect images in different areas, and the areas collected by the plurality of camera modules are adjacent; the image enhancement module acquiring gray component images in original images, gray values of the pixel points of the gray component images being corrected through a CPLD (complex programmable logic device) processor according to a relative light and dark relationship between pixel points of the gray component images and the pixel points in the gray component images after Gaussian smoothing, and the gray component images after the gray value correction being synthesized to obtain an enhanced image; the central processing unit being used for preprocessing pictures and videos obtained by the image acquisition module, and an AI (artificial intelligence) computing module using the YOLOV4 (you only look once version 4) algorithm to identify the same type of targets in the preprocessed pictures and videos;
the real-time visual recognition module recognizing the images collected by the image acquisition module through computer vision methods, and recognizing activity trajectories of monitoring targets in the areas and feature information of detection targets; the data storage module storing historical enhanced images generated by the image enhancement module and image data of the real-time visual recognition module for implementing a simple storage system for large-scale small files, reducing a frequency of disk IOs (input/output operations), improving image access efficiency, and for storing image information processed by the image processing module; and the camera calibration module being electrically connected to the plurality of camera modules, being used for controlling the plurality of camera modules to perform calibration burning under a single-color temperature light source, selecting at least one single-color temperature light source corresponding to color temperatures, and making the plurality of camera modules complete the calibration of all calibration items.
Further, in the intelligent image enhancement and real-time visual recognition system, the camera modules are equipped with visual scanning devices and language interaction devices, scanning and collecting information on the detection targets in predetermined areas; the first camera module monitors in the first predetermined area, and the plurality of second camera modules monitor in the plurality of second predetermined areas, the first predetermined area is adjacent to the plurality of second predetermined areas, and the predetermined areas cover an entire monitoring area to perform real-time monitoring of the entire monitoring area.
Further, in the intelligent image enhancement and real-time visual recognition system, the information scanned by the plurality of camera modules includes dynamic information and static information; the static information are the features of the detection targets, and the dynamic information are the activity trajectories of the detection targets; and the monitoring targets can be prompted through the language interaction devices arranged by the camera modules, and the language interaction devices are used for sending voice interaction information to confirm the information of the monitoring targets.
Further, in the intelligent image enhancement and real-time visual recognition system, when performing image enhancement, the CPLD processor corrects the gray values of the pixel points of the gray component images, places the gray component images in a logarithmic domain for processing, and obtains the enhanced gray component images, and a Gaussian template is used for convolving the enhanced gray component images to obtain low-pass filtered gray component images; the low-pass filtered gray component images are placed in the logarithmic domain for processing to obtain the gray component images after Gaussian smoothing; the gray values of the pixel points of the enhanced gray component images are corrected according to the relative light and dark relationship.
Further, in the intelligent image enhancement and real-time visual recognition system, the central processing unit uses YOLOV4 as a training algorithm implanted in the AI computing module; the main components of the YOLOV4 network are CSPDarkNet53+SPP+PANet+YOLOv3-head, where CSP is used to enhance the learning ability of the CNN (convolutional neural network); Darknet53 is a backbone convolutional network used for initializing the network weights, and the structure of Darknet53 has 5 large residual blocks for fusing the feature information of feature maps of different sizes; and the central processing unit processes other information in the received data images according to the requirements of the overall function to ensure coordinated operation of the entire system.
Further, in the intelligent image enhancement and real-time visual recognition system, the data storage module associates each retrieved image with an index unit, and each of the index units includes a plurality of data items, data items including image number, image offset, and image size; each of the index units corresponds to a data unit in a data area, and each of the data units includes a plurality of data items, the plurality of data items including image number, image size, image data, and padding data.
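The index-unit/data-unit layout described above can be sketched as a fixed-size binary packing. The field widths, the 4096-byte block padding, and the helper names below are illustrative assumptions, not specified by the patent; only the field lists (image number, image offset, image size for the index unit; image number, image size, image data, padding data for the data unit) come from the description:

```python
import struct

# Assumed layouts (not from the patent):
#   index unit: image number (u64), image offset (u64), image size (u32)
#   data unit:  image number (u64), image size (u32), image data, zero padding
INDEX_FMT = "<QQI"   # little-endian: number, offset, size
DATA_HDR_FMT = "<QI" # little-endian: number, size
BLOCK = 4096         # assumed block boundary for the padding data

def pack_index(number, offset, size):
    """Serialize one index unit."""
    return struct.pack(INDEX_FMT, number, offset, size)

def pack_data_unit(number, data, block=BLOCK):
    """Serialize one data unit, padded out to a block boundary so that
    many small image files can share one large sequential file and each
    image is fetched with a single disk IO."""
    raw = struct.pack(DATA_HDR_FMT, number, len(data)) + data
    pad = (-len(raw)) % block
    return raw + b"\x00" * pad
```

Grouping many small images into block-aligned units of one large file is what reduces the frequency of disk IOs: the index unit locates the image, and one aligned read returns it.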
Further, in the intelligent image enhancement and real-time visual recognition system, the single-color temperature light source selected by the camera calibration module covers a part of the cameras in the plurality of camera modules, and calibrates the corresponding cameras that need to be calibrated in the covered part of the cameras for the corresponding calibration items.
Further, in the intelligent image enhancement and real-time visual recognition system, the real-time visual recognition module recognizes images under a stable light source, obtains the information of the detection targets, and transmits the information of the detection targets to a central controller.
Compared with the prior art, the beneficial effects of the present invention are as follows. By placing the plurality of camera modules in different predetermined ranges of the monitoring area, multi-range real-time information monitoring and scanning of the monitoring targets is performed. The gray values of the pixel points in the gray component images are compared with the gray values of the pixel points in the image after Gaussian smoothing centered on them; by performing gray correction on the gray component images in this way, it is not necessary to consider the influence of pixel points farther away on the original pixel points, which more accurately reflects the differences of the pixel points in the reflection component, so that the image color obtained after enhancement processing remains constant and the contrast and clarity are higher. The latest object detection algorithm in deep learning, YOLOV4, is adopted to ensure the accuracy of object recognition and feature extraction and to analyze and predict the activity trajectories of the target objects, assisting in the accurate identification of the monitoring target location.
BRIEF DESCRIPTION OF THE FIGURES
The accompanying figures, which are a part of the specification, describe the examples of the present invention and, together with the description, serve to explain the principles of the present invention.
With reference to the accompanying figures, the present invention can be more clearly understood according to the following detailed description, in which:
FIG. 1 is a structural block diagram of an intelligent image enhancement and real-time visual recognition system provided in the present invention; and
FIG. 2 is a workflow diagram of an intelligent image enhancement and real-time visual recognition system provided in the present invention.
In FIG. 1: 100-image acquisition module; 200-image enhancement module; 300-central processing unit; 400-real-time visual recognition module; 500-camera calibration module; and 600-data storage module.
DESCRIPTION OF THE INVENTION
Now, various exemplary examples of the present invention will be described in detail with reference to the accompanying figures. It is to be noted that: unless otherwise specified, the relative arrangement, numerical expressions, and numerical values of the components and steps described in these examples do not limit the scope of the present invention.
At the same time, it is to be understood that, for the convenience of description, the dimensions of the various parts shown in the figures are not drawn to scale.
The description of at least one exemplary example is merely illustrative and is not intended to limit the present invention or its application or use in any way.
Techniques, methods, and equipment known to those of ordinary skill in the relevant art may not be discussed in detail, but where appropriate, such techniques, methods, and equipment should be regarded as part of the specification.
It is to be noted that: similar reference numerals and letters denote similar items in the following figures, and therefore, once an item is defined in one figure, it does not need to be further discussed in subsequent figures.
In addition, the technical solutions between the various examples of the present invention can be combined with each other, but it must be based on the fact that those of ordinary skill in the art can implement them. When there is a contradiction or impossibility in the combination of technical solutions, it is to be considered that such a combination of technical solutions does not exist and is not within the protection scope of the present invention.
It is to be noted that all directional indications (such as up, down, left, right, front, back, etc.) in the examples of the present invention are only used to explain the relative positional relationship and movement of the various components under a certain specific posture (as shown in the figures). If the specific posture changes, the directional indication will also change accordingly.
The following describes an intelligent image enhancement and real-time visual recognition system according to an exemplary example of the present invention in conjunction with FIGS. 1 and 2. It is to be noted that the above application scenarios are only shown for ease of understanding the spirit and principles of the present invention, and the examples of the present invention are not limited in this regard. On the contrary, the examples of the present invention can be applied to any applicable scenarios.
The present invention also provides an intelligent image enhancement and real-time visual recognition system.
FIG. 1 schematically shows a structural block diagram of an intelligent image enhancement and real-time visual recognition system according to an example of the present invention. As shown in FIG. 1, the system includes: an image acquisition module 100, an image enhancement module 200, a central processing unit 300, a real-time visual recognition module 400, a data storage module 600 and a camera calibration module 500; the image acquisition module 100 using a plurality of camera modules to collect images in different areas, and the areas collected by the plurality of camera modules are adjacent; the image enhancement module 200 acquiring gray component images in original images, gray values of the pixel points of the gray component images being corrected through a CPLD processor according to a relative light and dark relationship between pixel points of the gray component images and the pixel points in the gray component images after Gaussian smoothing, and the gray component images after the gray value correction being synthesized to obtain an enhanced image; the central processing unit 300 being used for preprocessing pictures and videos obtained by the image acquisition module 100, and an AI computing module using the
YOLOV4 algorithm to identify the same type of targets in the preprocessed pictures and videos; the real-time visual recognition module 400 recognizing the images collected by the image acquisition module 100 through computer vision methods, and recognizing activity trajectories of monitoring targets in the areas and feature information of detection targets; the data storage module 600 storing historical enhanced images generated by the image enhancement module 200 and image data of the real-time visual recognition module 400 for implementing a simple storage system for large-scale small files, reducing a frequency of disk IOs, improving image access efficiency, and for storing image information processed by the image processing module; and the camera calibration module 500 being electrically connected to the plurality of camera modules, being used for controlling the plurality of camera modules to perform calibration burning under a single-color temperature light source, selecting at least one single-color temperature light source corresponding to color temperatures, and making the plurality of camera modules complete the calibration of all calibration items.
Specifically, the image acquisition module places the plurality of camera modules in different predetermined ranges of the monitoring area to perform multi-range real-time information scanning of the detection targets. The camera modules transmit the scanned information data of the monitoring targets to the image enhancement module 200. The image enhancement module 200, through the CPLD processor, according to the relative light and dark relationship between pixel points of the gray component images and the pixel points in the gray component images after
Gaussian smoothing, corrects the gray values of the pixel points of the gray component images, and synthesizes the gray component images after gray value correction to obtain the enhanced image, and transmits the enhanced image to the central processing unit 300.
Further, the central processing unit 300 recognizes the images collected by the image acquisition module 100 through a real-time visual recognition system using computer vision methods to recognize the monitoring targets in the areas, thereby enabling the tracking of the activity trajectories of the detection targets and the feature information of the detection targets and preprocessing of the images and videos obtained by the image acquisition module 100, and the AI computing module uses the
YOLOV4 algorithm to recognize the same type of targets in the preprocessed pictures and videos and determines whether to proceed with the next instruction.
Further, the central processing unit 300 stores the organized data in the data storage module 600; the data storage module 600 stores the historical enhanced images generated by the image enhancement module 200 for implementing a simple storage system for large-scale small files, reducing the frequency of disk IOs and improving the image access efficiency.
Further, this example also includes the camera calibration module 500 electrically connected to all the camera modules for controlling the plurality of camera modules to perform calibration burning under a single-color temperature light source.
As can be seen from the above, by placing the plurality of camera modules in different predetermined ranges of the monitoring area, multi-range real-time information monitoring and scanning of the monitoring targets is performed. The gray values of the pixel points in the gray component images are compared with the gray values of the pixel points in the image after Gaussian smoothing centered on them; by performing gray correction on the gray component images in this way, it is not necessary to consider the influence of pixel points farther away on the original pixel points, which more accurately reflects the differences of the pixel points in the reflection component, so that the image color obtained after enhancement processing remains constant and the contrast and clarity are higher. The latest object detection algorithm in deep learning, YOLOV4, is adopted to ensure the accuracy of object recognition and feature extraction and to analyze and predict the activity trajectories of the target objects, assisting in the accurate identification of the monitoring target location. Finally, the historical enhanced images generated by the image enhancement module 200 are stored for implementing the simple storage system for large-scale small files, reducing the frequency of disk IOs and improving the image access efficiency.
Specifically, the camera modules are equipped with visual scanning devices and language interaction devices, scanning and collecting information on the detection targets in predetermined areas; the first camera module monitors in the first predetermined area, and the plurality of second camera modules monitor in the plurality of second predetermined areas, the first predetermined area is adjacent to the plurality of second predetermined areas, and the predetermined areas cover an entire monitoring area to perform real-time monitoring of the entire monitoring area.
As can be seen from the above, since the camera modules perform monitoring scans within their respective predetermined ranges, normal personnel cross-regional communication is not affected; at the same time, the camera modules in different areas share their respective area scanning information in real time, allowing each region’s camera module to determine whether there is the same monitoring target based on the multi-area shared scanning information and thereby form its activity trajectory, and then determine whether there is an obvious anomaly in the monitoring target based on the activity trajectory. When the possibility is greater than the threshold, a voice interaction message is sent, and the voice interaction message is used to prompt the monitoring target to provide identity authentication information to avoid misjudgment, thereby fully realizing all-round real-time monitoring based on visual scanning and trajectory recognition.
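The cross-area sharing and voice-prompt logic described above can be sketched as follows. The observation format, the helper names, and the 0.8 threshold are illustrative assumptions, not values from the patent: camera modules publish (target, area, timestamp) sightings, a trajectory is the time-ordered list of areas a target was seen in, and a voice prompt is issued only when the estimated anomaly possibility exceeds the threshold.

```python
from collections import defaultdict

def build_trajectories(observations):
    """Merge shared scanning information from all camera modules.
    observations: iterable of (target_id, area, timestamp) tuples.
    Returns {target_id: [area, ...]} ordered by timestamp."""
    traj = defaultdict(list)
    for target_id, area, ts in sorted(observations, key=lambda o: o[2]):
        traj[target_id].append(area)
    return dict(traj)

def needs_voice_prompt(anomaly_score, threshold=0.8):
    """Prompt the monitoring target for identity authentication only
    when the anomaly possibility exceeds the threshold (avoids
    misjudgment from a single ambiguous observation)."""
    return anomaly_score > threshold
```

Because each camera only scans its own predetermined range and merging happens on the shared observations, normal cross-regional movement of personnel is never physically obstructed.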
Specifically, the information scanned by the plurality of camera modules includes dynamic information and static information; the static information are the features of the detection targets, and the dynamic information are the activity trajectories of the detection targets; and the monitoring targets can be prompted through the language interaction devices arranged by the camera modules, and the language interaction devices are used for sending voice interaction information to confirm the information of the monitoring targets.
As can be seen from the above, the camera modules scan and detect the dynamic information and the static information of the targets to determine whether there is an obvious anomaly in the monitoring targets. When the possibility is greater than the threshold, a voice interaction message is sent, and the voice interaction message is used to prompt the monitoring targets to provide identity authentication information.
Specifically, when performing image enhancement, the CPLD processor corrects the gray values of the pixel points of the gray component images, places the gray component images in a logarithmic domain for processing, and obtains the enhanced gray component images, and a Gaussian template is used for convolving the enhanced gray component images to obtain low-pass filtered gray component images; the low-pass filtered gray component images are placed in the logarithmic domain for processing to obtain the gray component images after Gaussian smoothing; the gray values of the pixel points of the enhanced gray component images are corrected according to the relative light and dark relationship.
As can be seen from the above, initializing the gray values of the pixel points in the enhanced gray component images to constants includes: initializing the gray values of the pixel points in the enhanced gray component images to the averages of the gray values of the pixel points in the enhanced gray component images, and by performing gray correction on the gray component images, it is not necessary to consider the influence of pixel points farther away on the original pixel points, which can more accurately reflect the difference of the pixel points on the reflection component, so that the image color obtained after enhancement processing remains constant, and the contrast and clarity are higher.
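The log-domain correction described above resembles a single-scale Retinex step: the gray component image is taken into the logarithmic domain, convolved with a Gaussian template to get the smoothed image, and each pixel is corrected according to its light/dark relationship to the smoothed value centered on it. The sketch below is a minimal reading under that interpretation; the function names, the FFT-based convolution, and the rescaling to 0-255 are illustrative assumptions, not the patent's exact procedure.

```python
import numpy as np

def gaussian_kernel(size=15, sigma=3.0):
    """2-D Gaussian template, normalized to sum to 1."""
    ax = np.arange(size) - size // 2
    xx, yy = np.meshgrid(ax, ax)
    k = np.exp(-(xx ** 2 + yy ** 2) / (2.0 * sigma ** 2))
    return k / k.sum()

def _smooth(img, kernel):
    """Circular convolution via FFT, preserving the image size."""
    h, w = img.shape
    kh, kw = kernel.shape
    padded = np.zeros((h, w))
    padded[:kh, :kw] = kernel
    padded = np.roll(padded, (-(kh // 2), -(kw // 2)), axis=(0, 1))
    return np.real(np.fft.ifft2(np.fft.fft2(img) * np.fft.fft2(padded)))

def enhance_gray(gray, size=15, sigma=3.0):
    """Correct a gray component image in the logarithmic domain:
    each pixel keeps only its relative light/dark relationship to the
    Gaussian-smoothed neighborhood centered on it."""
    log_i = np.log1p(np.asarray(gray, dtype=np.float64))  # log domain
    log_s = _smooth(log_i, gaussian_kernel(size, sigma))  # Gaussian smoothing
    r = log_i - log_s  # relative light/dark vs. the smoothed image
    lo, hi = r.min(), r.max()
    if hi - lo < 1e-9:  # flat image: no relative contrast to stretch
        return np.full_like(r, 127.5)
    return (r - lo) / (hi - lo) * 255.0  # rescale to an 8-bit gray range
```

Because the correction depends only on the Gaussian-weighted neighborhood, distant pixel points contribute negligibly, which matches the stated benefit that the influence of pixel points farther away need not be considered.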
Specifically, the central processing unit 300 uses YOLOV4 as a training algorithm implanted in the AI computing module; the main components of the YOLOV4 network are CSPDarkNet53+SPP+PANet+YOLOv3-head, where CSP is used to enhance the learning ability of the CNN; Darknet53 is a backbone convolutional network used for initializing the network weights, and the structure of Darknet53 has 5 large residual blocks for fusing the feature information of feature maps of different sizes; and the central processing unit processes other information in the received data images according to the requirements of the overall function to ensure coordinated operation of the entire system.
As can be seen from the above, the YOLOV4 object detection algorithm in deep learning is adopted, and the advanced feature extraction algorithm SIFT is then called to ensure the accuracy of object recognition and feature extraction, and the predicted activity trajectory of the object is analyzed to assist in the accurate identification of the specific object location. By carrying the portable embedded AI platform NVIDIA Xavier NX, the computing speed is further improved, ensuring real-time performance and maintaining low power consumption.
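Assuming the common Darknet53 layout, each of the 5 large residual stages begins with a stride-2 downsampling convolution, so the feature maps that SPP and PANet later fuse shrink by half at every stage. A small sketch of the resulting sizes for a typical 608x608 YOLOV4 input (the function name is illustrative, and the stride-2 layout is an assumption about the standard architecture, not stated in the patent):

```python
def darknet53_feature_sizes(input_size=608, stages=5):
    """Spatial size of the feature map after each of the backbone's
    residual stages, assuming each stage starts with a stride-2
    downsampling convolution (the usual Darknet53 layout)."""
    sizes = []
    size = input_size
    for _ in range(stages):
        size //= 2  # one stride-2 downsampling per stage
        sizes.append(size)
    return sizes
```

For a 608x608 input this gives feature maps of 304, 152, 76, 38, and 19 pixels per side; the three smallest scales are the ones the YOLOv3-style head typically predicts on, which is what allows targets of different sizes to be fused and detected.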
Specifically, the single-color temperature light source selected by the camera calibration module 500 covers a part of the cameras in the plurality of camera modules, and calibrates the corresponding cameras that need to be calibrated in the covered part of the cameras for the corresponding calibration items.
As can be seen from the above, only one type of single-color temperature LED lamp needs to be integrated in each single-color temperature light source, so the cost is low. Even if some of the LED lamps are damaged, only the affected single-color temperature light sources need to be replaced, without affecting the other single-color temperature light sources; all LED lamps can be fully utilized, and the utilization rate of the light source is high.
The replacement between single-color temperature light sources can be realized by manual operation of the inspector, which is convenient and easy to operate.
Specifically, the real-time visual recognition module 400 recognizes the recognition image by providing a stable light source, obtains the information of the detection targets, and transmits the information of the detection targets to a central controller.
As can be seen from the above, the real-time visual recognition module 400 uses computer vision methods to perform preset value comparison and recognition on the monitoring data online, and transmits the compared and recognized data to the central processing unit 300 for data storage, which has a very high recognition accuracy rate.
Those skilled in the art will readily envision other examples of the present invention after considering the specification and practicing the invention disclosed herein. The present invention is intended to cover any variations, uses, or adaptations of the present invention that follow the general principles of the present invention and include common knowledge or conventional technical means in the technical field not disclosed in the present invention. The specification and examples are to be regarded as exemplary only, with the true scope and spirit of the present invention being indicated by the following claims.
It is to be understood that the present invention is not limited to the precise structure described above and illustrated in the figures, and that various modifications and changes can be made without departing from its scope. The scope of the present invention is limited only by the appended claims.

Claims (8)

CLAIMS
1. An intelligent image enhancement and real-time visual recognition system, comprising: an image acquisition module, an image enhancement module, a central processing unit, a real-time visual recognition module, a data storage module and a camera calibration module; the image acquisition module using a plurality of camera modules to collect images in different areas, the areas covered by the plurality of camera modules being adjacent; the image enhancement module acquiring gray component images from original images, gray values of pixel points of the gray component images being corrected by a CPLD (complex programmable logic device) processor according to a relative light and dark relationship between the pixel points of the gray component images and the corresponding pixel points of the gray component images after Gaussian smoothing, and the gray component images after gray value correction being synthesized to obtain an enhanced image; the central processing unit being used for preprocessing pictures and videos obtained by the image acquisition module, and an AI (artificial intelligence) computing module using the YOLOv4 (you only look once, version 4) algorithm to identify targets of the same type in the preprocessed pictures and videos; the real-time visual recognition module recognizing the images collected by the image acquisition module through computer vision methods, and recognizing activity trajectories of monitoring targets in the areas and feature information of detection targets; the data storage module storing historical enhanced images generated by the image enhancement module and image data of the real-time visual recognition module, implementing a simple storage system for large numbers of small files that reduces the frequency of disk IOs (input/output operations) and improves image access efficiency, and storing image information processed by the image enhancement module; and the camera calibration module being electrically connected to the plurality of camera modules and being used for controlling the plurality of camera modules to perform calibration burning under a single-color temperature light source, selecting at least one single-color temperature light source corresponding to the required color temperatures, and making the plurality of camera modules complete the calibration of all calibration items.
2. The intelligent image enhancement and real-time visual recognition system according to claim 1, wherein the camera modules are equipped with visual scanning devices and language interaction devices for scanning and collecting information on the detection targets in predetermined areas; a first camera module monitors a first predetermined area, and a plurality of second camera modules monitor a plurality of second predetermined areas; the first predetermined area is adjacent to the plurality of second predetermined areas, and the predetermined areas together cover an entire monitoring area so as to perform real-time monitoring of the entire monitoring area.
3. The intelligent image enhancement and real-time visual recognition system according to claim 2, wherein the information scanned by the plurality of camera modules comprises dynamic information and static information; the static information comprises the features of the detection targets, and the dynamic information comprises the activity trajectories of the detection targets; and the monitoring targets can be prompted through the language interaction devices provided on the camera modules, the language interaction devices being used for sending voice interaction information to confirm the information of the monitoring targets.
4. The intelligent image enhancement and real-time visual recognition system according to claim 1, wherein, when performing image enhancement, the CPLD processor corrects the gray values of the pixel points of the gray component images by placing the gray component images in a logarithmic domain for processing to obtain enhanced gray component images; a Gaussian template is used for convolving the enhanced gray component images to obtain low-pass filtered gray component images; the low-pass filtered gray component images are placed in the logarithmic domain for processing to obtain the gray component images after Gaussian smoothing; and the gray values of the pixel points of the enhanced gray component images are corrected according to the relative light and dark relationship.
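The log-domain correction described in this claim resembles a single-scale-Retinex-style operation: compare each pixel's logarithmic gray value with that of its Gaussian-smoothed surround. A minimal NumPy sketch of that style of processing follows; it is an illustration, not the claimed CPLD implementation, and the kernel size, sigma, and output normalization are assumptions:

```python
import numpy as np

def gaussian_kernel(size=15, sigma=3.0):
    """1-D Gaussian kernel, normalized to sum to 1."""
    ax = np.arange(size) - size // 2
    k = np.exp(-(ax ** 2) / (2 * sigma ** 2))
    return k / k.sum()

def gaussian_smooth(img, size=15, sigma=3.0):
    """Separable 2-D Gaussian smoothing via two 1-D convolutions."""
    k = gaussian_kernel(size, sigma)
    pad = size // 2
    padded = np.pad(img, pad, mode="edge")
    tmp = np.apply_along_axis(lambda r: np.convolve(r, k, mode="valid"), 1, padded)
    return np.apply_along_axis(lambda c: np.convolve(c, k, mode="valid"), 0, tmp)

def enhance_gray(gray, eps=1.0):
    """Correct each pixel by its relative light/dark relationship to the
    Gaussian-smoothed image, computed in the logarithmic domain."""
    g = gray.astype(np.float64)
    log_g = np.log(g + eps)                   # image in the logarithmic domain
    log_s = np.log(gaussian_smooth(g) + eps)  # smoothed image, log domain
    r = log_g - log_s                         # relative light/dark relationship
    # stretch the corrected result back to the 0..255 gray range
    r = (r - r.min()) / (r.max() - r.min() + 1e-12)
    return (r * 255).astype(np.uint8)
```

In a color pipeline, the same correction would be applied to each gray component image separately and the results synthesized into the enhanced image, as claim 1 describes.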
5. The intelligent image enhancement and real-time visual recognition system according to claim 1, wherein the central processing unit uses YOLOv4 as the training algorithm implanted in the AI computing module; the main components of the network are CSPDarkNet53 + SPP + PANet + YOLOv3-head, where the CSP structure is used to enhance the learning ability of the CNN (convolutional neural network); Darknet53 is the backbone convolutional network used for initializing network weights, and the structure of Darknet53 has 5 large residual blocks for fusing the feature information of feature maps of different sizes; and the central processing unit processes other information in the received data images according to the requirements of the overall function to ensure coordinated operation of the entire system.
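The component names in this claim follow the standard YOLOv4 layout: a CSPDarknet53 backbone, an SPP block, a PANet neck, and three YOLOv3-style detection heads. As a shapes-only illustration (the strides and anchors-per-scale are the standard YOLOv4 defaults, not values taken from the patent):

```python
# Shapes-only sketch of the YOLOv4 layout named in the claim:
# CSPDarknet53 backbone -> SPP -> PANet neck -> three YOLOv3 detection heads.
# No real layers or weights; this only illustrates the multi-scale outputs.

def yolov4_head_shapes(input_size=608, num_classes=80, anchors_per_scale=3):
    """Return the (grid, grid, anchors, 5 + classes) output shape of each
    of the three YOLOv3-style detection heads for a square input image.
    The 5 entries per anchor are box x, y, w, h and objectness."""
    strides = (8, 16, 32)  # feature levels P3, P4, P5 after the PANet neck
    return [
        (input_size // s, input_size // s, anchors_per_scale, 5 + num_classes)
        for s in strides
    ]
```

For a 608x608 input this yields 76x76, 38x38, and 19x19 grids, which is how the network fuses and predicts over feature maps of different sizes.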
6. The intelligent image enhancement and real-time visual recognition system according to claim 1, wherein the data storage module associates each retrieved image with an index unit; each of the index units comprises a plurality of data items, the data items comprising an image number, an image offset, and an image size; each of the index units corresponds to a data unit in a data area; and each of the data units comprises a plurality of data items, the plurality of data items comprising an image number, an image size, image data, and padding data.
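One plausible byte layout for the index units and data units described in this claim can be sketched with Python's struct module. The field widths and the alignment value are illustrative assumptions; the claim itself does not specify them:

```python
import struct

# Hypothetical fixed-width layouts for the claimed index and data units.
INDEX_FMT = "<QQI"  # image number (u64), image offset (u64), image size (u32)
ALIGN = 8           # pad each data unit to an 8-byte boundary (assumption)

def pack_index_unit(image_no, offset, size):
    """Index unit: image number, image offset, image size."""
    return struct.pack(INDEX_FMT, image_no, offset, size)

def pack_data_unit(image_no, image_data):
    """Data unit: image number, image size, image data, padding data."""
    header = struct.pack("<QI", image_no, len(image_data))
    body = header + image_data
    pad = (-len(body)) % ALIGN  # padding keeps units aligned in the data area
    return body + b"\x00" * pad
```

Packing many small images into one aligned data area, addressed through a compact in-memory index, is what lets such a scheme reduce the frequency of disk IOs relative to one file per image.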
7. The intelligent image enhancement and real-time visual recognition system according to claim 1, wherein the single-color temperature light source selected by the camera calibration module covers a part of the cameras in the plurality of camera modules, and the cameras in the covered part that need to be calibrated are calibrated for the corresponding calibration items.
8. The intelligent image enhancement and real-time visual recognition system according to claim 1, wherein the real-time visual recognition module recognizes images under a stable light source, obtains the information of the detection targets, and transmits the information of the detection targets to a central controller.
LU508565A 2024-10-15 2024-10-15 Intelligent image enhancement and real-time visual recognition system LU508565B1 (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
LU508565A LU508565B1 (en) 2024-10-15 2024-10-15 Intelligent image enhancement and real-time visual recognition system

Publications (1)

Publication Number Publication Date
LU508565B1 true LU508565B1 (en) 2025-04-23

Family

ID=95554492

Family Applications (1)

Application Number Title Priority Date Filing Date
LU508565A LU508565B1 (en) 2024-10-15 2024-10-15 Intelligent image enhancement and real-time visual recognition system

Country Status (1)

Country Link
LU (1) LU508565B1 (en)

Similar Documents

Publication Publication Date Title
CN113642474A (en) Hazardous area personnel monitoring method based on YOLOV5
CN106355367A (en) Warehouse monitoring management device
CN110119734A (en) Cutter detecting method and device
CN110728252B (en) Face detection method applied to regional personnel motion trail monitoring
CN115346169B (en) Method and system for detecting sleep post behaviors
CN117115098A (en) Defect location and detection methods, systems, media and equipment for key substation equipment
CN118552772A (en) Construction dangerous area personnel intrusion detection method and model building method thereof
LU508565B1 (en) Intelligent image enhancement and real-time visual recognition system
CN116546287A (en) Multi-linkage wild animal online monitoring method and system
CN116563226A (en) Object detection method for power distribution equipment based on PPYoloe
CN108073873A (en) Human face detection and tracing system based on high-definition intelligent video camera
CN119967122A (en) Inspection control method, device and electronic equipment
Daogang et al. Anomaly identification of critical power plant facilities based on YOLOX-CBAM
CN118379688A (en) Safety helmet wearing identification method, device, storage medium and electronic terminal
CN109190555B (en) Intelligent shop patrol system based on picture comparison
Kumar et al. Object Detection using OpenCV and Deep Learning
Karthikeyan et al. Smart observer for distant water fishing in Taiwan
CN117119009A (en) A computer room visual management and operation platform based on the Internet of Things
Beiyi et al. Detection and tracking of safety helmet in construction site
CN115985040A (en) Fire monitoring method, device, equipment and medium
CN109124565B (en) Eye state detection method
Aleksyuk et al. Application of the Yolov 8 Neural Network for Detection of Personal Protection Equipment in Hazardous Enterprises
CN120823566B (en) AI-based video analytics-based safety production monitoring methods and systems
LU504993B1 (en) System and method for managing personnel in power plant in production
CN120823453B (en) Method, device, computer equipment, readable storage medium and program product for detecting wearing of safety helmet by power grid operator

Legal Events

Date Code Title Description
FG Patent granted

Effective date: 20250423