CN114708821A - Intelligent LED display screen system based on multi-sensor data fusion - Google Patents
- Publication number
- CN114708821A (application CN202210278615.5A)
- Authority
- CN
- China
- Prior art keywords
- data
- display screen
- led display
- determining
- intelligent led
- Prior art date
- Legal status: Granted (the status listed by Google Patents is an assumption, not a legal conclusion)
Classifications
- G—PHYSICS
- G09—EDUCATION; CRYPTOGRAPHY; DISPLAY; ADVERTISING; SEALS
- G09G—ARRANGEMENTS OR CIRCUITS FOR CONTROL OF INDICATING DEVICES USING STATIC MEANS TO PRESENT VARIABLE INFORMATION
- G09G3/00—Control arrangements or circuits, of interest only in connection with visual indicators other than cathode-ray tubes
- G09G3/20—Control arrangements or circuits, of interest only in connection with visual indicators other than cathode-ray tubes for presentation of an assembly of a number of characters, e.g. a page, by composing the assembly by combination of individual elements arranged in a matrix no fixed position being assigned to or needed to be assigned to the individual characters or partial characters
- G09G3/22—Control arrangements or circuits, of interest only in connection with visual indicators other than cathode-ray tubes for presentation of an assembly of a number of characters, e.g. a page, by composing the assembly by combination of individual elements arranged in a matrix no fixed position being assigned to or needed to be assigned to the individual characters or partial characters using controlled light sources
- G09G3/30—Control arrangements or circuits, of interest only in connection with visual indicators other than cathode-ray tubes for presentation of an assembly of a number of characters, e.g. a page, by composing the assembly by combination of individual elements arranged in a matrix no fixed position being assigned to or needed to be assigned to the individual characters or partial characters using controlled light sources using electroluminescent panels
- G09G3/32—Control arrangements or circuits, of interest only in connection with visual indicators other than cathode-ray tubes for presentation of an assembly of a number of characters, e.g. a page, by composing the assembly by combination of individual elements arranged in a matrix no fixed position being assigned to or needed to be assigned to the individual characters or partial characters using controlled light sources using electroluminescent panels semiconductive, e.g. using light-emitting diodes [LED]
- Y—GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
- Y02—TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
- Y02B—CLIMATE CHANGE MITIGATION TECHNOLOGIES RELATED TO BUILDINGS, e.g. HOUSING, HOUSE APPLIANCES OR RELATED END-USER APPLICATIONS
- Y02B20/00—Energy efficient lighting technologies, e.g. halogen lamps or gas discharge lamps
- Y02B20/40—Control techniques providing energy savings, e.g. smart controller or presence detection
Landscapes
- Engineering & Computer Science (AREA)
- Physics & Mathematics (AREA)
- Computer Hardware Design (AREA)
- General Physics & Mathematics (AREA)
- Theoretical Computer Science (AREA)
- Control Of Indicators Other Than Cathode Ray Tubes (AREA)
Abstract
The invention provides an intelligent LED display screen system based on multi-sensor data fusion, comprising: a data sensing terminal for acquiring multiple kinds of sensing data in real time from a plurality of data acquisition terminals, the sensing data including light sensing data and video data; a fusion determining end for determining a corresponding working mode based on the multiple kinds of sensing data, a preset decision mode and a user control instruction; and a state control end for controlling the working state of the intelligent LED display screen based on the working mode. When the use environment changes, the intelligent LED display screen adjusts itself automatically, without the working mode and display parameters having to be set manually.
Description
Technical Field
The invention relates to the technical field of intelligent control, in particular to an intelligent LED display screen system based on multi-sensor data fusion.
Background
At present, an LED display screen system (LED display control system) is a system for controlling a large LED screen to display correctly according to user requirements, and is classified by networking mode into a networked version and a standalone version. The networked version, also called an LED information release control system, can control each LED terminal through a cloud system. The standalone version, also called an LED display screen controller or LED display screen control card, is the core component of the LED display screen and is mainly responsible for converting an external video input signal or an onboard multimedia file into a digital signal easily recognized by the large LED screen, so as to light it up.
However, the display control of most current display screen systems is manual: when the illumination conditions or the use environment change, the working mode and display parameters need to be set by hand, so intelligent automatic adjustment of the intelligent LED display screen cannot be realized.
Therefore, the invention provides an intelligent LED display screen system based on multi-sensor data fusion.
Disclosure of Invention
The invention provides an intelligent LED display screen system based on multi-sensor data fusion, which realizes intelligent automatic adjustment of the intelligent LED display screen when the use environment changes, without the working mode and display parameters having to be set manually.
The invention provides an intelligent LED display screen system based on multi-sensor data fusion, which comprises:
the data sensing terminal is used for acquiring various sensing data in real time based on a plurality of data acquisition terminals, and the various sensing data comprise light sensing data and video data;
the fusion determining end is used for determining a corresponding working mode based on the multiple sensing data, a preset decision mode and a user control instruction;
and the state control end is used for controlling the working state of the intelligent LED display screen based on the working mode.
Preferably, the data sensing terminal includes:
the video acquisition module is used for acquiring a monitoring video in a preset range in real time based on a camera arranged on the intelligent LED display screen;
the illumination acquisition module is used for acquiring light sensation data in real time based on a light sensor arranged on the intelligent LED display screen;
and the state monitoring module is used for judging whether the data sensing end has a fault or not based on the monitoring video and the light sensing data, obtaining a judgment result and sending a corresponding fault instruction based on the judgment result.
Preferably, the state monitoring module includes:
the curve fitting unit is used for fitting and obtaining a corresponding light sensation data dynamic curve based on the light sensation data obtained in real time;
the data fusion unit is used for aligning and fusing the light sensation dynamic curve and the monitoring video to obtain corresponding total dynamic data;
the first monitoring unit is used for judging whether the monitoring video in the total dynamic data is paused; if so, a first fault instruction is sent out, and otherwise, a corresponding judgment result is kept;
the second monitoring unit is used for judging whether a light sensation data dynamic curve in the total dynamic data is broken or not, if so, a second fault instruction is sent out, and otherwise, a corresponding judgment result is reserved;
wherein the fault instruction comprises: a first fault instruction and a second fault instruction.
Preferably, the fusion-determining end comprises:
the first determining module is used for determining a corresponding working mode based on the user control instruction when the user control instruction is received;
and the second determining module is used for determining a corresponding working mode based on the multiple sensing data and the preset decision mode when the user control instruction is not received.
Preferably, the first determining module includes:
the first analysis unit is used for analyzing the user control instruction to obtain a corresponding control parameter when receiving the user control instruction;
and the first determining unit is used for determining the corresponding working mode based on the control parameter.
Preferably, the second determining module includes:
the first analysis unit is used for carrying out primary analysis on the monitoring video in a preset period to obtain a corresponding primary analysis result;
the state judging unit is used for judging to start the intelligent LED display screen when the primary analysis result is that a pre-stored user exists in a preset range, and otherwise, judging not to start the intelligent LED display screen;
the second analysis unit is used for carrying out secondary analysis on the monitoring video in a preset period to obtain a corresponding secondary analysis result when the intelligent LED display screen is judged to be started;
the second determining unit is used for determining a display mode of the intelligent LED display screen based on the light sensation data and the secondary analysis result, and taking the display mode as a working mode of the intelligent LED display screen;
and the third determining unit is used for turning off the intelligent LED display screen as a corresponding working mode when the intelligent LED display screen is judged not to be started.
Preferably, the first analysis unit includes:
the image comparison subunit is used for comparing each frame of video frame contained in the monitoring video in a preset period with a prestored background image and determining a difference image corresponding to each frame of video frame;
a preliminary screening subunit, configured to screen a human body image from the difference image based on a preliminary screening method;
the face identification subunit is used for identifying the human body image based on a machine learning self-adaptive algorithm to obtain a corresponding face area;
a reference point determining subunit, configured to determine a corresponding reference point in the face image based on a preset determination method;
the image cutting subunit is used for cutting the face image based on the reference point to obtain a corresponding complete face area;
the image standardization subunit is used for carrying out standardization processing on the complete face area based on the distance between the reference points to obtain a corresponding standard-size face area and a corresponding standard-size face area set;
the image dividing subunit is used for dividing the standard-size face area into a preset number of sub-areas;
the image sampling subunit is used for sequentially performing sliding window sampling on the sub-regions in the standard-size face region to obtain corresponding sampling data;
the degree determining subunit is configured to determine, based on the sampling data, whether symmetric sub-regions corresponding to the sub-regions are included in the standard-size face region, and determine, based on the total number of sub-regions in which corresponding symmetric sub-regions exist in the standard-size face region, a forward degree of a corresponding face;
the mean value determining subunit is used for screening out a standard size face region corresponding to the maximum face forward degree from the standard size face region set to serve as an image to be corrected, and calculating a corresponding visual mean value based on visual data corresponding to each pixel point in the image to be corrected;
a first region determining subunit, configured to use a region formed by pixels whose visual data in the image to be corrected is greater than the visual mean as a sub-region to be weakened, and use a region formed by pixels whose visual data in the image to be corrected is less than the visual mean as a sub-region to be enhanced;
the data determining subunit is used for determining first reflection data corresponding to the sub-region to be weakened and second reflection data corresponding to the sub-region to be enhanced;
the first normalization subunit is used for reducing the visual data corresponding to each pixel point in the sub-area to be weakened based on the first reflection data and the corresponding visual mean value, and obtaining a corresponding first standard sub-area;
the second normalization subunit is used for increasing the visual data corresponding to each pixel point in the sub-region to be enhanced based on the second reflection data and the corresponding visual mean value to obtain a corresponding second standard sub-region;
the region fusion subunit is configured to fuse the first standard sub-region, the second standard sub-region and the image to be corrected to obtain a corresponding standard face image;
and the first result determining subunit is used for judging whether a pre-stored face image matched with the standard face image exists in a pre-stored user library or not, if so, taking the pre-stored user in a preset range as a primary analysis result, and otherwise, taking the non-stored user in the preset range as the primary analysis result.
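The symmetry-based forward-degree check described by the degree determining subunit above can be sketched as follows. The grid size, intensity tolerance, and use of block-mean comparison are illustrative assumptions, not specifics of the patent: a frontal face is roughly left-right symmetric, so the fraction of sub-regions with a similar mirrored counterpart serves as the "forward degree".

```python
import numpy as np

def face_forward_degree(face: np.ndarray, grid: int = 4,
                        tol: float = 10.0) -> float:
    """Split a standard-size face region (2-D grayscale array) into
    grid x grid sub-regions and count how many have a horizontally
    mirrored counterpart with a similar mean intensity. A higher
    fraction of matches means a more frontal ("forward") face.
    Grid size and tolerance are illustrative assumptions."""
    h, w = face.shape
    bh, bw = h // grid, w // grid
    matched = 0
    for r in range(grid):
        for c in range(grid):
            block = face[r * bh:(r + 1) * bh, c * bw:(c + 1) * bw]
            # horizontally mirrored sub-region on the other side of the face
            mirror = face[r * bh:(r + 1) * bh, (grid - 1 - c) * bw:(grid - c) * bw]
            if abs(block.mean() - mirror.mean()) <= tol:
                matched += 1
    return matched / (grid * grid)
```

A perfectly symmetric region scores 1.0; asymmetry (e.g. a profile view) lowers the score, so the region with the maximum score is selected as the image to be corrected.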
Preferably, the second analysis unit includes:
the edge determining subunit is used for determining an edge line in each frame of video frame in the monitoring video in a preset period based on a preset determining mode when the intelligent LED display screen is judged to be started;
a video frame dividing subunit, configured to divide the corresponding video frame into a plurality of sub-blocks based on the edge lines;
a spatial data determining subunit, configured to determine corresponding spatial data based on the sub-block;
a curve determining subunit, configured to generate a corresponding component histogram distribution curve based on the visual data component corresponding to each pixel point in the sub-block;
a curve fusion subunit, configured to fuse the component histogram distribution curves to obtain a corresponding total histogram distribution curve;
a second region determining subunit, configured to determine a minimum value in the total histogram distribution curve, count a total number of pixels between adjacent minimum values as a corresponding pixel capacity, and use an image region corresponding to the sub-block in a curve segment corresponding to the maximum pixel capacity in the total histogram distribution curve as a corresponding sampling region;
the data calculation subunit is used for calculating corresponding relative illumination data based on the visual data corresponding to the sampling area;
and the second result determining subunit is used for taking the spatial data and the relative illumination data as corresponding secondary analysis results.
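The histogram segmentation used by the second region determining subunit can be sketched as follows: local minima of the total histogram bound curve segments, the pixel count inside each segment is its "pixel capacity", and the segment with the largest capacity selects the sampling region. Treating the histogram endpoints as segment boundaries is an illustrative assumption.

```python
def max_capacity_segment(hist: list) -> tuple:
    """Return the (start_bin, end_bin) range between adjacent local
    minima of a histogram that contains the most pixels (the maximum
    'pixel capacity'). Endpoints are treated as boundary minima."""
    # indices of strict local minima, plus the two endpoints
    minima = [0] + [i for i in range(1, len(hist) - 1)
                    if hist[i] < hist[i - 1] and hist[i] < hist[i + 1]] \
               + [len(hist) - 1]
    best, best_cap = (minima[0], minima[1]), -1
    for a, b in zip(minima, minima[1:]):
        cap = sum(hist[a:b + 1])  # pixel capacity of this curve segment
        if cap > best_cap:
            best, best_cap = (a, b), cap
    return best
```

The sub-block pixels whose values fall in the returned bin range would then form the sampling region from which relative illumination data is computed.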
Preferably, the state control terminal includes:
the state monitoring module is used for monitoring the current working state of the intelligent LED display screen;
the first judgment module is used for judging whether the current working state is closed, if so, judging whether the working mode is the closing of the intelligent LED display screen, if so, sending a holding instruction, and otherwise, sending a starting instruction;
the first control module is used for controlling the working state of the intelligent LED display screen based on the current working state and the working mode when the current working state is not closed;
and the second control module is used for controlling the working state of the intelligent LED display screen based on the working mode when the starting instruction is received.
Preferably, the first control module includes:
the parameter determining unit is used for determining a corresponding first working parameter based on the current working state when the current working state is not closed, and determining a corresponding second working parameter based on the working mode;
the instruction generating unit is used for generating a corresponding parameter adjusting instruction based on the difference value of the first working parameter and the second working parameter;
and the state adjusting unit is used for adjusting the working state of the intelligent LED display screen based on the parameter adjusting instruction.
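The parameter-adjustment flow of the first control module above can be sketched as a per-parameter difference between the second (target) and first (current) working parameters; representing working parameters as a dictionary of numeric values is an illustrative assumption.

```python
def generate_adjust_instruction(current: dict, target: dict) -> dict:
    """Instruction generating unit: the parameter adjusting instruction
    is the per-parameter difference target - current."""
    return {k: target[k] - current.get(k, 0) for k in target}

def apply_adjust_instruction(current: dict, delta: dict) -> dict:
    """State adjusting unit: apply the adjustment to reach the target
    working state."""
    return {k: current.get(k, 0) + delta.get(k, 0)
            for k in set(current) | set(delta)}
```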
Additional features and advantages of the invention will be set forth in the description which follows, and in part will be obvious from the description, or may be learned by practice of the invention. The objectives and other advantages of the invention will be realized and attained by the structure particularly pointed out in the written description and claims hereof as well as the appended drawings.
The technical solution of the present invention is further described in detail by the accompanying drawings and embodiments.
Drawings
The accompanying drawings, which are included to provide a further understanding of the invention and are incorporated in and constitute a part of this specification, illustrate embodiments of the invention and together with the description serve to explain the principles of the invention and not to limit the invention. In the drawings:
FIG. 1 is a schematic diagram of an intelligent LED display screen system based on multi-sensor data fusion according to an embodiment of the present invention;
FIG. 2 is a schematic diagram of a data sensing terminal according to an embodiment of the present invention;
FIG. 3 is a schematic diagram of a status monitoring module according to an embodiment of the present invention;
FIG. 4 is a schematic diagram of a fusion determining end according to an embodiment of the present invention;
FIG. 5 is a diagram illustrating a first determining module according to an embodiment of the present invention;
FIG. 6 is a diagram illustrating a second determining module according to an embodiment of the present invention;
FIG. 7 is a schematic diagram of a first analysis unit according to an embodiment of the present invention;
FIG. 8 is a schematic diagram of a second analysis unit according to an embodiment of the present invention;
FIG. 9 is a diagram illustrating a status control node according to an embodiment of the present invention;
fig. 10 is a schematic diagram of a first control module according to an embodiment of the invention.
Detailed Description
The preferred embodiments of the present invention will be described in conjunction with the accompanying drawings, and it will be understood that they are described herein for the purpose of illustration and explanation and not limitation.
Example 1:
the invention provides an intelligent LED display screen system based on multi-sensor data fusion, which, referring to FIG. 1, comprises:
the data sensing terminal is used for acquiring various sensing data in real time based on a plurality of data acquisition terminals, and the various sensing data comprise light sensing data and video data;
the fusion determining end is used for determining a corresponding working mode based on the multiple sensing data, a preset decision mode and a user control instruction;
and the state control end is used for controlling the working state of the intelligent LED display screen based on the working mode.
In this embodiment, the data acquisition terminal includes: a camera and a light sensor.
In this embodiment, the sensing data includes: video data obtained by the camera and light sensing data obtained by the light sensor.
In this embodiment, the light sensing data is data obtained by the light sensor, that is, light sensing data within a preset range of the setting position of the light sensor.
In this embodiment, the video data is data obtained by the camera, that is, a video obtained within a preset range of the setting position of the camera.
In this embodiment, the preset decision manner is a method for determining the working mode of the intelligent LED display screen based on the sensing data.
In this embodiment, the working mode is a display mode (working mode corresponding to different display parameters) corresponding to whether the intelligent LED display screen is turned on or off and turned on.
In this embodiment, the user control instruction is an instruction for setting the operating mode of the intelligent LED display screen, which is input by a user based on a remote control device or program control.
The beneficial effects of the above technology are: when the use environment changes, combining the light sensing data and video data acquired by the plurality of sensors arranged on the intelligent LED display screen with the preset decision mode and any user control instruction enables intelligent automatic adjustment of the display screen, without the working mode and display parameters having to be set manually, improving the degree of automation and the usability of the intelligent LED display screen.
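A minimal sketch of the fusion-determining flow described in this example, assuming illustrative mode names and an illustrative 200-lux threshold (neither is specified by the patent):

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class SensingData:
    light_level: float   # light sensing reading (lux) from the light sensor
    user_present: bool   # result of analysing the monitoring video

def determine_working_mode(sensing: SensingData,
                           user_instruction: Optional[str] = None) -> str:
    """Fusion determining end: a user control instruction, when present,
    overrides the preset decision mode (first determining module);
    otherwise the working mode is derived from the fused sensing data
    (second determining module). Mode names and the 200-lux threshold
    are illustrative assumptions."""
    if user_instruction is not None:
        return user_instruction
    if not sensing.user_present:
        return "off"  # no pre-stored user in range: do not start the screen
    return "bright" if sensing.light_level > 200.0 else "dim"
```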
Example 2:
on the basis of the embodiment 1, the data sensing terminal, referring to fig. 2, includes:
the video acquisition module is used for acquiring a monitoring video in a preset range in real time based on a camera arranged on the intelligent LED display screen;
the illumination acquisition module is used for acquiring light sensation data in real time based on a light sensor arranged on the intelligent LED display screen;
and the state monitoring module is used for judging whether the data sensing end has a fault or not based on the monitoring video and the light sensing data, obtaining a judgment result and sending a corresponding fault instruction based on the judgment result.
In this embodiment, the determination result is a result obtained by determining whether the data sensing terminal has a fault based on the monitoring video and the light sensing data.
In this embodiment, the fault instruction is an instruction for reminding a user that a fault occurs at the corresponding sensor terminal.
The beneficial effects of the above technology are: the light sensation data and the video data in the use environment on the intelligent LED display screen can be acquired in real time based on each sensor end arranged on the intelligent LED display screen, a foundation is provided for the follow-up automatic adjustment of the intelligent LED display screen, the fault monitoring of the sensor end is realized based on the set state monitoring module, the normal use of the sensor end is ensured, and the effectiveness of the automatic adjustment function of the intelligent LED display screen is also ensured.
Example 3:
on the basis of embodiment 2, the status monitoring module, referring to fig. 3, includes:
the curve fitting unit is used for fitting and obtaining a corresponding light sensation data dynamic curve based on the light sensation data obtained in real time;
the data fusion unit is used for aligning and fusing the light sensation dynamic curve and the monitoring video to obtain corresponding total dynamic data;
the first monitoring unit is used for judging whether the monitoring video in the total dynamic data is paused; if so, a first fault instruction is sent out, and otherwise, a corresponding judgment result is kept;
the second monitoring unit is used for judging whether a light sensation data dynamic curve in the total dynamic data is broken or not, if so, a second fault instruction is sent out, and otherwise, a corresponding judgment result is kept;
wherein the fault instruction comprises: a first fault instruction and a second fault instruction.
In this embodiment, the dynamic curve of the light sensation data is a curve obtained by fitting the light sensation data acquired in real time.
In this embodiment, the total dynamic data is data obtained by aligning and fusing the light sensation dynamic curve and the monitoring video.
In this embodiment, the first fault instruction is an instruction for reminding a user that the corresponding camera end has failed.
In this embodiment, the second fault instruction is an instruction for reminding a user that the corresponding light sensor end has failed.
The beneficial effects of the above technology are: the corresponding total dynamic data are obtained based on the fact that the light sensation data obtained in real time are fitted into a curve and the monitoring video obtained in real time is aligned and fused, whether the corresponding sensor end breaks down or not can be judged by judging whether the data in the total dynamic data break or not, normal operation of the corresponding sensor end is further guaranteed, the sensing data can be successfully obtained, and a foundation is provided for guaranteeing that the intelligent LED display screen can be normally and intelligently adjusted.
Example 4:
on the basis of the embodiment 1, the fusion determining end, referring to fig. 4, includes:
the first determining module is used for determining a corresponding working mode based on the user control instruction when the user control instruction is received;
and the second determining module is used for determining a corresponding working mode based on the multiple sensing data and the preset decision mode when the user control instruction is not received.
The beneficial effects of the above technology are: the corresponding working mode is determined based on the user control instruction or multiple sensing data and the preset decision mode when the user control instruction is judged to be received or not, so that the intelligent LED display screen realizes two control modes of setting the working mode based on the user control and automatically setting the working mode based on the sensing data, and the performance of the intelligent LED display screen is improved.
Example 5:
on the basis of embodiment 4, the first determining module, referring to fig. 5, includes:
the first analysis unit is used for analyzing the user control instruction to obtain a corresponding control parameter when receiving the user control instruction;
and the first determining unit is used for determining the corresponding working mode based on the control parameter.
In this embodiment, the control parameter is a setting parameter for controlling the working mode of the intelligent LED display screen obtained by analyzing the user control instruction.
The beneficial effects of the above technology are: the working mode of controlling the intelligent LED display screen based on the user control instruction is realized, and the control precision of the intelligent LED display screen is ensured.
Example 6:
on the basis of embodiment 4, the second determining module, referring to fig. 6, includes:
the first analysis unit is used for carrying out primary analysis on the monitoring video in a preset period to obtain a corresponding primary analysis result;
the state judgment unit is used for judging to start the intelligent LED display screen when the primary analysis result is that a pre-stored user exists in a preset range, and otherwise, judging not to start the intelligent LED display screen;
the second analysis unit is used for carrying out secondary analysis on the monitoring video in a preset period to obtain a corresponding secondary analysis result when the intelligent LED display screen is judged to be started;
the second determining unit is used for determining a display mode of the intelligent LED display screen based on the light sensation data and the secondary analysis result, and taking the display mode as a working mode of the intelligent LED display screen;
and the third determining unit is used for turning off the intelligent LED display screen as a corresponding working mode when the intelligent LED display screen is judged not to be started.
In this embodiment, the primary analysis result is a result obtained by performing primary analysis on the monitoring video in the preset period.
In this embodiment, the secondary analysis result is a result obtained by performing secondary analysis on the monitoring video in the preset period, and includes spatial data and relative illumination data.
In this embodiment, the display mode is a working mode corresponding to different display parameters when the intelligent LED display screen is turned on.
The beneficial effects of the above technology are: when a user control instruction is not received, the intelligent LED display screen is controlled intelligently based on the sensing data and the preset decision mode, so that it can automatically be set to the optimal working mode for the current use environment, realizing intelligent adjustment of the intelligent LED display screen.
Example 7:
on the basis of embodiment 6, the first analysis unit, with reference to fig. 7, comprises:
the image comparison subunit is used for comparing each frame of video frame contained in the monitoring video in a preset period with a prestored background image and determining a difference image corresponding to each frame of video frame;
a preliminary screening subunit, configured to screen a human body image from the difference image based on a preliminary screening method;
the face identification subunit is used for identifying the human body image based on a machine learning self-adaptive algorithm to obtain a corresponding face area;
a reference point determining subunit, configured to determine, based on a preset determination method, a corresponding reference point in the face image;
an image clipping subunit, configured to clip the face image based on the reference point to obtain a corresponding complete face region;
the image standardization subunit is used for carrying out standardization processing on the complete face area based on the distance between the reference points to obtain a corresponding standard-size face area and a corresponding standard-size face area set;
the image dividing subunit is used for dividing the standard-size face area into sub-areas with preset number;
the image sampling subunit is used for sequentially performing sliding window sampling on the sub-regions in the standard-size face region to obtain corresponding sampling data;
the degree determining subunit is configured to determine, based on the sampling data, whether symmetric sub-regions corresponding to the sub-regions are included in the standard-size face region, and determine, based on the total number of sub-regions in which corresponding symmetric sub-regions exist in the standard-size face region, a forward degree of a corresponding face;
the mean value determining subunit is used for screening out a standard size face region corresponding to the maximum face forward degree from the standard size face region set to serve as an image to be corrected, and calculating a corresponding visual mean value based on visual data corresponding to each pixel point in the image to be corrected;
a first region determining subunit, configured to take the region formed by pixel points whose visual data in the image to be corrected is greater than the visual mean as the sub-region to be weakened, and the region formed by pixel points whose visual data in the image to be corrected is less than the visual mean as the sub-region to be enhanced;
the data determining subunit is used for determining first reflection data corresponding to the sub-region to be weakened and second reflection data corresponding to the sub-region to be enhanced;
the first normalization subunit is used for reducing the visual data corresponding to each pixel point in the sub-area to be weakened based on the first reflection data and the corresponding visual mean value, and obtaining a corresponding first standard sub-area;
the second normalization subunit is used for increasing the visual data corresponding to each pixel point in the sub-region to be enhanced based on the second reflection data and the corresponding visual mean value to obtain a corresponding second standard sub-region;
the region fusion subunit is configured to fuse the first standard sub-region, the second standard sub-region and the image to be corrected to obtain a corresponding standard face image;
and the first result determining subunit is used for judging whether a pre-stored face image matched with the standard face image exists in a pre-stored user library or not, if so, taking the pre-stored user in a preset range as a primary analysis result, and otherwise, taking the non-stored user in the preset range as the primary analysis result.
In this embodiment, the difference image is an image area corresponding to each frame of video frame determined by comparing each frame of video frame included in the monitoring video in the preset period with the pre-stored background image.
In this embodiment, the preliminary screening method is a method of screening a human body image from a difference image.
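The comparison and preliminary screening steps above can be sketched as follows. This is a minimal illustration using NumPy; the function names, the difference threshold, and the minimum-area criterion are hypothetical stand-ins for the embodiment's unspecified preliminary screening method:

```python
import numpy as np

def difference_image(frame: np.ndarray, background: np.ndarray,
                     threshold: int = 30) -> np.ndarray:
    """Mark pixels whose absolute difference from the pre-stored background
    exceeds a threshold; the result is the difference image as a binary mask."""
    diff = np.abs(frame.astype(np.int16) - background.astype(np.int16))
    return (diff > threshold).astype(np.uint8)

def screen_human_region(mask: np.ndarray, min_area: int = 4) -> bool:
    """Preliminary screening: keep the difference region only if it is large
    enough to plausibly contain a human body (a stand-in for the patent's
    unspecified screening criterion)."""
    return int(mask.sum()) >= min_area

background = np.zeros((4, 4), dtype=np.uint8)
frame = background.copy()
frame[1:3, 1:3] = 200          # a bright 2x2 "object" enters the scene
mask = difference_image(frame, background)
```

In practice the mask would be computed per video frame of the preset period, and only frames passing the screening would go on to face identification.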
In this embodiment, the human body image is an image corresponding to a human body existing within a shooting range of the camera.
In this embodiment, the face region is a face image obtained by recognizing a human body image based on a machine learning adaptive algorithm.
In this embodiment, the predetermined determination method is a method of determining a reference point in the face image.
In this embodiment, the reference point is, for example, an eyeball center point, a nose tip center point, or the like.
In this embodiment, the complete face region is an image region obtained by cutting the face image based on the reference point.
In this embodiment, the standard size face region is a face region obtained by normalizing the entire face region based on the distance between the reference points.
In this embodiment, the standard size face region set is a set of standard size face regions.
In this embodiment, the sub-region is an image region obtained by dividing a standard-size face region.
In this embodiment, the sampling data is data obtained by sequentially performing sliding window sampling on sub-regions in the standard-size face region.
In this embodiment, the symmetric sub-region is a sub-region of the standard-size face region that is symmetric to the sampling data of the corresponding sub-region.
In this embodiment, determining the forward degree of the corresponding face based on the total number of sub-regions in which the corresponding symmetric sub-regions exist in the standard-size face region includes:
α = ⌊10u / v⌋ / 10

wherein α is the face forward degree, u is the total number of sub-regions in the standard-size face region for which a corresponding symmetric sub-region exists, v is the total number of sub-regions contained in the standard-size face region, and ⌊·⌋ denotes rounding down;
for example, if u is 8 and v is 9, α is 0.8.
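The forward-degree calculation, written so that it reproduces the example above (u = 8, v = 9 gives 0.8), can be sketched as follows; the function name is illustrative:

```python
import math

def face_forward_degree(u: int, v: int) -> float:
    """alpha = floor(10*u/v)/10: the fraction of sub-regions that have a
    symmetric counterpart, rounded down to one decimal place."""
    if v == 0:
        raise ValueError("face region must contain at least one sub-region")
    return math.floor(10 * u / v) / 10
```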
In this embodiment, the image to be corrected is a standard-size face region corresponding to the maximum face forward degree screened from the standard-size face region set.
In this embodiment, calculating a corresponding visual mean based on the visual data corresponding to each pixel point in the image to be corrected means: the mean chroma value of the pixel points contained in the image to be corrected is the visual mean for the chroma component, and the mean luminance value of those pixel points is the visual mean for the luminance component.
In this embodiment, the visual data includes chrominance values and luminance values.
In this embodiment, the to-be-enhanced region is a region formed by pixel points of which the visual data in the to-be-corrected image is smaller than the visual mean.
In this embodiment, the sub-region to be weakened is a region formed by pixels of which the visual data in the image to be corrected is larger than the visual mean.
In this embodiment, determining the first reflection data corresponding to the sub-region to be weakened includes: converting the sub-region to be weakened into the logarithmic domain to separate the corresponding incident component and reflection component, performing filtering processing on the sub-region to be weakened to obtain the corresponding reflection component, and taking that reflection component as the corresponding first reflection data.
In this embodiment, determining the second reflection data corresponding to the sub-region to be enhanced includes: converting the sub-region to be enhanced into the logarithmic domain to separate the corresponding incident component and reflection component, performing filtering processing on the sub-region to be enhanced to obtain the corresponding reflection component, and taking that reflection component as the corresponding second reflection data.
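The log-domain decomposition described in the two paragraphs above resembles a Retinex-style separation. The following sketch is only an illustration of the idea: it uses the global mean as the simplest possible low-pass estimate of the incident component in place of the embodiment's unspecified filtering:

```python
import numpy as np

def reflection_data(region: np.ndarray, eps: float = 1e-6) -> np.ndarray:
    """Estimate the reflection component of an image region in the log domain.
    log(I) = log(L) + log(R); a low-pass estimate (here: the global mean, the
    crudest possible filter) approximates the incident component log(L), and
    the residual is taken as the reflection component log(R)."""
    log_img = np.log(region.astype(np.float64) + eps)
    incident = np.full_like(log_img, log_img.mean())  # crude low-pass estimate
    return log_img - incident                          # reflection, log domain

region = np.array([[10.0, 10.0], [10.0, 10.0]])
refl = reflection_data(region)   # uniform region: no reflection detail
```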
In this embodiment, the first standard sub-region is the image region obtained by reducing the visual data corresponding to each pixel point contained in the sub-region to be weakened based on the first reflection data and the corresponding visual mean.

In this embodiment, reducing the visual data corresponding to each pixel point contained in the sub-region to be weakened based on the first reflection data and the corresponding visual mean includes: reducing the visual data of each corresponding pixel point by the difference between the first reflection data and the visual mean.
In this embodiment, the second standard sub-region is an image region obtained by increasing the visual data corresponding to each pixel point included in the sub-region to be enhanced based on the second reflection data and the corresponding visualization mean value.
In this embodiment, increasing the visual data corresponding to each pixel point included in the sub-region to be enhanced based on the second reflection data and the corresponding visualization mean includes: and increasing the visual data of the corresponding pixel point by a corresponding difference value based on the visual data difference value between the second reflection data and the visual mean value.
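Both adjustments can be sketched as one helper; the function name and the NumPy formulation are illustrative assumptions about how the per-pixel difference is applied:

```python
import numpy as np

def adjust_subregion(pixels: np.ndarray, reflection: np.ndarray,
                     mean: float, weaken: bool = True) -> np.ndarray:
    """Shift each pixel's visual value by |reflection - mean|: downward for
    the sub-region to be weakened, upward for the sub-region to be enhanced."""
    delta = np.abs(reflection - mean)
    return pixels - delta if weaken else pixels + delta

# pixels above the visual mean are weakened, pixels below it are enhanced
weakened = adjust_subregion(np.array([10.0, 12.0]), np.array([9.0, 11.0]), 8.0)
enhanced = adjust_subregion(np.array([4.0, 5.0]), np.array([9.0, 11.0]), 8.0,
                            weaken=False)
```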
In this embodiment, the standard face image is an image region obtained by fusing the first standard sub-region, the second standard sub-region, and the image to be corrected.
In this embodiment, the pre-stored user library is a library of pre-stored users who are permitted to trigger the start-up of the intelligent LED display screen.

In this embodiment, the pre-stored face image is a face image stored in the pre-stored user library, against which the standard face image is matched to decide whether to start the intelligent LED display screen.
The beneficial effects of the above technology are: the method comprises the steps of obtaining a standard forward face image by comparing, screening, identifying, cutting, standardizing, dividing and correcting each frame of video frame contained in a monitoring video in a preset period, matching the standard face image with a pre-stored face image in a pre-stored user library, and realizing enabling control and self-starting control on an intelligent LED display screen based on a matching result.
Example 8:
on the basis of embodiment 6, the second analysis unit, with reference to fig. 8, includes:
the edge determining subunit is used for determining an edge line in each frame of video frame in the monitoring video in a preset period based on a preset determining mode when the intelligent LED display screen is judged to be started;
a video frame dividing subunit, configured to divide the corresponding video frame into a plurality of sub-blocks based on the edge lines;
a spatial data determining subunit, configured to determine corresponding spatial data based on the sub-block;
a curve determining subunit, configured to generate a corresponding component histogram distribution curve based on the visual data component corresponding to each pixel point in the sub-block;
a curve fusion subunit, configured to fuse the component histogram distribution curves to obtain a corresponding total histogram distribution curve;
a second region determining subunit, configured to determine a minimum value in the total histogram distribution curve, count a total number of pixels between adjacent minimum values as a corresponding pixel capacity, and use an image region corresponding to the sub-block in a curve segment corresponding to the maximum pixel capacity in the total histogram distribution curve as a corresponding sampling region;
the data calculation subunit is used for calculating corresponding relative illumination data based on the visual data corresponding to the sampling area;
and the second result determining subunit is used for taking the spatial data and the relative illumination data as corresponding secondary analysis results.
In this embodiment, the predetermined determining method is a method for determining a spatial edge line in each frame of video frame in the monitored video in a predetermined period.
In this embodiment, the sub-block is an image block obtained by dividing the corresponding video frame based on the edge line.
In this embodiment, the spatial data is size data of a space where the intelligent LED display screen is located.
In this embodiment, the visual data component includes a chrominance value component and a luminance value component.
In this embodiment, generating a corresponding component histogram distribution curve based on the visual data component corresponding to each pixel point in the sub-block includes: determining the chroma value range and luminance value range of the sub-block, dividing the corresponding chroma value range and luminance value range into a plurality of chroma levels and luminance levels using suitable chroma and luminance intervals as units, plotting a bar chart with the horizontal axis representing the chroma or luminance level and the vertical axis representing the total number of pixel points contained in the corresponding level, and connecting and fitting the centre points of all bars of the bar chart to obtain the corresponding component histogram distribution curve.
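A minimal sketch of binning one visual-data component into levels and recovering the curve's sample points, assuming NumPy's `histogram` in place of the embodiment's bar chart construction:

```python
import numpy as np

def component_histogram_curve(values: np.ndarray, n_levels: int = 8):
    """Bin a visual-data component (chroma or luminance) into levels and
    return the bin centres and counts; connecting the bar centres yields the
    component histogram distribution curve."""
    counts, edges = np.histogram(values, bins=n_levels)
    centres = (edges[:-1] + edges[1:]) / 2
    return centres, counts

values = np.array([0, 1, 1, 2, 7, 7, 7, 7], dtype=float)
centres, counts = component_histogram_curve(values, n_levels=8)
```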
In this embodiment, the total histogram distribution curve is a curve obtained by fusing the component histogram distribution curves.
In this embodiment, the pixel capacity is the total number of pixels between adjacent minimum values in the total histogram distribution curve.
In this embodiment, the sampling region is an image region corresponding to the sub-block, where a curve segment corresponding to the maximum pixel point capacity in the total histogram distribution curve is located.
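Selecting the sampling region by pixel capacity can be sketched as follows; treating the curve's endpoints as the bounds of the outer segments is an assumption, since the embodiment does not say how the outermost segments are delimited:

```python
import numpy as np

def sampling_segment(curve: np.ndarray):
    """Locate the local minima of the total histogram distribution curve and
    return the (start, end) index pair of the segment between adjacent minima
    that holds the largest pixel capacity (sum of counts)."""
    # interior local minima; the curve's endpoints bound the outer segments
    minima = [i for i in range(1, len(curve) - 1)
              if curve[i] < curve[i - 1] and curve[i] < curve[i + 1]]
    bounds = [0] + minima + [len(curve) - 1]
    segments = list(zip(bounds[:-1], bounds[1:]))
    return max(segments, key=lambda s: curve[s[0]:s[1] + 1].sum())

curve = np.array([1, 5, 2, 9, 9, 9, 1])
best = sampling_segment(curve)   # the segment containing the 9-9-9 plateau
```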
In this embodiment, calculating corresponding relative illumination data based on the visual data corresponding to the sampling region includes:
determining a central point coordinate value and central point visual data of a sampling region, and determining a coordinate value and visual data of each pixel point contained in the sampling region;
calculating a corresponding relative illumination value based on the coordinate value of the central point, the coordinate value of each pixel point and the visual data:
M = (ε1 / n) · Σ_{i=1..n} [ s_i / √((x_i − x_0)² + (y_i − y_0)²) ] + (ε2 / n) · Σ_{i=1..n} l_i

wherein M is the relative illumination value; ε1 is a first coefficient (the conversion coefficient between the ratio of chroma value to pixel-point distance and the relative illumination value); ε2 is a second coefficient (the conversion coefficient between the characteristic brightness value and the relative illumination value); i indexes the pixel points contained in the sampling region and n is their total number; s_i is the chroma value corresponding to the i-th pixel point; x_i and y_i are the abscissa and ordinate values corresponding to the i-th pixel point; x_0 and y_0 are the abscissa and ordinate values corresponding to the centre-point coordinate value; l_i is the luminance value corresponding to the i-th pixel point; and s_0 and l_0 are the centre-point chroma and luminance values contained in the centre-point visual data;
for example, if the centre-point coordinate value is (0, 0), the centre-point chroma value is 5, the centre-point luminance value is 5, and the sampling region contains three pixel points with coordinate values (1, 1), (−1, 1) and (0, 1), chroma values 2, 3 and 4, and luminance values 6, 7 and 8, then with ε1 = 0.5 and ε2 = 0.5, M ≈ 4.8;
and determining corresponding relative illumination data based on the relative illumination value and a corresponding relative illumination data list (representing the corresponding relation between the relative illumination value and the relative illumination data).
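Since the published formula is garbled in the extraction, the sketch below implements a reconstruction chosen so that it reproduces the worked example above (M ≈ 4.8); the exact published form may differ:

```python
import math

def relative_illumination(centre, pixels, eps1=0.5, eps2=0.5):
    """M = (eps1/n) * sum(s_i / d_i) + (eps2/n) * sum(l_i), where d_i is the
    distance from pixel i to the sampling-region centre. Reconstructed to
    match the patent's worked example; not the authoritative formula."""
    x0, y0 = centre
    n = len(pixels)
    chroma_term = sum(s / math.hypot(x - x0, y - y0)
                      for (x, y, s, _) in pixels) / n
    lum_term = sum(l for (*_, l) in pixels) / n
    return eps1 * chroma_term + eps2 * lum_term

# worked example from the text: three pixels (x, y, chroma, luminance)
pixels = [(1, 1, 2, 6), (-1, 1, 3, 7), (0, 1, 4, 8)]
M = relative_illumination((0, 0), pixels)
```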
The beneficial effects of the above technology are: the monitoring video in the preset period is subjected to secondary analysis based on the histogram distribution curve, so that corresponding relative illumination data and space data are obtained, and a data basis is provided for subsequently determining the display mode of the intelligent LED display screen.
Example 9:
on the basis of embodiment 1, the state control terminal, referring to fig. 9, includes:
the state monitoring module is used for monitoring the current working state of the intelligent LED display screen;
the first judgment module is used for judging whether the current working state is closed, if so, judging whether the working mode is that the intelligent LED display screen is closed, if so, sending a holding instruction, and otherwise, sending a starting instruction;
the first control module is used for controlling the working state of the intelligent LED display screen based on the current working state and the working mode when the current working state is not closed;
and the second control module is used for controlling the working state of the intelligent LED display screen based on the working mode when the starting instruction is received.
In this embodiment, the holding instruction is an instruction for controlling the intelligent LED display to hold the current working state.
In this embodiment, the turn-on command is a command for controlling the turn-on of the intelligent LED display screen.
The beneficial effects of the above technology are: a corresponding control instruction is determined based on the most recently determined working mode and the latest working state of the intelligent LED display screen, realizing intelligent adjustment of the working state of the intelligent LED display screen.
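The decision logic of the state control end can be sketched as a small function; the instruction names are illustrative:

```python
def control_instruction(current_on: bool, mode_on: bool) -> str:
    """Mirror the first judgment module: if the screen is off and the decided
    working mode is also off, hold the current state; if it is off but the
    mode wants it on, send a start instruction; otherwise defer to parameter
    adjustment by the first control module."""
    if not current_on:
        return "hold" if not mode_on else "start"
    return "adjust"
```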
Example 10:
on the basis of embodiment 1, the first control module, with reference to fig. 10, includes:
the parameter determining unit is used for determining a corresponding first working parameter based on the current working state when the current working state is not closed, and determining a corresponding second working parameter based on the working mode;
the instruction generating unit is used for generating a corresponding parameter adjusting instruction based on the difference value of the first working parameter and the second working parameter;
and the state adjusting unit is used for adjusting the working state of the intelligent LED display screen based on the parameter adjusting instruction.
In this embodiment, the first working parameter is a working parameter corresponding to the current working state of the intelligent LED display screen, for example: display luminance and display chromaticity.
In this embodiment, the second operating parameter is the operating parameter corresponding to the newly determined operating mode.
In this embodiment, the parameter adjusting instruction is a corresponding instruction for adjusting the operating parameter of the intelligent LED display screen, which is generated based on a parameter difference determined by a difference between the first operating parameter and the second operating parameter.
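Generating a parameter adjustment instruction from the difference between the two working parameter sets can be sketched as follows; the parameter names are illustrative:

```python
def parameter_adjust_instruction(first: dict, second: dict) -> dict:
    """Generate a parameter adjustment instruction as the signed difference
    between the target (mode) parameters and the current ones."""
    return {k: second[k] - first[k] for k in first}

instr = parameter_adjust_instruction(
    {"brightness": 60, "chroma": 40},   # first working parameters (current)
    {"brightness": 45, "chroma": 50},   # second working parameters (mode)
)
```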
The beneficial effects of the above technology are: a corresponding parameter adjustment instruction is determined based on the current state of the intelligent LED display screen and the most recently determined working mode, realizing parameter adjustment of the intelligent LED display screen.
It will be apparent to those skilled in the art that various changes and modifications may be made in the present invention without departing from the spirit and scope of the invention. Thus, if such modifications and variations of the present invention fall within the scope of the claims of the present invention and their equivalents, the present invention is also intended to include such modifications and variations.
Claims (10)
1. An intelligent LED display screen system based on multi-sensor data fusion, characterized in that it comprises:
the data acquisition terminal is used for acquiring a plurality of sensing data in real time based on a plurality of data acquisition terminals, and the plurality of sensing data comprise light sensing data and video data;
the fusion determining end is used for determining a corresponding working mode based on the multiple sensing data, a preset decision mode and a user control instruction;
and the state control end is used for controlling the working state of the intelligent LED display screen based on the working mode.
2. The intelligent LED display screen system based on multi-sensor data fusion of claim 1, wherein the data sensing terminal comprises:
the video acquisition module is used for acquiring a monitoring video in a preset range in real time based on a camera arranged on the intelligent LED display screen;
the illumination acquisition module is used for acquiring light sensation data in real time based on a light sensor arranged on the intelligent LED display screen;
and the state monitoring module is used for judging whether the data sensing end has a fault or not based on the monitoring video and the light sensation data, obtaining a judgment result and sending a corresponding fault instruction based on the judgment result.
3. The intelligent LED display screen system based on multi-sensor data fusion as claimed in claim 2, wherein the status monitoring module comprises:
the curve fitting unit is used for fitting and obtaining a corresponding light sensation data dynamic curve based on the light sensation data obtained in real time;
the data fusion unit is used for aligning and fusing the light sensation dynamic curve and the monitoring video to obtain corresponding total dynamic data;
the first monitoring unit is used for judging whether the monitoring video in the total dynamic data is paused; if so, a first fault instruction is sent out, and otherwise, the corresponding judgment result is kept;
the second monitoring unit is used for judging whether a light sensation data dynamic curve in the total dynamic data is broken or not, if so, a second fault instruction is sent out, and otherwise, a corresponding judgment result is reserved;
wherein the fault instruction comprises: a first fault instruction and a second fault instruction.
4. The intelligent LED display screen system based on multi-sensor data fusion as claimed in claim 1, wherein the fusion determination end comprises:
the first determining module is used for determining a corresponding working mode based on the user control instruction when the user control instruction is received;
and the second determining module is used for determining a corresponding working mode based on the multiple sensing data and the preset decision mode when the user control instruction is not received.
5. The intelligent LED display screen system based on multi-sensor data fusion of claim 4, wherein the first determination module comprises:
the first analysis unit is used for analyzing the user control instruction to obtain a corresponding control parameter when receiving the user control instruction;
and the first determining unit is used for determining the corresponding working mode based on the control parameter.
6. The intelligent LED display screen system based on multi-sensor data fusion as claimed in claim 4, wherein the second determination module comprises:
the first analysis unit is used for carrying out primary analysis on the monitoring video in a preset period to obtain a corresponding primary analysis result;
the state judging unit is used for judging to start the intelligent LED display screen when the primary analysis result is that a pre-stored user exists in a preset range, and otherwise, judging not to start the intelligent LED display screen;
the second analysis unit is used for carrying out secondary analysis on the monitoring video in a preset period to obtain a corresponding secondary analysis result when the intelligent LED display screen is judged to be started;
the second determining unit is used for determining a display mode of the intelligent LED display screen based on the light sensation data and the secondary analysis result, and taking the display mode as a working mode of the intelligent LED display screen;
and the third determining unit is used for turning off the intelligent LED display screen as a corresponding working mode when the intelligent LED display screen is judged not to be started.
7. The intelligent LED display screen system based on multi-sensor data fusion of claim 6, wherein the first analysis unit comprises:
the image comparison subunit is used for comparing each frame of video frame contained in the monitoring video in a preset period with a prestored background image and determining a difference image corresponding to each frame of video frame;
a preliminary screening subunit, configured to screen a human body image from the difference image based on a preliminary screening method;
the face identification subunit is used for identifying the human body image based on a machine learning self-adaptive algorithm to obtain a corresponding face area;
a reference point determining subunit, configured to determine a corresponding reference point in the face image based on a preset determination method;
the image cutting subunit is used for cutting the face image based on the reference point to obtain a corresponding complete face area;
the image standardization subunit is used for carrying out standardization processing on the complete face area based on the distance between the reference points to obtain a corresponding standard-size face area and a corresponding standard-size face area set;
the image dividing subunit is used for dividing the standard-size face area into sub-areas with preset number;
the image sampling subunit is used for sequentially performing sliding window sampling on the sub-regions in the standard-size face region to obtain corresponding sampling data;
the degree determining subunit is configured to determine, based on the sampling data, whether symmetric sub-regions corresponding to the sub-regions are included in the standard-size face region, and determine, based on the total number of sub-regions in which corresponding symmetric sub-regions exist in the standard-size face region, a forward degree of a corresponding face;
the mean value determining subunit is used for screening out a standard size face region corresponding to the maximum face forward degree from the standard size face region set to serve as an image to be corrected, and calculating a corresponding visual mean value based on visual data corresponding to each pixel point in the image to be corrected;
a first region determining subunit, configured to take the region formed by pixel points whose visual data in the image to be corrected is greater than the visual mean as the sub-region to be weakened, and the region formed by pixel points whose visual data in the image to be corrected is less than the visual mean as the sub-region to be enhanced;
the data determining subunit is used for determining first reflection data corresponding to the sub-region to be weakened and second reflection data corresponding to the sub-region to be enhanced;
the first normalization subunit is used for reducing the visual data corresponding to each pixel point in the sub-area to be weakened based on the first reflection data and the corresponding visual mean value, and obtaining a corresponding first standard sub-area;
the second normalization subunit is used for increasing the visual data corresponding to each pixel point in the sub-region to be enhanced based on the second reflection data and the corresponding visual mean value to obtain a corresponding second standard sub-region;
the region fusion subunit is configured to fuse the first standard sub-region, the second standard sub-region and the image to be corrected to obtain a corresponding standard face image;
and the first result determining subunit is used for judging whether a pre-stored face image matched with the standard face image exists in a pre-stored user library or not, if so, taking the pre-stored user in a preset range as a primary analysis result, and otherwise, taking the non-stored user in the preset range as the primary analysis result.
8. The intelligent LED display screen system based on multi-sensor data fusion as claimed in claim 6, wherein the second analysis unit comprises:
the edge determining subunit is used for determining an edge line in each frame of video frame in the monitoring video in a preset period based on a preset determining mode when the intelligent LED display screen is judged to be started;
a video frame dividing subunit, configured to divide the corresponding video frame into a plurality of sub-blocks based on the edge lines;
a spatial data determining subunit, configured to determine corresponding spatial data based on the sub-block;
a curve determining subunit, configured to generate a corresponding component histogram distribution curve based on the visual data component corresponding to each pixel point in the sub-block;
a curve fusion subunit, configured to fuse the component histogram distribution curves to obtain a corresponding total histogram distribution curve;
a second region determining subunit, configured to determine a minimum value in the total histogram distribution curve, count a total number of pixels between adjacent minimum values as a corresponding pixel capacity, and use an image region corresponding to the sub-block in a curve segment corresponding to the maximum pixel capacity in the total histogram distribution curve as a corresponding sampling region;
the data calculation subunit is used for calculating corresponding relative illumination data based on the visual data corresponding to the sampling area;
and the second result determining subunit is used for taking the spatial data and the relative illumination data as corresponding secondary analysis results.
9. The intelligent LED display screen system based on multi-sensor data fusion of claim 1, wherein the state control terminal comprises:
the state monitoring module is used for monitoring the current working state of the intelligent LED display screen;
the first judgment module is used for judging whether the current working state is closed, if so, judging whether the working mode is the closing of the intelligent LED display screen, if so, sending a holding instruction, and otherwise, sending a starting instruction;
the first control module is used for controlling the working state of the intelligent LED display screen based on the current working state and the working mode when the current working state is not closed;
and the second control module is used for controlling the working state of the intelligent LED display screen based on the working mode when the starting instruction is received.
10. The intelligent LED display screen system based on multi-sensor data fusion of claim 1, wherein the first control module comprises:
the parameter determining unit is used for determining a corresponding first working parameter based on the current working state when the current working state is not closed, and determining a corresponding second working parameter based on the working mode;
the instruction generating unit is used for generating a corresponding parameter adjusting instruction based on the difference value of the first working parameter and the second working parameter;
and the state adjusting unit is used for adjusting the working state of the intelligent LED display screen based on the parameter adjusting instruction.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202210278615.5A CN114708821B (en) | 2022-03-17 | 2022-03-17 | Intelligent LED display screen system based on multi-sensor data fusion |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202210278615.5A CN114708821B (en) | 2022-03-17 | 2022-03-17 | Intelligent LED display screen system based on multi-sensor data fusion |
Publications (2)
Publication Number | Publication Date |
---|---|
CN114708821A true CN114708821A (en) | 2022-07-05 |
CN114708821B CN114708821B (en) | 2022-10-14 |
Family
ID=82169422
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202210278615.5A Active CN114708821B (en) | 2022-03-17 | 2022-03-17 | Intelligent LED display screen system based on multi-sensor data fusion |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN114708821B (en) |
Cited By (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN115798400A (en) * | 2023-01-09 | 2023-03-14 | 永林电子股份有限公司 | LED display control method and device based on image processing and LED display system |
Citations (12)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN101266273A (en) * | 2008-05-12 | 2008-09-17 | 徐立军 | Multi- sensor system fault self-diagnosis method |
US20100156755A1 (en) * | 2008-12-19 | 2010-06-24 | Ching-Pao Chou | Display Panel Control Device |
CN102421008A (en) * | 2011-12-07 | 2012-04-18 | 浙江捷尚视觉科技有限公司 | Intelligent video quality detecting system |
CN103021370A (en) * | 2012-12-26 | 2013-04-03 | 广东欧珀移动通信有限公司 | System and method for improving anti-interference capability of liquid-crystal display screen |
JP2014182291A (en) * | 2013-03-19 | 2014-09-29 | Canon Inc | Light emission device and method for controlling the same |
CN105185310A (en) * | 2015-10-10 | 2015-12-23 | 西安诺瓦电子科技有限公司 | Display screen brightness adjusting method |
CN107622750A (en) * | 2017-09-11 | 2018-01-23 | 合肥缤赫信息科技有限公司 | A kind of LED display tele-control system |
CN108831357A (en) * | 2018-05-02 | 2018-11-16 | 广州市统云网络科技有限公司 | A kind of LED display working condition automated measurement &control method |
CN108922494A (en) * | 2018-07-20 | 2018-11-30 | 奥克斯空调股份有限公司 | A kind of light sensation module failure detection method, device, display screen and air conditioner |
CN110211528A (en) * | 2019-05-17 | 2019-09-06 | 海纳巨彩(深圳)实业科技有限公司 | A kind of system that LED display display brightness is adjusted |
CN110379355A (en) * | 2019-08-16 | 2019-10-25 | 深圳供电局有限公司 | Control system and control method for large-screen display wall |
CN111479352A (en) * | 2020-04-22 | 2020-07-31 | 聚好看科技股份有限公司 | Display apparatus and illumination control method |
Cited By (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN115798400A (en) * | 2023-01-09 | 2023-03-14 | 永林电子股份有限公司 | LED display control method and device based on image processing and LED display system |
CN115798400B (en) * | 2023-01-09 | 2023-04-18 | 永林电子股份有限公司 | LED display control method and device based on image processing and LED display system |
Also Published As
Publication number | Publication date |
---|---|
CN114708821B | 2022-10-14 |
Similar Documents
Publication | Title |
---|---|
CN110769246B | Method and device for detecting faults of monitoring equipment |
CN105791709A | Automatic exposure processing method and apparatus with backlight compensation |
CN115345802B | Remote monitoring method for operation state of electromechanical equipment |
CN103702111B | Method for detecting camera video color cast |
US11615166B2 | System and method for classifying image data |
CN107103330A | LED status recognition method and device |
CN114708821B | Intelligent LED display screen system based on multi-sensor data fusion |
CN115065798B | Big-data-based video analysis monitoring system |
US20210204123A1 | Iris recognition workflow |
CN106941588B | Data processing method and electronic equipment |
CN112995510B | Method and system for detecting ambient light of security monitoring camera |
CN111355902A | Method for acquiring in-vehicle images by using camera, and vehicle-mounted monitoring camera |
CN111800294B | Gateway fault diagnosis method and device, network equipment and storage medium |
CN113031386A | Method, apparatus, device and medium for detecting abnormality of dual-filter switcher |
CN115602099A | Display screen display adjusting method and system, computer equipment and storage medium |
CN112770021A | Camera and filter switching method |
CN211509165U | Vehicle-mounted monitoring camera |
CN111145219B | Efficient video moving-target detection method based on the Codebook principle |
CN111351078B | Lampblack identification method for a range hood, and range hood |
CN112084902B | Face image acquisition method and device, electronic equipment and storage medium |
CN112308814A | Method and system for automatically identifying the on-off position state of a power system disconnecting link |
CN115499692B | Digital television intelligent control method and system based on image processing |
CN111908289B | Method, device and equipment for detecting illumination in an elevator car, and storage medium |
CN118042271B | Control system based on image sensor |
CN117651355B | Light display control method, system and storage medium for a COB lamp strip |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||