CN112738325A - Intelligent LED identification method based on Android mobile phone - Google Patents
- Publication number
- CN112738325A CN112738325A CN202011563555.9A CN202011563555A CN112738325A CN 112738325 A CN112738325 A CN 112738325A CN 202011563555 A CN202011563555 A CN 202011563555A CN 112738325 A CN112738325 A CN 112738325A
- Authority
- CN
- China
- Prior art keywords
- led
- array
- coordinates
- mobile phone
- leds
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Granted
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V20/00—Scenes; Scene-specific elements
- G06V20/40—Scenes; Scene-specific elements in video content
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V20/00—Scenes; Scene-specific elements
- G06V20/40—Scenes; Scene-specific elements in video content
- G06V20/41—Higher-level, semantic clustering, classification or understanding of video scenes, e.g. detection, labelling or Markovian modelling of sport events or news items
Landscapes
- Engineering & Computer Science (AREA)
- Physics & Mathematics (AREA)
- General Physics & Mathematics (AREA)
- Multimedia (AREA)
- Theoretical Computer Science (AREA)
- Computational Linguistics (AREA)
- Software Systems (AREA)
- Telephone Function (AREA)
Abstract
An intelligent LED identification method based on an Android mobile phone comprises the following steps. Step 1: initialize the LED lamp group with a random binary coding algorithm. Step 2: shoot a video with the mobile phone camera. Step 3: capture a frame from the video every two seconds to obtain a picture set P. Step 4: analyze each picture Pi with a YOLOv4 model and identify the position coordinates and on-off states of the LEDs in the picture. Step 5: from each round of identification results, recover the LED numbers at the corresponding coordinates. Step 6: the mobile phone front end draws corresponding light spots on its interface from the returned coordinates x and y to achieve visualization. Step 7: after the user finishes editing and taps send, the edited data is parsed into a data packet format the terminal can receive. Based on a neural network model trained on a large amount of data and a well-tuned optimization algorithm, the method identifies LED lamp groups in multiple scenes.
Description
Technical Field
The invention relates to the field of intelligent LED lighting, and in particular to a method, based on video processing, for identifying LED coordinates and serial numbers and editing them on a mobile terminal.
Background
With the rollout of new 5G business models, the smart-device industry has a clear direction of development and is in a period of transformation. Smart lighting, once a standalone product category, is seeing explosive growth, and products are shifting from single devices to interconnected multi-product systems. This cross-domain "intelligence+" thinking can accelerate the lighting industry's upgrade toward high-end, intelligent, green, and service-oriented products from multiple directions and drive its high-quality development; "intelligence + LED" is one of the branches with the deepest application prospects.
With the development of cities and the continuous improvement of people's quality of life, urban night scenes create a growing demand for intelligent LED displays, and consumers show strong interest in adopting them; yet few products on the market today offer intelligent control of small decorative lamps. The invention was designed and developed against this background.
Disclosure of Invention
To overcome the shortcomings of the prior art, the invention provides a method that identifies the coordinates and serial numbers of LEDs from video shot with a mobile phone and allows them to be edited on the mobile terminal, meeting users' demand for smarter and more efficient LED lamps.
In order to solve the technical problems, the invention provides the following technical scheme:
an intelligent LED identification method based on an Android mobile phone comprises the following steps:
step 1: initializing the LED lamp bank by adopting a random binary coding algorithm;
step 2: shooting through a mobile phone camera to obtain a video;
step 3: capturing a frame from the video every two seconds to obtain a picture set P = {Pi | i = 1, 2, …, nP};
step 4: analyzing each picture Pi using the YOLOv4 model to identify the position coordinates and on-off states of the LEDs in the picture, the process being as follows:
step 4.1: forward computation of the model converts the pixel matrix into target vectors. The model performs multi-target detection: when all LEDs are lit in the first frame, every LED target in the picture can be identified. Their number is M, and each target contains five parameters: {x, y, w, h, p};
step 4.2: among the five parameters, p is the confidence. The M LEDs are traversed and their confidences examined; when p ≤ 0.3 the LED is discarded, and K LEDs remain after this round of screening;
step 4.3: an integer array C[K] is created with every element initialized to 0, to store the LED on-off states after each recognition; a structure LED is also created with fields {m, x, y, Ci}, storing the coordinates of the identified LED and the binary code formed after initialization, where m is the number of the LED;
step 4.4: an anti-shake algorithm is used so the LEDs are located accurately. The reference frame and the current frame of the input video are preprocessed and their row and column gray projections computed; cross-correlating the projection data yields the horizontal and vertical shake components proGk(i) and proGk(j). The image is divided into macro blocks; within each macro block the shake component serves as the offset of the initial point of block-matching motion estimation, SAD values are computed using SDSP as the search template, and cross-correlation locates the minimum point, giving the best matching point. The displacement between the reference point in the reference frame and the best matching point is the macro block's block motion vector. The block motion vector of each macro block is computed in this way, and the most frequently occurring one is taken as the global motion vector between the reference frame and the current frame, yielding a stable video sequence and hence accurately positioned LED coordinates;
step 4.5: when a new round of detection results is stored, a detected target Cj means the LED is lit: 1 is added to Cj, representing a binary 1 for that bit (light-on state). An undetected LED receives no addition, representing a binary 0 for that bit (light-off state);
step 5: from each round of identification results, recovering the LED numbers at the corresponding coordinates, the process being as follows:
step 5.1: after the LEDs have flashed on or off Y times, the model has detected the Y-bit 0/1 state and stored it in array C; the corresponding index A[i] is then looked up according to the value stored in C[i];
step 5.2: once found, the value stored in the corresponding A[i] is assigned to the integer member m of the structure LED, i.e. the serial number of the LED, and the LED structure data is sent to the front end;
step 6: the mobile phone front end draws corresponding light spots on the mobile phone interface from the returned coordinates x and y, achieving visualization;
step 7: after the user finishes editing and taps send, the edited data is parsed into a data packet format the terminal can receive, so that the edit takes visible effect.
Further, the process of step 1 is as follows:
step 1.1: a one-dimensional integer array A[2N] is created according to the number N of LEDs, with every element initialized to 0. The index represents a binary code; the number of binary digits is determined by an integer variable Y, whose value is obtained from the formula 2^Y ≥ N ≥ 2^(Y−1);
step 1.2: random binary coding. All LEDs are first coded sequentially: N is traversed and the numbers 1 to N are stored into A[2N] in order. During this first sequential pass, code values whose binary digits form long runs of 0s or 1s must be skipped;
step 1.3: a fast random scrambling algorithm shuffles all the LED codes. The procedure is: obtain the length L of the original array A to be scrambled and set i = 0; generate a random number r in the range 0 to L; take the element at position r of array A as the i-th element of a new array B, move the L-th element of A to position r, subtract 1 from L and add 1 to i; repeat this random selection until every element of A has been copied into B, which is the desired scrambled array. At this point, the binary number formed by an index of array B represents the flash on/off instruction of an LED lamp, and the value stored at that index is the lamp's number;
step 1.4: the flash rules of the N small colored lamps are transmitted back to the terminal using two-layer packaging: the LED number, color, maintenance period and other information returned by the function are first parsed into concrete RGB data for each lamp, the concrete data are then packaged into a data packet the lamps can accept, and the packet is finally sent to the terminal. The LEDs then change color once every two seconds. Before initialization, the first frame is set to all LEDs lit, so that every small colored lamp within the lens can be identified;
step 1.5: for the flash color, violet light is used because it has the shortest wavelength and the smallest diffused halo, which improves identification precision; during training, violet brightness values within 0–255 are used at random so the model adapts to identification in different application scenes.
The beneficial effects of the invention are: based on a neural network model trained on a large amount of data and a well-tuned optimization algorithm, the method can identify LED lamp groups in multiple scenes.
Drawings
FIG. 1 is an overall flow chart of the present invention.
Detailed Description
A specific embodiment of the method for identifying LED coordinates and numbers based on video processing and editing them on the mobile terminal is described in detail below with reference to the drawing.
Referring to fig. 1, an intelligent LED identification method based on an Android mobile phone includes the following steps:
step 1: initializing the LED lamp group with a random binary coding algorithm, the process being as follows:
step 1.1: a one-dimensional integer array A[2N] is created according to the number N of LEDs, with every element initialized to 0. The index represents a binary code; the number of binary digits is determined by an integer variable Y, whose value is obtained from the formula 2^Y ≥ N ≥ 2^(Y−1);
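As a quick illustration of step 1.1, the coding length Y can be derived from N by the relation 2^Y ≥ N ≥ 2^(Y−1). A minimal Python sketch (the function name is ours, not the patent's):

```python
def code_length(n_leds):
    # smallest Y with 2**Y >= N, matching 2**Y >= N >= 2**(Y-1)
    y = 1
    while (1 << y) < n_leds:
        y += 1
    return y
```

For example, 500 lamps need Y = 9 flash rounds, since 2^9 = 512 ≥ 500 ≥ 2^8 = 256.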
step 1.2: random binary coding. All LEDs are first coded sequentially: N is traversed and the numbers 1 to N are stored into A[2N] in order. During this first sequential pass, code values whose binary digits form long runs of 0s or 1s must be skipped; this prevents the random scramble from producing frames in which most LEDs are unlit or lit at once, which would make the scene too dark or too bright and harm subsequent identification;
step 1.3: a fast random scrambling algorithm shuffles all the LED codes. The procedure is: obtain the length L of the original array A to be scrambled and set i = 0; generate a random number r in the range 0 to L; take the element at position r of array A as the i-th element of a new array B, move the L-th element of A to position r, subtract 1 from L and add 1 to i; repeat this random selection until every element of A has been copied into B, which is the desired scrambled array. The whole algorithm is stable and fast, with O(n) time and space complexity, avoiding the unstable user experience an unreliable shuffle would cause. At this point, the binary number formed by an index of array B represents the flash on/off instruction of an LED lamp, and the value stored at that index is the lamp's number;
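The scrambling procedure of step 1.3 can be sketched as follows. This is our reading of the described algorithm (pick a random slot, emit its element, backfill with the last element), not code from the patent; the patent's "range 0 to L" is taken as the valid indices 0..L−1:

```python
import random

def scramble(a):
    # pick a random slot, emit its element into B, backfill the slot
    # with the current last element, then shrink the active range
    a = list(a)                         # working copy of array A
    b = []                              # new scrambled array B
    length = len(a)                     # L in the patent's description
    while length > 0:
        r = random.randrange(length)    # random index in 0..L-1
        b.append(a[r])                  # A[r] becomes the next element of B
        a[r] = a[length - 1]            # move the L-th element to position r
        length -= 1
    return b
```

Each step does O(1) work per emitted element, giving the O(n) time and space behavior the text claims; the "swap with last" backfill is what keeps the remaining candidates contiguous without any shifting.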
step 1.4: the flash rules of the N small colored lamps are transmitted back to the terminal using two-layer packaging: the LED number, color, maintenance period and other information returned by the function are first parsed into concrete RGB data for each lamp, the concrete data are then packaged into a data packet the lamps can accept, and the packet is finally sent to the terminal. The LEDs then change color once every two seconds. Before initialization, the first frame is set to all LEDs lit, so that every small colored lamp within the lens can be identified;
step 1.5: for the flash color, violet light is used because it has the shortest wavelength and the smallest diffused halo, which improves identification precision; during training, violet brightness values within 0–255 are used at random so the model adapts to identification in different application scenes;
step 2: shooting through a mobile phone camera to obtain a video;
step 3: intercepting a frame from the video every two seconds to obtain a picture set P = {Pi | i = 1, 2, …, nP};
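The two-second sampling of step 3 reduces, for a camera running at a given frame rate, to keeping every (fps × 2)-th frame. A small sketch of the index arithmetic only (names are ours; actual frame decoding would go through the Android camera or media APIs):

```python
def sample_indices(total_frames, fps, period_s=2.0):
    # indices of the frames kept from the video: one every period_s seconds
    step = max(1, int(round(fps * period_s)))
    return list(range(0, total_frames, step))
```

A 10-second clip at 30 fps thus yields nP = 5 pictures, at frames 0, 60, 120, 180 and 240.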
step 4: analyzing each picture Pi using the YOLOv4 model to identify the position coordinates and on-off states of the LEDs in the picture, the process being as follows:
step 4.1: forward computation of the model converts the pixel matrix into target vectors. The model performs multi-target detection: when all LEDs are lit in the first frame, every LED target in the picture can be identified. Their number is M, and each target contains five parameters: {x, y, w, h, p};
step 4.2: among the five parameters, p is the confidence. The M LEDs are traversed and their confidences examined; when p ≤ 0.3 the LED is discarded, and K LEDs remain after this round of screening;
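The confidence screening of step 4.2 is a one-line filter over the {x, y, w, h, p} targets; a minimal sketch (the tuple layout is our assumption about how the detections are held in memory):

```python
def filter_detections(targets, threshold=0.3):
    # each target is an (x, y, w, h, p) tuple as in step 4.1;
    # keep only detections whose confidence p exceeds the threshold
    return [t for t in targets if t[4] > threshold]
```

The surviving count is the K used to size the array C[K] in step 4.3.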
step 4.3: an integer array C[K] is created with every element initialized to 0, to store the LED on-off states after each recognition; a structure LED is also created with fields {m, x, y, Ci}, storing the coordinates of the identified LED and the binary code formed after initialization, where m is the number of the LED;
step 4.4: an anti-shake algorithm is used so the LEDs are located accurately. The reference frame and the current frame of the input video are preprocessed and their row and column gray projections computed; cross-correlating the projection data yields the horizontal and vertical shake components proGk(i) and proGk(j). The image is divided into macro blocks; within each macro block the shake component serves as the offset of the initial point of block-matching motion estimation, SAD values are computed using SDSP as the search template, and cross-correlation locates the minimum point, giving the best matching point. The displacement between the reference point in the reference frame and the best matching point is the macro block's block motion vector. The block motion vector of each macro block is computed in this way, and the most frequently occurring one is taken as the global motion vector between the reference frame and the current frame, yielding a stable video sequence and hence accurately positioned LED coordinates;
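Of the anti-shake pipeline in step 4.4, the gray-projection jitter estimate is the simplest part to illustrate. The sketch below aligns the row (or column) projections of two frames by minimising the sum of absolute differences over a small shift range; the patent uses cross-correlation instead, and the SDSP/SAD block matching and global-motion-vector voting are omitted here:

```python
import numpy as np

def projections(frame):
    # row and column gray projections of a grayscale frame
    return frame.mean(axis=1), frame.mean(axis=0)

def best_shift(a, b, max_shift=10):
    # shift s that best aligns projection b with projection a,
    # scored by mean absolute difference over the overlapping range
    n = len(a)
    best, best_err = 0, float("inf")
    for s in range(-max_shift, max_shift + 1):
        lo, hi = max(0, s), min(n, n + s)
        err = np.abs(a[lo:hi] - b[lo - s:hi - s]).mean()
        if err < best_err:
            best, best_err = s, err
    return best
```

Applied to the row projections this gives the vertical shake component, and to the column projections the horizontal one; in the full method these components only seed the per-macro-block motion search.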
step 4.5: when a new round of detection results is stored, a detected target Cj means the LED is lit: 1 is added to Cj, representing a binary 1 for that bit (light-on state). An undetected LED receives no addition, representing a binary 0 for that bit (light-off state);
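One plausible reading of the bit bookkeeping in step 4.5 is shift-then-set: each round, every LED's accumulated code is shifted left and the low bit is set if the LED was detected lit. The patent's wording ("add 1") leaves the exact arithmetic open, so this is an assumption that makes the bits line up as a binary code:

```python
def record_round(C, lit_indices):
    # append this round's on/off bit to every LED's accumulated code:
    # shift the code left, then set the low bit for LEDs detected lit
    for j in range(len(C)):
        C[j] = (C[j] << 1) | (1 if j in lit_indices else 0)
    return C
```

After Y rounds each C[j] holds the LED's full Y-bit flash pattern, ready for the lookup in step 5.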
step 5: from each round of identification results, recovering the LED numbers at the corresponding coordinates, the process being as follows:
step 5.1: after the LEDs have flashed on or off Y times, the model has detected the Y-bit 0/1 state and stored it in array C; the corresponding index A[i] is then looked up according to the value stored in C[i];
step 5.2: once found, the value stored in the corresponding A[i] is assigned to the integer member m of the structure LED, i.e. the serial number of the LED, and the LED structure data is sent to the front end;
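Steps 5.1–5.2 then amount to indexing the scrambled table A of step 1.3 with each LED's observed code; a minimal sketch, with a hypothetical 3-bit code table standing in for the real one:

```python
def recover_numbers(C, A):
    # A maps a Y-bit binary code (its index) to a lamp number (step 1.3);
    # each LED's observed code C[i] indexes straight into A to give m
    return [A[c] for c in C]
```

The returned numbers are what get written into the m field of each LED structure before it is sent to the front end.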
step 6: the mobile phone front end draws corresponding light spots on the mobile phone interface from the returned coordinates x and y, achieving visualization;
step 7: after the user finishes editing and taps send, the edited data is parsed into a data packet format the terminal can receive, so that the edit takes visible effect.
The embodiments described in this specification merely illustrate the inventive concept. The scope of the invention should not be construed as limited to the particular forms set forth in the embodiments; it also covers equivalent technical means that those skilled in the art can conceive based on the inventive concept.
Claims (2)
1. An intelligent LED identification method based on an Android mobile phone is characterized by comprising the following steps:
step 1: initializing the LED lamp bank by adopting a random binary coding algorithm;
step 2: shooting through a mobile phone camera to obtain a video;
step 3: capturing a frame from the video every two seconds to obtain a picture set P = {Pi | i = 1, 2, …, nP};
step 4: analyzing each picture Pi using the YOLOv4 model to identify the position coordinates and on-off states of the LEDs in the picture, the process being as follows:
step 4.1: forward computation of the model converts the pixel matrix into target vectors. The model performs multi-target detection: when all LEDs are lit in the first frame, every LED target in the picture can be identified. Their number is M, and each target contains five parameters: {x, y, w, h, p};
step 4.2: among the five parameters, p is the confidence. The M LEDs are traversed and their confidences examined; when p ≤ 0.3 the LED is discarded, and K LEDs remain after this round of screening;
step 4.3: an integer array C[K] is created with every element initialized to 0, to store the LED on-off states after each recognition; a structure LED is also created with fields {m, x, y, Ci}, storing the coordinates of the identified LED and the binary code formed after initialization, where m is the number of the LED;
step 4.4: an anti-shake algorithm is used so the LEDs are located accurately. The reference frame and the current frame of the input video are preprocessed and their row and column gray projections computed; cross-correlating the projection data yields the horizontal and vertical shake components proGk(i) and proGk(j). The image is divided into macro blocks; within each macro block the shake component serves as the offset of the initial point of block-matching motion estimation, SAD values are computed using SDSP as the search template, and cross-correlation locates the minimum point, giving the best matching point. The displacement between the reference point in the reference frame and the best matching point is the macro block's block motion vector. The block motion vector of each macro block is computed in this way, and the most frequently occurring one is taken as the global motion vector between the reference frame and the current frame, yielding a stable video sequence and hence accurately positioned LED coordinates;
step 4.5: when a new round of detection results is stored, a detected target Cj means the LED is lit: 1 is added to Cj, representing a binary 1 for that bit (light-on state). An undetected LED receives no addition, representing a binary 0 for that bit (light-off state);
step 5: from each round of identification results, recovering the LED numbers at the corresponding coordinates, the process being as follows:
step 5.1: after the LEDs have flashed on or off Y times, the model has detected the Y-bit 0/1 state and stored it in array C; the corresponding index A[i] is then looked up according to the value stored in C[i];
step 5.2: once found, the value stored in the corresponding A[i] is assigned to the integer member m of the structure LED, i.e. the serial number of the LED, and the LED structure data is sent to the front end;
step 6: the mobile phone front end draws corresponding light spots on the mobile phone interface from the returned coordinates x and y, achieving visualization;
step 7: after the user finishes editing and taps send, the edited data is parsed into a data packet format the terminal can receive, so that the edit takes visible effect.
2. The Android-mobile-phone-based intelligent LED identification method of claim 1, wherein the process of step 1 is as follows:
step 1.1: a one-dimensional integer array A[2N] is created according to the number N of LEDs, with every element initialized to 0. The index represents a binary code; the number of binary digits is determined by an integer variable Y, whose value is obtained from the formula 2^Y ≥ N ≥ 2^(Y−1);
step 1.2: random binary coding. All LEDs are first coded sequentially: N is traversed and the numbers 1 to N are stored into A[2N] in order. During this first sequential pass, code values whose binary digits form long runs of 0s or 1s must be skipped;
step 1.3: a fast random scrambling algorithm shuffles all the LED codes. The procedure is: obtain the length L of the original array A to be scrambled and set i = 0; generate a random number r in the range 0 to L; take the element at position r of array A as the i-th element of a new array B, move the L-th element of A to position r, subtract 1 from L and add 1 to i; repeat this random selection until every element of A has been copied into B, which is the desired scrambled array. At this point, the binary number formed by an index of array B represents the flash on/off instruction of an LED lamp, and the value stored at that index is the lamp's number;
step 1.4: the flash rules of the N small colored lamps are transmitted back to the terminal using two-layer packaging: the LED number, color, maintenance period and other information returned by the function are first parsed into concrete RGB data for each lamp, the concrete data are then packaged into a data packet the lamps can accept, and the packet is finally sent to the terminal. The LEDs then change color once every two seconds. Before initialization, the first frame is set to all LEDs lit, so that every small colored lamp within the lens can be identified;
step 1.5: for the flash color, violet light is used because it has the shortest wavelength and the smallest diffused halo, which improves identification precision; during training, violet brightness values within 0–255 are used at random so the model adapts to identification in different application scenes.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202011563555.9A CN112738325B (en) | 2020-12-25 | 2020-12-25 | Intelligent LED identification method based on Android mobile phone |
Publications (2)
Publication Number | Publication Date |
---|---|
CN112738325A true CN112738325A (en) | 2021-04-30 |
CN112738325B CN112738325B (en) | 2021-11-23 |
Family
ID=75616284
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202011563555.9A Active CN112738325B (en) | 2020-12-25 | 2020-12-25 | Intelligent LED identification method based on Android mobile phone |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN112738325B (en) |
Citations (8)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20140300814A1 (en) * | 2011-12-16 | 2014-10-09 | Guillaume Lemoine | Method for real-time processing of a video sequence on mobile terminals |
CN106686333A (en) * | 2016-11-02 | 2017-05-17 | 四川秘无痕信息安全技术有限责任公司 | Method for producing video added watermarks for Android equipment |
US20170182425A1 (en) * | 2015-12-27 | 2017-06-29 | Liwei Xu | Screen Coding Methods And Camera Based Game Controller For Video Shoot Game |
CN107071421A (en) * | 2017-05-23 | 2017-08-18 | 北京理工大学 | A kind of method for video coding of combination video stabilization |
CN107094048A (en) * | 2017-03-23 | 2017-08-25 | 深圳市科迈爱康科技有限公司 | Information Conduction method based on visible ray, conducting system |
CN109784119A (en) * | 2018-12-11 | 2019-05-21 | 田丰 | Optical code generating means, decoding apparatus, coding, coding/decoding method and system |
CN109902661A (en) * | 2019-03-18 | 2019-06-18 | 北京联诚智胜信息技术股份有限公司 | Intelligent identification Method based on video, picture |
CN111103579A (en) * | 2020-01-15 | 2020-05-05 | 长安大学 | Visible light indoor positioning system and method based on mobile phone camera |
Cited By (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN113554155A (en) * | 2021-07-29 | 2021-10-26 | 杭州电子科技大学 | Neural network circuit based on SDSP and WTA algorithm and Hall strip synapse |
CN113554155B (en) * | 2021-07-29 | 2024-02-23 | 杭州电子科技大学 | Neural network circuit and Hall strip synapse based on SDSP and WTA algorithm |
Also Published As
Publication number | Publication date |
---|---|
CN112738325B (en) | 2021-11-23 |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||