CN115065408B - Multi-priority hierarchical coding method, device and storage medium based on optical camera - Google Patents
- Publication number
- CN115065408B (application CN202210469380.8A / CN202210469380A)
- Authority
- CN
- China
- Prior art keywords
- led
- bit string
- bit
- receiving
- layer
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Active
Classifications
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04B—TRANSMISSION
- H04B10/00—Transmission systems employing electromagnetic waves other than radio-waves, e.g. infrared, visible or ultraviolet light, or employing corpuscular radiation, e.g. quantum communication
- H04B10/11—Arrangements specific to free-space transmission, i.e. transmission through air or vacuum
- H04B10/114—Indoor or close-range type systems
- H04B10/116—Visible light communication
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04L—TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
- H04L1/00—Arrangements for detecting or preventing errors in the information received
- H04L1/004—Arrangements for detecting or preventing errors in the information received by using forward error control
- H04L1/0076—Distributed coding, e.g. network coding, involving channel coding
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N7/00—Television systems
- H04N7/22—Adaptations for optical transmission
Abstract
The invention relates to a multi-priority hierarchical coding method based on an optical camera, which comprises the following steps: step S11: acquiring a state vector, step S12: parity term decomposition, step S13: assigning a receiving bit string, step S14: reassignment at the previous layer, with steps S12 to S14 repeated until decoding table generation is completed, step S15: obtaining an encoding table from the decoding table; step S21: generating a transmission bit string, step S22: encoding according to the encoding table, step S23: adding a frame preamble, step S24: illuminating the LEDs and sending the signal; step S31: shooting a continuous video stream, step S32: space-time synchronization of data frames, step S33: LED channel segmentation, step S34: adaptive camera parameter adjustment, step S35: LED optical channel estimation, step S36: decoding according to the decoding table. Compared with the prior art, the invention improves the overall throughput and the communication distance of the optical camera communication system.
Description
Technical Field
The present invention relates to the field of visible light communication, and in particular, to a multi-priority layered coding method, apparatus and storage medium based on an optical camera.
Background
Visible Light Communication (VLC) will play an important role in future 6G networks because the visible spectrum is extremely rich and requires no licensed spectrum resources. Visible light communication technologies can be further divided into sub-categories according to the type of transmitting and receiving devices; among these, Optical Camera Communication (OCC), which uses light-emitting diodes (LEDs) as the transmitting end and an image sensor as the receiving end, has attracted attention. The optical camera can effectively realize one-to-many broadcast communication, i.e., one transmitting end is allowed to transmit information to a plurality of receiving ends at the same time, which is vital for the coordination of intelligent vehicles or unmanned aerial vehicles. However, communication distance is a major performance bottleneck in optical camera communication scenarios. If the transmitting end (LED array) is sufficiently far from the receiving end (camera), the camera will not be able to distinguish two light sources that are close together on the LED array, resulting in severe inter-channel interference, known as LED aliasing. Especially in a one-to-many communication scenario, if the transmitting end transmits at a data rate matched to the nearest receiving end, distant receiving ends will not be able to receive any data due to LED aliasing; if the transmitting end limits the data rate according to the furthest receiving end, the communication capacity of the near receiving ends is wasted. Thus, LED aliasing greatly limits the communication distance of the optical camera communication system and reduces its overall throughput.
In order to overcome the communication performance bottleneck caused by LED aliasing, researchers have proposed layered coding methods that support dynamically varying channel capacities. Layered coding is a physical-layer coding technique that allows receiving ends at different distances to dynamically adjust their received data rates according to observed channel conditions, so that near receiving ends reach as high a data rate as possible without far receiving ends losing all the data. The document "Experimental on hierarchical transmission scheme for visible light communication using LED traffic light and high-speed camera" uses a two-dimensional wavelet transform to embed data of 3 priority layers into different frequency components of the LED array; this spatial-frequency-based layered coding scheme has a high error rate and places specific requirements on the shape and number of lamps of the LED array, which limits its practical use. The document "Overlay coding for road-to-vehicle visible light communication using LED array and high-speed camera" proposes an overlapped-code encoding method to avoid the limitations of the wavelet transform: data bits are embedded into LED blocks of different sizes, which are then spatially overlapped on the same LED array to generate the final overlapped code, so that a long-distance receiving end can receive the data embedded in large LED blocks. However, because all LED lamps in one LED block transmit the same information, the method suffers from low coding efficiency and wastes channel resources for short-distance receiving ends with strong resolution capability.
Disclosure of Invention
The invention aims to overcome the defects of the prior art and provide a multi-priority hierarchical coding method, a multi-priority hierarchical coding device and a storage medium based on an optical camera, which improve the overall throughput and the communication distance of broadcast transmission.
The aim of the invention can be achieved by the following technical scheme:
an optical camera-based multi-priority hierarchical coding method comprises the following steps:
step S1: a code table and a decoding table are generated,
step S2: the transmitting end encodes and transmits a signal,
step S3: the receiving end receives and decodes the signal;
the generation of the encoding table and the decoding table comprises the following steps:
step S11: obtaining a state vector of an L-th layer receiving end, wherein L is the total layer number of the coding table and is equal to the number of LED lamps,
step S12: the observed states in the obtained state vector are sequentially divided into odd terms and even terms,
step S13: the observation states represented by the odd terms obtained in step S12 are assigned to a first receiving bit string, and the observation states represented by the even terms are assigned to a second receiving bit string, so as to obtain the decoding table of the current layer; the lengths of the first receiving bit string and the second receiving bit string are both L, the bits before the a-th bit are consistent with the receiving bit string corresponding to the upper layer, the bits after the a-th bit are set to X, the a-th bit of the first receiving bit string is set to 0, and the a-th bit of the second receiving bit string is set to 1, where a is the current layer number and X is the discard bit; a bit of 0 indicates that the lamp is on and a bit of 1 indicates that the lamp is off,
step S14: taking the observed states represented by the odd terms and the even terms in the step S13 as the state vectors of the previous layer respectively, repeatedly executing the steps S12 to S14 until the decoding table of the first layer is obtained, indicating that the generation of the decoding table is completed,
step S15: the mapping from the observation state in the decoding table to the receiving bit string is inverted, and meanwhile, the receiving bit string is modified into the sending bit string, and the observation state is modified into the on-off state, so that the encoding table is obtained;
the sending end codes and sends signals comprising the following steps:
step S21: the transmitting end sequentially takes 1 bit from the L priority bit streams to be transmitted in each transmission period to form a transmission bit string DT with the length of L,
step S22: the transmission bit string DT is mapped to the LED on-off state LT according to the encoding table,
step S23: adding frame preamble code for receiving end space-time synchronization and channel estimation, and forming data frame transmitted by transmitting end together with LED on-off state LT,
step S24: controlling the on-off of each LED of the transmitting end according to the data frame, and transmitting the optical signal to the receiving end;
the receiving end receives the signal and decodes the signal, and the method comprises the following steps:
step S31: the receiving end shoots a continuous video stream comprising the transmitting end,
step S32: the spatial synchronization and the time synchronization of the data frames are completed,
step S33: LED channel segmentation,
step S34: adjusting the exposure parameters of the receiving end until the brightness value observed by the camera is 255/L when 1 LED lamp is lighted,
step S35: for each LED channel divided in step S33, measuring the observation brightness level of the current LED channel when the transmitting end lights different numbers of LED lamps according to the bit string used for channel estimation in the frame preamble, determining the discrimination threshold value between different brightness levels, judging the number of the lighted LEDs corresponding to the brightness level of each LED channel according to the discrimination threshold value of the brightness level,
step S36: and obtaining an observation state according to the number of the lighted LEDs, and decoding according to a decoding table to obtain a received bit string.
The L-th layer is the highest layer of the receiving end and comprises L+1 states and a total of 2^L sub-states.
In an observation state, the LED states that are indistinguishable due to spatial aliasing are enclosed in square brackets.
The frame preamble is a predefined 0/1 bit string.
Step S32 specifically comprises: each frame of the video stream is divided into pixel blocks s of equal size, and a pixel-block brightness signal s(t) — the time sequence of the brightness of all pixels in each block — is calculated. The cross-correlation signal r(t) of the pixel-block brightness signal s(t) with the frame preamble p(t) is computed, and the x pixel blocks with the largest cross-correlation values are found and re-spliced into a candidate pixel block s'; these blocks indicate the rough position of the LED array in the picture, completing the spatial synchronization of the data frames. Then the brightness signal s'(t) of the candidate pixel block s' is recalculated; the time at which the cross-correlation signal r'(t) of s'(t) with the frame preamble p(t) reaches its maximum is the start time of the data frame, completing the time synchronization of the data frames.
Step S33 specifically comprises: the receiving end obtains, via step S32, two consecutive frames in which the LEDs flash, and computes their difference; gamma correction and Gaussian blur are applied to the resulting difference image to suppress noise, and Otsu binarization then separates the light-source pixels from the background pixels, finally yielding the pixel block of each LED channel and realizing pixel-level segmentation of the LED channels.
The transmitting end is an LED array.
The receiving end is a camera.
An optical camera-based multi-priority hierarchical coding apparatus comprising a memory, a processor, and a program stored in the memory, the processor implementing the method as described above when executing the program.
A storage medium having stored thereon a program which when executed performs a method as described above.
Compared with the prior art, the invention has the following beneficial effects:
(1) Compared with the existing layered coding scheme based on multi-size LED blocks (in which LED light sources do not flash independently), the invention effectively improves the utilization of the physical-layer coding space, avoids sacrificing the channel capacity of close-range receiving ends, and thereby improves the overall throughput of one-to-many broadcast transmission by 67%.
(2) The invention carries out deep modeling analysis on the LED aliasing problem in the remote optical camera communication, constructs the observation state space of the receiving end, designs an iterative symmetric coding code table generation algorithm, and has the advantages of simple operation and low error rate compared with the existing layered coding method.
(3) The invention can realize multi-priority broadcast transmission by changing the on-off mode of the LED lamp without modifying the hardware of the existing transmitting end (LED array), and the receiving end can be any mobile electronic equipment provided with an optical camera, thereby having wide applicability.
(4) The invention designs a unique data frame structure for the transmitting end, realizes unified frame synchronization and channel estimation of multiple receiving ends under the condition of low communication overhead, and ensures high signal-to-noise ratio and low error rate in one-to-many communication.
(5) The invention improves the communication distance of the optical camera, so that a long-distance receiving end does not lose all transmission data due to LED aliasing (inter-channel interference).
Drawings
FIG. 1 is a flow chart of the present invention;
FIG. 2 is a schematic diagram of an application scenario of the present invention;
fig. 3 is an example of a data transmission flow chart of the present invention;
fig. 4 is an example of the various concepts and definitions (L=4) involved in the encoding table and decoding table generation process of the present invention;
FIG. 5 is an example of state vector reassignment (reassignment of layer 4 odd items at layer 3) of the present invention;
FIG. 6 is an example of state vector reassignment (reassignment of even items at layer 4 at layer 3) of the present invention;
fig. 7 is an example of the decoding table (l=4) of the present invention.
Detailed Description
The invention will now be described in detail with reference to the drawings and specific examples. The present embodiment is implemented on the premise of the technical scheme of the present invention, and a detailed implementation manner and a specific operation process are given, but the protection scope of the present invention is not limited to the following examples.
An optical camera-based multi-priority hierarchical coding method comprises the following steps:
step S1: a code table and a decoding table are generated,
step S2: the transmitting end encodes and transmits a signal,
step S3: the receiving end receives the signal and decodes the signal.
The method is applied to a one-to-many optical camera communication scene, and multi-priority transmission is realized using physical-layer layered coding. Fig. 2 is a schematic diagram of an application scenario of the present invention. The transmitting end is an LED array. The receiving end is a camera. The broadcast transmitting end (LED array) performs layered coding on the multi-priority bit streams to be transmitted according to a unique encoding table. Note that the transmitting-end LED array in the device of the present invention is non-uniform (the distances between LEDs 4 and 3, 3 and 2, and 2 and 1 in the figures are 1d, 2d and 3d respectively). The receiving ends (cameras) distributed at specific distances (layers) determine their corresponding decoding tables according to the observed LED aliasing states (optical channel states) and dynamically adjust their received data rates.
The generation of the encoding table and the decoding table comprises the following steps:
step S11: obtaining a state vector of an L-th layer receiving end, wherein L is the total layer number of the coding table and is equal to the number of LED lamps,
step S12: the observed states in the obtained state vector are sequentially divided into odd terms and even terms,
step S13: the observation states represented by the odd terms obtained in step S12 are assigned to a first receiving bit string, and the observation states represented by the even terms are assigned to a second receiving bit string, so as to obtain the decoding table of the current layer; the lengths of the first receiving bit string and the second receiving bit string are both L, the bits before the a-th bit are consistent with the receiving bit string corresponding to the upper layer, the bits after the a-th bit are set to X, the a-th bit of the first receiving bit string is set to 0, and the a-th bit of the second receiving bit string is set to 1, where a is the current layer number and X is the discard bit; a bit of 0 indicates that the lamp is on and a bit of 1 indicates that the lamp is off,
step S14: taking the observed states represented by the odd terms and the even terms in the step S13 as the state vectors of the previous layer respectively, repeatedly executing the steps S12 to S14 until the decoding table of the first layer is obtained, indicating that the generation of the decoding table is completed,
step S15: and inverting the mapping from the observation state in the decoding table to the receiving bit string, and simultaneously modifying the receiving bit string into the sending bit string, and modifying the observation state into the on-off state to obtain the encoding table.
The L-th layer is the highest layer of the receiving end and comprises L+1 states and a total of 2^L sub-states.
In this embodiment, the encoding table and the generation process of the decoding table are described by taking l=4 as an example.
Step S11: acquire the state vector sv_L of the layer-L receiving end; each element of the vector corresponds to a possible observation state LR_L of the layer-L receiving end. As shown in fig. 4, in the present invention the number of layers of the receiving end equals the number of lamps of the transmitting-end LED array; with L=4 lamps, the 4th layer is the highest layer of the receiving end. All 4 LEDs observed by the highest-layer receiving end are aliased, so the aliased LEDs observed by the layer-4 receiving end have 5 states (i = 0, 1, ..., 4), whose corresponding number of lighted LEDs varies from 0 to 4; these 5 states correspond to the 5 columns of fig. 4. Each LED state comprises several indistinguishable sub-states; for example, the state in column 3 indicates that 2 of the 4 LEDs are lighted, but due to spatial aliasing the receiving end cannot distinguish which two are lighted, so by permutation and combination this state comprises C(4,2) = 6 sub-states. The sub-states of each state, arranged in binary order from small to large, form the layer-4 state vector sv_4; all layer-4 states and their sub-states constitute the state space of layer 4. The state space of layer 4 contains all 2^4 = 16 possible sub-states of the transmitting end. For example, in a state whose number of lighted LEDs is 1, the bracketed part indicates that the corresponding LED lamps are aliased, and the receiving end cannot judge which lamp is lighted.
Step S12: the state vector sv_L is decomposed into parity terms: the states at odd positions of the vector form the odd terms, and the states at even positions form the even terms.
Step S13: in the layer-L vector sv_L, the observation states corresponding to the odd terms are assigned the receiving bit string 0XXX, and the observation states corresponding to the even terms are assigned the receiving bit string 1XXX, where X is a discard bit, meaning the bit at the corresponding position is discarded at this layer. That is, the 8 sub-states corresponding to the odd terms are assigned 0XXX, and the 8 sub-states corresponding to the even terms are assigned 1XXX. After all observation states LR_L of layer L are mapped to a specific receiving bit string DR_L, the decoding table DeT_L of the L-th layer is obtained.
Step S14: the odd terms and the even terms are taken as the state vectors of the previous layer (i.e. layer 3) respectively, and steps S12 to S14 are executed again to obtain the decoding table DeT_(L-1) of layer L-1.
Taking the odd terms as an example: each of their states contains several sub-states, and these sub-states are subdivided into different states at the previous layer (layer 3). Because the distance is shortened, the layer-3 receiving end observes 1 independent LED lamp and 3 aliased LED lamps, so each layer-4 state is repartitioned into up to 2 states at layer 3, according to whether the independent LED is off or on. The odd items of the repartitioned state vector are assigned the receiving bit string 00XX, and the even items are assigned the receiving bit string 01XX, where the first bit 0 indicates that the corresponding observation state at layer 4 was an odd term. This allocation procedure is shown in fig. 5.
In order to make the encoding process symmetrical, the allocation procedure for the even terms of layer 4 is performed as shown in fig. 6. After the reassignment of both the odd terms and the even terms is completed, the layer-3 decoding table DeT_3 is obtained.
The above is repeated until the layer-1 decoding table DeT_1 is completed, indicating that decoding table generation is finished, as shown in fig. 7.
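The top-layer portion of this procedure (steps S11 to S13 for the highest layer) can be sketched in Python. This is a minimal sketch: the tuple encoding of sub-states and the dictionary layout of the decoding table are illustrative assumptions, not the patent's data structures.

```python
from itertools import product

L = 4  # number of LED lamps = total number of layers

# Step S11: state vector of the layer-L receiving end. All 2^L on-off
# sub-states are grouped into L+1 observation states by the number of
# lighted LEDs; each state lists its sub-states in binary order.
subs = sorted(product((0, 1), repeat=L))
state_vector = [[s for s in subs if sum(s) == k] for k in range(L + 1)]

# Step S12: parity-term decomposition of the state vector.
odd_terms = state_vector[0::2]    # 1st, 3rd, 5th, ... observation states
even_terms = state_vector[1::2]   # 2nd, 4th, ... observation states

# Step S13: odd terms get receiving bit string 0XXX, even terms 1XXX
# (X = discard bit); together this is the layer-L decoding table DeT_L.
decode_L = {}
for state in odd_terms:
    for sub in state:
        decode_L[sub] = '0' + 'X' * (L - 1)
for state in even_terms:
    for sub in state:
        decode_L[sub] = '1' + 'X' * (L - 1)
```

A layer-4 receiver that observes an even number of lighted LEDs thus decodes bit 0, and an odd count decodes bit 1; the lower-layer tables follow by repeating the split on each half (step S14).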
The transmitting end encoding and transmitting a signal comprises the following steps:
step S21: the transmitting end sequentially takes 1 bit from the L priority bit streams to be transmitted in each transmission period to form a transmission bit string DT with the length of L,
step S22: the transmission bit string DT is mapped to the on-off state LT of the LED array according to the encoding table,
step S23: adding a frame preamble for receiving end space-time synchronization and channel estimation, and forming a data frame sent by a sending end together with the LED on-off state LT, wherein the frame preamble is a predefined 0/1 bit string,
step S24: and controlling the on-off of each LED of the transmitting end according to the data frame, and transmitting the optical signal to the receiving end.
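Steps S21 to S23 can be sketched as follows. The encoding table here is a hypothetical 2-LED placeholder and the preamble bits are invented for illustration; only the frame layout (preamble followed by the coded on-off states) follows the text above.

```python
# Hypothetical 2-LED encoding table: transmission bit string DT -> on-off state LT
ENCODE = {'00': (0, 0), '01': (0, 1), '10': (1, 0), '11': (1, 1)}
PREAMBLE = [(1, 1), (0, 0), (1, 0)]   # assumed predefined 0/1 preamble pattern

def build_frame(streams):
    """Steps S21-S23: take 1 bit per priority stream each transmission
    period to form DT, map DT to the LED on-off state LT via the encoding
    table, and prepend the frame preamble used by receiving ends for
    space-time synchronization and channel estimation."""
    payload = []
    for bits in zip(*streams):                 # one transmission period per step
        dt = ''.join(str(b) for b in bits)     # transmission bit string DT
        payload.append(ENCODE[dt])             # LED on-off state LT
    return PREAMBLE + payload                  # the transmitted data frame

frame = build_frame([[1, 0], [0, 0]])          # two priority streams, 2 periods
```

The frame is then played out by switching each LED of the array on or off per period (step S24).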
The receiving end receives the signal and decodes the signal, and the method comprises the following steps:
step S31: the receiving end shoots a continuous video stream comprising the transmitting end,
step S32: the spatial synchronization and the time synchronization of the data frames are completed,
the step S32 is specifically that each frame of the video stream is divided into pixel blocks S with the same size, a pixel block brightness signal S (t) is calculated, the pixel block brightness signal S (t) is a sequence of brightness change of all pixels in each pixel block along with time, a cross-correlation signal r (t) of the pixel block brightness signal S (t) and a frame preamble p (t) is calculated, x pixel blocks with the largest cross-correlation value are searched and are spliced again into candidate pixel blocks S', the pixel blocks indicate rough positions of the LED arrays in the picture, and the spatial synchronization of the data frames is completed; then, the luminance signal s '(t) of the candidate pixel block s' is recalculated, the time corresponding to the time when the cross correlation signal r '(t) of the candidate pixel block luminance signal s' (t) and the frame preamble p (t) reaches the maximum value is the time when the data frame starts, and the time synchronization of the data frame is completed.
Step S33: the LED channel is divided up,
the receiving end obtains two frames of images continuously flashing by the LEDs through the step S32 and carries out difference, gamma correction and Gaussian blur are used for the obtained difference images to inhibit noise, and then the light source pixels and the background pixels are separated through the binary conversion of the Ojin, so that the pixel blocks of each LED channel are finally obtained, and the segmentation of the LED channels at the pixel level is realized.
Step S34: adjusting the exposure parameters of the receiving end until the brightness value observed by the camera is 255/L when 1 LED lamp is lighted,
step S35: for each LED channel divided in step S33, measuring the observation brightness level of the current LED channel when the transmitting end lights different numbers of LED lamps according to the bit string used for channel estimation in the frame preamble, determining the discrimination threshold value between different brightness levels, judging the number of the lighted LEDs corresponding to the brightness level of each LED channel according to the discrimination threshold value of the brightness level,
step S36: and obtaining an observation state according to the number of the lighted LEDs, and decoding according to a decoding table to obtain a received bit string.
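The brightness-to-lit-count judgment of steps S34 and S35 can be sketched as midpoint thresholding. The halfway placement of the discrimination thresholds is an assumption made for illustration; the patent derives the thresholds from preamble-based channel estimation on each LED channel.

```python
def lit_count(brightness, L=4):
    """Steps S34-S35 sketch: exposure is tuned so one lighted LED adds
    255/L to the observed brightness; quantize a channel's brightness to
    the number of lighted LEDs using thresholds halfway between levels."""
    step = 255 / L                                     # brightness per lit LED
    thresholds = [(k + 0.5) * step for k in range(L)]  # assumed midpoints
    return sum(brightness >= t for t in thresholds)
```

A layer-4 channel reading a brightness of 130 thus decodes as 2 lighted LEDs, from which the observation state and, via the decoding table, the receiving bit string follow (step S36).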
Fig. 3 is an example of the data transmission flow of the present invention. Fig. 3 shows, from left to right, the transmitting end needing to transmit bit streams of multiple priorities. In each transmission period, 1 bit is taken from each bit stream to form a transmission bit string denoted DT, and DT is mapped to the on-off state LT of the LED array according to the encoding table, where 1 indicates the LED lamp at the corresponding position is on and 0 indicates it is off. After the LED light propagates through the optical channel, suppose it is received by a receiving end located at layer 3, whose observed LED state is denoted LR_3 = 1[011], where the bracketed part [011] means the corresponding LEDs cannot be spatially resolved into individual lamps due to channel aliasing. The layer-3 receiving end decodes LR_3 according to its decoding table to obtain the layer-3 receiving bit string DR_3 = 10XX, where X is a discard bit, indicating that the corresponding data bit is discarded at layer 3 due to its low priority. After several transmission periods, layer 3 obtains a received bit stream in which the two low-priority streams are discarded.
The above functions, if implemented in the form of software functional units and sold or used as a stand-alone product, may be stored in a computer-readable storage medium. Based on this understanding, the technical solution of the present invention, or the part contributing to the prior art, may be embodied in the form of a software product stored in a storage medium and comprising several instructions for causing a computer device (which may be a personal computer, a server, a network device, etc.) to perform all or part of the steps of the method according to the embodiments of the present invention. The aforementioned storage medium includes: a USB flash drive, a removable hard disk, a read-only memory (ROM), a random access memory (RAM), a magnetic disk, an optical disk, or other media capable of storing program code.
Claims (10)
1. An optical camera-based multi-priority hierarchical coding method is characterized by comprising the following steps:
step S1: a code table and a decoding table are generated,
step S2: the transmitting end encodes and transmits a signal,
step S3: the receiving end receives and decodes the signal;
the generation of the encoding table and the decoding table comprises the following steps:
step S11: obtaining a state vector of an L-th layer receiving end, wherein L is the total layer number of the coding table and is equal to the number of LED lamps,
step S12: the observed states in the obtained state vector are sequentially divided into odd terms and even terms,
step S13: the observation states represented by the odd terms obtained in step S12 are assigned to a first receiving bit string, and the observation states represented by the even terms are assigned to a second receiving bit string, so as to obtain the decoding table of the current layer; the lengths of the first receiving bit string and the second receiving bit string are both L, the bits before the a-th bit are consistent with the receiving bit string corresponding to the upper layer, the bits after the a-th bit are set to X, the a-th bit of the first receiving bit string is set to 0, and the a-th bit of the second receiving bit string is set to 1, where a is the current layer number and X is the discard bit; a bit of 0 indicates that the lamp is on and a bit of 1 indicates that the lamp is off,
step S14: taking the observed states represented by the odd terms and the even terms in the step S13 as the state vectors of the previous layer respectively, repeatedly executing the steps S12 to S14 until the decoding table of the first layer is obtained, indicating that the generation of the decoding table is completed,
step S15: the mapping from the observation state in the decoding table to the receiving bit string is inverted, and meanwhile, the receiving bit string is modified into the sending bit string, and the observation state is modified into the on-off state, so that the encoding table is obtained;
the sending end codes and sends signals comprising the following steps:
step S21: in each transmission period, the transmitting end sequentially takes 1 bit from each of the L priority bit streams to be transmitted, to form a transmission bit string DT with a length of L,
step S22: the transmission bit string DT is mapped to the LED on-off state LT according to the encoding table,
step S23: a frame preamble used by the receiving end for space-time synchronization and channel estimation is added, and together with the LED on-off state LT forms the data frame sent by the transmitting end,
step S24: controlling the on-off of each LED of the transmitting end according to the data frame, and transmitting the optical signal to the receiving end;
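The per-period multiplexing of step S21 can be sketched in a few lines. A minimal Python sketch, assuming the L priority streams are given as equal-length bit strings (the function name is illustrative):

```python
def make_transmit_strings(priority_streams):
    """Form the length-L transmit bit strings DT (step S21): in each
    transmission period, take 1 bit from each of the L priority bit
    streams, in priority order."""
    # zip yields one tuple of L bits per transmission period
    return ["".join(bits) for bits in zip(*priority_streams)]
```

Each resulting string DT would then be looked up in the encoding table (step S22) to obtain the LED on-off state LT.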
the receiving and decoding of signals by the receiving end comprises the following steps:
step S31: the receiving end captures a continuous video stream containing the transmitting end,
step S32: the spatial synchronization and the time synchronization of the data frames are completed,
step S33: the LED channels are segmented,
step S34: the exposure parameters of the receiving end are adjusted until the brightness value observed by the camera when a single LED lamp is lit is 255/L,
step S35: for each LED channel segmented in step S33, the observed brightness level of the current LED channel is measured while the transmitting end lights different numbers of LED lamps, according to the bit string used for channel estimation in the frame preamble; discrimination thresholds between the different brightness levels are determined, and the number of lit LEDs corresponding to the brightness level of each LED channel is judged according to these thresholds,
step S36: the observation state is obtained according to the number of lit LEDs, and the received bit string is obtained by decoding according to the decoding table.
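The brightness-level decision of steps S34–S35 can be sketched as follows. A minimal Python sketch, assuming (as in step S34) that exposure is tuned so one lit LED reads 255/L and that the discrimination thresholds sit midway between adjacent levels; the function name is illustrative:

```python
def count_lit_leds(brightness, num_leds):
    """Map an observed channel brightness (0-255) to a lit-LED count
    (steps S34-S35): exposure is assumed tuned so that one lit LED
    reads 255/num_leds, with discrimination thresholds placed midway
    between adjacent brightness levels."""
    step = 255.0 / num_leds
    # nearest brightness level, clamped to the valid range [0, num_leds]
    return max(0, min(num_leds, round(brightness / step)))
```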
2. The optical camera-based multi-priority hierarchical coding method as set forth in claim 1, wherein the L-th layer is the highest layer of the receiving end and includes L+1 states, out of 2^L sub-states in total.
3. The method of claim 1, wherein LED states in the observed states that are indistinguishable due to spatial aliasing are bracketed.
4. The optical camera-based multi-priority hierarchical coding method of claim 1, wherein the frame preamble is a predefined bit string.
5. The multi-priority hierarchical coding method according to claim 1, wherein step S32 specifically comprises: dividing each frame of the video stream into pixel blocks s of the same size and calculating pixel-block luminance signals s(t), each being the sequence of luminance changes over time of all pixels in a pixel block; calculating the cross-correlation signal r(t) of each pixel-block luminance signal s(t) with the frame preamble p(t), and splicing the x pixel blocks with the largest cross-correlation values into a candidate pixel block s', which indicates the approximate position of the LED array in the picture, thereby completing the spatial synchronization of the data frame; then recalculating the luminance signal s'(t) of the candidate pixel block s', the time at which the cross-correlation signal r'(t) of the candidate pixel-block luminance signal s'(t) with the frame preamble p(t) reaches its maximum being the start time of the data frame, thereby completing the time synchronization of the data frame.
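The time-synchronization step of claim 5 boils down to finding the lag that maximizes the cross-correlation of the luminance signal with the preamble. A minimal pure-Python sketch (in practice one would use an optimized correlation routine, e.g. from NumPy; the function name is illustrative):

```python
def frame_start_by_correlation(signal, preamble):
    """Slide the frame preamble p(t) over a pixel-block luminance
    signal s(t) and return the lag maximizing the cross-correlation
    r(t); per claim 5, this lag marks the start of the data frame."""
    best_lag, best_r = 0, float("-inf")
    for lag in range(len(signal) - len(preamble) + 1):
        # plain dot product of the preamble with a window of the signal
        r = sum(signal[lag + i] * preamble[i] for i in range(len(preamble)))
        if r > best_r:
            best_lag, best_r = lag, r
    return best_lag
```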
6. The multi-priority hierarchical coding method based on an optical camera according to claim 1, wherein step S33 specifically comprises: the receiving end obtains, through step S32, two frames of images between which the LEDs flash, and computes their difference; gamma correction and Gaussian blur are applied to the resulting difference image to suppress noise, and the light-source pixels are then separated from the background pixels by Otsu binarization, finally yielding the pixel block of each LED channel and realizing pixel-level segmentation of the LED channels.
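The Otsu binarization used in claim 6 to separate light-source pixels from background picks the threshold maximizing between-class variance. A self-contained sketch on a flat list of 0–255 pixel values (a real pipeline would apply this to the blurred difference image, e.g. via OpenCV's THRESH_OTSU):

```python
def otsu_threshold(pixels):
    """Return the Otsu binarization threshold for a list of 0-255
    pixel values: the threshold that maximizes the between-class
    variance of foreground vs. background (claim 6)."""
    hist = [0] * 256
    for p in pixels:
        hist[p] += 1
    total = len(pixels)
    sum_all = sum(i * hist[i] for i in range(256))
    w_b = sum_b = 0
    best_t, best_var = 0, -1.0
    for t in range(256):
        w_b += hist[t]              # background weight
        if w_b == 0:
            continue
        w_f = total - w_b           # foreground weight
        if w_f == 0:
            break
        sum_b += t * hist[t]
        mean_b = sum_b / w_b
        mean_f = (sum_all - sum_b) / w_f
        var_between = w_b * w_f * (mean_b - mean_f) ** 2
        if var_between > best_var:
            best_var, best_t = var_between, t
    return best_t
```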
7. The optical camera-based multi-priority hierarchical coding method of claim 1, wherein the transmitting end is an LED array.
8. The optical camera-based multi-priority hierarchical coding method of claim 1, wherein the receiving end is a camera.
9. An optical camera based multi-priority layered coding apparatus comprising a memory, a processor, and a program stored in the memory, wherein the processor implements the method of any of claims 1-8 when executing the program.
10. A storage medium having a program stored thereon, wherein the program, when executed, implements the method of any of claims 1-8.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202210469380.8A CN115065408B (en) | 2022-04-28 | 2022-04-28 | Multi-priority hierarchical coding method, device and storage medium based on optical camera |
Publications (2)
Publication Number | Publication Date |
---|---|
CN115065408A CN115065408A (en) | 2022-09-16 |
CN115065408B true CN115065408B (en) | 2024-02-27 |
Family
ID=83197322
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202210469380.8A Active CN115065408B (en) | 2022-04-28 | 2022-04-28 | Multi-priority hierarchical coding method, device and storage medium based on optical camera |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN115065408B (en) |
Citations (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JP2003069507A (en) * | 2001-06-08 | 2003-03-07 | Sharp Corp | Wireless optical communication system of space-division multiplexing and space-division multiple access |
CN102289634A (en) * | 2011-08-31 | 2011-12-21 | 北京航空航天大学 | Restrictive region permission authentication device based on visible optical communication and file encryption method |
CN103618687A (en) * | 2013-12-03 | 2014-03-05 | 东南大学 | Wireless optical orthogonal multi-carrier communication method with low peak to average power ratio |
CN112887031A (en) * | 2021-01-11 | 2021-06-01 | 吉林大学 | OCC double-exposure-duration camera receiving mode and distance sensing-based inter-vehicle communication implementation method |
Family Cites Families (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JP2011055288A (en) * | 2009-09-02 | 2011-03-17 | Toshiba Corp | Visible light communication apparatus and data receiving method |
Non-Patent Citations (3)
Title |
---|
Multi-dimensional coding in visible light communication; Kang Ye; Ke Xizheng; Chinese Journal of Lasers; 2015-02-10; Vol. 42, No. 2; full text *
A fine-grained cloud storage scheduling scheme based on erasure codes; Liao Hui; Xue Guangtao; Qian Shiyou; Li Minglu; Journal of Computer Applications; 2017-03-10, No. 03; full text *
A new field of green lighting: research on LED visible light communication; Zhong Fei; Zhao Zifei; Zhang Xuemin; Liu Yang; Science and Technology Innovation; 2018-12-05, No. 34; full text *
Similar Documents
Publication | Publication Date | Title |
---|---|---|
Nagura et al. | Tracking an LED array transmitter for visible light communications in the driving situation | |
Nishimoto et al. | High-speed transmission of overlay coding for road-to-vehicle visible light communication using LED array and high-speed camera | |
CN105515657B (en) | A kind of visible light camera communication system using LED lamp MIMO array framework | |
JP7433442B2 (en) | Point cloud data transmission device, transmission method, processing device, and processing method | |
US10200120B2 (en) | Signal encoding and decoding method, device and system | |
JP2008252466A (en) | Optical communication system, transmitter and receiver | |
KR102095668B1 (en) | DSM-PSK optical wireless transmission method and apparatus | |
Das et al. | Color-independent VLC based on a color space without sending target color information | |
JP7395354B2 (en) | Receiving method and receiving device | |
CN106937121A (en) | Image decoding and coding method, decoding and code device, decoder and encoder | |
CN110944187A (en) | Point cloud encoding method and encoder | |
WO2015014237A1 (en) | Method and apparatus for receiving visible light signal | |
CN102474565B (en) | For the bitstream syntax of graphic model compression in wireless HD1.1 | |
CN111614399B (en) | Visible light high-speed communication method between intelligent terminals | |
CN107370538B (en) | Radio data transmission method, camera and system | |
CN115065408B (en) | Multi-priority hierarchical coding method, device and storage medium based on optical camera | |
CN107222260B (en) | Visible light communication coding and code expanding method based on variable data area length | |
Chen et al. | Hierarchical scheme for detecting the rotating MIMO transmission of the in-door RGB-LED visible light wireless communications using mobile-phone camera | |
CN107294603B Visible-light double-layer superimposed transmission system and method | |
KR101581739B1 (en) | Spatio-temporal error diffusion for imaging devices | |
CN112887031A (en) | OCC double-exposure-duration camera receiving mode and distance sensing-based inter-vehicle communication implementation method | |
Wu et al. | OnionCode: enabling multi-priority coding in LED-based optical camera communications | |
CN109541544A An asynchronous visible light localization method | |
Rapson et al. | Applying NOMA to undersampled optical camera communication for vehicular communication | |
KR102063418B1 (en) | Method and apparatus for modulating and demodulating in communication system |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication ||
SE01 | Entry into force of request for substantive examination ||
GR01 | Patent grant ||