CN109474825B - Pulse sequence compression method and system - Google Patents

Pulse sequence compression method and system

Info

Publication number
CN109474825B
CN109474825B CN201811217843.1A
Authority
CN
China
Prior art keywords
information
pulse
sub
block
prediction
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201811217843.1A
Other languages
Chinese (zh)
Other versions
CN109474825A (en)
Inventor
马思伟
李洋
王苫社
张翔
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Peking University
Original Assignee
Peking University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Peking University filed Critical Peking University
Priority to CN201811217843.1A priority Critical patent/CN109474825B/en
Publication of CN109474825A publication Critical patent/CN109474825A/en
Application granted granted Critical
Publication of CN109474825B publication Critical patent/CN109474825B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical


Classifications

    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N19/00Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
    • H04N19/10Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding
    • H04N19/102Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the element, parameter or selection affected or controlled by the adaptive coding
    • H04N19/13Adaptive entropy coding, e.g. adaptive variable length coding [AVLC] or context adaptive binary arithmetic coding [CABAC]
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N19/00Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
    • H04N19/10Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding
    • H04N19/169Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the coding unit, i.e. the structural portion or semantic portion of the video signal being the object or the subject of the adaptive coding
    • H04N19/17Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the coding unit, i.e. the structural portion or semantic portion of the video signal being the object or the subject of the adaptive coding the unit being an image region, e.g. an object
    • H04N19/176Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the coding unit, i.e. the structural portion or semantic portion of the video signal being the object or the subject of the adaptive coding the unit being an image region, e.g. an object the region being a block, e.g. a macroblock
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N19/00Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
    • H04N19/60Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using transform coding
    • H04N19/61Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using transform coding in combination with predictive coding
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N19/00Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
    • H04N19/85Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using pre-processing or post-processing specially adapted for video compression
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N19/00Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
    • H04N19/90Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using coding techniques not provided for in groups H04N19/10-H04N19/85, e.g. fractals
    • H04N19/91Entropy coding, e.g. variable length coding [VLC] or arithmetic coding

Abstract

The invention provides a pulse sequence compression method and system, comprising the following steps: converting the original pulse signal into a grayscale image sequence; dividing the grayscale image sequence into a plurality of sub-blocks; predicting each sub-block to obtain predicted pixel values and calculating the residual between the true values and the predicted values; transforming and quantizing the residual information; and further compressing the block division information, prediction information, and residual information by entropy coding and storing them as a binary code stream. The method fully exploits the spatial and temporal correlation of pulse sequence signals and thereby compresses the pulse sequence effectively. Experimental results show that the method compresses pulse sequences significantly and can be effectively applied to practical systems for compressing, transmitting, and storing pulse sequence content.

Description

Pulse sequence compression method and system
Technical Field
The invention belongs to the field of digital signal processing, and particularly relates to a method and system for compressing a pulse sequence recorded by a Dynamic Vision Sensor (DVS).
Background
The dynamic vision sensor senses and encodes the world by imitating the retina and acquires visual information as neural signals, making it a promising neuromorphic vision sensor that can be used, for example, for autonomous motion control of mobile robots. Although researchers have used various sensors to perceive the environment, such as frame-based cameras, structured-light sensors, and stereo cameras, these still have many limitations and drawbacks. As a promising alternative, the dynamic vision sensor imitates the retina and generates pulses in response to pixel-level brightness changes caused by motion in the scene. The DVS has great advantages over conventional frame-based cameras in terms of data rate, speed, and dynamic range, particularly for scenes with motion. In addition, the pulses generated by a DVS can be transmitted directly to a Spiking Neural Network (SNN) for visual processing and motion control.
With the development of video technology, many scenarios demand video with higher dynamic range and temporal resolution, and it is in these scenarios that the advantages of the dynamic vision sensor stand out. The frame rate of a conventional camera is generally a few dozen frames per second, and higher frame rates tend to greatly increase technical and production costs. The dynamic vision sensor records pulse signals that reflect motion information, its equivalent frame rate can reach ten thousand frames per second, and it therefore has broad application prospects in high-speed motion photography such as autonomous driving.
The dynamic vision sensor is a novel retina-like vision sensor. In a dynamic vision sensor, each pixel independently responds to and encodes brightness changes by generating asynchronous events, and the resulting event stream eliminates, to a certain extent, the temporal redundancy present in the continuous images output by a traditional camera. Moreover, it has extremely high temporal resolution and can capture ultra-fast motion, and it has a very high dynamic range, i.e., it works well both day and night. Dynamic vision sensors can therefore also be employed in monitoring systems.
The pulse signals generated by a DVS are generally stored in the form of an Address Event Representation (AER), where each datum consists of the address of an event (the position of the corresponding pixel), the property of the event (brighter or darker), and so on. Because the DVS frame rate is very high, the data volume is also very large; it occupies a very large transmission bandwidth and storage space and places excessive demands on software and hardware. In addition, conventional DVS pulse sequence processing methods cannot be integrated into the latest video coding and decoding standards and cannot support subsequent operations such as compression.
Disclosure of Invention
In order to overcome the shortcomings of existing pulse sequence compression technology, the present invention provides a method that can effectively compress an AER pulse sequence. By combining the pulse signals into grayscale images and then performing lossy encoding, the transmission bandwidth and storage cost can be greatly reduced.
Specifically, according to an aspect of the present invention, there is provided a pulse train compression method, including:
converting the original pulse signal into a gray scale image sequence;
carrying out block division on the gray-scale image sequence to obtain a plurality of sub-blocks;
predicting each sub-block to obtain a predicted pixel value, and calculating a residual error between a real value and the predicted pixel value;
transforming and quantizing the information of the residual;
the block division information, prediction information, and residual information are further compressed by entropy encoding and stored as a binary code stream.
Preferably, the converting the original pulse signal into a gray map sequence includes:
converting the pulse signals, stored in the original address-plus-time-and-polarity format, into pulse images with a bit depth of 1;
and synthesizing a plurality of consecutive frames of the pulse images into a single-channel image with a multi-bit depth.
Preferably, the block division includes:
temporally dividing the grayscale map sequence into a plurality of full-resolution cubes, each cube having the same spatial resolution as the grayscale map sequence;
each full-resolution cube is spatially divided into smaller sub-blocks.
Preferably, the prediction is a block prediction in the spatial domain, and the predicted pixel values are generated by boundary pixels on adjacent sub-blocks that have been encoded within the same full resolution cube.
Preferably, the prediction is a block prediction in the time domain, and the pixel values in the current encoded sub-block are predicted using encoded sub-blocks in an adjacent full resolution cube.
Preferably, the entropy coding performs adaptive binary arithmetic coding on the prediction information and the residual information of each sub-block through a context model.
Preferably, the binary arithmetic coding is based on recursive interval division, with the coding interval and the interval lower bound stored during the recursion; each bin obtained by binarizing the current syntax element is arithmetically coded with an adaptive probability model according to the probability model parameters of that bin.
According to another aspect of the present invention, there is also disclosed a pulse train compression system comprising:
the conversion module is used for converting the original pulse signal into a gray map sequence;
the block division module is used for carrying out block division on the gray map sequence to obtain a plurality of sub-blocks;
the prediction module is used for predicting each sub-block to obtain a predicted pixel value and calculating a residual error between a real value and the predicted pixel value;
the transform and quantization module is used for performing transform and quantization operations on the residual information; and
the entropy coding module is used for further compressing the block division information, the prediction information, and the residual information by entropy coding and storing them as a binary code stream.
The invention has the advantages that: the method can simply and effectively reduce the transmission bandwidth and the storage cost, has low complexity, and can be effectively applied to a compression, transmission and storage system related to the pulse sequence.
Drawings
Various other advantages and benefits will become apparent to those of ordinary skill in the art upon reading the following detailed description of the preferred embodiments. The drawings are only for purposes of illustrating the preferred embodiments and are not to be construed as limiting the invention. Also, like reference numerals are used to refer to like parts throughout the drawings. In the drawings:
FIG. 1 is a flow chart of a pulse sequence encoding method provided by the present invention;
FIG. 2 is a diagram illustrating the conversion of the pulse signal to the image domain provided by the present invention;
FIG. 3 is a schematic diagram of an 8-bit depth grayscale image synthesized by pulse signals provided by the present invention;
FIG. 4 is a block partitioning diagram provided by the present invention;
fig. 5 is a schematic structural diagram of a lossy compression coding system provided by the present invention.
Detailed Description
Exemplary embodiments of the present disclosure will be described in more detail below with reference to the accompanying drawings. While exemplary embodiments of the present disclosure are shown in the drawings, it should be understood that the present disclosure may be embodied in various forms and should not be limited to the embodiments set forth herein. Rather, these embodiments are provided so that this disclosure will be thorough and complete, and will fully convey the scope of the disclosure to those skilled in the art.
The concept of bit depth (Bit Depth) is widely used in the fields of digital video and audio. It mainly expresses the number of bits used for a single color component of a pixel in a digital image, in which case it is also commonly referred to as color depth (Color Depth) or quantization depth, and the number of bits used for each sound sample in digital audio.
In the field of digital images, the bit depth determines the number of colors that a digital image can represent and thus the precision of its color representation. For example, 1 bit can express 2 colors (monochrome, usually black and white), 2 bits can express 4 colors, 4 bits can express 16 colors, 8 bits can express 256 colors, and so on. When describing the color depth of an image in detail, it is often expressed in "bits per pixel" (bpp); for example, digital cinema uses 36 bpp, i.e., 36 bits per pixel. With a given bit depth, different gray levels (brightness) can be expressed. However, when fewer than 8 bits are used to express each color, the image shows conspicuous stripes or patches of color, an effect known as posterization. The human eye can only distinguish about 10 million different colors, so for viewing alone 24 bpp video is generally sufficient, and storing video at bit depths higher than 24 bpp is redundant. However, images above 24 bpp are still useful, as they maintain higher quality in digital post-processing.
According to the requirement of the DCI Digital Cinema System Specification (DCSS), the bit depth of each color component in a Digital Cinema image is 12 bits, and each pixel is composed of three color components, so that the bit depth of each pixel is 36 bits, i.e., 36 bpp. The digital cinema sound samples are at a frequency of 48 kHz/channel or 96 kHz/channel, and each sample is quantized to a depth of 24 bits.
First, the method of the present invention converts the original pulse signal, recorded in address format, into a sequence of grayscale images of n-bit depth. In practical applications the value of n can be chosen flexibly; typical values are 8 or 10. These pulse signals are usually strongly correlated in both the spatial and temporal domains, so the present invention performs lossy compression by removing these correlations in the subsequent steps.
In the lossy compression process, the obtained grayscale image sequence is first divided into blocks, and each block is then coded: the current block is predicted by intra-frame or inter-frame prediction, and the difference between the true value and the predicted value is transformed, quantized, and then coded, which removes the correlation in the spatial and temporal domains and improves compression efficiency. Finally, the required information, such as block division information, prediction information, and residual data, is combined, further compressed by adaptive entropy coding, and stored as a binary code stream. The transform concentrates the energy of the residual matrix, and a specific quantization matrix further improves compression efficiency. The quantization operation is not reversible because part of the information is discarded; it is the source of loss in lossy encoding. To control the amount of distortion, a Quantization Parameter (QP) is used to adjust the quantization matrix.
Example 1 lossy compression coding algorithm for pulse sequences
Specifically, as shown in fig. 1, according to an aspect of the present invention, the present invention provides a pulse sequence compression method, including the following steps:
s1, converting the pulse signal into a gray-scale image sequence
Each pulse signal is represented by a quadruple (x, y, t, p) with a fixed number of bits, where x and y are respectively the abscissa and ordinate of the pulse signal on the image, p is the polarity of the pulse signal (represented by 1 bit, taking the value 0 or 1), and t is the position on the time axis. The invention converts the pulse signals into the image domain and then converts every n consecutive frames into an image of n-bit depth.
First, as shown in fig. 2, the method converts the time-domain and spatial-domain information of the original pulse signal into pulse images of 1-bit depth. For a pulse sequence of length n, {(x_1, y_1, t_1, p_1), (x_2, y_2, t_2, p_2), ..., (x_n, y_n, t_n, p_n)}, t_n - t_1 + 1 frames are generated according to the length of the sequence on the time axis; the initial value of all pixel intensities is 0, and the pixel depth of each frame is 1 bit (i.e., the value is 0 or 1). Then, for each pulse signal (x_i, y_i, t_i, p_i), the pixel at (x_i, y_i) in frame t_i - t_1 + 1 is set to 1, thereby representing the pulse signal in the form of images.
Then, as shown in fig. 3, every n consecutive frames of the pulse images are synthesized into a single-channel image of n-bit depth, where n is 8 or 10, that is, 8 or 10 frames of 1-bit-depth pulse images are synthesized into a grayscale image of 8-bit or 10-bit depth; values other than 8 and 10 may also be selected as required. Synthesizing frames z to z + n - 1 into one frame of an n-bit-depth grayscale image can be expressed by the following formula:

b_{u,v} = Σ_{i=0}^{n-1} 2^i · a_{u,v,z+i}

where a_{u,v,z+i} denotes the value at coordinate (u, v) of the pulse image of frame z + i, and b_{u,v} denotes the value at (u, v) of the grayscale image synthesized from the n frames of pulse images.
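As a concrete illustration of S1, the following is a minimal sketch (assuming NumPy arrays and a least-significant-bit-first packing order, which the text does not fix) of converting a list of (x, y, t, p) events into 1-bit pulse frames and then packing every n consecutive frames into one n-bit grayscale image; the function names and the synthetic events are purely illustrative.

```python
import numpy as np

def events_to_pulse_frames(events, height, width):
    """Convert AER events (x, y, t, p) into a stack of 1-bit pulse frames."""
    t0 = min(t for _, _, t, _ in events)
    t_end = max(t for _, _, t, _ in events)
    frames = np.zeros((t_end - t0 + 1, height, width), dtype=np.uint8)
    for x, y, t, p in events:
        frames[t - t0, y, x] = 1                  # the pixel fired at time t
    return frames

def pack_frames_to_grayscale(frames, n=8):
    """Pack every n consecutive 1-bit frames into one n-bit grayscale image.
    The bit order (earliest frame = least significant bit) is an assumption."""
    num_groups = frames.shape[0] // n
    gray = np.zeros((num_groups,) + frames.shape[1:], dtype=np.uint16)
    for g in range(num_groups):
        for i in range(n):
            gray[g] |= frames[g * n + i].astype(np.uint16) << i
    return gray

# A few synthetic events on a 4x4 sensor spanning 8 time steps.
events = [(0, 0, 0, 1), (1, 2, 3, 1), (3, 3, 7, 0)]
frames = events_to_pulse_frames(events, height=4, width=4)
gray = pack_frames_to_grayscale(frames, n=8)
print(frames.shape, gray.shape, int(gray.max()))  # (8, 4, 4) (1, 4, 4) 128
```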
S2, dividing blocks
As shown in fig. 4, the invention first temporally divides the grayscale image sequence obtained in the first step into a plurality of full-resolution cubes, each cube having the same spatial resolution as the grayscale image sequence. Each full-resolution cube is then spatially divided into smaller sub-blocks for subsequent operations.
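A minimal sketch of this partitioning is given below; the cube length and the square sub-block size are illustrative assumptions, since the text does not fix particular values here.

```python
import numpy as np

def partition(gray_seq, cube_len=4, block_size=16):
    """Split a grayscale sequence of shape (frames, H, W) into full-resolution
    cubes along time, then split each cube into block_size x block_size sub-blocks."""
    frames, height, width = gray_seq.shape
    sub_blocks = []
    for z in range(0, frames, cube_len):              # temporal split into cubes
        cube = gray_seq[z:z + cube_len]
        for y in range(0, height, block_size):        # spatial split of each cube
            for x in range(0, width, block_size):
                sub_blocks.append(((z, y, x), cube[:, y:y + block_size, x:x + block_size]))
    return sub_blocks

blocks = partition(np.zeros((8, 64, 64), dtype=np.uint8))
print(len(blocks))  # 2 cubes x (4 x 4) spatial blocks = 32
```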
S3, prediction
When each sub-block is coded, the invention first predicts the sub-block and then codes the prediction information and the residual between the true values and the predicted values. Prediction can be divided into two modes according to the range over which matching blocks are searched.
The first mode is block prediction in the spatial domain. In this mode, the predicted pixel values of the current sub-block are generated from boundary pixels of already encoded adjacent sub-blocks within the same full-resolution cube, and the residual between the predicted values and the true values is then passed on as the input of the subsequent entropy coding. Let the current pixel value be f(u, v, z), where (u, v) and z denote the spatial and temporal coordinates of the point in the whole grayscale image sequence. The current pixel is predicted from the reconstructed values f̂(k, l, z) in the encoded neighboring blocks:

f̂(u, v, z) = Σ_{(k,l)} a_{k,l} · f̂(k, l, z)

where a_{k,l} are the prediction coefficients and (k, l) are the coordinates of the reference pixels. The error between the true value and the predicted value of the current pixel is:

e(u, v, z) = f(u, v, z) - f̂(u, v, z)
For each coding block, in order to optimize the compression effect, an optimization objective is considered when selecting a prediction reference block:
min{R+λ·D}
r represents the number of bits required to encode all relevant information (e.g., reference block information, prediction residual, etc.) using the current prediction method, D is the distortion encoded using the current prediction method, and λ is a lagrange multiplier used to adjust the relationship between the code rate and the distortion. It should be noted that the prediction residual between the current block and the reference block is not necessarily the minimum, but the value function value after entropy encoding is necessarily the minimum.
The second prediction mode is block prediction in the time domain, which uses encoded sub-blocks in neighboring full-resolution cubes to predict the pixel values in the currently encoded sub-block. Because the motion information reflected by the pulse signal is strongly correlated in the time domain, efficient compression can be achieved through prediction in the time domain.
The specific implementation is to search, for the current sub-block, for a best matching sub-block in a previously coded full-resolution cube; the displacement from the best matching sub-block to the currently coded sub-block is the motion vector, and the difference between the two is the prediction residual. As in the spatial case, temporal block prediction also aims to minimize the number of bits required to encode information such as the motion vector (MV) and the prediction residual when the current matching block is used.
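The following sketch shows this temporal block matching, using a full search within a small window of a single reference frame and the sum of absolute differences (SAD) as the matching criterion; the search range, the 2-D (rather than full-cube) reference, and the cost measure are assumptions made for illustration.

```python
import numpy as np

def motion_search(cur_block, ref_frame, block_pos, search_range=4):
    """Full search for the best matching block in previously coded reference data;
    returns the motion vector and the prediction residual."""
    y0, x0 = block_pos
    h, w = cur_block.shape
    best = None
    for dy in range(-search_range, search_range + 1):
        for dx in range(-search_range, search_range + 1):
            y, x = y0 + dy, x0 + dx
            if y < 0 or x < 0 or y + h > ref_frame.shape[0] or x + w > ref_frame.shape[1]:
                continue
            cand = ref_frame[y:y + h, x:x + w].astype(np.int32)
            residual = cur_block.astype(np.int32) - cand
            sad = np.abs(residual).sum()
            if best is None or sad < best[0]:
                best = (sad, (dy, dx), residual)
    return best[1], best[2]

np.random.seed(0)
ref = np.random.randint(0, 256, (32, 32), dtype=np.uint8)
cur = ref[10:18, 12:20].copy()          # a block displaced by (2, 4) from position (8, 8)
mv, residual = motion_search(cur, ref, block_pos=(8, 8))
print(mv, int(np.abs(residual).sum()))  # (2, 4) 0
```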
S4, transformation and quantization
The spatial distribution of the pulse signals is scattered; a transform converts this scattered distribution in the spatial domain into a relatively concentrated distribution in the transform domain, and compression efficiency can be further improved in combination with quantization and zigzag ('Z') scanning.
Several specific transforms can be chosen for the transform stage, such as the Discrete Cosine Transform (DCT) and the Discrete Sine Transform (DST). Taking the most commonly used discrete cosine transform as an example, the element in the x-th row and y-th column of the two-dimensional DCT transform matrix C can be expressed as:

C_{x,y} = c(x) · cos( (2y + 1)xπ / (2N) ), x, y = 0, 1, ..., N - 1

where

c(x) = √(1/N) for x = 0, and c(x) = √(2/N) for x > 0.

The coefficient matrix of the two-dimensional DCT of an N × N signal matrix f can then be expressed by matrix multiplication as

F = C f C^T
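The DCT basis matrix and the two-dimensional forward transform follow directly from the formulas above; the sketch below builds C for N = 4 and applies F = C f C^T to a toy block.

```python
import numpy as np

def dct_matrix(N):
    """Build the N x N DCT-II basis matrix C from the formula above."""
    C = np.zeros((N, N))
    for x in range(N):
        c = np.sqrt(1.0 / N) if x == 0 else np.sqrt(2.0 / N)
        for y in range(N):
            C[x, y] = c * np.cos((2 * y + 1) * x * np.pi / (2 * N))
    return C

N = 4
C = dct_matrix(N)
f = np.arange(N * N, dtype=float).reshape(N, N)  # toy 4x4 signal block
F = C @ f @ C.T                                  # two-dimensional DCT coefficients
print(np.round(F, 2))
print(np.allclose(C.T @ F @ C, f))               # the transform is invertible: True
```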
In the quantization process, the compression ratio can be controlled by adjusting the quantization step size according to the actual scene; the larger the quantization step, the higher the compression ratio, but also the larger the error introduced. The quantization process can be expressed as:

F_Q(u, v) = round( F(u, v) / Q_step )

where F(u, v) is the element in the u-th row and v-th column of the matrix output by the DCT unit, and F_Q(u, v) is its corresponding quantized value.
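Continuing the transform example, quantization divides each coefficient by a quantization step and rounds, which is where information is discarded; the fixed scalar step used below stands in for the quantization matrix and QP mapping of the invention, which the text does not spell out.

```python
import numpy as np

def quantize(F, q_step):
    """Quantize DCT coefficients: F_Q(u, v) = round(F(u, v) / q_step)."""
    return np.round(F / q_step).astype(np.int32)

def dequantize(F_q, q_step):
    """Inverse quantization used at reconstruction; the rounding loss is not recovered."""
    return F_q.astype(float) * q_step

q_step = 8.0                          # a larger step gives more compression but more error
F = np.array([[52.3, -3.1], [1.4, 0.2]])
F_q = quantize(F, q_step)             # [[7, 0], [0, 0]], small coefficients vanish
error = F - dequantize(F_q, q_step)   # the irreversible quantization error
print(F_q.tolist(), float(np.abs(error).max()))
```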
S5, entropy coding
Finally, the present invention uses context-based adaptive binary arithmetic coding (CABAC) to encode the various information obtained in the previous steps. It mainly involves three parts: binarization, context modeling, and arithmetic coding.
Binarization methods include truncated Rice binarization (TR), k-th order exponential Golomb binarization (EGk), and fixed-length binarization (FL), among others, and the method can be chosen according to requirements or the characteristics of different sequences.
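The sketch below shows the three binarization schemes mentioned above in their commonly used textbook forms (the truncation of the Rice code against a maximum value is omitted for brevity); the exact bin strings used by the invention are not specified in the text.

```python
def fixed_length(value, num_bits):
    """Fixed-length (FL) binarization: the value written on num_bits bits."""
    return format(value, "0{}b".format(num_bits))

def rice(value, k):
    """Rice binarization with parameter k: a unary prefix of (value >> k) followed
    by the k low-order bits; truncation at a maximum value is omitted here."""
    prefix = "1" * (value >> k) + "0"
    suffix = format(value & ((1 << k) - 1), "0{}b".format(k)) if k else ""
    return prefix + suffix

def exp_golomb_k(value, k):
    """k-th order Exp-Golomb (EGk) binarization."""
    code = value + (1 << k)
    num_bits = code.bit_length()
    return "0" * (num_bits - k - 1) + format(code, "b")

print(fixed_length(5, 4))   # 0101
print(rice(5, 1))           # 1101
print(exp_golomb_k(3, 0))   # 00100
```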
In the context model, the coded symbol information in the neighboring blocks of the current coding block can be used as the context of the symbols in the current coding block. The variables of the probability model are adaptively updated after each binary symbol is encoded.
Finally, binary arithmetic coding is performed based on recursive interval division, with the coding interval and the interval lower bound stored during the recursion. Each bin obtained by binarizing the current syntax element is arithmetically coded with the adaptive probability model according to its probability model parameters. The output of the arithmetic coding is the final code stream.
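To illustrate the recursive interval division underlying the binary arithmetic coder, the following toy encoder keeps an interval lower bound and width and adaptively updates a single bin probability; it omits the integer-range renormalization and the per-syntax-element context models of a real CABAC engine and is only a sketch of the principle.

```python
import math

def encode_bins(bins, p1_init=0.5, adapt=0.05):
    """Toy adaptive binary arithmetic encoder based on recursive interval division.
    The interval lower bound and width are kept in floating point, so this only
    works for short inputs; a real engine uses integer ranges and renormalization."""
    low, width, p1 = 0.0, 1.0, p1_init
    for b in bins:
        split = width * p1                       # sub-interval assigned to bin value 1
        if b == 1:
            width = split                        # keep the lower part of the interval
        else:
            low, width = low + split, width - split
        p1 += adapt * ((1.0 if b == 1 else 0.0) - p1)  # adaptive probability update
    return low, width                            # any value in [low, low + width) identifies the bins

low, width = encode_bins([1, 1, 0, 1, 1])
print(low, width, math.ceil(-math.log2(width)), "bits")  # about -log2(width) bits are needed
```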
Example 2
As shown in fig. 5, a schematic structural diagram of a lossy compression encoding system 20 provided in the present invention includes:
a conversion module 21, configured to convert the original pulse signal into a grayscale map sequence;
a block division module 22, configured to perform block division on the grayscale map sequence to obtain a plurality of sub-blocks;
the prediction module 23 is configured to predict each sub-block to obtain a predicted pixel value, and calculate a residual between a true value and the predicted pixel value;
a transform and quantization module 24, configured to perform transform and quantization operations on the residual information; and
an entropy coding module 25, configured to further compress the block division information, the prediction information, and the residual information by entropy coding and store them as a binary code stream.
The above description is only for the preferred embodiment of the present invention, but the scope of the present invention is not limited thereto, and any changes or substitutions that can be easily conceived by those skilled in the art within the technical scope of the present invention are included in the scope of the present invention. Therefore, the protection scope of the present invention shall be subject to the protection scope of the appended claims.

Claims (7)

1. A method of pulse train compression, comprising:
converting the original pulse signal into a gray scale image sequence; restoring the pulse signals stored in the original address plus time and polarity format into pulse images with the depth of 1 bit; synthesizing a plurality of continuous pulse images into a single-channel image with multiple bit depths;
carrying out block division on the gray-scale image sequence to obtain a plurality of sub-blocks;
predicting each sub-block to obtain a predicted pixel value, and calculating a residual error between a real value and the predicted pixel value;
transforming and quantizing the information of the residual;
the block division information, prediction information, and residual information are further compressed by entropy encoding and stored as a binary code stream.
2. A method of pulse sequence compression as claimed in claim 1,
the block division includes:
temporally dividing the grayscale map sequence into a plurality of full-resolution cubes, each cube having the same spatial resolution as the grayscale map sequence;
each full-resolution cube is spatially divided into smaller sub-blocks.
3. A method of pulse sequence compression as claimed in claim 1,
the prediction is a block prediction in the spatial domain, the predicted pixel values being generated by boundary pixels on adjacent sub-blocks that have been encoded within the same full resolution cube.
4. A method of pulse sequence compression as claimed in claim 1,
the prediction is a block prediction in the time domain, using the encoded sub-blocks in the neighboring full resolution cube to predict the pixel values in the current encoded sub-block.
5. A method of pulse sequence compression as claimed in claim 1,
the entropy coding performs adaptive binary arithmetic coding on the prediction information and the residual information of each sub-block through a context model.
6. A method of pulse train compression as claimed in claim 5,
the binary arithmetic coding is based on recursive interval division, and the coding interval and the interval lower bound are saved in the recursive process; and each bin obtained by binarizing the current syntax element is arithmetically coded with an adaptive probability model according to the probability model parameters.
7. A pulse train compression system, comprising:
the conversion module is used for converting the original pulse signal into a gray map sequence; restoring the pulse signals stored in the original address plus time and polarity format into pulse images with the depth of 1 bit; synthesizing a plurality of continuous pulse images into a single-channel image with multiple bit depths;
the block division module is used for carrying out block division on the gray map sequence to obtain a plurality of sub-blocks;
the prediction module is used for predicting each sub-block to obtain a predicted pixel value and calculating a residual error between a real value and the predicted pixel value;
a transformation and quantization module for performing transformation and quantization operations on the information of the residual;
and the entropy coding module is used for further compressing the information of the block division, the prediction information and the residual error information through entropy coding and storing the information as a binary code stream.
CN201811217843.1A 2018-10-18 2018-10-18 Pulse sequence compression method and system Active CN109474825B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201811217843.1A CN109474825B (en) 2018-10-18 2018-10-18 Pulse sequence compression method and system

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201811217843.1A CN109474825B (en) 2018-10-18 2018-10-18 Pulse sequence compression method and system

Publications (2)

Publication Number Publication Date
CN109474825A CN109474825A (en) 2019-03-15
CN109474825B true CN109474825B (en) 2020-07-10

Family

ID=65664230

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201811217843.1A Active CN109474825B (en) 2018-10-18 2018-10-18 Pulse sequence compression method and system

Country Status (1)

Country Link
CN (1) CN109474825B (en)

Families Citing this family (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114155560B (en) * 2022-02-08 2022-04-29 成都考拉悠然科技有限公司 Light weight method of high-resolution human body posture estimation model based on space dimension reduction
CN114819121B (en) * 2022-03-28 2022-09-27 中国科学院自动化研究所 Signal processing device and signal processing method based on impulse neural network

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN1455600A (en) * 2003-05-19 2003-11-12 北京工业大学 Interframe predicting method based on adjacent pixel prediction
CN107330843A (en) * 2017-06-15 2017-11-07 深圳大学 A kind of gray level image coding hidden method and device, coding/decoding method and device
CN108632630A (en) * 2018-05-28 2018-10-09 中国科学技术大学 A kind of bi-level image coding method of combination bit arithmetic and probabilistic forecasting

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US7953152B1 (en) * 2004-06-28 2011-05-31 Google Inc. Video compression and encoding method

Patent Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN1455600A (en) * 2003-05-19 2003-11-12 北京工业大学 Interframe predicting method based on adjacent pixel prediction
CN107330843A (en) * 2017-06-15 2017-11-07 深圳大学 A kind of gray level image coding hidden method and device, coding/decoding method and device
CN108632630A (en) * 2018-05-28 2018-10-09 中国科学技术大学 A kind of bi-level image coding method of combination bit arithmetic and probabilistic forecasting

Also Published As

Publication number Publication date
CN109474825A (en) 2019-03-15

Similar Documents

Publication Publication Date Title
CN110225341B (en) Task-driven code stream structured image coding method
CN102369522B (en) The parallel pipeline formula integrated circuit of computing engines realizes
CN101742319A (en) Background modeling-based static camera video compression method and background modeling-based static camera video compression system
CN103918186B (en) Context-adaptive data encoding
CN101883284A (en) Video encoding/decoding method and system based on background modeling and optional differential mode
CN109474825B (en) Pulse sequence compression method and system
CN109379590B (en) Pulse sequence compression method and system
JP3794749B2 (en) Video signal encoding device
JP2017192078A (en) Picture encoder and control method thereof
Mafijur Rahman et al. A low complexity lossless Bayer CFA image compression
AU2001293994B2 (en) Compression of motion vectors
WO2023164020A2 (en) Systems, methods and bitstream structure for video coding and decoding for machines with adaptive inference
CN111080729A (en) Method and system for constructing training picture compression network based on Attention mechanism
Murakami et al. Vector quantization of color images
CN114979711B (en) Layered compression method and device for audio and video or image
CN109819251B (en) Encoding and decoding method of pulse array signal
CN117441186A (en) Image decoding and processing method, device and equipment
CN113938687A (en) Multi-reference inter-frame prediction method, system, device and storage medium
CN109769104B (en) Unmanned aerial vehicle panoramic image transmission method and device
CN1647544A (en) Coding and decoding method and device
JP2024513693A (en) Configurable position of auxiliary information input to picture data processing neural network
CN109584137B (en) Pulse sequence format conversion method and system
CN114600166A (en) Image processing method, image processing apparatus, and storage medium
JPH06169452A (en) Picture compression system having weighting on screen
JPH03133290A (en) Picture coder

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant