CN116132759B - Audio and video stream synchronous transmission method and device, electronic equipment and storage medium - Google Patents


Info

Publication number
CN116132759B
CN116132759B
Authority
CN
China
Prior art keywords
video
video frame
audio
frame sub-blocks
Prior art date
Legal status
Active
Application number
CN202310416903.7A
Other languages
Chinese (zh)
Other versions
CN116132759A
Inventor
郭光泉
李金萍
周正
Current Assignee
Shenzhen Bolin Images Science Technology Co ltd
Original Assignee
Shenzhen Bolin Images Science Technology Co ltd
Priority date
Filing date
Publication date
Application filed by Shenzhen Bolin Images Science Technology Co ltd
Priority to CN202310416903.7A
Publication of CN116132759A
Publication of CN116132759B
Application granted
Legal status: Active
Anticipated expiration

Classifications

    • H04N 21/8547: Content authoring involving timestamps for synchronizing content
    • H04N 21/2335: Processing of audio elementary streams involving reformatting operations of audio signals, e.g. by converting from one coding standard to another
    • H04N 21/2343: Processing of video elementary streams involving reformatting operations of video signals for distribution or compliance with end-user requests or end-user device requirements
    • H04N 21/43072: Synchronising the rendering of multiple content streams or additional data on the same device
    • H04N 21/4398: Processing of audio elementary streams involving reformatting operations of audio signals
    • H04N 21/4402: Processing of video elementary streams involving reformatting operations of video signals for household redistribution, storage or real-time display

Abstract

The invention relates to the technical field of synchronous audio and video transmission, and discloses an audio and video stream synchronous transmission method comprising the following steps: acquiring synchronously acquired video data and audio data, and partitioning each video frame to obtain video frame sub-blocks; calculating the image quality and the number of the video frame sub-blocks, and performing image correction on the video frame sub-blocks according to that quality and number to obtain target video frame sub-blocks; encoding the video data according to the target video frame sub-blocks to obtain a video coding file, and converting the audio data into audio binary bits; embedding the audio binary bits into the video coding file to obtain a mixed coding file; creating a timestamp for the mixed coding file to obtain a time-stamped mixed coding file, and packaging and transmitting the time-stamped mixed coding file to a preset destination address. The invention also provides an audio and video stream synchronous transmission device, an electronic device, and a storage medium. The invention can improve the synchronization rate of synchronous audio and video stream transmission.

Description

Audio and video stream synchronous transmission method and device, electronic equipment and storage medium
Technical Field
The present invention relates to the field of synchronous audio and video transmission technologies, and in particular, to a synchronous audio and video stream transmission method, apparatus, electronic device, and storage medium.
Background
With the advent of the big data age, demand for audio and video information keeps increasing, and research on audio and video processing technology is continuously developing; wireless audio and video stream transmission in particular is increasingly widely applied, and synchronous transmission of high-definition audio and video streams has long been a main topic in the industry. In some network environments, because of communication and bandwidth limitations, the uplink and downlink of a transmission channel are asymmetric for audio and video streams: the uplink channel carries a large amount of data such as audio and video, while the downlink channel carries only some instructions. However, the downlink instructions usually take several seconds to transmit, so the synchronization rate of audio and video stream transmission is poor.
In the current synchronous transmission scheme for audio and video streams, the transmitting end first packages the audio and video data collected over a period of time; for example, one frame of video image is collected, and that frame together with the audio data collected during the same period is packaged into one packet, which the receiving end unpacks after reception and plays back separately. This scheme is simple for the transmitting end, but it is not ideal when the definition requirement is high: high definition means that the data volume of each audio and video packet is large, transmission continuity is difficult to guarantee, and the synchronization rate of audio and video stream data transmission is poor. Therefore, how to improve the synchronization rate of audio and video stream transmission has become a problem to be solved.
Disclosure of Invention
The invention provides an audio and video stream synchronous transmission method, an audio and video stream synchronous transmission device, an electronic device and a storage medium, and mainly aims to solve the problem of a poor synchronization rate during synchronous audio and video stream transmission.
In order to achieve the above object, the present invention provides a method for synchronously transmitting an audio and video stream, comprising:
acquiring synchronously acquired video data and audio data, and partitioning each video frame in the video data to obtain video frame sub-blocks;
calculating the image quality of the video frame sub-blocks, counting the number of the video frame sub-blocks, and carrying out image correction on the video frame sub-blocks according to the image quality and the number to obtain target video frame sub-blocks;
encoding the video data according to the target video frame sub-block to obtain a video encoding file, and converting the audio data into a binary bit stream to obtain an audio binary bit;
embedding the audio binary bit into the video coding file to obtain a mixed coding file of the video data and the audio data;
creating a time stamp for the mixed code file to obtain a time mixed code file, and packaging and transmitting the time mixed code file to a preset destination address.
Optionally, the partitioning each video frame in the video data includes:
acquiring the frame rate of the video data, and extracting each video frame of the video data according to the frame rate;
and dividing each video frame according to the size of each video frame to obtain video frame sub-blocks of each video frame.
Optionally, the calculating the image quality of the video frame sub-block includes:
convolving the video frame sub-block by using a preset filter to obtain edge characteristics of the video frame sub-block;
performing bicubic interpolation on the video frame sub-blocks to obtain interpolation video frame sub-blocks, and generating structural features of the video frame sub-blocks according to the interpolation video frame sub-blocks;
extracting structural features of the video frame sub-blocks using the following formula:

$$ w = \sum_{i=1}^{M}\sum_{j=1}^{N} \omega \cdot \mathbf{1}\left[\operatorname{LBP}_{R,I}(i,j) = k\right] $$

where w represents the structural feature, i and j represent the abscissa and ordinate of the pixel points in the interpolated video frame sub-block, M and N represent the length and width of the interpolated video frame sub-block, LBP_{R,I}(i,j) represents the local binary pattern operator with neighborhood radius R and I pixels in the neighborhood for the pixel points of the interpolated video frame sub-block, k represents a local binary pattern of the local binary pattern operator, and ω represents the preset local binary pattern operator weight;
carrying out local normalization on the video frame sub-blocks to obtain normalized images of the video frame sub-blocks, and calculating brightness characteristics of the video frame sub-blocks according to the normalized images;
and mapping the edge features, the structural features and the brightness features through a pre-constructed feature-quality relation to obtain the image quality of the video frame sub-block.
Optionally, the performing image correction on the video frame sub-block according to the image quality and the number to obtain a target video frame sub-block includes:
determining an image correction strategy of the video frame sub-block according to the image quality and the number;
and carrying out image correction on the video frame sub-block based on the image correction strategy to obtain a target video frame sub-block.
Optionally, the encoding the video data according to the target video frame sub-block to obtain a video encoded file includes:
carrying out intra-frame prediction on each video frame in the video data according to the target video frame sub-block to obtain an image residual error of each video frame;
performing discrete cosine transform on the image residual to obtain a video transform coefficient of the video data;
And quantizing the video transformation coefficient by using a preset quantization step length to obtain a video coding file.
Optionally, the embedding the audio binary bit into the video coding file to obtain a hybrid coding file of the video data and the audio data includes:
carrying out transformation coefficient correction on the video coding file according to the audio binary bit to obtain an audio-video correction transformation coefficient;
and correcting the transform coefficients by using the following formula to obtain the audio-video corrected transform coefficients:

$$ \hat{c}_x = \begin{cases} c_x, & c_x \bmod 2 = b_y \\ c_x + 1, & c_x \bmod 2 \neq b_y \end{cases} $$

where \hat{c}_x represents the x-th audio-video corrected transform coefficient, c_x represents the x-th quantized video transform coefficient in the video coding file, and b_y represents the y-th audio binary bit corresponding to the x-th quantized video transform coefficient;
reordering the audio and video correction transformation coefficients to obtain a correction transformation coefficient sequence;
and performing entropy coding on the corrected transform coefficient sequence to obtain a mixed coding file of the video data and the audio data.
Optionally, the packaging and transmitting the time hybrid code file to a preset destination address includes:
Calling a preset packing function according to the destination address and the time hybrid code file;
packaging the destination address and the time hybrid code file by using the packaging function to obtain a hybrid data packet;
and transmitting the mixed data packet to the destination address.
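The packaging and transmission steps above can be sketched as follows (an illustrative Python sketch; the binary header layout, the field names, and the address string are assumptions for demonstration, since the patent does not specify a packet format):

```python
import struct
import time

def pack_mixed_file(destination, payload, timestamp=None):
    """Pack a time-stamped mixed-coding payload for transmission.

    Layout (illustrative): a fixed header of timestamp (double), address
    length and payload length (two unsigned ints, network byte order),
    followed by the destination address and the payload bytes.
    """
    if timestamp is None:
        timestamp = time.time()  # the timestamp created for the mixed file
    address = destination.encode("utf-8")
    header = struct.pack("!dII", timestamp, len(address), len(payload))
    return header + address + payload

def unpack_mixed_file(packet):
    """Inverse of pack_mixed_file, as the receiving end would apply."""
    timestamp, addr_len, payload_len = struct.unpack("!dII", packet[:16])
    address = packet[16:16 + addr_len].decode("utf-8")
    payload = packet[16 + addr_len:16 + addr_len + payload_len]
    return timestamp, address, payload
```

In practice the packed bytes would be handed to a socket send call toward the destination address; the sketch only shows the packing function itself.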
In order to solve the above problems, the present invention further provides an audio/video stream synchronous transmission device, which includes:
the video frame blocking module is used for acquiring synchronously acquired video data and audio data, and blocking each video frame in the video data to obtain video frame sub-blocks;
the video frame sub-block image correction module is used for calculating the image quality of the video frame sub-block, counting the number of the video frame sub-blocks, and carrying out image correction on the video frame sub-block according to the image quality and the number to obtain a target video frame sub-block;
the video coding and audio conversion module is used for coding the video data according to the target video frame sub-block to obtain a video coding file, and converting the audio data into a binary bit stream to obtain an audio binary bit;
the audio data embedding module is used for embedding the audio binary bit into the video coding file to obtain a mixed coding file of the video data and the audio data;
And the audio and video synchronous transmission module is used for creating a time stamp for the mixed coding file to obtain a time mixed coding file, and packaging and transmitting the time mixed coding file to a preset destination address.
In order to solve the above-mentioned problems, the present invention also provides an electronic apparatus including:
at least one processor; and,
a memory communicatively coupled to the at least one processor; wherein,
the memory stores a computer program executable by the at least one processor, the computer program being executable by the at least one processor to enable the at least one processor to perform the audio and video stream synchronous transmission method described above.
In order to solve the above-mentioned problems, the present invention also provides a computer readable storage medium having stored therein at least one computer program that is executed by a processor in an electronic device to implement the above-mentioned audio/video stream synchronous transmission method.
According to the embodiment of the invention, each video frame in the video data is partitioned, and image correction is performed on the resulting video frame sub-blocks to obtain target video frame sub-blocks, which reduces the data volume while ensuring the image quality of the video data and enabling continuity in data transmission. The video data is encoded through the target video frame sub-blocks, and the audio data is converted into a binary bit stream and embedded into the video coding file, realizing synchronous encoding of the video data and the audio data and thereby improving the synchronization rate of synchronous audio and video transmission. A timestamp is created for the mixed coding file to attest its generation time, and the time-stamped mixed coding file is packaged and transmitted to the destination address, realizing synchronous transmission of the audio and video data. Therefore, the audio and video stream synchronous transmission method, device, electronic device and computer-readable storage medium of the invention can solve the problem of a poor synchronization rate during synchronous audio and video stream transmission.
Drawings
Fig. 1 is a flow chart of an audio/video stream synchronous transmission method according to an embodiment of the present application;
FIG. 2 is a flowchart illustrating a method for calculating image quality of a video frame sub-block according to an embodiment of the present application;
FIG. 3 is a flowchart illustrating a method for generating a video encoded file according to an embodiment of the present application;
fig. 4 is a functional block diagram of an audio/video stream synchronous transmission device according to an embodiment of the present application;
fig. 5 is a schematic structural diagram of an electronic device for implementing the audio/video stream synchronous transmission method according to an embodiment of the present application.
The achievement of the objects, functional features and advantages of the present application will be further described with reference to the accompanying drawings, in conjunction with the embodiments.
Detailed Description
It should be understood that the specific embodiments described herein are for purposes of illustration only and are not intended to limit the scope of the application.
The embodiment of the application provides an audio and video stream synchronous transmission method. The execution body of the method includes, but is not limited to, at least one of a server, a terminal, and other devices that can be configured to execute the method provided by the embodiment of the application. In other words, the audio and video stream synchronous transmission method may be performed by software or hardware installed in a terminal device or a server device, and the software may be a blockchain platform. The server side includes, but is not limited to, a single server, a server cluster, a cloud server, a cloud server cluster, and the like. The server may be an independent server, or a cloud server providing cloud services, cloud databases, cloud computing, cloud functions, cloud storage, network services, cloud communications, middleware services, domain name services, security services, content delivery networks (Content Delivery Network, CDN), and basic cloud computing services such as big data and artificial intelligence platforms.
Referring to fig. 1, a flowchart of an audio/video stream synchronous transmission method according to an embodiment of the invention is shown. In this embodiment, the audio/video stream synchronous transmission method includes:
s1, acquiring synchronously acquired video data and audio data, and partitioning each video frame in the video data to obtain video frame sub-blocks.
In the embodiment of the invention, the video data and the audio data are collected by audio and video acquisition devices in the same environment over the same time period. Partitioning each video frame in the video data into blocks reduces the amount of computation on the video frame images.
In an embodiment of the present invention, the partitioning each video frame in the video data includes:
acquiring the frame rate of the video data, and extracting each video frame of the video data according to the frame rate;
and dividing each video frame according to the size of each video frame to obtain video frame sub-blocks of each video frame.
In the embodiment of the invention, the video data is formed of a plurality of video frames. For example, if the frame rate of the video data is a constant 60 frames per second (fps), the video contains 60 frames, i.e. 60 video frame images, in each second. Different setting parameters of the acquisition device yield different frame rates, so each video frame is extracted from the video data according to the frame rate, and the extracted frames are then partitioned to obtain the video frame sub-blocks.
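The frame-blocking step can be sketched as follows (an illustrative Python sketch; the function name, block size, and list-of-rows frame layout are assumptions for demonstration, not details specified by the patent):

```python
def split_into_subblocks(frame, block_h, block_w):
    """Split a 2-D frame (list of pixel rows) into block_h x block_w sub-blocks.

    Edge blocks simply come out smaller when the frame size is not an exact
    multiple of the block size, mirroring the idea of partitioning each
    video frame according to its size.
    """
    height, width = len(frame), len(frame[0])
    blocks = []
    for top in range(0, height, block_h):
        for left in range(0, width, block_w):
            block = [row[left:left + block_w] for row in frame[top:top + block_h]]
            blocks.append(block)
    return blocks

# A 4x4 frame split into 2x2 sub-blocks yields 4 blocks.
frame = [[r * 4 + c for c in range(4)] for r in range(4)]
subblocks = split_into_subblocks(frame, 2, 2)
```

A real implementation might instead pad the frame so every sub-block has a uniform size; the sketch keeps edge blocks as-is.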
S2, calculating the image quality of the video frame sub-blocks, counting the number of the video frame sub-blocks, and carrying out image correction on the video frame sub-blocks according to the image quality and the number to obtain target video frame sub-blocks.
In the embodiment of the invention, the image quality represents the degree of distortion of each video frame sub-block: the greater the distortion, the worse the image quality; the smaller the distortion, the higher the image quality. Image correction is performed on the video frames according to the image quality of the video frame sub-blocks, ensuring that the image distortion of each video frame remains small in subsequent transmission.
In an embodiment of the present invention, referring to fig. 2, the calculating the image quality of the video frame sub-block includes:
s21, convolving the video frame sub-block by using a preset filter to obtain edge characteristics of the video frame sub-block;
s22, performing bicubic interpolation on the video frame sub-blocks to obtain interpolation video frame sub-blocks, and generating structural features of the video frame sub-blocks according to the interpolation video frame sub-blocks;
s23, carrying out local normalization on the video frame sub-blocks to obtain normalized images of the video frame sub-blocks, and calculating brightness characteristics of the video frame sub-blocks according to the normalized images;
And S24, mapping the edge features, the structural features and the brightness features through a pre-constructed feature-quality relation to obtain the image quality of the video frame sub-block.
In the embodiment of the invention, a Gabor filter can be used to extract the edge features of the image, obtaining the edge features of the video frame sub-block. Bicubic interpolation, also called cubic convolution interpolation, performs cubic interpolation using the gray values of the 16 pixel points surrounding the point to be sampled in the video frame sub-block; it considers not only the gray values of the 4 directly adjacent points but also the rate of change of gray values among the adjacent points, so the third-order operations yield a magnification effect closer to a true high-resolution image, producing the interpolated video frame sub-block.
In the embodiment of the invention, the interpolated video frame sub-block can be convolved horizontally and vertically with the Gabor filter to obtain the gradient information map of the video frame sub-block, and the structural features of the video frame sub-block are then calculated from the gradient information map.
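The cubic convolution weights underlying bicubic interpolation can be illustrated with the standard Keys kernel (a sketch under assumptions: the common default a = -0.5 and the helper names are illustrative, not taken from the patent):

```python
def cubic_kernel(x, a=-0.5):
    """Keys cubic convolution kernel commonly used for bicubic interpolation.

    Weights for the 16 surrounding pixels come from evaluating this kernel
    at the horizontal and vertical distances to the sample point.
    """
    x = abs(x)
    if x < 1:
        return (a + 2) * x ** 3 - (a + 3) * x ** 2 + 1
    if x < 2:
        return a * x ** 3 - 5 * a * x ** 2 + 8 * a * x - 4 * a
    return 0.0

def bicubic_weights(d):
    """Weights for the 4 pixels in one dimension at fractional offset d in (0, 1).

    The 2-D case takes the outer product of the row and column weights,
    covering the 16 surrounding pixel points mentioned above.
    """
    return [cubic_kernel(1 + d), cubic_kernel(d), cubic_kernel(1 - d), cubic_kernel(2 - d)]
```

The weights sum to one for any offset, so interpolating a flat region reproduces the original gray value.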
In the embodiment of the invention, the structural features of the video frame sub-blocks are extracted using the following formula:

$$ w = \sum_{i=1}^{M}\sum_{j=1}^{N} \omega \cdot \mathbf{1}\left[\operatorname{LBP}_{R,I}(i,j) = k\right] $$

where w represents the structural feature, i and j represent the abscissa and ordinate of the pixel points in the interpolated video frame sub-block, M and N represent the length and width of the interpolated video frame sub-block, LBP_{R,I}(i,j) represents the local binary pattern operator with neighborhood radius R and I pixels in the neighborhood for the pixel points of the interpolated video frame sub-block, k represents a local binary pattern of the local binary pattern operator, and ω represents the preset local binary pattern operator weight;
in the embodiment of the present invention, the local binary patterns (Local Binary Pattern, LBP) represent different local gradient patterns, derived from the difference between the central pixel and its surrounding pixels in the image gradient map, so that the same pattern is unchanged for the central pixel value (i.e. the gradient amplitude at the position), for example, in the embodiment of the present invention, if the number of pixels I in the domain is 8, the radius of the domain is R and is set to 1, and if there are k possible LBP patterns with i+2 for 10 kinds, i.e. a 10-dimensional vector can be used to represent the feature structure.
In the embodiment of the invention, the brightness characteristic represents the change of the brightness of the pixels in the video frame sub-block, and the brightness of the video frame sub-block is calculated by local normalization, so that the characteristic of the video frame sub-block can be effectively represented, and the image quality of the video frame sub-block is further captured.
In the embodiment of the invention, the image quality of the video frame sub-blocks is comprehensively calculated by combining the edge features, the structural features and the brightness features, and the image quality of each video frame sub-block is obtained by mapping these features through the pre-constructed feature-quality relation.
In the embodiment of the invention, the image correction is carried out on the video frame sub-block, so that the image quality of the target video frame sub-block is ensured, the calculated amount of the subsequent video data coding is reduced, and the stable transmission of the hybrid coding file is realized.
In the embodiment of the present invention, the performing image correction on the video frame sub-block according to the image quality and the number to obtain a target video frame sub-block includes:
determining an image correction strategy of the video frame sub-block according to the image quality and the number;
and carrying out image correction on the video frame sub-block based on the image correction strategy to obtain a target video frame sub-block.
For example, when the image quality is greater than a preset quality requirement and the number of video frame sub-blocks is greater than a preset number threshold, a preset amount of repeated pixel data in the video frame sub-blocks is removed, reducing the data volume of the target video frame sub-blocks while the continuity of subsequent packet transmission and the quality of the target video frame sub-blocks still meet the requirement. If the image quality is not greater than the preset quality requirement, or the number of video frame sub-blocks is not greater than the preset number threshold, image enhancement such as denoising, histogram equalization, or gamma correction is performed on the video frame sub-blocks to improve the image quality of the target video frame sub-blocks.
In the embodiment of the invention, the image quality and the number of the video frame sub-blocks are calculated, so that the image quality of the target video frame sub-blocks after image correction can be ensured, the image calculation amount is reduced, and the continuity in data transmission is ensured.
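The quality- and count-based choice of correction strategy in the example above might be sketched as follows (the threshold values and strategy labels are illustrative placeholders, not values from the patent):

```python
def correction_strategy(quality, block_count, quality_threshold=0.8, count_threshold=64):
    """Pick an image-correction strategy for a video frame sub-block.

    Per the rule above: high quality together with many sub-blocks means
    redundant data can be thinned out; otherwise the sub-block is enhanced
    (e.g. denoising, histogram equalization, gamma correction).
    """
    if quality > quality_threshold and block_count > count_threshold:
        return "remove_redundant_pixels"
    return "enhance"
```

The returned label would then dispatch to the actual correction routine for that sub-block.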
And S3, coding the video data according to the target video frame sub-block to obtain a video coding file, and converting the audio data into a binary bit stream to obtain an audio binary bit.
In the embodiment of the invention, encoding is a process of reducing the volume or bit rate of the video data without affecting its quality, compressing the target video frame sub-blocks into a video coding file formed by the coded stream. The luminance and chrominance values of two adjacent pixels in a video frame sub-block are relatively close, i.e. colors change gradually rather than jumping to completely different colors; video coding exploits this correlation to compress the target video frame sub-blocks into a video coding file, so that the video data and the audio data can be encoded synchronously, realizing the synchronization of the video data and the audio data.
In an embodiment of the present invention, referring to fig. 3, the encoding the video data according to the target video frame sub-block to obtain a video encoded file includes:
S31, carrying out intra-frame prediction on each video frame in the video data according to the target video frame sub-block to obtain an image residual error of each video frame;
s32, performing discrete cosine transform on the image residual error to obtain a video transformation coefficient of the video data;
s33, quantizing the video transformation coefficient by using a preset quantization step length to obtain a video coding file.
In the embodiment of the present invention, intra-frame prediction uses correlation in the video spatial domain: the current pixel is predicted from adjacent, already-coded pixels in the same frame image, effectively removing spatial redundancy in the video. For example, storing the brightness value of one pixel may require 8 bits, but if two adjacent pixels differ little, the original value of the first pixel can be stored together with the change of the second pixel relative to the first; the second value then needs only 2 bits, saving storage space.
In the embodiment of the invention, the discrete cosine transform (Discrete Cosine Transform, DCT) converts a two-dimensional image from the spatial domain to the frequency domain, that is, it calculates which two-dimensional cosine waves the image is composed of; the amplitude of each two-dimensional cosine wave is called a DCT coefficient, and the superposition of all the two-dimensional waves reproduces the target video frame sub-block of the original video data, thereby realizing the encoding of the video data.
In the embodiment of the invention, after the discrete cosine transform the energy of the video transform coefficients is mainly concentrated in the upper left corner, and the remaining coefficients are close to zero. The principle of quantization is to divide each transformed video transform coefficient by a constant, namely the quantization step size, and round the result; the reconstructed values are then integer multiples of the quantization step size, and many coefficients become zero. In this way the video data is encoded and a video coding file is obtained.
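The transform-and-quantize step can be sketched on a single 8×8 block as follows (a minimal illustration built from the DCT-II definition; the 8×8 block size and the quantization step of 16 are assumptions for the example, not values fixed by the invention):

```python
import numpy as np

def dct_matrix(n=8):
    """Orthonormal DCT-II basis matrix D, so that coefficients = D @ block @ D.T."""
    D = np.zeros((n, n))
    for k in range(n):
        for i in range(n):
            D[k, i] = np.cos(np.pi * (2 * i + 1) * k / (2 * n))
    D[0, :] *= np.sqrt(1 / n)
    D[1:, :] *= np.sqrt(2 / n)
    return D

def encode_block(block, q_step=16):
    """2-D DCT followed by uniform quantization with step q_step."""
    D = dct_matrix(block.shape[0])
    coeffs = D @ block @ D.T
    return np.round(coeffs / q_step).astype(int)

# A smooth gradient block: after the DCT its energy sits in the top-left corner
block = np.add.outer(np.arange(8), np.arange(8)).astype(float) * 4
q = encode_block(block)
print(q[0, 0], np.count_nonzero(q))  # the DC coefficient dominates; most entries are zero
```

For a smooth block only a handful of quantized coefficients survive, which is what makes the subsequent entropy coding compact.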
In the embodiment of the invention, the binary bit stream converts the audio data into a binary data sequence: if the audio data contains characters, each character is represented by its one-byte binary ASCII code; if the audio data contains digits, each digit is represented by a one-byte binary number. A bit is the minimum unit of computer storage, and each bit can only be 0 or 1; that is, the audio data is converted into a binary sequence consisting of 0s and 1s, so that the audio binary bits of the audio data are obtained.
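The byte-to-bit conversion described above can be sketched as follows (a minimal illustration; the most-significant-bit-first ordering is an assumption for the example):

```python
def audio_to_bits(data: bytes):
    """Expand each byte of the audio data into its 8 binary digits (MSB first)."""
    return [(byte >> shift) & 1 for byte in data for shift in range(7, -1, -1)]

bits = audio_to_bits(b"A")  # ASCII 'A' = 65 = 0b01000001
assert bits == [0, 1, 0, 0, 0, 0, 0, 1]
```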
S4, embedding the audio binary bit into the video coding file to obtain the mixed coding file of the video data and the audio data.
In the embodiment of the invention, audio binary bit embedding establishes a relation between the binary bits of the audio data and the Discrete Cosine Transform (DCT) coefficients in the video coding file: if a DCT coefficient is even, an audio bit of 0 is embedded into the video coding file; if the DCT coefficient is odd, an audio bit of 1 is embedded into the video coding file. In this way the complete sequence of audio binary bits is embedded into the video coding file.
In an embodiment of the present invention, the embedding the audio binary bit into the video coding file to obtain the hybrid coding file of the video data and the audio data includes:
carrying out transformation coefficient correction on the video coding file according to the audio binary bit to obtain an audio-video correction transformation coefficient;
reordering the audio and video correction transformation coefficients to obtain a correction transformation coefficient sequence;
and performing entropy coding on the coefficients in the modified transformation coefficient sequence to obtain a mixed coding file of the video data and the audio data.
In the embodiment of the present invention, the transformation coefficient correction is performed according to the quantized video transform coefficients in the video coding file. For example, where the video coding file data at a position is non-zero, an even quantized video transform coefficient at that position is used to carry an audio binary bit of 0, and an odd quantized video transform coefficient is used to carry an audio bit of 1; if a quantized video transform coefficient does not satisfy this relationship, transformation coefficient correction is performed on the video coding file, obtaining audio-video correction transformation coefficients that contain the audio data.
The embodiment of the invention carries out transformation coefficient correction by using the following formula to obtain the audio/video correction transformation coefficient:

C'_x = C_x, if C_x mod 2 = b_y;  C'_x = C_x + 1, otherwise

wherein C'_x represents the x-th audio/video correction transformation coefficient, C_x represents the x-th quantized video transformation coefficient in the video coding file, and b_y represents the y-th audio binary bit corresponding to the x-th quantized video transformation coefficient;
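The parity rule described in the embedding step above (an even coefficient carries a 0, an odd coefficient carries a 1, and the coefficient is adjusted when its parity does not match the audio bit) can be sketched as follows. This is a simplified illustration: the adjustment direction and the treatment of negative coefficients are assumptions, and the restriction to non-zero positions mentioned in the description is omitted for brevity:

```python
def embed_bits(coeffs, bits):
    """Force the parity of each quantized DCT coefficient to equal its audio bit."""
    out = []
    for c, b in zip(coeffs, bits):
        if c % 2 != b:                 # parity mismatch: nudge the magnitude by one
            c += 1 if c >= 0 else -1
        out.append(c)
    return out

def extract_bits(coeffs):
    """Recover the embedded audio bits from coefficient parity."""
    return [c % 2 for c in coeffs]

coeffs = [14, -5, 3, 0, 7]
bits = [1, 0, 1, 0, 0]
stego = embed_bits(coeffs, bits)
assert extract_bits(stego) == bits
```

Because the receiver only needs each coefficient's parity, the audio bits travel inside the video stream without a separate channel.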
In the embodiment of the invention, the quantized video transformation coefficients in the video coding file are in matrix form, so the audio/video correction transformation coefficients need to be reordered. The reordering rearranges the obtained audio/video correction transformation coefficients in sequence from left to right and from top to bottom, turning each block of audio/video correction transformation coefficients into a 64-dimensional row vector, so as to obtain the correction transformation coefficient sequence.
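The left-to-right, top-to-bottom reordering of an 8×8 coefficient matrix into a 64-dimensional row vector is simply a row-major flatten, sketched here for illustration:

```python
import numpy as np

def reorder(block8x8):
    """Read an 8x8 coefficient matrix left-to-right, top-to-bottom into a 64-dim row vector."""
    return np.asarray(block8x8).reshape(1, 64)

m = np.arange(64).reshape(8, 8)
row = reorder(m)
assert row.shape == (1, 64)
assert row[0, 8] == m[1, 0]  # element 8 of the vector starts the matrix's second row
```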
In the embodiment of the invention, entropy coding is a lossless coding mode; common entropy coding includes Huffman coding, arithmetic coding, run-length coding (RLE), context-based adaptive variable-length coding (CAVLC), context-based adaptive binary arithmetic coding (CABAC) and the like. The correction transformation coefficients carrying the audio and video data information are losslessly encoded through entropy coding, and the audio binary bits are embedded into the video coding file without losing data, thereby realizing synchronous transmission of the video data and the audio data.
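Of the entropy coders listed above, run-length coding is the simplest to illustrate; a minimal sketch (illustrative only, not the invention's chosen coder) shows why the long zero runs left by quantization compress well:

```python
def rle_encode(seq):
    """Run-length coding: collapse each run of equal values into a (value, count) pair."""
    runs = []
    for v in seq:
        if runs and runs[-1][0] == v:
            runs[-1][1] += 1
        else:
            runs.append([v, 1])
    return [tuple(r) for r in runs]

def rle_decode(runs):
    """Losslessly expand the (value, count) pairs back into the original sequence."""
    return [v for v, n in runs for _ in range(n)]

seq = [14, -5, 0, 0, 0, 0, 3, 0]  # quantization leaves long runs of zeros
runs = rle_encode(seq)
assert runs == [(14, 1), (-5, 1), (0, 4), (3, 1), (0, 1)]
assert rle_decode(runs) == seq    # entropy coding is lossless
```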
S5, creating a time stamp for the mixed coding file to obtain a time mixed coding file, and packaging and transmitting the time mixed coding file to a preset destination address.
In the embodiment of the invention, creating a time stamp for the mixed coding file provides a timer for the mixed coded data. The time stamp can record the insertion, deletion and update actions on the time mixed coding file, and constitutes complete, verifiable data proving that the time mixed coding file existed at a specific point in time, thereby providing the user with electronic evidence of the generation time of the time mixed coding file.
In the embodiment of the invention, the current time can be used as the time stamp of the mixed coding file to obtain the time mixed coding file.
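Attaching the current time as the file's time stamp can be sketched as follows (a minimal illustration; the dict container and field names are assumptions, not the invention's actual file format):

```python
import time

def add_timestamp(hybrid_file: bytes) -> dict:
    """Attach the current time as the mixed coding file's creation time stamp."""
    return {"timestamp": time.time(), "payload": hybrid_file}

stamped = add_timestamp(b"...encoded stream...")
assert stamped["timestamp"] <= time.time()
```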
In the embodiment of the invention, the destination address is the IP address to which the time mixed coding file needs to be transmitted; transmitting the time mixed coding file to the destination address completes the synchronous transmission of the audio data and the video data.
In the embodiment of the present invention, the time hybrid encoded file is packaged and transmitted to a preset destination address, which includes:
calling a preset packing function according to the destination address and the time hybrid code file;
Packaging the destination address and the time hybrid code file by using the packaging function to obtain a hybrid data packet;
and transmitting the mixed data packet to the destination address.
In the embodiment of the invention, the packing function is a function of packing the destination address and the time hybrid coding file into a hybrid data packet, so that the time hybrid coding file is transmitted to the destination address.
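One way such a packing function could look is sketched below, bundling the destination IPv4 address, a length field and the payload into one packet (the field layout, byte order and example address are assumptions for illustration, not the invention's actual packet format):

```python
import socket
import struct

def pack_hybrid(dest_ip: str, payload: bytes) -> bytes:
    """Bundle the destination address and the time mixed coding file into one packet:
    4-byte IPv4 address + 4-byte big-endian payload length + payload."""
    return socket.inet_aton(dest_ip) + struct.pack(">I", len(payload)) + payload

def unpack_hybrid(packet: bytes):
    """Split a packet back into its destination address and payload."""
    ip = socket.inet_ntoa(packet[:4])
    (length,) = struct.unpack(">I", packet[4:8])
    return ip, packet[8:8 + length]

pkt = pack_hybrid("192.168.1.20", b"time-hybrid-file")
assert unpack_hybrid(pkt) == ("192.168.1.20", b"time-hybrid-file")
```

The resulting bytes could then be handed to any transport (e.g. a UDP or TCP socket) addressed to the destination.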
According to the embodiment of the invention, each video frame in the video data is segmented, and image correction is carried out on the video frame sub-blocks obtained by the segmentation to obtain the target video frame sub-blocks, which reduces the data volume, ensures the image quality of the video data, and maintains continuity in data transmission; the video data is encoded through the target video frame sub-blocks, and the audio data is converted into a binary bit stream and embedded into the video coding file, realizing synchronous encoding of the video data and the audio data and thereby improving the synchronization rate of audio and video transmission; a time stamp is created for the mixed coding file to prove the generation time of the time mixed coding file, and the time-stamped mixed coding file is packaged and transmitted to the destination address, realizing synchronous transmission of the audio and video data. Therefore, the audio and video stream synchronous transmission method can solve the problem of a poor synchronization rate during synchronous transmission of audio and video streams.
Fig. 4 is a functional block diagram of an audio/video stream synchronous transmission device according to an embodiment of the present invention.
The audio/video stream synchronous transmission device 400 of the present invention can be installed in an electronic device. Depending on the implemented functions, the audio/video stream synchronous transmission device 400 may include a video frame blocking module 401, a video frame sub-block image correction module 402, a video encoding and audio conversion module 403, an audio data embedding module 404, and an audio/video synchronous transmission module 405. A module of the invention, which may also be referred to as a unit, refers to a series of computer program segments that are stored in the memory of the electronic device, can be executed by the processor of the electronic device, and perform a fixed function.
In the present embodiment, the functions concerning the respective modules/units are as follows:
the video frame blocking module 401 is configured to obtain video data and audio data that are collected synchronously, and block each video frame in the video data to obtain a video frame sub-block;
the video frame sub-block image correction module 402 is configured to calculate an image quality of the video frame sub-block, count a number of the video frame sub-blocks, and perform image correction on the video frame sub-block according to the image quality and the number to obtain a target video frame sub-block;
The video coding and audio conversion module 403 is configured to code the video data according to the target video frame sub-block to obtain a video coding file, and convert the audio data into a binary bit stream to obtain an audio binary bit;
the audio data embedding module 404 is configured to embed the audio binary bit into the video encoded file to obtain a hybrid encoded file of the video data and the audio data;
the audio/video synchronous transmission module 405 is configured to create a timestamp for the hybrid encoded file, obtain a time hybrid encoded file, and package and transmit the time hybrid encoded file to a preset destination address.
In detail, each module in the audio/video stream synchronous transmission device 400 in the embodiment of the present invention adopts the same technical means as the audio/video stream synchronous transmission method described in fig. 1 to 3, and can produce the same technical effects, which are not described herein.
Fig. 5 is a schematic structural diagram of an electronic device for implementing an audio/video stream synchronous transmission method according to an embodiment of the present invention.
The electronic device 500 may comprise a processor 501, a memory 502, a communication bus 503 and a communication interface 504, and may further comprise a computer program stored in the memory 502 and executable on the processor 501, such as an audio/video stream synchronous transmission method program.
The processor 501 may be formed by an integrated circuit in some embodiments, for example, a single packaged integrated circuit, or may be formed by a plurality of integrated circuits packaged with the same function or different functions, including one or more central processing units (Central Processing unit, CPU), a microprocessor, a digital processing chip, a graphics processor, a combination of various control chips, and so on. The processor 501 is a Control Unit (Control Unit) of the electronic device, connects various components of the entire electronic device using various interfaces and lines, and executes various functions of the electronic device and processes data by running or executing programs or modules stored in the memory 502 (for example, executing an audio/video stream synchronous transmission method program, etc.), and calling data stored in the memory 502.
The memory 502 includes at least one type of readable storage medium including flash memory, a removable hard disk, a multimedia card, a card memory (e.g., SD or DX memory, etc.), magnetic memory, magnetic disk, optical disk, etc. The memory 502 may in some embodiments be an internal storage unit of the electronic device, such as a mobile hard disk of the electronic device. The memory 502 may also be an external storage device of the electronic device in other embodiments, for example, a plug-in mobile hard disk, a Smart Media Card (SMC), a Secure Digital (SD) Card, a Flash memory Card (Flash Card) or the like. Further, the memory 502 may also include both internal storage units and external storage devices of the electronic device. The memory 502 may be used to store not only application software installed in an electronic device and various data, such as codes of an audio/video stream synchronous transmission method program, but also temporarily store data that has been output or is to be output.
The communication bus 503 may be a peripheral component interconnect standard (Peripheral Component Interconnect, PCI) bus or an extended industry standard architecture (Extended Industry Standard Architecture, EISA) bus, among others. The bus may be classified as an address bus, a data bus, a control bus, etc. The bus is arranged to enable connected communication between the memory 502 and the at least one processor 501 etc.
The communication interface 504 is used for communication between the electronic device and other devices, including network interfaces and user interfaces. Optionally, the network interface may include a wired interface and/or a wireless interface (e.g., WI-FI interface, bluetooth interface, etc.), typically used to establish a communication connection between the electronic device and other electronic devices. The user interface may be a Display (Display), an input unit such as a Keyboard (Keyboard), or alternatively a standard wired interface, a wireless interface. Alternatively, in some embodiments, the display may be an LED display, a liquid crystal display, a touch-sensitive liquid crystal display, an OLED (Organic Light-Emitting Diode) touch, or the like. The display may also be referred to as a display screen or display unit, as appropriate, for displaying information processed in the electronic device and for displaying a visual user interface.
The figure shows only an electronic device having certain components, and it will be understood by those skilled in the art that the structure shown in the figure does not limit the electronic device, which may include fewer or more components than shown, combine certain components, or arrange the components differently.
For example, although not shown, the electronic device may further include a power source (such as a battery) for supplying power to the respective components, and preferably, the power source may be logically connected to the at least one processor 501 through a power management device, so that functions of charge management, discharge management, power consumption management, and the like are implemented through the power management device. The power supply may also include one or more of any of a direct current or alternating current power supply, recharging device, power failure detection circuit, power converter or inverter, power status indicator, etc. The electronic device may further include various sensors, bluetooth modules, wi-Fi modules, etc., which are not described herein.
It should be understood that the embodiments described are for illustrative purposes only, and the scope of the patent application is not limited to this configuration.
The audio/video stream synchronous transmission method program stored in the memory 502 of the electronic device 500 is a combination of a plurality of instructions, and when executed in the processor 501, may implement:
Acquiring synchronously acquired video data and audio data, and partitioning each video frame in the video data to obtain video frame sub-blocks;
calculating the image quality of the video frame sub-blocks, counting the number of the video frame sub-blocks, and carrying out image correction on the video frame sub-blocks according to the image quality and the number to obtain target video frame sub-blocks;
encoding the video data according to the target video frame sub-block to obtain a video encoding file, and converting the audio data into a binary bit stream to obtain an audio binary bit;
embedding the audio binary bit into the video coding file to obtain a mixed coding file of the video data and the audio data;
creating a time stamp for the mixed code file to obtain a time mixed code file, and packaging and transmitting the time mixed code file to a preset destination address.
In particular, the specific implementation method of the above instruction by the processor 501 may refer to the description of the relevant steps in the corresponding embodiment of the drawings, which is not repeated herein.
Further, the modules/units integrated in the electronic device 500 may be stored in a computer readable storage medium if implemented in the form of software functional units and sold or used as a standalone product. The computer readable storage medium may be volatile or nonvolatile. For example, the computer readable medium may include: any entity or device capable of carrying the computer program code, a recording medium, a USB flash drive, a removable hard disk, a magnetic disk, an optical disk, a computer memory, or a Read-Only Memory (ROM).
The present invention also provides a computer readable storage medium storing a computer program which, when executed by a processor of an electronic device, can implement:
acquiring synchronously acquired video data and audio data, and partitioning each video frame in the video data to obtain video frame sub-blocks;
calculating the image quality of the video frame sub-blocks, counting the number of the video frame sub-blocks, and carrying out image correction on the video frame sub-blocks according to the image quality and the number to obtain target video frame sub-blocks;
encoding the video data according to the target video frame sub-block to obtain a video encoding file, and converting the audio data into a binary bit stream to obtain an audio binary bit;
embedding the audio binary bit into the video coding file to obtain a mixed coding file of the video data and the audio data;
creating a time stamp for the mixed code file to obtain a time mixed code file, and packaging and transmitting the time mixed code file to a preset destination address.
In the several embodiments provided in the present invention, it should be understood that the disclosed apparatus, device and method may be implemented in other manners. For example, the above-described apparatus embodiments are merely illustrative, and for example, the division of the modules is merely a logical function division, and there may be other manners of division when actually implemented.
The modules described as separate components may or may not be physically separate, and components shown as modules may or may not be physical units, may be located in one place, or may be distributed over multiple network units. Some or all of the modules may be selected according to actual needs to achieve the purpose of the solution of this embodiment.
In addition, each functional module in the embodiments of the present invention may be integrated into one processing unit, or each unit may exist alone physically, or two or more units may be integrated into one unit. The integrated units can be implemented in the form of hardware, or in the form of hardware plus software functional modules.
It will be evident to those skilled in the art that the invention is not limited to the details of the foregoing illustrative embodiments, and that the present invention may be embodied in other specific forms without departing from the spirit or essential characteristics thereof.
The present embodiments are, therefore, to be considered in all respects as illustrative and not restrictive, the scope of the invention being indicated by the appended claims rather than by the foregoing description, and all changes which come within the meaning and range of equivalency of the claims are therefore intended to be embraced therein. Any reference signs in the claims shall not be construed as limiting the claim concerned.
The embodiment of the application can acquire and process the related data based on the artificial intelligence technology. Among these, artificial intelligence (Artificial Intelligence, AI) is the theory, method, technique and application system that uses a digital computer or a digital computer-controlled machine to simulate, extend and extend human intelligence, sense the environment, acquire knowledge and use knowledge to obtain optimal results.
Furthermore, it is evident that the word "comprising" does not exclude other elements or steps, and that the singular does not exclude a plurality. A plurality of units or means recited in the system claims can also be implemented by means of software or hardware by means of one unit or means. The terms first, second, etc. are used to denote a name, but not any particular order.
Finally, it should be noted that the above-mentioned embodiments are merely for illustrating the technical solution of the present application and not for limiting the same, and although the present application has been described in detail with reference to the preferred embodiments, it should be understood by those skilled in the art that modifications and equivalents may be made to the technical solution of the present application without departing from the spirit and scope of the technical solution of the present application.

Claims (7)

1. An audio and video stream synchronous transmission method is characterized by comprising the following steps:
Acquiring synchronously acquired video data and audio data, and partitioning each video frame in the video data to obtain video frame sub-blocks;
performing convolution operation on the video frame sub-block by using a preset filter to obtain edge characteristics of the video frame sub-block, performing bicubic interpolation on the video frame sub-block to obtain an interpolation video frame sub-block, and generating structural characteristics of the video frame sub-block according to the interpolation video frame sub-block;
extracting structural features of the video frame sub-blocks using the following formula, including:
S = (1/(M×N)) × Σ_{i=1..M} Σ_{j=1..N} w · LBP_{P,R}(i, j)

wherein S represents the structural feature, (i, j) represents the abscissa and ordinate of a pixel point in the interpolated video frame sub-block, M and N represent the length and width of the interpolated video frame sub-block, LBP_{P,R}(i, j) represents the local binary pattern of a local binary pattern operator whose neighborhood radius is R and whose number of neighborhood pixels is P, and w represents the preset local binary pattern operator weight;
carrying out local normalization on the video frame sub-blocks to obtain normalized images of the video frame sub-blocks, calculating brightness characteristics of the video frame sub-blocks according to the normalized images, mapping the edge characteristics, the structural characteristics and the brightness characteristics with a pre-constructed characteristic quality relation to obtain image quality of the video frame sub-blocks, counting the number of the video frame sub-blocks, determining an image correction strategy of the video frame sub-blocks according to the image quality and the number, and carrying out image correction on the video frame sub-blocks based on the image correction strategy to obtain target video frame sub-blocks;
Encoding the video data according to the target video frame sub-block to obtain a video encoding file, and converting the audio data into a binary bit stream to obtain an audio binary bit;
embedding the audio binary bit into the video coding file to obtain a mixed coding file of the video data and the audio data, wherein the embedding the audio binary bit into the video coding file to obtain the mixed coding file of the video data and the audio data comprises: carrying out transformation coefficient correction on the video coding file according to the audio binary bit to obtain an audio-video correction transformation coefficient; reordering the audio and video correction transformation coefficients to obtain a correction transformation coefficient sequence; and performing entropy coding on coefficients in the correction transformation coefficient sequence, so as to obtain a mixed coding file of the video data and the audio data;
creating a time stamp for the mixed code file to obtain a time mixed code file, and packaging and transmitting the time mixed code file to a preset destination address.
2. The method for synchronously transmitting an audio and video stream according to claim 1, wherein said partitioning each video frame in said video data comprises:
Acquiring the frame rate of the video data, and extracting each video frame of the video data according to the frame rate;
and dividing each video frame according to the size of each video frame to obtain video frame sub-blocks of each video frame.
3. The method for synchronously transmitting an audio and video stream according to claim 1, wherein said encoding said video data according to said target video frame sub-block to obtain a video encoded file comprises:
carrying out intra-frame prediction on each video frame in the video data according to the target video frame sub-block to obtain an image residual error of each video frame;
performing discrete cosine transform on the image residual to obtain a video transform coefficient of the video data;
and quantizing the video transformation coefficient by using a preset quantization step length to obtain a video coding file.
4. The method for synchronously transmitting an audio and video stream according to claim 1, wherein said packaging and transmitting the time-mixed encoded file to a preset destination address comprises:
calling a preset packing function according to the destination address and the time hybrid code file;
packaging the destination address and the time hybrid code file by using the packaging function to obtain a hybrid data packet;
And transmitting the mixed data packet to the destination address.
5. An audio and video stream synchronous transmission device, characterized in that the device comprises:
the video frame blocking module is used for acquiring synchronously acquired video data and audio data, and blocking each video frame in the video data to obtain video frame sub-blocks;
the video frame sub-block image correction module is used for carrying out convolution operation on the video frame sub-block by utilizing a preset filter to obtain the edge characteristics of the video frame sub-block, carrying out bicubic interpolation on the video frame sub-block to obtain an interpolation video frame sub-block, and generating the structural characteristics of the video frame sub-block according to the interpolation video frame sub-block;
extracting structural features of the video frame sub-blocks using the following formula, including:
S = (1/(M×N)) × Σ_{i=1..M} Σ_{j=1..N} w · LBP_{P,R}(i, j)

wherein S represents the structural feature, (i, j) represents the abscissa and ordinate of a pixel point in the interpolated video frame sub-block, M and N represent the length and width of the interpolated video frame sub-block, LBP_{P,R}(i, j) represents the local binary pattern of a local binary pattern operator whose neighborhood radius is R and whose number of neighborhood pixels is P, and w represents the preset local binary pattern operator weight;
Carrying out local normalization on the video frame sub-blocks to obtain normalized images of the video frame sub-blocks, calculating brightness characteristics of the video frame sub-blocks according to the normalized images, mapping the edge characteristics, the structural characteristics and the brightness characteristics with a pre-constructed characteristic quality relation to obtain image quality of the video frame sub-blocks, counting the number of the video frame sub-blocks, determining an image correction strategy of the video frame sub-blocks according to the image quality and the number, and carrying out image correction on the video frame sub-blocks based on the image correction strategy to obtain target video frame sub-blocks;
the video coding and audio conversion module is used for coding the video data according to the target video frame sub-block to obtain a video coding file, and converting the audio data into a binary bit stream to obtain an audio binary bit;
an audio data embedding module, configured to embed the audio binary bit into the video coding file to obtain a mixed coding file of the video data and the audio data, wherein the embedding the audio binary bit into the video coding file to obtain the mixed coding file of the video data and the audio data comprises: carrying out transformation coefficient correction on the video coding file according to the audio binary bit to obtain an audio-video correction transformation coefficient; reordering the audio and video correction transformation coefficients to obtain a correction transformation coefficient sequence; and performing entropy coding on coefficients in the correction transformation coefficient sequence, so as to obtain a mixed coding file of the video data and the audio data;
And the audio and video synchronous transmission module is used for creating a time stamp for the mixed coding file to obtain a time mixed coding file, and packaging and transmitting the time mixed coding file to a preset destination address.
6. An electronic device, the electronic device comprising:
at least one processor; the method comprises the steps of,
a memory communicatively coupled to the at least one processor; wherein,
the memory stores a computer program executable by the at least one processor to enable the at least one processor to perform the audio video stream synchronous transmission method according to any one of claims 1 to 4.
7. A computer readable storage medium storing a computer program, wherein the computer program when executed by a processor implements the audio-video stream synchronous transmission method according to any one of claims 1 to 4.
CN202310416903.7A 2023-04-19 2023-04-19 Audio and video stream synchronous transmission method and device, electronic equipment and storage medium Active CN116132759B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202310416903.7A CN116132759B (en) 2023-04-19 2023-04-19 Audio and video stream synchronous transmission method and device, electronic equipment and storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202310416903.7A CN116132759B (en) 2023-04-19 2023-04-19 Audio and video stream synchronous transmission method and device, electronic equipment and storage medium

Publications (2)

Publication Number Publication Date
CN116132759A CN116132759A (en) 2023-05-16
CN116132759B true CN116132759B (en) 2023-09-12

Family

ID=86297770

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202310416903.7A Active CN116132759B (en) 2023-04-19 2023-04-19 Audio and video stream synchronous transmission method and device, electronic equipment and storage medium

Country Status (1)

Country Link
CN (1) CN116132759B (en)

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN1215962A * 1997-02-13 1999-05-05 Sony Corporation Picture signal processing method and apparatus
US6983057B1 (en) * 1998-06-01 2006-01-03 Datamark Technologies Pte Ltd. Methods for embedding image, audio and video watermarks in digital data
CN101217670A * 2008-01-14 2008-07-09 Jilin University Audio-adaptive video embedding and extraction method for audio and video synchronization in encoding and decoding
CN110782413A * 2019-10-30 2020-02-11 Beijing Kingsoft Cloud Network Technology Co., Ltd. Image processing method, device, equipment and storage medium
CN112272313A * 2020-12-23 2021-01-26 Shenzhen Lebo Technology Co., Ltd. HID-based audio and video transmission method and device and computer readable storage medium

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US9886961B2 (en) * 2015-01-15 2018-02-06 Gopro, Inc. Audio watermark in a digital video

Also Published As

Publication number Publication date
CN116132759A (en) 2023-05-16

Similar Documents

Publication Publication Date Title
US10412393B2 (en) Intra-frame encoding method, intra-frame decoding method, encoder, and decoder
CN109842803B (en) Image compression method and device
CN105100814B (en) Image coding and decoding method and device
JPH06511361A (en) Adaptive block size image compression method and system
JP2009010954A (en) Method and system for processing image at high speed
CN110691250B (en) Image compression apparatus combining block matching and string matching
CN107431812B (en) For showing the complex region detection of stream compression
CN111741302B (en) Data processing method and device, computer readable medium and electronic equipment
CN113170140A (en) Bit plane encoding of data arrays
KR101805550B1 (en) Image data encoding method for presentation virtualization and server therefor
CN108769684A (en) Image processing method based on WebP image compression algorithms and device
CN102271251B (en) Lossless image compression method
CN107431811A (en) For showing that the quantization parameter of stream compression updates classification
JP2010098352A (en) Image information encoder
CN110913230A (en) Video frame prediction method and device and terminal equipment
CN104104953A (en) Tile-based compression and decompression for graphic applications
WO2024078066A1 (en) Video decoding method and apparatus, video encoding method and apparatus, storage medium, and device
CN116132759B (en) Audio and video stream synchronous transmission method and device, electronic equipment and storage medium
WO2012118569A1 (en) Visually optimized quantization
CN102577412A (en) Image coding method and device
CN114693818A (en) Compression method suitable for digital ortho image data
CN108900842B (en) Y data compression processing method, device and equipment and WebP compression system
CN107172425B (en) Thumbnail generation method and device and terminal equipment
CN110876062A (en) Electronic device for high-speed compression processing of feature map and control method thereof
Kim et al. Implementation of DWT-based adaptive mode selection for LCD overdrive

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant