CN117641063A - Video source previewing method and device, electronic equipment and storage medium - Google Patents
Video source previewing method and device, electronic equipment and storage medium
- Publication number
- CN117641063A (application number CN202311656146.7A)
- Authority
- CN
- China
- Prior art keywords
- video signal
- target
- video
- pixel point
- data
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Granted
Classifications
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N21/00—Selective content distribution, e.g. interactive television or video on demand [VOD]
- H04N21/40—Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
- H04N21/47—End-user applications
- H04N21/482—End-user interface for program selection
- H04N21/4828—End-user interface for program selection for searching program descriptors
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N21/00—Selective content distribution, e.g. interactive television or video on demand [VOD]
- H04N21/40—Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
- H04N21/43—Processing of content or additional data, e.g. demultiplexing additional data from a digital video stream; Elementary client operations, e.g. monitoring of home network or synchronising decoder's clock; Client middleware
- H04N21/44—Processing of video elementary streams, e.g. splicing a video clip retrieved from local storage with an incoming video stream or rendering scenes according to encoded video stream scene graphs
- H04N21/4402—Processing of video elementary streams, e.g. splicing a video clip retrieved from local storage with an incoming video stream or rendering scenes according to encoded video stream scene graphs involving reformatting operations of video signals for household redistribution, storage or real-time display
- H04N21/440263—Processing of video elementary streams, e.g. splicing a video clip retrieved from local storage with an incoming video stream or rendering scenes according to encoded video stream scene graphs involving reformatting operations of video signals for household redistribution, storage or real-time display by altering the spatial resolution, e.g. for displaying on a connected PDA
- H04N21/440272—Processing of video elementary streams, e.g. splicing a video clip retrieved from local storage with an incoming video stream or rendering scenes according to encoded video stream scene graphs involving reformatting operations of video signals for household redistribution, storage or real-time display by altering the spatial resolution, e.g. for displaying on a connected PDA for performing aspect ratio conversion
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N21/00—Selective content distribution, e.g. interactive television or video on demand [VOD]
- H04N21/40—Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
- H04N21/43—Processing of content or additional data, e.g. demultiplexing additional data from a digital video stream; Elementary client operations, e.g. monitoring of home network or synchronising decoder's clock; Client middleware
- H04N21/44—Processing of video elementary streams, e.g. splicing a video clip retrieved from local storage with an incoming video stream or rendering scenes according to encoded video stream scene graphs
- H04N21/4402—Processing of video elementary streams, e.g. splicing a video clip retrieved from local storage with an incoming video stream or rendering scenes according to encoded video stream scene graphs involving reformatting operations of video signals for household redistribution, storage or real-time display
- H04N21/440281—Processing of video elementary streams, e.g. splicing a video clip retrieved from local storage with an incoming video stream or rendering scenes according to encoded video stream scene graphs involving reformatting operations of video signals for household redistribution, storage or real-time display by altering the temporal resolution, e.g. by frame skipping
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N21/00—Selective content distribution, e.g. interactive television or video on demand [VOD]
- H04N21/60—Network structure or processes for video distribution between server and client or between remote clients; Control signalling between clients, server and network components; Transmission of management data between server and client, e.g. sending from server to client commands for recording incoming content stream; Communication details between server and client
- H04N21/63—Control signaling related to video distribution between client, server and network components; Network processes for video distribution between server and clients or between remote clients, e.g. transmitting basic layer and enhancement layers over different transmission paths, setting up a peer-to-peer communication via Internet between remote STB's; Communication protocols; Addressing
- H04N21/637—Control signals issued by the client directed to the server or network components
- H04N21/6377—Control signals issued by the client directed to the server or network components directed to server
- H04N21/6379—Control signals issued by the client directed to the server or network components directed to server directed to encoder, e.g. for requesting a lower encoding rate
Landscapes
- Engineering & Computer Science (AREA)
- Multimedia (AREA)
- Signal Processing (AREA)
- Human Computer Interaction (AREA)
- Compression Or Coding Systems Of Tv Signals (AREA)
Abstract
The present disclosure relates to the field of video source technologies, and in particular to a video source previewing method and device, an electronic apparatus, and a storage medium. The method comprises: acquiring the signal output by a video source to be previewed, namely the original image signal of the video; encoding the video signal to obtain a target video signal; and packing the target video signal and sending it to a user terminal, so that the user terminal decodes and displays the packed target video signal to realize video source previewing. No dedicated codec chip is needed to encode and decode the video source, which reduces the cost of video source previewing.
Description
Technical Field
The present disclosure relates to the field of video source technologies, and in particular, to a video source previewing method, a device, an electronic apparatus, and a storage medium.
Background
Video source previewing refers to previewing video files so that they can be searched and screened quickly; through video source previewing, a user can rapidly locate the file to be queried. Video source previewing generally encodes and decodes the signal output by the video source through a dedicated codec chip and then displays the preview.
Specifically, the video source to be previewed is connected to the input of a dedicated codec chip, which parses the input video source, decodes the video signal into a digital signal, and outputs the decoded digital signal to a display device, thereby completing the decoding of the video source.
However, different video sources may use different encoding formats and compression algorithms, a single dedicated codec chip usually supports only some of them, and such chips are expensive to manufacture. Encoding and decoding video sources in multiple formats therefore often requires more than one dedicated codec chip, which makes video source previewing costly. With the wide application of video source previewing, how to reduce its cost has become a problem to be solved.
Disclosure of Invention
In order to reduce the video source preview cost, the application provides a video source preview method, a video source preview device, electronic equipment and a storage medium.
In a first aspect, the present application provides a video source previewing method, which adopts the following technical scheme:
a video source preview method comprising:
acquiring a video signal, wherein the video signal is a signal output by a video source to be previewed;
encoding the video signal to obtain a target video signal;
and packaging the target video signals, and sending the packaged target video signals to a user terminal so as to decode and display the packaged target video signals through the user terminal.
By adopting this technical scheme, the signal output by the video source to be previewed, namely the original image signal of the video, is acquired; the video signal is encoded to obtain a target video signal; and the target video signal is packed and sent to the user terminal, so that the user terminal can decode and display the packed target video signal and video source previewing is realized. No dedicated codec chip is needed to encode and decode the video source, which reduces the cost of video source previewing.
In one possible implementation, encoding the video signal to obtain a target video signal includes:
determining a size of the video signal, a compressed lateral size, and a compressed longitudinal size, the sizes including a lateral size and a longitudinal size;
determining a compression coefficient of the video signal based on the size of the video signal, the compressed lateral size, and the compressed longitudinal size;
and compressing the video signal based on the compression coefficient to obtain a target video signal.
By adopting this technical scheme, the compression coefficient of the video signal is determined from the size of the video signal, the compressed lateral size and the compressed longitudinal size, and the video signal is compressed based on the compression coefficient, which provides an implementation for obtaining the target video signal.
In one possible implementation, determining the compression coefficient of the video signal based on the size of the video signal, the compressed lateral size, and the compressed longitudinal size includes:
determining a lateral compression coefficient of the video signal based on a lateral dimension of the video signal and the compressed lateral dimension;
determining a longitudinal compression coefficient of the video signal based on a longitudinal dimension of the video signal and the compressed longitudinal dimension;
a compression coefficient of the video signal is determined based on the lateral compression coefficient and the longitudinal compression coefficient.
By adopting the technical scheme, the transverse compression coefficient of the video signal is determined based on the transverse dimension of the video signal and the transverse dimension after compression, the longitudinal compression coefficient of the video signal is determined based on the longitudinal dimension of the video signal and the longitudinal dimension after compression, and the compression coefficient of the video signal is determined based on the transverse compression coefficient and the longitudinal compression coefficient, so that an implementation mode for determining the compression coefficient of the video signal is provided.
In one possible implementation, compressing the video signal based on the compression coefficient to obtain a target video signal includes:
acquiring a pixel point matrix corresponding to the video signal, and determining the number of row pixels of each row and the number of column pixels of each column in the pixel point matrix;
determining the number of horizontal pixel points based on the horizontal compression coefficient and the number of the row pixels;
determining the number of vertical pixel points based on the vertical compression coefficient and the number of column pixels;
and compressing the video signal based on the horizontal pixel number and the vertical pixel number to obtain a target video signal.
By adopting this technical scheme, the pixel point matrix corresponding to the video signal is acquired, the number of row pixels in each row and the number of column pixels in each column of the matrix are determined, the number of horizontal pixel points is determined based on the horizontal compression coefficient and the number of row pixels, the number of vertical pixel points is determined based on the vertical compression coefficient and the number of column pixels, and the video signal is compressed based on the numbers of horizontal and vertical pixel points to obtain the target video signal. In other words, the pixel point matrix corresponding to the video signal is compressed through the compression coefficient without considering the format of the video source, thereby reducing the cost of video source previewing.
In one possible implementation, compressing the video signal based on the number of horizontal pixels and the number of vertical pixels to obtain a target video signal includes:
dividing the pixel point matrix into a plurality of sub-pixel point matrixes based on the transverse pixel point number and the longitudinal pixel point number, wherein the number of row pixels of each row in each sub-pixel point matrix is the same as that of the transverse pixel point number, and the number of column pixels of each column is the same as that of the longitudinal pixel point number;
fusing each row of pixel points in each sub-pixel point matrix into pixel point data, and compressing the fused pixel points to obtain target pixel point data;
and performing format conversion on the data of each target pixel point to obtain a target video signal.
By adopting the technical scheme, the pixel point matrix is divided into a plurality of sub-pixel point matrixes based on the transverse pixel point number and the longitudinal pixel point number, each sub-pixel point matrix is compressed into one target pixel point data, each target pixel point is subjected to format conversion, and one target video signal is obtained after the format conversion of each target pixel point data is finished.
In one possible implementation manner, the pixel point data is RGB data, and performing format conversion on each target pixel point data to obtain a target video signal includes:
determining a linear conversion relation between the RGB data and YUV data corresponding to each target pixel point data;
and converting each target pixel point data into YUV data based on the linear conversion relation to obtain a target video signal.
By adopting this technical scheme, the RGB data corresponding to each target pixel point data is converted into YUV data through the linear conversion relation between RGB data and YUV data to obtain the target video signal; converting the data format into YUV allows the image to be displayed correctly on the display device.
In one possible implementation, the target video signal is packetized and sent to the user terminal, including:
determining a number of divisions based on the video signal;
dividing the target video signal into a divided number of data packets;
each data packet is sent to the user terminal.
By adopting this technical scheme, the number of divisions is determined, the target video signal is divided into that number of data packets, and each data packet is sent to the user terminal, which improves the efficiency and success rate of data transmission.
In a second aspect, the present application provides a video source preview apparatus, which adopts the following technical scheme:
a video source preview device comprising:
the acquisition module is used for acquiring video signals, wherein the video signals are signals output by a video source to be previewed;
the coding module is used for coding the video signal to obtain a target video signal;
and the packing module is used for packing the target video signals and sending the target video signals to the user terminal so as to decode and display the packed target video signals through the user terminal.
In a third aspect, the present application provides an electronic device, which adopts the following technical scheme:
an electronic device, the electronic device comprising:
at least one processor;
a memory;
at least one application program, wherein the at least one application program is stored in the memory and configured to be executed by the at least one processor, the at least one application program being configured to perform the video source preview method described in the first aspect.
In a fourth aspect, the present application provides a computer readable storage medium, which adopts the following technical scheme:
a computer-readable storage medium, on which is stored a computer program capable of being loaded by a processor to execute the video source preview method of the first aspect described above.
In summary, the present application includes the following beneficial technical effects: the signal output by the video source to be previewed, namely the original image signal of the video, is acquired; the video signal is encoded to obtain a target video signal; and the target video signal is packed and sent to a user terminal, so that the user terminal can decode and display the packed target video signal to realize video source previewing. No dedicated codec chip is needed to encode and decode the video source, which reduces the cost of video source previewing.
Drawings
Fig. 1 is a flowchart of a video source preview method provided in an embodiment of the present application;
fig. 2 is a schematic diagram of a pixel matrix dividing manner according to an embodiment of the present application;
fig. 3 is a block schematic diagram of a video source preview device according to an embodiment of the present application;
fig. 4 is a schematic diagram of an electronic device according to an embodiment of the present application.
Detailed Description
The present application is described in further detail below in conjunction with figures 1-4.
For the purposes of making the objects, technical solutions and advantages of the embodiments of the present application more clear, the technical solutions of the embodiments of the present application will be clearly and completely described below with reference to the drawings in the embodiments of the present application, and it is apparent that the described embodiments are some embodiments of the present application, but not all embodiments. All other embodiments, which can be made by one of ordinary skill in the art based on the embodiments herein without making any inventive effort, are intended to be within the scope of the present application.
Video source previewing refers to previewing video files so that they can be searched and screened quickly; through video source previewing, a user can rapidly locate the file to be queried. Video source previewing generally encodes and decodes the signal output by the video source through a dedicated codec chip and then displays the preview. Specifically, the video source to be previewed is connected to the input of a dedicated codec chip, which parses the input video source, decodes the video signal into a digital signal, and outputs the decoded digital signal to a display device, thereby completing the decoding of the video source. However, different video sources may use different encoding formats and compression algorithms, a single dedicated codec chip usually supports only some of them, and such chips are expensive to manufacture. Encoding and decoding video sources in multiple formats therefore often requires more than one dedicated codec chip, which makes video source previewing costly. With the wide application of video source previewing, how to reduce its cost has become a problem to be solved.
In view of this, the embodiments of the present application provide a video source previewing method: the signal output by the video source to be previewed, namely the original image signal of the video, is acquired; the video signal is encoded to obtain a target video signal; and the target video signal is packed and sent to a user terminal, so that the user terminal can decode and display the packed target video signal to realize video source previewing. No dedicated codec chip is needed to encode and decode the video source, which reduces the cost of video source previewing.
Referring to fig. 1, an embodiment of the present application provides a video source preview method, which is performed by an electronic device, and includes:
step S101, acquiring a video signal.
The video signal is a signal output by a video source to be previewed. Specifically, the signal output by the video source to be previewed is the original image data of the video source, for example, RGB data of the video source to be previewed.
Specifically, the user selects an input interface for previewing, for example DVI (Digital Visual Interface), HDMI (High Definition Multimedia Interface), VGA (Video Graphics Array) or SDI, and the electronic device receives the signal output by the video source to be previewed on that input interface. The video source to be previewed may output one signal or multiple signals, for example two, four or eight signals. It should be noted that the following embodiments are described by taking one signal as an example.
Step S102, encoding the video signal to obtain a target video signal.
The encoding process generally includes compression, encoding and encapsulation. Specifically, after the video signal is obtained, it is compressed to remove redundant information, which makes it convenient to store and transmit. More specifically, the video signal may be compressed by means such as video compression software so that it is suitable for network transmission, microwave transmission and the like.
After compression of the video signal is completed, the compressed video signal is encoded to convert the data of the video signal into a machine-readable format. Specifically, an encoding algorithm such as H.264 or H.265 may be used to encode the compressed video data. Further, after compression encoding, the video signal is placed into a file according to a certain format, that is, the compression-encoded video signal is encapsulated, and the encapsulated video signal is the target video signal. The encapsulation format is selected according to actual needs, which is not limited in the embodiments of the present application.
Step S103, packaging the target video signals, and sending the packaged target video signals to the user terminal so as to decode and display the packaged target video signals through the user terminal.
Specifically, the packing of the target video signal may be achieved by using dedicated video compression software or an encoder, and the packed target video signal is transmitted to the user terminal so that it can be decoded and displayed by the user terminal. More specifically, after the electronic device packs the target video signal, a notification signal is triggered to notify the control card to read the packed target video signal; after reading it, the control card sends the packed target video signal to the user terminal through a protocol, and the user terminal decodes and displays it to complete the preview of the video source. More specifically, the electronic device may send the packed target video signal to the user terminal via UDP (User Datagram Protocol).
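As a rough illustration of the UDP transmission step described above, the following Python sketch sends already-packed data packets to a user terminal as independent datagrams. The destination address and port, the placeholder payloads and the function name are assumptions for illustration and are not specified in the source.

```python
import socket

def send_packets(packets, host, port):
    """Send each packed data packet to the user terminal as a UDP datagram."""
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    try:
        for pkt in packets:
            sock.sendto(pkt, (host, port))
    finally:
        sock.close()

if __name__ == "__main__":
    # Dummy payloads standing in for the packed target video signal.
    dummy_packets = [bytes(1443) for _ in range(4)]
    send_packets(dummy_packets, "192.168.1.100", 5000)  # hypothetical terminal address
```

Since UDP is connectionless, each datagram is sent independently, which matches the per-packet transmission described above; any acknowledgement or retransmission would have to be handled by the surrounding protocol.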
In a possible implementation manner of the embodiment of the present application, in the step S102, encoding a video signal to obtain a target video signal may specifically include: determining the size of the video signal, the compressed transverse size and the compressed longitudinal size; determining a compression coefficient of the video signal based on the size of the video signal, the compressed lateral size, and the compressed longitudinal size; and compressing the video signal based on the compression coefficient to obtain a target video signal.
Wherein the dimensions include a transverse dimension and a longitudinal dimension. In particular, the size of a video signal generally refers to the resolution of the video, i.e. the number of pixels of the image in the vertical and horizontal directions.
Specifically, the size of the video signal output by the video source to be previewed is determined from the number of pixels of the video signal in the vertical and horizontal directions, and the compressed transverse size and compressed longitudinal size are acquired. The compressed transverse size and compressed longitudinal size are preset in advance; since excessive compression may degrade video quality, they can be set according to actual conditions. For example, the video signal has a transverse size of 1920 and a longitudinal size of 1080, and the preset compressed transverse size is 240 and the compressed longitudinal size is 136.
Further, after determining the size, the compressed transverse size and the compressed longitudinal size of the video signal, determining a compression coefficient of the video signal, and compressing the video signal based on the compression coefficient to obtain the target video signal. Specifically, based on the lateral dimension of the video signal and the compressed lateral dimension, a lateral compression coefficient is calculated, a longitudinal compression coefficient is similarly obtained, and the lateral compression coefficient and the longitudinal compression coefficient are used as the compression coefficients of the video signal.
Further, after the compression coefficient of the video signal is obtained, the video signal is compressed to obtain the target video signal. Specifically, the video signal is compressed laterally based on the lateral compression coefficient, the video signal is compressed longitudinally based on the longitudinal compression coefficient, and the compressed video signal is taken as a target video signal.
In one possible implementation manner of the embodiment of the present application, in the foregoing embodiment, determining the compression coefficient of the video signal based on the size of the video signal, the compressed lateral size and the compressed longitudinal size may specifically include: determining a lateral compression coefficient of the video signal based on the lateral dimension of the video signal and the compressed lateral dimension; determining a longitudinal compression coefficient of the video signal based on the longitudinal dimension of the video signal and the compressed longitudinal dimension; and determining the compression coefficient of the video signal based on the lateral compression coefficient and the longitudinal compression coefficient.
Wherein the compression coefficients include a transverse compression coefficient and a longitudinal compression coefficient.
Specifically, after the size, the compressed transverse size and the compressed longitudinal size of the video signal are determined, the ratio of the transverse size of the video signal to the compressed transverse size is calculated to obtain the transverse compression coefficient; similarly, the ratio of the longitudinal size of the video signal to the compressed longitudinal size is calculated to obtain the longitudinal compression coefficient. Illustratively, in the above example, when the video signal has a transverse size of 1920, a longitudinal size of 1080, a compressed transverse size of 240 and a compressed longitudinal size of 136, the transverse compression coefficient = 1920 / 240 = 8 and the longitudinal compression coefficient = 1080 / 136 ≈ 7.94.
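A minimal sketch of this ratio calculation, using the 1920 x 1080 to 240 x 136 example above; the function and variable names are illustrative only.

```python
def compression_coefficients(width, height, target_width, target_height):
    """Return the transverse and longitudinal compression coefficients."""
    transverse = width / target_width        # e.g. 1920 / 240 = 8.0
    longitudinal = height / target_height    # e.g. 1080 / 136 is about 7.94
    return transverse, longitudinal

trans, lon = compression_coefficients(1920, 1080, 240, 136)
print(trans, round(lon, 2))  # 8.0 7.94
```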
Further, after the lateral compression coefficient and the longitudinal compression coefficient are determined, the lateral compression coefficient and the longitudinal compression coefficient are used as the compression coefficients of the video signal.
In one possible implementation manner of the embodiment of the present application, in the foregoing embodiment, compressing the video signal based on the compression coefficient to obtain the target video signal may specifically include: acquiring a pixel point matrix corresponding to a video signal, and determining the number of row pixels of each row and the number of column pixels of each column in the pixel point matrix; determining the number of horizontal pixels based on the horizontal compression coefficient and the number of row pixels; determining the number of vertical pixel points based on the vertical compression coefficient and the number of column pixels; and compressing the video signal based on the number of horizontal pixels and the number of vertical pixels to obtain a target video signal.
Specifically, based on the size of the video signal, a pixel point matrix corresponding to the video signal is obtained, and the number of pixel points in each row, namely the number of row pixels, in the pixel point matrix and the number of pixel points in each column, namely the number of column pixels, are determined. Illustratively, in the above example, when the video signal has a transverse size of 1920 and a longitudinal size of 1080, the number of row pixels of the pixel dot matrix is 1920 and the number of column pixels is 1080.
Further, after the number of row pixels and the number of column pixels are obtained, the number of lateral pixels may be determined based on the lateral compression coefficient and the number of row pixels. Specifically, the lateral compression coefficient is taken as the lateral pixel number; and the longitudinal pixel point number can be obtained by the same method.
Further, after determining the number of horizontal pixel points and the number of vertical pixel points, the video signal may be compressed based on them to obtain the target video signal. Specifically, a block of pixels spanning the number of horizontal pixel points in width and the number of vertical pixel points in height is selected from the pixel point matrix as target pixel points and compressed into a single pixel point; the next uncompressed block of the same size is then selected from the pixel point matrix, and so on, until all pixel points in the matrix have been compressed, thereby obtaining the target video signal.
In one possible implementation manner of the embodiment of the present application, in the foregoing embodiment, based on the number of horizontal pixels and the number of vertical pixels, compressing the video signal to obtain the target video signal includes: dividing a pixel point matrix into a plurality of sub-pixel point matrixes based on the number of horizontal pixel points and the number of vertical pixel points; fusing each row of pixel points in each sub-pixel point matrix into pixel point data, and compressing the fused pixel points to obtain target pixel point data; and performing format conversion on the data of each target pixel point to obtain a target video signal.
The number of row pixels of each row in each sub-pixel point matrix is the same as the number of horizontal pixels, the number of column pixels of each column is the same as the number of vertical pixels, and the sub-pixel point matrix belongs to the pixel point matrix.
Specifically, after determining the number of horizontal pixel points and the number of vertical pixel points, the pixel point matrix may be divided into a plurality of sub-pixel point matrices whose size equals the number of horizontal pixel points by the number of vertical pixel points, and each sub-pixel point matrix is compressed. More specifically, the division starts from the first row and the first column of the pixel point matrix; a row division is made after every group of rows equal to the number of longitudinal pixel points, and a column division after every group of columns equal to the number of transverse pixel points, yielding a plurality of sub-pixel point matrices whose size equals the number of transverse pixel points by the number of longitudinal pixel points. For example, in the above example, referring to fig. 2, the pixel point matrix has 1920 pixels per row and 1080 pixels per column, the transverse compression coefficient is 8 and the longitudinal compression coefficient is 7.94; a division is therefore made every 7.94 rows and every 8 columns, yielding a plurality of sub-pixel point matrices of size 8 × 7.94.
Further, after the sub-pixel point matrices are obtained, each sub-pixel point matrix is compressed to obtain the target video signal. Specifically, for each sub-pixel point matrix, each row of pixel points in the matrix can be fused into one piece of pixel point data and the fused pixel points compressed to obtain one piece of target pixel point data; alternatively, each column of pixel points in the sub-pixel point matrix can be fused into pixel point data and the fused pixel points compressed to obtain the target pixel point data. Illustratively, in the above example, when the pixel point data is RGB data, every 8 pixels in the transverse direction and every 7.94 pixels in the longitudinal direction are grouped and fused into the RGB value of one pixel, that is, the target pixel point data. Specifically, the 64 pixels of the 8 rows may first be fused transversely into 8 points, where H_RGB1 = (RGB1 + RGB2 + ... + RGB8) / 8, and then fused longitudinally, where RGB = (H_RGB1 + H_RGB2 + H_RGB3 + ... + H_RGB7 + H_RGB8 × 0.94) / 7.94, to obtain the target pixel point data. Here, RGB1 is the RGB data corresponding to the pixel point in the first row and first column.
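The fusion of one sub-matrix can be sketched as a weighted average, assuming (as in the example above) that the fractional last row contributes with weight 0.94. The use of NumPy, the function name and the random test block are illustrative assumptions, not part of the original description.

```python
import numpy as np

def fuse_block(block, v_coeff):
    """Fuse one sub-pixel matrix (rows x cols x 3, RGB) into one RGB value.

    v_coeff is the longitudinal compression coefficient (7.94 in the example);
    the last row of the block is weighted by its fractional part.
    """
    h_rgb = block.mean(axis=1)                    # H_RGB_k: average the pixels of row k
    weights = np.ones(block.shape[0])
    weights[-1] = v_coeff - (block.shape[0] - 1)  # e.g. 7.94 - 7 = 0.94
    return (h_rgb * weights[:, None]).sum(axis=0) / v_coeff

rng = np.random.default_rng(0)
block = rng.integers(0, 256, size=(8, 8, 3)).astype(np.float64)  # one 8 x 7.94 block, rounded up to 8 rows
print(fuse_block(block, 7.94))  # the fused target pixel point data (one RGB triple)
```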
Further, after compressing each sub-pixel matrix, a plurality of target pixel data are obtained, and format conversion is performed on each target pixel data to obtain a target video signal. Specifically, the format of each target pixel point data is converted into a preset format based on the respective format of each target pixel point data, and a target video signal is obtained after the format conversion of each target pixel point is completed.
In an embodiment of the present application, when the pixel point data is RGB data, performing format conversion on each target pixel point data to obtain a target video signal may specifically include: determining a linear conversion relation between the RGB data and YUV data corresponding to each target pixel point data; and converting each target pixel point data into YUV data based on the linear conversion relation to obtain the target video signal.
In the YUV format, Y represents the luminance signal, and U and V represent the color-difference signals. In the RGB format, R, G and B represent the intensities of red, green and blue, respectively.
Specifically, the linear conversion relation between the RGB data and YUV data corresponding to each target pixel point data is determined. More specifically, some color sample data in RGB and YUV formats are acquired for calculating the conversion relation; the RGB and YUV color data are processed using mathematical tools, such as linear regression analysis, to find the linear relationship between them, and the YUV-RGB linear conversion formulas are determined from the processed data. Illustratively, the linear conversion relation between RGB data and YUV data may be: Y = 0.299R + 0.587G + 0.114B; U = -0.147R - 0.289G + 0.436B = 0.492(B - Y); V = 0.615R - 0.515G - 0.100B = 0.877(R - Y).
Further, after the linear conversion relation between the RGB data and the YUV data is determined, each target pixel point data is converted from the RGB format to the YUV format, and the target video signal is obtained after all target pixel points have been converted.
Further, when the pixel point data is YUV data, R = Y + 1.140V, G = Y - 0.394U - 0.581V and B = Y + 2.032U may be used to convert each target pixel point data from the YUV format to the RGB format, and the target video signal is obtained after all target pixel points have been converted.
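The two conversion directions quoted above can be written per pixel as follows; scalar arithmetic is used for clarity, and the sample values are arbitrary.

```python
def rgb_to_yuv(r, g, b):
    """Forward conversion using the coefficients given in the description."""
    y = 0.299 * r + 0.587 * g + 0.114 * b
    u = -0.147 * r - 0.289 * g + 0.436 * b   # = 0.492 * (b - y)
    v = 0.615 * r - 0.515 * g - 0.100 * b    # = 0.877 * (r - y)
    return y, u, v

def yuv_to_rgb(y, u, v):
    """Inverse conversion using the coefficients given in the description."""
    r = y + 1.140 * v
    g = y - 0.394 * u - 0.581 * v
    b = y + 2.032 * u
    return r, g, b

print(rgb_to_yuv(255, 0, 0))                 # pure red in YUV
print(yuv_to_rgb(*rgb_to_yuv(255, 0, 0)))    # round-trips approximately to (255, 0, 0)
```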
In one possible implementation manner of the embodiment of the present application, in the foregoing embodiment, the method for packaging the target video signal and sending the target video signal to the user terminal may specifically include: determining a number of divisions based on the video signal; dividing the target video signal into a divided number of data packets; each data packet is sent to the user terminal.
Specifically, the number of divisions is determined based on the size of the video signal. Each video signal size corresponds to a preset number of divisions, so the number of divisions can be obtained directly from the size of the video signal. Further, after the number of divisions is obtained, the target video signal is divided into that number of data packets, and each data packet is transmitted to the user terminal. Specifically, the target video signal may be divided using a random, systematic, hierarchical, model-based or integrated segmentation method.
Specifically, after dividing the target video signal into the divided number of data packets, each data packet may be packed, and after the packing is completed, the data packets may be transmitted. More specifically, the data packing format may be based on: Slot (slot number, 1 byte) + Port (interface number, 1 byte) + PackageID (packet ID, 1 byte, 0-33) + 1440 bytes of data = 1443 bytes, where the slot number and interface number may be used to identify and locate parameters and channels of data. The slot number may be used to identify a slot in a physical device or a component location in a slot. Each slot may house a particular type of component, such as a processor, memory module, expansion card, etc. The slot numbers may be numbered according to a specification or standard of hardware design. The interface number may be used to identify different interfaces or connection channels in the device or system. In a computer network, each network interface is assigned a unique interface number for network communications and resource sharing. The interface number may also be used to represent different types of device interfaces, such as a USB interface, HDMI interface, etc. Illustratively, in the above example, when the number of row pixels of the pixel point matrix is 1920, the number of column pixels is 1080, the lateral compression coefficient is 8, and the longitudinal compression coefficient is 7.94, the data format may be: first line data: Y0, U0, Y1, Y2, U1, Y3, ..., Y238, U119, Y239; second line data: Y240, V0, Y241, Y242, V1, Y243, ..., Y478, V119, Y479; third line data: Y480, U120, Y481, Y482, U121, Y483, ..., Y718, U239, Y719; fourth line data: Y720, V120, Y721, Y722, V121, Y723, ..., Y958, V239, Y959.
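The byte counts in this example work out neatly: 136 lines at 360 bytes per line give 48960 bytes, i.e. exactly 34 payloads of 1440 bytes, which appears to match the 0-33 packet-ID range. The sketch below packs such a buffer into 1443-byte packets with the Slot + Port + PackageID header quoted above; the slot and port values and the helper name are placeholders, and the 360-bytes-per-line layout is an inference from the listed data format.

```python
def pack_target_signal(data, slot, port, payload_size=1440):
    """Split the target video signal bytes into Slot + Port + PackageID + payload packets."""
    packets = []
    for pkg_id, offset in enumerate(range(0, len(data), payload_size)):
        header = bytes([slot & 0xFF, port & 0xFF, pkg_id & 0xFF])
        packets.append(header + data[offset:offset + payload_size])
    return packets

# A 240 x 136 compressed frame laid out at 360 bytes per line, as in the example above.
frame = bytes(136 * 360)                  # 48960 bytes of placeholder data
packets = pack_target_signal(frame, slot=1, port=2)
print(len(packets), len(packets[0]))      # -> 34 1443
```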
The foregoing embodiment describes a video source preview manner from the perspective of the method flow. The signal output by the video source to be previewed may be one signal or multiple signals, for example two, four or eight signals; the foregoing embodiment takes one signal as an example, and when the video source to be previewed outputs multiple signals, the electronic device processes each signal in the same way as the single signal in the foregoing embodiment. Specifically, when the video source to be previewed has multiple channels, the multiple signals output by the video source are acquired, each video signal is encoded to obtain the target video signal corresponding to that channel, the target video signals corresponding to the channels are packed respectively and sent to the user terminal, and the user terminal decodes and displays the packed target video signals. More specifically, for the detailed processing of the multiple signals output by the video source to be previewed, reference is made to the above embodiment, and the details are not repeated herein.
The above embodiments describe a video source preview method from the perspective of a method flow, and the following embodiments describe a video source preview device from the perspective of a virtual module or a virtual unit, which are described in detail in the following embodiments.
Referring to fig. 3, an embodiment of the present application provides a video source preview apparatus 30, including: an acquisition module 301, an encoding module 302 and a packing module 303.
A video source preview device 30 may specifically include:
the acquiring module 301 is configured to acquire a video signal, where the video signal is a signal output by a video source to be previewed;
the encoding module 302 is configured to encode a video signal to obtain a target video signal;
and the packing module 303 is configured to pack the target video signal and send the packed target video signal to the user terminal, so that the user terminal decodes and displays the packed target video signal.
In one possible implementation manner of the embodiment of the present application, when the encoding module 302 encodes the video signal to obtain the target video signal, the encoding module may be specifically configured to:
determining a size of the video signal, a compressed lateral size and a compressed longitudinal size, the size including the lateral size and the longitudinal size;
determining a compression coefficient of the video signal based on the size of the video signal, the compressed lateral size, and the compressed longitudinal size;
and compressing the video signal based on the compression coefficient to obtain a target video signal.
In one possible implementation manner of the embodiment of the present application, the encoding module 302 determines the compression coefficient of the video signal based on the size of the video signal, the compressed transverse size and the compressed longitudinal size, and may specifically be used for:
determining a lateral compression coefficient of the video signal based on the lateral dimension of the video signal and the compressed lateral dimension;
determining a longitudinal compression coefficient of the video signal based on the longitudinal dimension of the video signal and the compressed longitudinal dimension;
the compression coefficient of the video signal is determined based on the lateral compression coefficient and the longitudinal compression coefficient.
In one possible implementation manner of the embodiment of the present application, the encoding module 302 compresses the video signal based on the compression coefficient to obtain the target video signal, which may be specifically used to:
acquiring a pixel point matrix corresponding to a video signal, and determining the number of row pixels of each row and the number of column pixels of each column in the pixel point matrix;
determining the number of horizontal pixels based on the horizontal compression coefficient and the number of row pixels;
determining the number of vertical pixel points based on the vertical compression coefficient and the number of column pixels;
and compressing the video signal based on the number of horizontal pixels and the number of vertical pixels to obtain a target video signal.
In one possible implementation manner of the embodiment of the present application, the encoding module 302 compresses the video signal based on the number of horizontal pixels and the number of vertical pixels to obtain the target video signal, which may be specifically used to:
dividing a pixel point matrix into a plurality of sub-pixel point matrixes based on the number of horizontal pixel points and the number of vertical pixel points, wherein the number of row pixels of each row in each sub-pixel point matrix is the same as that of the horizontal pixel points, and the number of column pixels of each column is the same as that of the vertical pixel points;
fusing each row of pixel points in each sub-pixel point matrix into pixel point data, and compressing the fused pixel points to obtain target pixel point data;
and performing format conversion on the data of each target pixel point to obtain a target video signal.
In one possible implementation manner of this embodiment of the present application, the pixel point data is RGB data, and the encoding module 302 performs format conversion on each target pixel point data to obtain a target video signal, which may be specifically configured to:
determining a linear conversion relation between the RGB data and YUV data corresponding to each target pixel point data;
each target pixel data is converted into YUV data based on a linear conversion relationship to obtain a target video signal.
In one possible implementation manner of the embodiment of the present application, when the packing module 303 packs the target video signal and sends the target video signal to the user terminal, the packing module may be specifically configured to:
determining a number of divisions based on the video signal;
dividing the target video signal into a divided number of data packets;
each data packet is sent to the user terminal.
It will be clear to those skilled in the art that, for convenience and brevity of description, specific working procedures of the above-described systems, apparatuses and units may refer to corresponding procedures in the foregoing method embodiments, and are not repeated herein.
The embodiment of the application also describes an electronic device from the perspective of the entity apparatus, as shown in fig. 4, the electronic device 400 shown in fig. 4 includes: a processor 401 and a memory 403. Processor 401 is connected to memory 403, such as via bus 402. Optionally, the electronic device 400 may also include a transceiver 404. It should be noted that, in practical applications, the transceiver 404 is not limited to one, and the structure of the electronic device 400 is not limited to the embodiment of the present application.
The processor 401 may be a CPU (Central Processing Unit), a general-purpose processor, a DSP (Digital Signal Processor), an ASIC (Application Specific Integrated Circuit), an FPGA (Field Programmable Gate Array) or other programmable logic device, a transistor logic device, a hardware component, or any combination thereof, and may implement or perform the various exemplary logic blocks, modules and circuits described in connection with this disclosure. The processor 401 may also be a combination that implements computing functionality, for example a combination comprising one or more microprocessors, or a combination of a DSP and a microprocessor.
Bus 402 may include a path to transfer information between the components. Bus 402 may be a PCI (Peripheral Component Interconnect) bus, an EISA (Extended Industry Standard Architecture) bus, or the like. Bus 402 may be divided into an address bus, a data bus, a control bus, and so on. For ease of illustration, only one thick line is shown in fig. 4, but this does not mean there is only one bus or one type of bus.
The memory 403 may be, but is not limited to, a ROM (Read Only Memory) or other type of static storage device that can store static information and instructions, a RAM (Random Access Memory) or other type of dynamic storage device that can store information and instructions, an EEPROM (Electrically Erasable Programmable Read Only Memory), a CD-ROM (Compact Disc Read Only Memory) or other optical disc storage (including compact discs, laser discs, digital versatile discs, Blu-ray discs, etc.), a magnetic disk storage medium or other magnetic storage device, or any other medium that can be used to carry or store desired program code in the form of instructions or data structures and that can be accessed by a computer.
The memory 403 is used for storing application program codes for executing the present application and is controlled to be executed by the processor 401. The processor 401 is arranged to execute application code stored in the memory 403 for implementing what is shown in the foregoing method embodiments.
Among them, electronic devices include, but are not limited to: mobile terminals such as mobile phones, notebook computers, digital broadcast receivers, PDAs (personal digital assistants), PADs (tablet computers), PMPs (portable multimedia players), car terminals (e.g., car navigation terminals), and stationary terminals such as digital TVs, desktop computers, and the like, and servers and the like. The electronic device shown in fig. 4 is only an example and should not be construed as limiting the functionality and scope of use of the embodiments herein.
The present application provides a computer readable storage medium having a computer program stored thereon, which when run on a computer, causes the computer to perform the corresponding method embodiments described above.
It should be understood that, although the steps in the flowcharts of the figures are shown in order as indicated by the arrows, these steps are not necessarily performed in order as indicated by the arrows. The steps are not strictly limited in order and may be performed in other orders, unless explicitly stated herein. Moreover, at least some of the steps in the flowcharts of the figures may include a plurality of sub-steps or stages that are not necessarily performed at the same time, but may be performed at different times, the order of their execution not necessarily being sequential, but may be performed in turn or alternately with other steps or at least a portion of the other steps or stages.
The foregoing is only a partial embodiment of the present application and it should be noted that, for a person skilled in the art, several improvements and modifications can be made without departing from the principle of the present application, and these improvements and modifications should also be considered as the protection scope of the present application.
Claims (10)
1. A video source preview method, comprising:
acquiring a video signal, wherein the video signal is a signal output by a video source to be previewed;
encoding the video signal to obtain a target video signal;
and packaging the target video signals, and sending the packaged target video signals to a user terminal so as to decode and display the packaged target video signals through the user terminal.
2. The video source preview method according to claim 1, wherein said encoding said video signal to obtain a target video signal comprises:
determining a size of the video signal, a compressed lateral size, and a compressed longitudinal size, the sizes including a lateral size and a longitudinal size;
determining a compression coefficient of the video signal based on the size of the video signal, the compressed lateral size, and the compressed longitudinal size;
and compressing the video signal based on the compression coefficient to obtain a target video signal.
3. The video source preview method according to claim 2, wherein said determining a compression coefficient of said video signal based on a size of said video signal, said compressed lateral size and said compressed longitudinal size comprises:
determining a lateral compression coefficient of the video signal based on a lateral dimension of the video signal and the compressed lateral dimension;
determining a longitudinal compression coefficient of the video signal based on a longitudinal dimension of the video signal and the compressed longitudinal dimension;
a compression coefficient of the video signal is determined based on the lateral compression coefficient and the longitudinal compression coefficient.
4. The video source preview method according to claim 3, wherein said compressing the video signal based on the compression coefficient to obtain a target video signal comprises:
acquiring a pixel point matrix corresponding to the video signal, and determining the number of row pixels of each row and the number of column pixels of each column in the pixel point matrix;
determining the number of horizontal pixel points based on the horizontal compression coefficient and the number of the row pixels;
determining the number of vertical pixel points based on the vertical compression coefficient and the number of column pixels;
and compressing the video signal based on the horizontal pixel number and the vertical pixel number to obtain a target video signal.
5. The video source preview method according to claim 4, wherein said compressing the video signal based on the number of lateral pixel points and the number of longitudinal pixel points to obtain a target video signal comprises:
dividing the pixel point matrix into a plurality of sub-pixel point matrices based on the number of lateral pixel points and the number of longitudinal pixel points, wherein in each sub-pixel point matrix the number of pixels in each row equals the number of lateral pixel points and the number of pixels in each column equals the number of longitudinal pixel points;
fusing each row of pixel points in each sub-pixel point matrix into one piece of pixel point data, and compressing the fused pixel point data to obtain target pixel point data;
and performing format conversion on each piece of target pixel point data to obtain a target video signal.
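Claims 4 and 5 describe block-wise compression: the pixel point matrix is split into sub-matrices whose width and height equal the lateral and longitudinal pixel point numbers, and each sub-matrix is fused into one target pixel. A minimal sketch of that idea, assuming the fusion is a per-block mean (the claims do not fix a particular fusion rule):

```python
import numpy as np

def block_compress(frame, block_w, block_h):
    """Fuse every block_h x block_w sub-pixel-point matrix of an H x W x 3 frame into one target pixel."""
    h, w, c = frame.shape
    # Crop so the pixel point matrix divides evenly into sub-matrices.
    h_crop, w_crop = (h // block_h) * block_h, (w // block_w) * block_w
    blocks = frame[:h_crop, :w_crop].reshape(
        h_crop // block_h, block_h, w_crop // block_w, block_w, c)
    # Fuse each sub-matrix into a single target pixel point (here: the mean of the block).
    return blocks.mean(axis=(1, 3)).astype(np.uint8)

# Example: a 1080x1920 RGB frame compressed with 4x4 blocks becomes a 270x480 preview frame.
preview = block_compress(np.zeros((1080, 1920, 3), dtype=np.uint8), 4, 4)
print(preview.shape)  # (270, 480, 3)
```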
6. The video source preview method according to claim 5, wherein the target pixel point data is RGB data, and the performing format conversion on each piece of target pixel point data to obtain a target video signal comprises:
determining a linear conversion relation between the RGB data and YUV data corresponding to each piece of target pixel point data;
and converting each piece of target pixel point data into YUV data based on the linear conversion relation to obtain a target video signal.
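Claim 6 only requires some linear relation between RGB and YUV data. A common concrete choice is the BT.601 full-range matrix used in the sketch below; the specific coefficients are an assumption for illustration and are not dictated by the claim.

```python
import numpy as np

# Assumed BT.601 full-range RGB -> YUV coefficients; the claim only requires a linear conversion.
RGB_TO_YUV = np.array([
    [ 0.299,     0.587,     0.114   ],   # Y
    [-0.168736, -0.331264,  0.5     ],   # U (Cb)
    [ 0.5,      -0.418688, -0.081312],   # V (Cr)
])
OFFSET = np.array([0.0, 128.0, 128.0])

def rgb_to_yuv(pixels):
    """Convert an N x 3 array of 8-bit RGB target pixel point data to 8-bit YUV."""
    yuv = pixels.astype(np.float64) @ RGB_TO_YUV.T + OFFSET
    return np.clip(np.round(yuv), 0, 255).astype(np.uint8)

print(rgb_to_yuv(np.array([[255, 0, 0]])))  # pure red -> approximately [76, 85, 255]
```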
7. The video source preview method according to any one of claims 1 to 6, wherein said packing the target video signal and sending the packed target video signal to the user terminal comprises:
determining a number of divisions based on the video signal;
dividing the target video signal into the determined number of data packets;
and sending each data packet to the user terminal.
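Claim 7 packs the target video signal by splitting it into a number of data packets derived from the video signal (for instance from its size). A minimal sketch, assuming an even byte split and a hypothetical 4-byte sequence-number header; neither the header layout nor the split rule is specified in the claims.

```python
import math
import struct

def packetize(payload: bytes, num_packets: int):
    """Divide the target video signal into num_packets data packets, each prefixed with a sequence number."""
    chunk = math.ceil(len(payload) / num_packets)
    packets = []
    for seq in range(num_packets):
        body = payload[seq * chunk:(seq + 1) * chunk]
        packets.append(struct.pack(">I", seq) + body)  # the sequence number lets the terminal reassemble in order
    return packets

# Example: a 100-byte encoded frame split into 4 packets of 25 payload bytes each (29 bytes with header).
print([len(p) for p in packetize(bytes(100), 4)])  # [29, 29, 29, 29]
```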
8. A video source preview apparatus, comprising:
the acquisition module is used for acquiring a video signal, wherein the video signal is a signal output by a video source to be previewed;
the encoding module is used for encoding the video signal to obtain a target video signal;
and the packing module is used for packing the target video signal and sending the packed target video signal to the user terminal, so that the packed target video signal is decoded and displayed by the user terminal.
9. An electronic device, comprising:
at least one processor;
a memory;
at least one application program, wherein the at least one application program is stored in the memory and configured to be executed by the at least one processor, the at least one application program being configured to perform the video source preview method according to any one of claims 1 to 7.
10. A computer-readable storage medium having a computer program stored thereon, wherein the computer program, when executed by a computer, causes the computer to perform the video source preview method according to any one of claims 1 to 7.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202311656146.7A (CN117641063B) | 2023-12-05 | 2023-12-05 | Video source previewing method and device, electronic equipment and storage medium |
Publications (2)
Publication Number | Publication Date |
---|---|
CN117641063A (en) | 2024-03-01 |
CN117641063B (en) | 2024-08-27 |
Family
ID=90037355
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202311656146.7A (granted as CN117641063B, active) | Video source previewing method and device, electronic equipment and storage medium | 2023-12-05 | 2023-12-05 |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN117641063B (en) |
Patent Citations (6)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN1652155A (en) * | 2005-03-28 | 2005-08-10 | 北京中星微电子有限公司 | Method and apparatus for changing digital image size |
CN101572821A (en) * | 2009-06-18 | 2009-11-04 | 深圳市美赛达科技有限公司 | Processing method and processing device for image zooming |
CN107172366A (en) * | 2017-05-08 | 2017-09-15 | 江苏欧帝电子科技有限公司 | A kind of video previewing method |
CN111669541A (en) * | 2019-03-08 | 2020-09-15 | 西安诺瓦电子科技有限公司 | Pre-monitoring picture display method and display system |
CN116847106A (en) * | 2022-03-25 | 2023-10-03 | 北京字跳网络技术有限公司 | Image compression transmission method, device, electronic equipment and storage medium |
CN115147695A (en) * | 2022-06-30 | 2022-10-04 | 北京百度网讯科技有限公司 | Method, device and equipment for fusing image scaling and color space conversion |
Also Published As
Publication number | Publication date |
---|---|
CN117641063B (en) | 2024-08-27 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
KR100547812B1 (en) | Apparatus and method for converting color model of pixel data using color reference table | |
JP2018136535A (en) | Display system and method for transferring data to the same | |
US9607407B2 (en) | Variable-width differential memory compression | |
JP2013135463A (en) | Moving image compressing apparatus, image processing apparatus, moving image compressing method, image processing method, and data structure of moving image compressed file | |
TWI550557B (en) | Video data compression format | |
US20220014767A1 (en) | Bit plane encoding of data arrays | |
JP2018507620A (en) | Method and apparatus for decoding color pictures | |
EP4243415A1 (en) | Image compression method and apparatus, and intelligent terminal and computer-readable storage medium | |
US10249269B2 (en) | System on chip devices and operating methods thereof | |
JP2021044801A (en) | Data compression system and data mapping method | |
US11153586B2 (en) | Image processing device and frame buffer compressor | |
US10304213B2 (en) | Near lossless compression scheme and system for processing high dynamic range (HDR) images | |
CN1267813C (en) | System and method of processing chromatic difference signal 4:2:0 plane image data format storage | |
US8942474B2 (en) | Method and system for interpolating index values of associated tiles in an image | |
US9591309B2 (en) | Progressive lossy memory compression | |
CN114428595A (en) | Image processing method, image processing device, computer equipment and storage medium | |
CN117641063B (en) | Video source previewing method and device, electronic equipment and storage medium | |
JPH0772839A (en) | Color video display unit | |
US11044466B2 (en) | Image processing device | |
CN110087072A (en) | Image processing apparatus | |
US10026149B2 (en) | Image processing system and image processing method | |
CN108711191A (en) | A kind of method for processing video frequency and VR equipment | |
US9317891B2 (en) | Systems and methods for hardware-accelerated key color extraction | |
CN111510729A (en) | RGBD data compression transmission method based on video coding and decoding technology | |
CN116095262B (en) | Mobile processing device, processing method and system for processing video signal source |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||