CN113542805B - Video transmission method and device - Google Patents


Info

Publication number
CN113542805B
CN113542805B (application CN202110797818.0A)
Authority
CN
China
Prior art keywords
video
pixels
filling
image space
area
Prior art date
Legal status
Active
Application number
CN202110797818.0A
Other languages
Chinese (zh)
Other versions
CN113542805A (en)
Inventor
钟玉龙
Current Assignee
Hangzhou Hikvision Digital Technology Co Ltd
Original Assignee
Hangzhou Hikvision Digital Technology Co Ltd
Priority date
Filing date
Publication date
Application filed by Hangzhou Hikvision Digital Technology Co Ltd
Priority to CN202110797818.0A
Publication of CN113542805A
Application granted
Publication of CN113542805B
Status: Active
Anticipated expiration

Classifications

    • H: ELECTRICITY › H04: ELECTRIC COMMUNICATION TECHNIQUE › H04N: PICTORIAL COMMUNICATION, e.g. TELEVISION › H04N 21/00: Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N 21/20 (Servers specifically adapted for the distribution of content, e.g. VOD servers) › 21/23 (Processing of content or additional data) › 21/234 (Processing of video elementary streams):
        • H04N 21/23418: involving operations for analysing video streams, e.g. detecting features or characteristics
        • H04N 21/23424: involving splicing one content stream with another content stream, e.g. for inserting or substituting an advertisement
    • H04N 21/40 (Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]) › 21/43 (Processing of content or additional data) › 21/44 (Processing of video elementary streams):
        • H04N 21/44008: involving operations for analysing video streams, e.g. detecting features or characteristics in the video stream
        • H04N 21/44016: involving splicing one content stream with another content stream, e.g. for substituting a video clip

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Business, Economics & Management (AREA)
  • Marketing (AREA)
  • Compression Or Coding Systems Of Tv Signals (AREA)

Abstract

The embodiment of the application provides a video transmission method and apparatus. The method includes: determining a plurality of video units in a video frame, where each video unit includes a preset number of pixels of the video frame; determining an image space corresponding to a preset video channel, where the image space includes more pixels than the video frame; determining a filling area for each video unit in the image space, filling each video unit into its filling area, and filling preset information into at least one non-filling area to obtain smooth video information, where one non-filling area is arranged between at least two filling areas in the plurality of filling areas of the image space; and sending the smooth video information through the preset video channel. By filling the video units of a video frame into an image space with more pixels, smoothing of video transmission is achieved at the sending end, the traffic condition of the receiving end does not need to be monitored, and the complexity of smooth video transmission is effectively reduced.

Description

Video transmission method and device
Technical Field
The present application relates to communications technologies, and in particular, to a video transmission method and apparatus.
Background
In a video transmission control system, smoothing of video transmission is widely used; smoothing ensures that video data is transmitted uniformly over a given period of time.
At present, smoothing of video transmission is generally implemented with a video flow control technique, in which the traffic condition of the receiving end must be monitored in real time in order to control the video data transmission of the sending end.
However, monitoring the traffic condition of the receiving end in real time makes smooth video transmission complex to implement.
Disclosure of Invention
The embodiment of the application provides a video transmission method and apparatus, which overcome the problem of high implementation complexity in smooth video transmission.
In a first aspect, an embodiment of the present application provides a video transmission method, applied to a video sending end, including:
determining a plurality of video units in a video frame, wherein the video units comprise a preset number of pixels in the video frame;
determining an image space corresponding to a preset video channel, wherein the number of pixels in the image space is greater than the number of pixels in the video frame;
determining a filling area of each video unit in the image space, filling the corresponding video unit in the filling area, and filling preset information in at least one non-filling area to obtain smooth video information, wherein one non-filling area is arranged between at least two filling areas in a plurality of filling areas of the image space;
and sending the smooth video information through the preset video channel.
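The four steps above can be sketched in a few lines of code. The sketch below treats each row of a small frame as one video unit and scatters the rows evenly across a larger image space, filling the remaining rows with a preset value. The even-spacing policy, the zero fill value, and all names (`make_smooth_frame`, `fill_map`) are assumptions for illustration; the patent does not prescribe them.

```python
# Hypothetical sketch of the sender-side smoothing: scatter the rows of a
# video frame (the "video units") across a larger image space, leaving
# non-fill rows (preset information) between the fill rows.

def make_smooth_frame(frame_rows, space_rows, fill_value=0):
    """Return the image space with the frame rows filled in, plus the
    mapping from frame row index to image-space row index."""
    n = len(frame_rows)
    assert space_rows >= n, "image space must have more rows than the frame"
    width = len(frame_rows[0])
    # Non-fill rows start out holding the preset information.
    space = [[fill_value] * width for _ in range(space_rows)]
    fill_map = {}
    for i, row in enumerate(frame_rows):
        target = i * space_rows // n  # spread the fill rows evenly
        space[target] = list(row)
        fill_map[i] = target
    return space, fill_map

frame = [[1, 1], [2, 2], [3, 3]]          # toy 3-row frame
space, fill_map = make_smooth_frame(frame, 6)
print(fill_map)                            # {0: 0, 1: 2, 2: 4}
```

Because the fill rows are interleaved with non-fill rows, transmitting `space` row by row at a constant line rate spreads the frame's pixels over the whole frame period.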
In one possible design, determining a fill area for each video unit in the image space includes:
acquiring a mapping relation between the video frame and the image space, wherein the mapping relation is used for indicating a corresponding relation between a video unit in the video frame and a pixel position in the image space;
and determining a filling area of each video unit in the image space according to the pixel position of each video unit in the video frame and the mapping relation.
In one possible design, for any video unit, determining a filling area of the video unit in the image space according to the pixel position of the video unit in the video frame and the mapping relation includes:
determining a region to be selected in the image space according to the pixel position of the video unit in the video frame and the mapping relation, wherein the number of pixels in the region to be selected is greater than the number of pixels in the video unit;
determining the filling area in the region to be selected according to the number of pixels included in the video unit, wherein the number of pixels included in the filling area is equal to the number of pixels included in the video unit.
In one possible design, the video unit includes a line of pixels in the video frame, and the mapping is used to indicate a correspondence between a line number in the video frame and a line number in the image space;
determining a region to be selected in the image space according to the pixel position of the video unit in the video frame and the mapping relation, wherein the determining comprises the following steps:
determining at least one target line number in the image space according to the line number corresponding to the video unit and the mapping relation;
and determining the area where the line corresponding to the at least one target line number is located as the area to be selected.
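A toy version of this row-number mapping might look as follows. The fixed block size (two image-space lines per frame line) and the choice of the block's first line as the fill area are assumptions made for illustration, not policies taken from the patent.

```python
# Hypothetical row-number mapping: each frame line owns a fixed-size block
# of image-space lines (the candidate region / "region to be selected");
# the fill area is chosen inside that block.

def candidate_rows(frame_line, lines_per_block=2):
    """Map a frame line number to the target line numbers of its
    candidate region in the (larger) image space."""
    start = frame_line * lines_per_block
    return list(range(start, start + lines_per_block))

def fill_row(frame_line, lines_per_block=2):
    """Pick the fill area inside the candidate region: here, simply its
    first line (which has the same pixel count as the video unit)."""
    return candidate_rows(frame_line, lines_per_block)[0]

print(candidate_rows(3))  # [6, 7]
print(fill_row(3))        # 6
```

The column-based design described next is symmetric: replace line numbers with column numbers.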
In one possible design, the video unit includes a column of pixels in the video frame, and the mapping is used to indicate a correspondence between a column number in the video frame and a column number in the image space;
determining a region to be selected in the image space according to the pixel position of the video unit in the video frame and the mapping relation, wherein the determining comprises the following steps:
determining at least one target column number in the image space according to the column number corresponding to the video unit and the mapping relation;
and determining the area where the column corresponding to the at least one target column number is located as the area to be selected.
In a possible design, the image space includes a plurality of the filling areas and a plurality of the non-filling areas, and one non-filling area is arranged between every two adjacent filling areas; or,
no non-filling area is arranged between at least two filling areas in the plurality of filling areas.
In one possible design, the number of pixels included in each non-fill area is the same; or,
the number of pixels included in at least two of the plurality of non-filled regions is different; or,
the difference between the numbers of pixels included in each two of the plurality of non-filled regions is less than or equal to a first threshold.
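The three non-fill-area sizing policies just listed (all equal, at least two differing, or pairwise differences bounded by a first threshold) can be checked mechanically. This helper is illustrative only and not part of the patent:

```python
# Illustrative check of the non-fill (gap) sizing policies: equal gaps are
# the threshold-0 case of "every pair differs by at most the threshold".
from itertools import combinations

def gaps_within_threshold(gap_sizes, threshold):
    """True if every pair of non-fill areas differs in pixel count by at
    most `threshold` pixels."""
    return all(abs(a - b) <= threshold for a, b in combinations(gap_sizes, 2))

print(gaps_within_threshold([100, 100, 100], 0))  # True  (equal gaps)
print(gaps_within_threshold([100, 104, 98], 8))   # True  (within threshold)
print(gaps_within_threshold([100, 150], 8))       # False
```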
In a second aspect, an embodiment of the present application provides a video transmission method, applied to a video receiving end, including:
receiving smooth video information corresponding to a video frame through a preset video channel, wherein the video frame comprises a plurality of video units, and the video units comprise a preset number of pixels in the video frame;
determining a filling area of each video unit in the smoothed video information, wherein the number of pixels included in the smoothed video information is greater than the number of pixels included in the video frame, and one non-filling area is arranged between at least two filling areas in the plurality of filling areas in the image space;
acquiring the plurality of video units in the smooth video information according to the filling areas of the plurality of video units;
determining the video frame from the plurality of video units.
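The receiving-end steps above are the inverse of the sender's filling: locate each video unit's fill area and read it back out. The sketch below assumes the receiver shares the sender's row mapping, held here as a plain dict (`fill_map`); all names and the row-per-unit layout are illustrative assumptions.

```python
# Hypothetical receiver-side sketch: recover the original frame rows from
# the smooth video information using the same mapping the sender used.

def extract_frame(space, fill_map, n_rows):
    """Read the fill areas (one per frame row) out of the smoothed image
    space, discarding the non-fill rows of preset information."""
    return [space[fill_map[i]] for i in range(n_rows)]

# Smoothed image space as a sender (with the same assumed mapping) would
# produce it: fill rows at indices 0, 2, 4; preset zeros elsewhere.
space = [[1, 1], [0, 0], [2, 2], [0, 0], [3, 3], [0, 0]]
fill_map = {0: 0, 1: 2, 2: 4}
print(extract_frame(space, fill_map, 3))  # [[1, 1], [2, 2], [3, 3]]
```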
In one possible design, determining a padding area for each video unit in the smoothed video information includes:
acquiring a mapping relation between the video frame and an image space corresponding to the preset video channel, wherein the mapping relation is used for indicating a corresponding relation between a video unit in the video frame and a pixel position in the image space;
and determining a filling area of each video unit in the smooth video information according to the pixel position of each video unit in the video frame and the mapping relation.
In one possible design, for any video unit, determining a filling area of the video unit in the smoothed video information according to the pixel position of the video unit in the video frame and the mapping relation includes:
determining a region to be selected in the smoothed video information according to the pixel position of the video unit in the video frame and the mapping relation, wherein the number of pixels in the region to be selected is greater than the number of pixels in the video unit;
and determining the filling area in the region to be selected according to the number of pixels in the video unit, wherein the number of pixels in the filling area is equal to the number of pixels in the video unit.
In one possible design, the video unit includes a line of pixels in the video frame, and the mapping is used to indicate a correspondence between a line number in the video frame and a line number in the image space;
determining a region to be selected in the smoothed video information according to the pixel position of the video unit in the video frame and the mapping relation, wherein the determining comprises:
determining at least one target line number in the smooth video information according to the line number corresponding to the video unit and the mapping relation;
and determining the area where the line corresponding to the at least one target line number in the smooth video information is located as the area to be selected.
In one possible design, the video unit includes a column of pixels in the video frame, and the mapping is used to indicate a correspondence between a column number in the video frame and a column number in the image space;
determining a region to be selected in the smoothed video information according to the pixel position of the video unit in the video frame and the mapping relation, wherein the determining comprises:
determining at least one target column number in the image space according to the column number corresponding to the video unit and the mapping relation;
and determining the area where the column corresponding to the at least one target column number in the smooth video information is located as the area to be selected.
In one possible design, the image space includes a plurality of the filled regions and a plurality of the unfilled regions, wherein,
one non-filling area is arranged between every two adjacent filling areas; or,
no non-filling area is arranged between at least two filling areas in the plurality of filling areas.
In one possible design, the number of pixels included in each non-fill area is the same; or,
the number of pixels included in at least two of the plurality of non-filled regions is different; or,
the difference between the numbers of pixels included in each two of the plurality of non-filled regions is less than or equal to a first threshold.
In a third aspect, an embodiment of the present application provides a video transmission apparatus, applied to a video sending end, including:
the device comprises a determining module, a judging module and a judging module, wherein the determining module is used for determining a plurality of video units in a video frame, and the video units comprise a preset number of pixels in the video frame;
the determining module is further configured to determine an image space corresponding to a preset video channel, where the number of pixels included in the image space is greater than the number of pixels included in the video frame;
a filling module, configured to determine a filling area of each video unit in the image space, fill the corresponding video unit in the filling area, and fill preset information in at least one non-filling area to obtain smooth video information, where at least two filling areas in the plurality of filling areas in the image space are provided with one non-filling area therebetween;
the sending module is used for sending the smooth video information through the preset video channel;
in one possible design, the filling module is used in particular for:
acquiring a mapping relation between the video frame and the image space, wherein the mapping relation is used for indicating a corresponding relation between a video unit in the video frame and a pixel position in the image space;
determining a filling area of each video unit in the image space according to the pixel position of each video unit in the video frame and the mapping relation;
in one possible design, the filling module is used in particular for:
determining a region to be selected in the image space according to the pixel position of the video unit in the video frame and the mapping relation, wherein the number of pixels in the region to be selected is greater than that in the video unit;
determining the filling area in the area to be selected according to the number of pixels included in the video unit, wherein the number of pixels included in the filling area is equal to the number of pixels included in the video unit;
in one possible design, the video unit includes a line of pixels in the video frame, and the mapping is used to indicate a correspondence between a line number in the video frame and a line number in the image space;
the filling module is specifically configured to:
determining at least one target line number in the image space according to the line number corresponding to the video unit and the mapping relation;
determining the area where the line corresponding to the at least one target line number is located as the area to be selected;
in one possible design, the video unit includes a column of pixels in the video frame, and the mapping is used to indicate a correspondence between a column number in the video frame and a column number in the image space;
the filling module is specifically configured to:
determining at least one target column number in the image space according to the column number corresponding to the video unit and the mapping relation;
determining the area where the column corresponding to the at least one target column number is located as the area to be selected;
in a possible design, the image space includes a plurality of the filling areas and a plurality of the non-filling areas, and one non-filling area is arranged between every two adjacent filling areas; or,
no non-filling area is arranged between at least two filling areas in the plurality of filling areas;
in one possible design, the number of pixels included in each non-fill area is the same; or,
the number of pixels included in at least two of the plurality of non-filled regions is different; or,
the difference between the numbers of pixels included in each two of the plurality of non-filled regions is less than or equal to a first threshold.
In a fourth aspect, an embodiment of the present application provides a video transmission apparatus, applied to a video receiving end, including:
the device comprises a receiving module, a processing module and a processing module, wherein the receiving module is used for receiving smooth video information corresponding to a video frame through a preset video channel, the video frame comprises a plurality of video units, and the video units comprise a preset number of pixels in the video frame;
a determining module, configured to determine a filling area of each video unit in the smoothed video information, where the number of pixels included in the smoothed video information is greater than the number of pixels included in the video frame, and one non-filling area is located between at least two filling areas in a plurality of filling areas in the image space;
an obtaining module, configured to obtain the multiple video units from the smoothed video information according to the filling areas of the multiple video units;
the determining module is further configured to determine the video frame according to the plurality of video units;
in one possible design, the determining module is specifically configured to:
acquiring a mapping relation between the video frame and an image space corresponding to the preset video channel, wherein the mapping relation is used for indicating a corresponding relation between a video unit in the video frame and a pixel position in the image space;
and determining a filling area of each video unit in the smooth video information according to the pixel position of each video unit in the video frame and the mapping relation.
In one possible design, the determining module is specifically configured to:
determining a region to be selected in the smoothed video information according to the pixel position of the video unit in the video frame and the mapping relation, wherein the number of pixels in the region to be selected is greater than that in the video unit;
and determining the filling area in the area to be selected according to the number of pixels in the video unit, wherein the number of pixels in the filling area is equal to the number of pixels in the video unit.
In one possible design, the video unit includes a line of pixels in the video frame, and the mapping is used to indicate a correspondence between a line number in the video frame and a line number in the image space;
the determining module is specifically configured to:
determining at least one target line number in the smooth video information according to the line number corresponding to the video unit and the mapping relation;
and determining the area where the line corresponding to the at least one target line number in the smooth video information is located as the area to be selected.
In one possible design, the video unit includes a column of pixels in the video frame, and the mapping is used to indicate a correspondence between a column number in the video frame and a column number in the image space;
the determining module is specifically configured to:
determining at least one target column number in the image space according to the column number corresponding to the video unit and the mapping relation;
and determining the area where the column corresponding to the at least one target column number in the smooth video information is located as the area to be selected.
In one possible design, the image space includes a plurality of the filled regions and a plurality of the unfilled regions, wherein,
one non-filling area is arranged between every two adjacent filling areas; or,
no non-filling area is arranged between at least two filling areas in the plurality of filling areas.
In one possible design, the number of pixels included in each non-fill area is the same; or,
the number of pixels included in at least two of the plurality of non-filled regions is different; or,
the difference between the numbers of pixels included in each two of the plurality of non-filled regions is less than or equal to a first threshold.
In a fifth aspect, an embodiment of the present application provides a video transmission device, including:
a memory for storing a program;
a processor for executing the program stored by the memory, the processor being adapted to perform the method as described above in the first aspect and any one of the various possible designs of the first aspect when the program is executed.
In a sixth aspect, an embodiment of the present application provides a video transmission device, including:
a memory for storing a program;
a processor for executing the program stored in the memory, the processor being adapted to perform the method of the second aspect as well as any of the various possible designs of the second aspect, when the program is executed.
In a seventh aspect, an embodiment of the present application provides a video transmission system, including: a video transmitting apparatus and a video receiving apparatus;
wherein the video transmission apparatus is adapted to store program code for performing the method of the first aspect as well as any of its various possible designs;
the video receiving apparatus is arranged to perform the method as described above for the second aspect and any one of its various possible designs.
In an eighth aspect, embodiments of the present application provide a computer-readable storage medium, which includes instructions that, when executed on a computer, cause the computer to perform the method as described in the first aspect and any of the various possible designs of the first aspect or the second aspect and any of the various possible designs of the second aspect.
In a ninth aspect, the present application provides a computer program product, including a computer program, wherein the computer program is configured to, when executed by a processor, implement the method according to the first aspect and any one of the various possible designs of the first aspect or the second aspect and any one of the various possible designs of the second aspect.
The embodiment of the application provides a video transmission method and apparatus. The method includes: determining a plurality of video units in a video frame, where each video unit includes a preset number of pixels of the video frame; determining an image space corresponding to a preset video channel, where the image space includes more pixels than the video frame; determining a filling area for each video unit in the image space, filling each video unit into its filling area, and filling preset information into at least one non-filling area to obtain smooth video information, where one non-filling area is arranged between at least two filling areas in the plurality of filling areas of the image space; and sending the smooth video information through the preset video channel. Because the video units of the frame are filled into an image space with more pixels, and a non-filling area is arranged between at least two of the filling areas, the video units are scattered throughout the image space. Transmitting the resulting smooth video information therefore spreads the pixels of the video frame over different time periods within one frame period, instead of completing the transmission of the frame data in a fraction of the frame period. Smoothing of video transmission is thus achieved at the sending end, the traffic condition of the receiving end does not need to be monitored, and the complexity of smooth video transmission is effectively reduced.
Drawings
In order to more clearly illustrate the embodiments of the present application or the technical solutions in the prior art, the drawings needed in the description of the embodiments or the prior art are briefly introduced below. The drawings in the following description show some embodiments of the present application; those skilled in the art can obtain other drawings from them without inventive effort.
Fig. 1 is a schematic diagram of a video transmission system according to an embodiment of the present application;
fig. 2 is a flowchart of a video transmission method according to an embodiment of the present application;
fig. 3 is a schematic diagram illustrating an implementation of a video frame according to an embodiment of the present application;
fig. 4 is a second flowchart of a video transmission method according to an embodiment of the present application;
fig. 5 is a schematic diagram of a possible implementation of a mapping relationship provided in an embodiment of the present application;
fig. 6 is a first schematic diagram illustrating implementation of video cell stuffing according to an embodiment of the present application;
fig. 7 is a second schematic diagram illustrating an implementation of video cell stuffing according to an embodiment of the present application;
fig. 8 is a third schematic diagram illustrating an implementation of video cell stuffing according to an embodiment of the present application;
fig. 9 is a fourth schematic diagram illustrating an implementation of video cell stuffing according to an embodiment of the present application;
fig. 10 is a fifth schematic diagram illustrating an implementation of video cell stuffing according to an embodiment of the present application;
fig. 11 is a third flowchart of a video transmission method according to an embodiment of the present application;
fig. 12 is a fourth flowchart of a video transmission method according to an embodiment of the present application;
fig. 13 is a schematic diagram illustrating an implementation of extracting a video frame according to an embodiment of the present application;
fig. 14 is a first schematic structural diagram of a video transmission apparatus according to an embodiment of the present application;
fig. 15 is a second schematic structural diagram of a video transmission apparatus according to an embodiment of the present application;
fig. 16 is a first hardware structure diagram of a video transmission device according to an embodiment of the present application;
fig. 17 is a second schematic hardware structure diagram of a video transmission device according to an embodiment of the present application.
Detailed Description
In order to make the objects, technical solutions and advantages of the embodiments of the present application clearer, the technical solutions in the embodiments of the present application will be clearly and completely described below with reference to the drawings in the embodiments of the present application, and it is obvious that the described embodiments are some embodiments of the present application, but not all embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present application.
In order to better understand the technical solution of the present application, the following further detailed description is provided for the background art related to the present application.
At present, smoothing of video transmission is widely applied in video transmission control systems. A video transmission control system includes a video sending end and a video receiving end, and smoothing of video transmission requires that the video sending end transmit the data of a video frame to the video receiving end dispersedly over a certain time, rather than transmitting all of the video frame data at the beginning of that time.
For example, for a video with a frame rate of 60Hz (hertz), the frame period is 16.66ms (milliseconds). Without smoothing, it is likely that, for example, 1MB of frame data is completely transmitted within the first 4 milliseconds of the frame period, which results in a very large amount of data that subsequent devices need to process per unit time.
With smoothing of video transmission, however, the 1MB of data can be sent dispersedly over the 16.66ms frame period, effectively reducing the amount of data that subsequent devices need to process per unit time; correspondingly, the performance requirements on those devices are also effectively reduced. Smoothing of video transmission can thus be understood as distributing the data amount over time.
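The data-rate arithmetic above can be sketched numerically; the 60Hz frame rate, 16.66ms frame period, 4ms burst, and 1MB frame size are the example figures from this description:

```python
# Compare the peak data rate of an unsmoothed 4ms burst with a transmission
# smoothed evenly over the whole frame period, using the example figures above.
FRAME_RATE_HZ = 60
FRAME_PERIOD_MS = 1000 / FRAME_RATE_HZ      # ~16.66 ms per frame
FRAME_SIZE_MB = 1.0                         # example frame size
BURST_MS = 4                                # unsmoothed: all data in first 4 ms

burst_rate = FRAME_SIZE_MB / BURST_MS           # MB per ms, unsmoothed peak
smooth_rate = FRAME_SIZE_MB / FRAME_PERIOD_MS   # MB per ms, smoothed

print(f"frame period:    {FRAME_PERIOD_MS:.2f} ms")
print(f"unsmoothed peak: {burst_rate:.3f} MB/ms")
print(f"smoothed rate:   {smooth_rate:.3f} MB/ms")
print(f"peak reduced by: {burst_rate / smooth_rate:.2f}x")
```

The peak rate drops by the ratio of the frame period to the burst length (about 4.17x here), which is exactly the headroom that smoothing buys the downstream devices.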
At present, when the related art performs smoothing of video transmission, a flow control technique is usually adopted. Flow control refers to controlling the data flow sent by the video sending end to ensure that all of the video frame data is not sent within a short time. However, flow control needs to monitor the traffic condition of the receiving end in real time in order to control the video data transmission of the sending end, and cannot smooth the video directly at the sending end without monitoring the receiving end's traffic. Smoothing video transmission based on the current flow control technique therefore results in relatively high complexity.
In view of the above problems in the prior art, the present application proposes the following technical conception: a video frame is spatially distributed and may comprise video columns in the horizontal direction and video lines in the vertical direction. The video frame to be transmitted is divided into a plurality of video units, each comprising a plurality of pixels, and each video unit is then scattered and filled into an image space. The image space in the present application is a transmission frame for transmitting the video frame; the number of pixels in the image space is greater than that in the video frame, and the unfilled area in the image space is filled with preset information. In this way, the pixels of the video frame can be dispersed into a larger image space for transmission, limiting the amount of data transmitted per unit time, so that transmission of the video frame data is not completed in the front portion of a frame period. Video smoothing at the sending end is thereby achieved, and smooth video transmission is realized simply and effectively without monitoring the traffic condition of the receiving end.
First, a video transmission system applied to the video transmission method of the present application is described with reference to fig. 1, where fig. 1 is a schematic diagram of the video transmission system provided in the embodiment of the present application.
As shown in fig. 1, the video transmission system includes a video sending end and a video receiving end. The video sending end and the video receiving end may each be, for example, a video recorder, a video camera, a conference device, or a video-related display or monitor, or may be a terminal device. The terminal device may be a computer, a tablet computer, or a mobile phone (also called a "cellular" phone), and may also be a portable, pocket-sized, hand-held, or computer-embedded mobile device, and the like.
In this embodiment, the video sending end includes a memory unit, a reading module, a storage unit, and a video smoothing unit. In a possible implementation, the memory unit may be, for example, a Double Data Rate (DDR) memory, which may serve as a buffer for video frames during data storage; in an actual implementation process, the memory unit may be any device for buffering video frames, which is not limited in this embodiment.
And the reading module is used for reading the data of the video frame from the memory unit and storing the read video frame data into the storage unit. The storage unit in this embodiment may play a buffering role between the reading module and the video smoothing unit to avoid an excessive workload of the video smoothing unit.
The video smoothing unit may read the video frame data from the storage unit, smooth the video frame data, and send the video frame data through a High Definition Multimedia Interface (HDMI) channel, where the HDMI is a full digital video and audio transmission Interface, and the HDMI channel may play a role in isolating a video sending end from a video receiving end.
Because the bandwidth of the HDMI channel is configurable and can support up to 4096 × 2160 at 60Hz (P60), in a possible implementation the HDMI channel can be configured with a fixed bandwidth, so that the video sending end only needs to perform smoothing on the basis of the HDMI bandwidth, without monitoring the traffic condition of the receiving end in real time as in the conventional scheme. Alternatively, in an actual implementation process, the HDMI channel may be configured with a dynamic bandwidth; the bandwidth configuration of the HDMI channel may be selected according to actual requirements, which is not limited in this embodiment.
Then, a video smoothing receiving module in the video receiving end can receive the smoothed video information after smoothing processing through the HDMI channel, and extract a video frame from the received smoothed video information, thereby effectively realizing the transmission of the video frame.
In another possible implementation manner, the memory unit and the video sending end may be further configured as two independent devices, where the memory unit may be an independent memory device, and the video sending end may be an independent video sending device, for example, a specific implementation manner of this embodiment is not limited, and may be selected according to actual requirements.
Based on the above description, the following description will be given with reference to specific embodiments. First, a video transmission method at a video sending end is introduced with reference to fig. 2 and fig. 3, fig. 2 is a flowchart of the video transmission method provided in the embodiment of the present application, and fig. 3 is a schematic implementation diagram of a video frame provided in the embodiment of the present application.
As shown in fig. 2, the method includes:
s201, determining a plurality of video units in the video frame, wherein the video units comprise a preset number of pixels in the video frame.
In this embodiment, a video frame may include a plurality of pixels, in this embodiment, a plurality of video units may be determined in the video frame, and each video unit includes a preset number of pixels in the video frame, where a specific setting of the preset number may be selected according to an actual requirement, which is not limited in this embodiment.
In a possible implementation, the video frames in this embodiment are spatially distributed in two dimensions, divided into a horizontal direction and a vertical direction. This can be understood in conjunction with fig. 3: a video frame includes a plurality of pixels, where, in the horizontal direction, the pixels correspond to a plurality of video columns shown in fig. 3, each video column including a plurality of pixels; and, in the vertical direction, the pixels correspond to a plurality of video lines shown in fig. 3, each video line including a plurality of pixels.
Based on fig. 3, it can be determined that a plurality of pixel points are included in one video frame, for example, a unit composed of a preset number of pixels in the video frame may be determined as a video unit, so that a plurality of video units are determined in the video frame.
In a possible implementation, the video unit may be, for example, one video line described above, or a plurality of consecutive video lines; or the video unit may be one video column introduced above, or a plurality of consecutive video columns; or the video unit may be a set of A × B rectangular pixels (for example, 301 shown in fig. 3 may be a video unit). The specific dividing manner of the video unit is not limited in this embodiment, as long as the video unit is divided from the video frame and includes a preset number of pixels in the video frame; the specific determining manner of the video unit may be selected according to actual requirements.
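The three ways of dividing a video frame into video units described above (one video line, one video column, or an A × B rectangular block) can be sketched as follows; the 2D-list frame representation is an assumption for illustration only, not a structure prescribed by this application:

```python
# Divide a frame (modelled as a 2D list of pixel values) into video units:
# one unit per video line, per video column, or per A x B rectangular block.

def units_as_lines(frame):
    """Each video unit is one video line (a row of pixels)."""
    return [row[:] for row in frame]

def units_as_columns(frame):
    """Each video unit is one video column."""
    return [[row[c] for row in frame] for c in range(len(frame[0]))]

def units_as_blocks(frame, a, b):
    """Each video unit is an A x B rectangle (a rows by b columns)."""
    rows, cols = len(frame), len(frame[0])
    return [[frame[r + i][c:c + b] for i in range(a)]
            for r in range(0, rows, a)
            for c in range(0, cols, b)]

frame = [[r * 4 + c for c in range(4)] for r in range(4)]  # toy 4x4 frame
print(units_as_lines(frame)[0])         # → [0, 1, 2, 3]
print(units_as_columns(frame)[0])       # → [0, 4, 8, 12]
print(units_as_blocks(frame, 2, 2)[0])  # → [[0, 1], [4, 5]]
```

Each divider returns a list of video units, every unit holding the same preset number of pixels, which is the only property the embodiment requires of the division.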
In one possible implementation manner of determining the plurality of video units, for example, when the reading unit reads data from the memory unit, the reading unit reads the data in units of video units, that is, reads one video unit at a time, so as to determine the plurality of video units through multiple times of reading; alternatively, when the reading unit reads data from the memory unit, after reading a complete video frame from the memory unit, a plurality of video units may be determined from the complete video frame data.
S202, determining an image space corresponding to a preset video channel, wherein the number of pixels in the image space is greater than that in the video frame.
In this embodiment, data of a video frame may be transmitted through a preset video channel, for example, where the preset video channel corresponds to an image space, the image space is a transmission frame for transmitting the video frame, the image space also includes a plurality of pixels, and the number of pixels included in the image space in this embodiment is greater than the number of pixels included in the video frame.
In a possible implementation manner, the preset video channel in this embodiment may be, for example, an HDMI channel, where the HDMI channel corresponds to an HDMI space, and the HDMI space is the image space described in the current embodiment.
Alternatively, the preset video channel in this embodiment may also be a transmission channel corresponding to a DisplayPort (DP) interface, or a transmission channel corresponding to a Serial Digital Interface (SDI). This embodiment does not limit the specific implementation of the preset video channel, as long as it is a channel for video transmission.
Here, a specific example illustrates the case where the number of pixels included in the image space is larger than the number of pixels included in the video frame. For example, an HDMI space of size 4096 × 2160 includes 4096 HDMI columns in the horizontal direction and 2160 HDMI rows in the vertical direction, 4096 × 2160 pixels in total; and a video frame of size 1920 × 1080 includes 1920 video columns in the horizontal direction and 1080 video rows in the vertical direction, 1920 × 1080 pixels in total.
In an actual implementation process, the preset video channel may be an HDMI channel, and may also be other channels that can be used for video data transmission.
S203, determining a filling area of each video unit in an image space, filling the corresponding video unit in the filling area, and filling preset information in at least one non-filling area to obtain smooth video information, wherein one non-filling area is arranged between at least two filling areas in a plurality of filling areas of the image space.
In this embodiment, the video frame includes a plurality of pixels, the image space also includes a plurality of pixels, and the number of pixels included in the image space is greater than the number of pixels included in the video frame, then the plurality of video units determined in the video frame may be filled into the image space.
In one possible implementation, for example, the filling area of each video unit may be determined in the image space according to the mapping relationship between the video frame and the image space, and then a specific filling process is performed.
Specifically, the corresponding video unit may be filled in the filling area, and the filling of the preset information is performed for the non-filling area that is not filled with the video unit, where the preset information may be, for example, a preset numerical value such as 0 or 1, and the embodiment does not limit the specific implementation of the preset information.
After filling the corresponding video units into the filling area and filling the preset information into the non-filling area, mapping the plurality of video units in the video frame into the image space is achieved, so as to obtain the smooth video information.
In addition, in the embodiment, at least one non-filled area exists between two filled areas in the plurality of filled areas in the image space, that is, when filling the video unit of the video frame into the image space, the video unit is broken up and then filled.
And S204, sending the smooth video information through a preset video channel.
After obtaining the smooth video information, the smooth video information may be sent through a preset video channel, for example, may be transmitted to a video receiving end.
It can be understood that, in this embodiment, the video units are broken up and filled into an image space with a larger number of pixels to obtain the smooth video information, and the smooth video information is then transmitted so that the pixels of the video frame are dispersed over different time periods within the frame period as much as possible. That is, the number of video-frame pixels transmitted per unit time is limited, preventing transmission of the video frame data from completing in the front portion of a frame period, thereby implementing smooth video transmission.
And then the video receiving end can extract the video frame from the smooth video information, thereby completing the transmission of the video frame.
The video transmission method provided by the embodiment of the application includes: determining a plurality of video units in the video frame, each video unit comprising a preset number of pixels in the video frame; determining an image space corresponding to a preset video channel, wherein the number of pixels in the image space is greater than that in the video frame; determining a filling area of each video unit in the image space, filling the corresponding video unit in the filling area, and filling preset information in at least one non-filling area to obtain the smooth video information, wherein a non-filling area is arranged between at least two filling areas in the plurality of filling areas of the image space; and sending the smooth video information through the preset video channel. Because the plurality of video units in the video frame are filled into an image space with more pixels, and a non-filling area is arranged between at least two of the filling areas, the video units are scattered and filled into the image space to obtain the smooth video information. Transmitting the smooth video information disperses the pixels of the video frame over different time periods within a frame period, so that transmission of the video frame data is not completed in the front portion of one frame period. Smoothing of video transmission is thus realized at the sending end without monitoring the traffic condition of the receiving end, which effectively reduces the complexity of smooth video transmission.
Based on the foregoing embodiments, the video transmission method at the video sending end is described in further detail below with reference to fig. 4 to 10, where fig. 4 is a second flowchart of the video transmission method provided in the embodiment of the present application, fig. 5 is a schematic diagram of a possible implementation of a mapping relationship provided in the embodiment of the present application, fig. 6 is a first schematic diagram of an implementation of video unit filling provided in the embodiment of the present application, fig. 7 is a second schematic diagram of an implementation of video unit filling provided in the embodiment of the present application, fig. 8 is a third schematic diagram of an implementation of video unit filling provided in the embodiment of the present application, fig. 9 is a fourth schematic diagram of an implementation of video unit filling provided in the embodiment of the present application, and fig. 10 is a fifth schematic diagram of an implementation of video unit filling provided in the embodiment of the present application.
As shown in fig. 4, the method includes:
s401, determining a plurality of video units in a video frame, wherein the video units comprise a preset number of pixels in the video frame.
The implementation manner of S401 is similar to that of S201, and is not described herein again.
S402, determining an image space corresponding to a preset video channel, wherein the number of pixels in the image space is greater than that in the video frame.
The implementation manner of S402 is similar to that of S202, and is not described here again.
And S403, acquiring a mapping relation between the video frame and the image space, wherein the mapping relation is used for indicating a corresponding relation between the video unit in the video frame and the pixel position in the image space.
In this embodiment, there is a mapping relationship between a video frame and an image space, where the video frame includes a plurality of pixels, the image space also includes a plurality of pixels, and the number of pixels in the image space is greater than the number of pixels in the video frame, and the mapping relationship in this embodiment may be used to indicate a correspondence relationship between video units in the video frame and pixel positions in the image space.
In a possible implementation manner, the video unit in this embodiment may include a line of pixels in a video frame, for example, and the mapping relationship may indicate a correspondence relationship between a line number in the video frame and a line number in an image space, for example.
Taking the image space as an HDMI space as an example, the mapping relationship may satisfy the following formula one:
ddr_v_num = floor((hdmi_v_num * margin_v + (margin_v/2 - 800)) / 4096)    (formula one)
where ddr_v_num is the line number corresponding to a line of pixels in the video frame (for example, the vertical coordinate in the video frame), hdmi_v_num is the line number corresponding to a line of pixels in the HDMI space (for example, the vertical coordinate in the HDMI space), margin_v is an amplification factor, and floor() is the round-down integer function.
In one possible implementation, the amplification factor margin_v may satisfy, for example, the following formula two:
margin_v = floor((ddr_vsize * 4096) / hdmi_vsize)    (formula two)
where ddr_vsize is the number of pixels in the vertical direction of the video frame, and hdmi_vsize is the number of pixels in the vertical direction of the HDMI space; the constant 4096 acts as a fixed-point scale, so that margin_v/4096 approximates the ratio of video-frame lines to image-space lines.
Based on the above formula, it can be seen that if the number of pixels in the vertical direction of the video frame is the same as the number of pixels in the vertical direction of the image space, for example, the video frame includes 2160 rows of pixels, and the HDMI space also includes 2160 rows of pixels, the line number of the video frame and the line number of the image space in the mapping relationship are in a one-to-one mapping relationship; however, if the number of pixels in the vertical direction of the video frame is different from the number of pixels in the vertical direction of the image space, for example, 1080 lines of pixels are included in the video frame, and 2160 lines of pixels are included in the HDMI space, then the line number of the video frame and the line number of the image space in the mapping relationship are in a many-to-one mapping relationship.
When the mapping relationship is determined based on the mapping functions introduced by the first formula and the second formula, for example, the pixels in each row in the image space may be sequentially traversed, so as to determine the row number in the corresponding video frame for the row number of the pixel in each row in the image space, thereby obtaining the mapping relationship.
Taking the HDMI space as an example, it can be expressed as follows:
for hdmi_v_num=0:hdmi_vsize-1
ddr_v_num=floor((hdmi_v_num*margin_v+(margin_v/2-800))/4096)
end
The above represents a traversal starting from the first line of pixels in the HDMI space (hdmi_v_num = 0), determining the line number in the video frame for each line of pixels in the HDMI space, until the last line of pixels in the HDMI space is reached (hdmi_v_num = hdmi_vsize - 1).
The above process can also be understood in conjunction with fig. 5, assuming that the size of the current HDMI space is 4096 × 2160, then 2160 lines of pixels are currently included in the HDMI space, the traversal is started from the first line of pixels, the line number in the corresponding video frame is determined for each line of pixels, and there is a many-to-one relationship between the line number of the image space and the line number of the video frame, as shown in fig. 5.
For convenience of description, let HDMIx denote the x-th row of pixels in the HDMI space, and let video row y denote the y-th row of pixels in the video frame, where x and y are integers greater than or equal to 1. For example, as shown in fig. 5, it is currently determined that HDMI1, HDMI2, and HDMI3 all correspond to video row 1; HDMI4, HDMI5, and HDMI6 all correspond to video row 2; HDMI7, HDMI8, and HDMI9 all correspond to video row 3; and so on. The row numbers in the video frame corresponding to the respective rows of pixels in the HDMI space are thereby determined, and thus the mapping relationship between the image space and the video frame.
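The traversal and the one-to-one/many-to-one behaviour described above can be sketched in Python. The line mapping below copies the expression from the loop body; the form of margin_v is a reconstruction (the video-to-image line ratio scaled by the fixed-point constant 4096), since the original formula image is not reproduced here:

```python
from math import floor

def margin_v(ddr_vsize, hdmi_vsize):
    # Assumed form of the amplification factor: the ratio of video-frame
    # lines to image-space lines, scaled by the fixed-point constant 4096.
    return floor(ddr_vsize * 4096 / hdmi_vsize)

def ddr_line(hdmi_v_num, m):
    # Line-number mapping, copied from the traversal loop body above.
    return floor((hdmi_v_num * m + (m / 2 - 800)) / 4096)

# 1080 video lines into a 2160-line HDMI space: a two-to-one mapping.
m = margin_v(1080, 2160)                       # 2048
print([ddr_line(h, m) for h in range(6)])      # → [0, 0, 1, 1, 2, 2]

# Equal sizes (2160 into 2160): a one-to-one mapping.
m = margin_v(2160, 2160)                       # 4096
print([ddr_line(h, m) for h in range(4)])      # → [0, 1, 2, 3]
```

The two printed mappings reproduce the two cases discussed above: consecutive HDMI lines sharing one video line when the image space has more lines, and an identity mapping when the line counts are equal.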
In another possible implementation manner, the video unit may further include a column of pixels in the video frame, and then the mapping relationship may be used to indicate a correspondence between a column number in the video frame and a column number in the image space.
When the video unit includes a column of pixels in the video frame, an implementation manner of determining the mapping relationship is similar to the above-described implementation manner that the video unit includes a row of pixels, and when the video unit includes rectangular pixels in the video frame, an implementation manner thereof is also similar, which may refer to the above description, and details are not repeated here.
And the first formula and the second formula are exemplary descriptions of mapping functions, in an actual implementation process, a specific implementation manner of the mapping function may be selected according to actual requirements, for example, the mapping function may be an identity transformation of the first formula, or corresponding parameters may be added to the first formula, or parameters in the first formula may be adjusted, or a corresponding mathematical part may be added on the basis of the first formula, and the like.
It should be noted that the mapping function in this embodiment is synchronized between the video sending end and the video receiving end, so that the video sending end and the video receiving end can determine the mapping relationship based on the mapping function without transmitting the mapping relationship in addition, thereby effectively reducing the data transmission amount and improving the video transmission efficiency.
S404, determining a region to be selected in an image space according to the pixel position and the mapping relation of the video unit in the video frame, wherein the number of pixels in the region to be selected is larger than that in the video unit.
After the mapping relationship is obtained, because the mapping relationship may indicate a correspondence between a video unit in the video frame and a pixel position in the image space, for any one video unit, the region to be selected may be determined in the image space according to the pixel position of the video unit in the video frame and the determined mapping relationship.
The candidate area may include, for example, a plurality of areas for filling the current video unit, and therefore in this embodiment, the number of pixels included in the candidate area is greater than the number of pixels included in the video unit.
In a possible implementation manner, the video unit may include a line of pixels in the video frame, and the mapping relationship is used to indicate a corresponding relationship between a line number in the video frame and a line number in the image space, so when determining the region to be selected, for example, at least one target line number may be determined in the image space according to the line number and the mapping relationship corresponding to the video unit; and determining the area where the line corresponding to the at least one target line number is located as the area to be selected.
Taking the HDMI space described above as an example, based on the above description, it can be determined that the mapping relationship in this embodiment may indicate a many-to-one correspondence relationship between the line number of the image space and the line number of the video frame, and then it may be determined that each line of pixels in the video frame corresponds to at least one target line number in the HDMI space according to the mapping relationship.
As can also be understood with reference to fig. 5, for example, based on the mapping relationship determined in fig. 5, it can be currently determined that the HDMI lines corresponding to the video line 1 are: HDMI1, HDMI2, HDMI3, and a plurality of HDMI lines corresponding to the video line 2 may be determined to be: HDMI4, HDMI5, HDMI6, and a plurality of HDMI lines corresponding to the video line 3 may be determined as follows: HDMI7, HDMI8, HDMI9, and the like, and the specific corresponding relationship may be determined according to the specific implementation of the mapping relationship.
That is to say, according to the mapping relationship, at least one target line number in the image space may be determined for each video line in the video frame, and then the region where the line corresponding to the at least one target line number is located is determined as the region to be selected.
For example, in fig. 5, the areas where three HDMI lines with line numbers of HDMI1, HDMI2, and HDMI3 are located in the image space are the areas to be selected corresponding to the video line 1.
In another possible implementation manner, the video unit may further include a column of pixels in the video frame, and the mapping relationship is used to indicate a correspondence between a column number in the video frame and a column number in the image space, and when the region to be selected is determined, for example, at least one target column number may be determined in the image space according to the column number and the mapping relationship corresponding to the video unit; and determining the area where the column corresponding to the at least one target column number is located as the area to be selected. The implementation manner is similar to the above-described implementation manner of one row of pixels, and details are not described here.
S405, determining a filling area in the area to be selected according to the number of pixels included in the video unit, wherein the number of pixels included in the filling area is equal to the number of pixels included in the video unit.
After the candidate area is determined, it can be understood that pixel positions in the candidate area can be actually used for filling the corresponding video unit, and then a filling area can be determined in the candidate area according to the number of pixels included in the video unit, where a specific selection of the filling area can be selected according to an actual requirement as long as the number of pixels included in the filling area is equal to the number of pixels included in the video unit, and the filling area is an area in the candidate area.
In a possible implementation, taking as an example a video unit that includes a row of pixels in the video frame, suppose the current mapping relationship indicates that a certain row of pixels in the video frame corresponds to N rows of pixels in the image space; the pixel positions of those N rows in the image space constitute the region to be selected.
A region in the first row of the N rows of pixels whose number of pixels equals the number of pixels in the video unit may be determined as the filling region; or such a region in the last row of the N rows; or such a region in the S-th row of the N rows; or a region composed of a plurality of pixels from the N rows that are not all in the same row. This embodiment does not particularly limit the choice, as long as the filling region is a region within the region to be selected and includes a number of pixels equal to the number of pixels in the video unit.
Wherein N is an integer of 1 or more, and S is an integer of 1 or more and N or less.
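The candidate-region and filling-region selection described above can be sketched as follows, assuming the video unit is one video line; the inverse mapping is represented here as a simple list (in practice it would come from the mapping formulas), and line and column indices are 0-based for illustration:

```python
def candidate_rows(video_row, line_map):
    # line_map[h] is the video line that image-space line h maps to; the
    # candidate region for a video line is all image-space lines mapping to it.
    return [h for h, v in enumerate(line_map) if v == video_row]

def filling_region(candidates, unit_len, which="first", start_col=0):
    # Pick one image-space line from the candidates (the first or last row of
    # the candidate region) and take unit_len pixel positions from start_col.
    row = candidates[0] if which == "first" else candidates[-1]
    return [(row, start_col + c) for c in range(unit_len)]

# Three image-space lines per video line, as in the fig. 5 example.
line_map = [0, 0, 0, 1, 1, 1, 2, 2, 2]
cands = candidate_rows(1, line_map)
print(cands)                                  # → [3, 4, 5]
print(filling_region(cands, 4))               # → [(3, 0), (3, 1), (3, 2), (3, 3)]
print(filling_region(cands, 4, "last", 4))    # → [(5, 4), (5, 5), (5, 6), (5, 7)]
```

The filling region always has exactly as many pixel positions as the video unit has pixels; only which of the candidate positions are used is a free choice.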
S406, filling corresponding video units in the filling areas, and filling preset information in at least one non-filling area to obtain smooth video information, wherein at least one non-filling area is arranged between two filling areas in the plurality of filling areas of the image space.
After determining the filling area corresponding to each video unit, the filling of the video units can be performed.
In a possible implementation manner, a specific implementation manner of filling the video unit is described by taking an example that the video unit includes a row of pixels in a video frame, and the image space is an HDMI space.
For example, as can be understood with reference to fig. 6, assume that the current video frame includes 1080 lines of pixels and the HDMI space includes 2160 lines of pixels, and that, in the first line of pixels of the region to be selected, the region starting from the first column until the number of pixels equals the number of pixels in the video unit is determined as the filling region. Then, as shown in fig. 6, starting from column 1, video line 1 is mapped to HDMI line 1, video line 2 is mapped to HDMI line 4, video line 3 is mapped to HDMI line 7, and so on.
This can also be understood with reference to fig. 7: assume again that the current video frame includes 1080 lines of pixels and the HDMI space includes 2160 lines of pixels, and that, in the last line of pixels of the region to be selected, the region starting from column 5 until the number of pixels equals the number of pixels in the video unit is determined as the filling region. Then, as shown in fig. 7, starting from column 5, video line 1 is mapped to HDMI line 3, video line 2 is mapped to HDMI line 6, video line 3 is mapped to HDMI line 9, and so on.
In another possible implementation, the filling of video units is described by taking as an example the case in which a video unit is one column of pixels of the video frame and the image space is an HDMI space.
For example, referring to fig. 8: assume the current video frame contains 1920 columns of pixels and the HDMI space contains 4096 columns of pixels, and assume that, within the first column of pixels of each region to be selected, the region starting from the first row and extending until its pixel count equals that of the video unit is determined as the filling region. Then, as shown in fig. 8, starting from row 1, video column 1 is mapped to HDMI column 1, video column 2 to HDMI column 5, video column 3 to HDMI column 8, and so on.
Fig. 9 may be understood in the same way: again assume 1920 columns in the video frame and 4096 columns in the HDMI space, and assume that, within the last column of pixels of each region to be selected, the region starting from row 5 and extending until its pixel count equals that of the video unit is determined as the filling region. Then, as shown in fig. 9, starting from row 5, video column 1 is mapped to HDMI column 4, video column 2 to HDMI column 8, video column 3 to HDMI column 12, and so on.
In this embodiment, the non-filled regions in the image space may be filled with preset information, where a non-filled region refers to any position in the image space other than the positions filled with video units.
For example, referring to fig. 6 to fig. 9, the positions not filled with video units may be filled with preset information. The specific value of the preset information may be chosen according to actual requirements, for example 0 or 1, and this embodiment does not limit it.
Based on the above description, the image space in this embodiment may include a plurality of filling regions and a plurality of non-filling regions, subject to the constraint that at least one non-filling region lies between two of the filling regions. In one possible implementation, one non-filling region may be arranged between every two adjacent filling regions, as shown, for example, in fig. 6 to fig. 8; that is, the video units are completely scattered and distributed among the adjacent filling regions of the image space.
Alternatively, a non-filling region may be arranged between only at least two of the plurality of filling regions. That is, this embodiment allows some video units to be mapped to adjacent positions in the image space: the video units that are scattered correspond to separated filling regions, while the video units that are not scattered correspond to consecutive filling regions. This can be understood with reference to fig. 10: the filling regions corresponding to the first and second rows of pixels of the video frame are adjacent to each other in the HDMI space, as are those corresponding to the third and fourth rows, while a non-filling region lies between the filling regions of the second and third rows. This is what is meant here by partially scattering the video units.
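The partially scattered layout of fig. 10 can be sketched as placing units in runs of consecutive space rows separated by empty rows. The helper `run_layout` and its parameter names are hypothetical names introduced here for illustration.

```python
def run_layout(num_units, run_len, gap_len):
    """Space-row index for each video unit when units are placed in runs
    of `run_len` consecutive rows separated by `gap_len` empty
    (non-filling) rows, i.e. the partially scattered layout of fig. 10.
    """
    rows = []
    for i in range(num_units):
        block, offset = divmod(i, run_len)
        rows.append(block * (run_len + gap_len) + offset)
    return rows
```

For example, `run_layout(4, 2, 1)` places units on rows 0, 1, 3, 4: the first two units are adjacent, then a non-filling row, then the next two adjacent units. Setting `run_len=1` recovers the fully scattered case.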
The specific manner of filling the video units is not limited in this embodiment and may be selected according to actual requirements; for example, the degree and manner of scattering the video units may be chosen as needed, and the corresponding filling is then performed.
In this embodiment, the arrangement of the non-filling regions may also be defined. For example, the non-filling regions may all contain the same number of pixels; that is, every non-filling region is exactly the same size.
Alternatively, at least two of the non-filling regions may contain different numbers of pixels; that is, the non-filling regions are not all the same size.
Alternatively, the difference between the numbers of pixels in every two of the non-filling regions may be required to be less than or equal to a first threshold; that is, the sizes of the non-filling regions may differ, but only within a certain range.
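The three gap policies can be checked mechanically; note that requiring every pairwise difference to be at most the threshold is equivalent to requiring the maximum minus the minimum gap size to be at most the threshold. The helper below is an illustrative sketch, not part of the embodiment.

```python
def classify_gaps(gap_sizes, threshold):
    """Return (all_equal, within_threshold) for the non-filling region sizes."""
    all_equal = len(set(gap_sizes)) == 1
    # pairwise |a - b| <= threshold for all pairs  <=>  max - min <= threshold
    within_threshold = max(gap_sizes) - min(gap_sizes) <= threshold
    return all_equal, within_threshold
```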
And S407, sending the smooth video information through a preset video channel.
The implementation manner of S407 is similar to that of S204, and is not described herein again.
With the video transmission method provided by this embodiment of the application, the correspondence between the video units in a video frame and the pixel positions in the image space can be determined through a mapping function, so the mapping between the video frame and the image space is established in an orderly way. Based on this mapping, the filling region of each video unit is determined among its candidate regions, and each video unit is filled into its corresponding filling region, so that the video frame is simply and effectively mapped into the larger image space. This realizes smoothed video transmission without monitoring the traffic at the receiving end, which effectively reduces the complexity of the smoothing process. Moreover, because the same mapping function is provisioned at both the receiving end and the sending end, the mapping relationship does not need to be transmitted separately, which effectively reduces the amount of transmitted data and improves video transmission efficiency.
The above describes the implementation at the video sending end; the implementation at the video receiving end is described below with reference to specific embodiments. Fig. 11 is a flowchart of a video transmission method provided by an embodiment of the present application. As shown in fig. 11, the method includes:
S1101, receive, through a preset video channel, smoothed video information corresponding to a video frame, wherein the video frame includes a plurality of video units and each video unit includes a preset number of pixels of the video frame.
In this embodiment, the video receiving end may receive the smoothed video information through the preset video channel. A plurality of video units are filled in the smoothed video information, each including a preset number of pixels of the video frame, so the video receiving end can extract the video units from the smoothed video information to obtain the video frame.
S1102, determining a filling area of each video unit in the smooth video information, wherein the number of pixels included in the smooth video information is larger than that of pixels included in a video frame, and a non-filling area is arranged between at least two filling areas in the plurality of filling areas in the image space.
In this embodiment, the video units are filled in each filling region in the smoothed video information, so that the filling region corresponding to each video unit can be determined in the smoothed video information, where the number of pixels included in the smoothed video information is greater than the number of pixels included in the video frame, and a non-filling region is provided between at least two filling regions in the plurality of filling regions in the image space.
And S1103, acquiring a plurality of video units in the smoothed video information according to the filling areas of the plurality of video units.
After determining the filling areas of the video units in the smoothed video information, the video units can be extracted at the corresponding positions of the filling areas, so as to obtain a plurality of video units in the smoothed video information.
And S1104, determining a video frame according to the plurality of video units.
In one possible implementation, after the plurality of video units are extracted, the plurality of video units may be sequentially spliced, so that a video frame may be obtained.
The video transmission method provided by this embodiment of the application includes: receiving, through a preset video channel, smoothed video information corresponding to a video frame, wherein the video frame includes a plurality of video units and each video unit includes a preset number of pixels of the video frame; determining the filling area of each video unit in the smoothed video information, wherein the smoothed video information includes more pixels than the video frame, and a non-filling area is arranged between at least two of the filling areas in the image space; acquiring the plurality of video units from the smoothed video information according to their filling areas; and determining the video frame from the plurality of video units. Because the smoothed video information is received through the preset video channel with the video units scattered across its filling areas, the video units can be extracted at the positions of those filling areas and the video frame recovered from them. Smoothed video transmission is thus effectively realized without monitoring the traffic at the receiving end, which reduces the complexity of smoothed video transmission.
Based on the foregoing embodiment, the smoothed video transmission method at the video receiving end provided by the present application is described in further detail below with reference to fig. 12 and fig. 13, where fig. 12 is a fourth flowchart of the video transmission method provided by an embodiment of the present application, and fig. 13 is a schematic diagram of extracting a video frame provided by an embodiment of the present application.
S1201, smooth video information corresponding to a video frame is received through a preset video channel, wherein the video frame comprises a plurality of video units, and the video units comprise a preset number of pixels in the video frame.
The implementation manner of S1201 is similar to that of S1101, and is not described herein again.
S1202, obtaining a mapping relation between a video frame and an image space corresponding to a preset video channel, wherein the mapping relation is used for indicating a corresponding relation between a video unit in the video frame and a pixel position in the image space.
In this embodiment, an implementation manner of determining the mapping relationship is similar to that of the video sending end, and details are not described here.
In one possible implementation, the mapping function may be synchronized between the video receiving end and the video sending end. This effectively ensures that both ends determine the same mapping relationship without transmitting a mapping table, mapping data, or similar information, thereby saving system overhead.
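One way both ends can hold the same mapping without it ever traveling over the channel is to derive it deterministically from shared parameters. The seeded-PRNG construction below is purely an assumption for illustration; the embodiment only requires that the two ends agree on the same mapping function.

```python
import random

def derive_mapping(num_units, space_slots, seed):
    """Derive a video-unit -> space-slot mapping from shared parameters.

    Both the sender and the receiver call this with the same arguments,
    so the resulting mapping is identical at both ends and no mapping
    table needs to be transmitted.  The seed-based construction is a
    hypothetical example.
    """
    rng = random.Random(seed)
    slots = rng.sample(range(space_slots), num_units)  # distinct slots
    return dict(enumerate(slots))

sender_map = derive_mapping(1080, 2160, seed=42)
receiver_map = derive_mapping(1080, 2160, seed=42)
assert sender_map == receiver_map  # identical without any table exchange
```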
S1203, determine a region to be selected in the smoothed video information according to the pixel position of the video unit in the video frame and the mapping relationship, wherein the number of pixels in the region to be selected is greater than the number of pixels in the video unit.
The number of pixels included in the smoothed video information is greater than the number of pixels included in the video frame, and a non-filling area is arranged between at least two filling areas of the plurality of filling areas in the image space.
In one possible implementation, the video unit includes a row of pixels of the video frame, and the mapping relationship indicates a correspondence between row numbers in the video frame and row numbers in the image space.
When the area to be selected is determined, for example, at least one target line number may be determined in the smoothed video information according to the line number and the mapping relationship corresponding to the video unit; and determining the area where the line corresponding to at least one target line number in the smooth video information is located as the area to be selected.
In another possible implementation, the video unit includes a column of pixels in the video frame, and the mapping is used to indicate a correspondence between a column number in the video frame and a column number in the image space.
When the area to be selected is determined, for example, at least one target column number may be determined in the image space according to the column number and the mapping relationship corresponding to the video unit; and determining the area where the column corresponding to at least one target column number in the smoothed video information is located as the area to be selected.
The implementation manner of determining the region to be selected in the smoothed video information is similar to that described in the video sending end, and is not described herein again.
S1204, determining a filling area in the area to be selected according to the number of pixels included in the video unit, wherein the number of pixels included in the filling area is equal to the number of pixels included in the video unit.
The implementation manner of determining the filling area in the to-be-selected area is similar to that described in the video sending end, and is not described herein again.
And S1205, acquiring a plurality of video units in the smooth video information according to the filling areas of the plurality of video units.
After determining the filling area corresponding to each video unit, each video unit can be extracted.
In one possible implementation, each video unit may be a row of pixels, and its filling region holds the pixels of that row of the video frame. In that case, starting from the determined start mapping position, the row of pixels can be extracted from the filling region corresponding to each video unit, and the preset information in the non-filling regions is discarded.
In another possible implementation, each video unit may be a column of pixels, and its filling region holds the pixels of that column of the video frame. In that case, starting from the determined start mapping position, the column of pixels can be extracted from the filling region corresponding to each video unit, and the preset information in the non-filling regions is discarded.
The above process can be understood with reference to fig. 13. As shown in fig. 13, assume the current HDMI space includes 2160 rows of pixels into which the 1080 rows of the video frame have been filled. Based on the mapping relationship, it can be determined that HDMI line 1, HDMI line 2, and HDMI line 3 all correspond to video line 1. Assuming that the filling region is the portion of the first of these HDMI lines that starts from the first column and extends until its pixel count equals that of the video unit, the pixels of the video frame can be extracted from the filling region of HDMI line 1, yielding the first row of pixels of the video frame.
The extraction manners of the other rows are similar, and the implementation manners of extracting the video units in the other possible mapping relationships are also similar, which are not described herein again.
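The sender-side filling and receiver-side extraction mirror each other: as long as both ends apply the same mapping, the round trip recovers the frame exactly. The stride mapping and the function names below are illustrative assumptions, not the embodiment's fixed method.

```python
def fill(frame, space_rows, row_len, stride, pad=0):
    """Sender side: scatter frame rows into the larger space."""
    space = [[pad] * row_len for _ in range(space_rows)]
    for i, row in enumerate(frame):
        space[i * stride][:len(row)] = row
    return space

def extract(space, num_rows, row_len, stride):
    """Receiver side: apply the same (assumed) mapping i -> i * stride
    and discard the preset information in the non-filling regions."""
    return [space[i * stride][:row_len] for i in range(num_rows)]

frame = [[1, 2, 3], [4, 5, 6], [7, 8, 9]]
space = fill(frame, space_rows=6, row_len=5, stride=2)
assert extract(space, num_rows=3, row_len=3, stride=2) == frame
```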
And S1206, determining the video frame according to the plurality of video units.
After the video units are extracted, they are spliced in order to obtain the video frame. In fig. 13, for example, the video units are extracted and then spliced into the video frame shown in fig. 13.
With the video transmission method provided by this embodiment of the application, each video unit is extracted from the smoothed video information according to the mapping relationship, and the video units are spliced to recover the video frame. Smoothed video transmission is thus effectively realized without monitoring the traffic at the receiving end, which reduces the complexity of the smoothing process. Moreover, because the same mapping function is provisioned at both the receiving end and the sending end, the mapping relationship does not need to be received separately, which effectively reduces the amount of transmitted data and improves video transmission efficiency.
Fig. 14 is a first schematic structural diagram of a video transmission apparatus according to an embodiment of the present application. As shown in fig. 14, the apparatus 140 includes: a determination module 1401, a padding module 1402, and a transmission module 1403.
A determining module 1401, configured to determine a plurality of video units in a video frame, where the video units include a preset number of pixels in the video frame;
the determining module 1401 is further configured to determine an image space corresponding to a preset video channel, where the number of pixels included in the image space is greater than the number of pixels included in the video frame;
a filling module 1402, configured to determine a filling area of each video unit in the image space, fill the corresponding video unit in the filling area, and fill preset information in at least one non-filling area to obtain smooth video information, where one non-filling area is located between at least two filling areas in a plurality of filling areas in the image space;
a sending module 1403, configured to send the smoothed video information through the preset video channel;
in one possible design, the filling module 1402 is specifically configured to:
acquiring a mapping relation between the video frame and the image space, wherein the mapping relation is used for indicating a corresponding relation between a video unit in the video frame and a pixel position in the image space;
determining a filling area of each video unit in the image space according to the pixel position of each video unit in the video frame and the mapping relation;
in one possible design, the filling module 1402 is specifically configured to:
determining a region to be selected in the image space according to the pixel position of the video unit in the video frame and the mapping relation, wherein the number of pixels in the region to be selected is greater than the number of pixels in the video unit;
determining the filling area in the area to be selected according to the number of pixels included in the video unit, wherein the number of pixels included in the filling area is equal to the number of pixels included in the video unit;
in one possible design, the video unit includes a line of pixels in the video frame, and the mapping is used to indicate a correspondence between a line number in the video frame and a line number in the image space;
the filling module 1402 is specifically configured to:
determining at least one target line number in the image space according to the line number corresponding to the video unit and the mapping relation;
determining the area where the line corresponding to the at least one target line number is located as the area to be selected;
in one possible design, the video unit includes a column of pixels in the video frame, and the mapping is used to indicate a correspondence between a column number in the video frame and a column number in the image space;
the filling module 1402 is specifically configured to:
determining at least one target column number in the image space according to the column number corresponding to the video unit and the mapping relation;
determining the area where the column corresponding to the at least one target column number is located as the area to be selected;
in a possible design, the image space includes a plurality of the filled regions and a plurality of the unfilled regions, and one unfilled region is disposed between every two adjacent filled regions, or;
at least two filling areas in the plurality of filling areas are not provided with the non-filling area;
in one possible design, the number of pixels included in each non-fill area is the same; or,
the number of pixels included in at least two of the plurality of non-filled regions is different; or,
the difference between the numbers of pixels included in each two of the plurality of non-filled regions is less than or equal to a first threshold.
The apparatus provided in this embodiment may be configured to implement the technical solutions of the method embodiments, and the implementation principles and technical effects are similar, which are not described herein again.
Fig. 15 is a schematic structural diagram of a video transmission apparatus according to an embodiment of the present application. As shown in fig. 15, the apparatus 150 includes: a receiving module 1501, a determining module 1502, and an obtaining module 1503.
A receiving module 1501, configured to receive, through a preset video channel, smooth video information corresponding to a video frame, where the video frame includes a plurality of video units, and the video units include a preset number of pixels in the video frame;
a determining module 1502, configured to determine a filling region of each video unit in the smoothed video information, where the number of pixels included in the smoothed video information is greater than the number of pixels included in the video frame, and one non-filling region is located between at least two filling regions in a plurality of filling regions in the image space;
an obtaining module 1503, configured to obtain the multiple video units in the smoothed video information according to the filling regions of the multiple video units;
the determining module 1502 is further configured to determine the video frame according to the plurality of video units;
in one possible design, the determining module 1502 is specifically configured to:
acquiring a mapping relation between the video frame and an image space corresponding to the preset video channel, wherein the mapping relation is used for indicating a corresponding relation between a video unit in the video frame and a pixel position in the image space;
and determining a filling area of each video unit in the smooth video information according to the pixel position of each video unit in the video frame and the mapping relation.
In one possible design, the determining module 1502 is specifically configured to:
determining a candidate area in the smooth video information according to the pixel position of the video unit in the video frame and the mapping relation, wherein the number of pixels in the candidate area is greater than the number of pixels in the video unit;
determining the filling area in the region to be selected according to the number of pixels included in the video unit, wherein the number of pixels included in the filling area is equal to the number of pixels included in the video unit.
In one possible design, the video unit includes a line of pixels in the video frame, and the mapping is used to indicate a correspondence between a line number in the video frame and a line number in the image space;
the determining module 1502 is specifically configured to:
determining at least one target line number in the smooth video information according to the line number corresponding to the video unit and the mapping relation;
and determining the area where the line corresponding to the at least one target line number in the smooth video information is located as the area to be selected.
In one possible design, the video unit includes a column of pixels in the video frame, and the mapping is used to indicate a correspondence between a column number in the video frame and a column number in the image space;
the determining module 1502 is specifically configured to:
determining at least one target column number in the image space according to the column number corresponding to the video unit and the mapping relation;
and determining the area where the column corresponding to the at least one target column number in the smooth video information is located as the area to be selected.
In one possible design, the image space includes a plurality of the filled regions and a plurality of the unfilled regions, wherein,
one non-filling area is arranged between every two adjacent filling areas; or,
no non-filling area is arranged between at least two of the plurality of filling areas.
In one possible design, the number of pixels included in each non-fill area is the same; or,
the number of pixels included in at least two of the plurality of non-filled regions is different; or,
the difference between the numbers of pixels included in each two of the plurality of non-filled regions is less than or equal to a first threshold.
The apparatus provided in this embodiment may be configured to implement the technical solutions of the method embodiments, and the implementation principles and technical effects are similar, which are not described herein again.
Fig. 16 is a schematic diagram of a hardware structure of a video transmission device according to an embodiment of the present application, and as shown in fig. 16, a video transmission device 160 according to the embodiment includes: a processor 1601 and a memory 1602; wherein
A memory 1602 for storing computer-executable instructions;
the processor 1601 is configured to execute the computer-executable instructions stored in the memory to implement the steps performed by the video transmission method of the video transmitting end in the foregoing embodiments. Reference may be made in particular to the description relating to the method embodiments described above.
Alternatively, the memory 1602 may be separate or integrated with the processor 1601.
When the memory 1602 is provided separately, the video transmission device further includes a bus 1603 for connecting the memory 1602 and the processor 1601.
Fig. 17 is a schematic diagram of a hardware structure of a video transmission device according to an embodiment of the present application, and as shown in fig. 17, a video transmission device 170 according to the embodiment includes: a processor 1701 and a memory 1702; wherein
A memory 1702 for storing computer-executable instructions;
the processor 1701 is configured to execute the computer executable instructions stored in the memory to implement the steps performed by the video transmission method at the video receiving end in the above embodiments. Reference may be made in particular to the description relating to the method embodiments described above.
Alternatively, the memory 1702 may be separate or integrated with the processor 1701.
When the memory 1702 is provided separately, the video transmission apparatus further includes a bus 1703 for connecting the memory 1702 and the processor 1701.
An embodiment of the present application further provides a computer-readable storage medium, in which computer-executable instructions are stored; when a processor executes the computer-executable instructions, the video transmission method performed by the above video transmission device is implemented.
In the several embodiments provided in the present application, it should be understood that the disclosed apparatus and method may be implemented in other manners. For example, the above-described device embodiments are merely illustrative, and for example, the division of the modules is only one logical division, and other divisions may be realized in practice, for example, a plurality of modules may be combined or integrated into another system, or some features may be omitted, or not executed. In addition, the shown or discussed mutual coupling or direct coupling or communication connection may be an indirect coupling or communication connection through some interfaces, devices or modules, and may be in an electrical, mechanical or other form.
The integrated module implemented in the form of a software functional module may be stored in a computer-readable storage medium. The software functional module is stored in a storage medium and includes several instructions for enabling a computer device (which may be a personal computer, a server, or a network device) or a processor (processor) to execute some steps of the methods according to the embodiments of the present application.
It should be understood that the Processor may be a Central Processing Unit (CPU), other general purpose processors, a Digital Signal Processor (DSP), an Application Specific Integrated Circuit (ASIC), etc. A general purpose processor may be a microprocessor or the processor may be any conventional processor or the like. The steps of a method disclosed in connection with the present invention may be embodied directly in a hardware processor, or in a combination of the hardware and software modules within the processor.
The memory may comprise a high-speed RAM memory, and may further comprise a non-volatile storage NVM, such as at least one disk memory, and may also be a usb disk, a removable hard disk, a read-only memory, a magnetic or optical disk, etc.
The bus may be an Industry Standard Architecture (ISA) bus, a Peripheral Component Interconnect (PCI) bus, an Extended ISA (EISA) bus, or the like. The bus may be divided into an address bus, a data bus, a control bus, etc. For ease of illustration, the buses in the figures of the present application are not limited to only one bus or one type of bus.
The storage medium may be implemented by any type or combination of volatile or non-volatile memory devices, such as Static Random Access Memory (SRAM), electrically erasable programmable read-only memory (EEPROM), erasable programmable read-only memory (EPROM), programmable read-only memory (PROM), read-only memory (ROM), magnetic memory, flash memory, magnetic or optical disks. A storage media may be any available media that can be accessed by a general purpose or special purpose computer.
Those of ordinary skill in the art will understand that: all or a portion of the steps of implementing the above-described method embodiments may be performed by hardware associated with program instructions. The foregoing program may be stored in a computer-readable storage medium. When executed, the program performs steps comprising the method embodiments described above; and the aforementioned storage medium includes: various media that can store program codes, such as ROM, RAM, magnetic or optical disks.
Finally, it should be noted that: the above embodiments are only used for illustrating the technical solutions of the present application, and not for limiting the same; although the present application has been described in detail with reference to the foregoing embodiments, it should be understood by those of ordinary skill in the art that: the technical solutions described in the foregoing embodiments may still be modified, or some or all of the technical features may be equivalently replaced; and the modifications or the substitutions do not make the essence of the corresponding technical solutions depart from the scope of the technical solutions of the embodiments of the present application.

Claims (20)

1. A video transmission method, applied to a video transmitting end, the method comprising:
determining a plurality of video units in a video frame, wherein each video unit comprises a preset number of pixels of the video frame;
determining an image space corresponding to a preset video channel, wherein the number of pixels in the image space is greater than the number of pixels in the video frame;
determining a filling area of each video unit in the image space according to the pixel position of the video unit in the video frame and a mapping relationship, filling each filling area with its corresponding video unit, and filling preset information into at least one non-filling area to obtain smoothed video information, wherein at least one non-filling area is arranged between two of the plurality of filling areas of the image space, and the mapping relationship indicates a correspondence between video units in the video frame and pixel positions in the image space; and
sending the smoothed video information through the preset video channel;
wherein, if a video unit comprises one row of pixels of the video frame and the mapping relationship indicates a correspondence between that row of pixels and N rows of pixels in the image space, then for any such video unit the filling area is either an area within the S-th of the N rows whose number of pixels equals the number of pixels in the video unit, or an area composed of pixels drawn from different ones of the N rows whose total number of pixels equals the number of pixels in the video unit, N being an integer not less than 1 and S being an integer not less than 1 and not more than N;
wherein the image space comprises a plurality of filling areas and a plurality of non-filling areas; and
one non-filling area is arranged between every two adjacent filling areas; or
no non-filling area is arranged between at least two of the plurality of filling areas.
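To make the scheme of claim 1 concrete, here is a minimal illustrative sketch, not the patented implementation: each video unit is assumed to be one row of the frame, the assumed mapping sends frame row r to image-space row r * N with S = 1, the fill region is the leftmost pixels of that row, and every other image-space pixel is a non-filling area carrying a preset value. The function and variable names, and this particular mapping, are illustrative assumptions.

```python
# Sketch of the transmitting-end padding of claim 1 (assumed conventions:
# one video unit = one frame row; frame row r maps to image-space row r * n,
# i.e. N rows per frame row with S = 1; preset info is a constant pixel value).

def smooth_frame(frame, img_rows, img_cols, n=2, preset=0):
    """Map a frame (list of pixel rows) into a larger image space."""
    assert img_rows >= len(frame) * n and img_cols >= len(frame[0])
    # Start from an image space made entirely of non-filled (preset) pixels.
    space = [[preset] * img_cols for _ in range(img_rows)]
    for r, row in enumerate(frame):
        target = r * n                 # the S-th (here: first) of the N mapped rows
        space[target][:len(row)] = row # fill region = this unit's pixels
    return space

frame = [[1, 2, 3], [4, 5, 6]]                                  # 2x3 frame
space = smooth_frame(frame, img_rows=4, img_cols=4, n=2, preset=0)
# space[0][:3] == [1, 2, 3]; space[1] and space[3] hold only preset pixels,
# so a non-filling row sits between the two filling areas.
```

With n = 2 every filled row is followed by a row of preset pixels, which is one way to realize "one non-filling area between every two adjacent filling areas".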
2. The method of claim 1, further comprising, before determining the filling area of each video unit in the image space:
acquiring the mapping relationship between the video frame and the image space.
3. The method of claim 2, wherein, for any one video unit, determining the filling area of the video unit in the image space according to the pixel position of the video unit in the video frame and the mapping relationship comprises:
determining a candidate region in the image space according to the pixel position of the video unit in the video frame and the mapping relationship, wherein the number of pixels in the candidate region is greater than the number of pixels in the video unit; and
determining the filling area within the candidate region according to the number of pixels in the video unit, wherein the number of pixels in the filling area equals the number of pixels in the video unit.
4. The method of claim 3, wherein, if the video unit comprises one row of pixels of the video frame, the mapping relationship indicates a correspondence between row numbers in the video frame and row numbers in the image space; and
determining the candidate region in the image space according to the pixel position of the video unit in the video frame and the mapping relationship comprises:
determining at least one target row number in the image space according to the row number corresponding to the video unit and the mapping relationship; and
determining the area occupied by the row or rows corresponding to the at least one target row number as the candidate region.
5. The method of claim 3, wherein, if the video unit comprises one column of pixels of the video frame, the mapping relationship indicates a correspondence between column numbers in the video frame and column numbers in the image space; and
determining the candidate region in the image space according to the pixel position of the video unit in the video frame and the mapping relationship comprises:
determining at least one target column number in the image space according to the column number corresponding to the video unit and the mapping relationship; and
determining the area occupied by the column or columns corresponding to the at least one target column number as the candidate region.
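Claims 3-5 split fill-region determination into two steps: the mapping first yields a candidate region larger than the video unit, and the fill region is then cut out of it by pixel count. A hedged sketch under an assumed row-number mapping (the `mapping` dict, the function names, and the "take the first unit_len pixels" selection rule are illustrative choices, not the patent's):

```python
# Sketch of claims 3-4: mapping -> candidate region -> fill region.

def candidate_region(row_no, mapping, img_cols):
    """Claim 4: the mapping yields target row numbers; the candidate region
    is every pixel position (r, c) on those image-space rows."""
    targets = mapping[row_no]          # at least one target row number
    return [(r, c) for r in targets for c in range(img_cols)]

def fill_region(cand, unit_len):
    """Claim 3: the candidate region is strictly larger than the unit;
    pick a sub-area with exactly as many pixels as the unit."""
    assert len(cand) > unit_len
    return cand[:unit_len]             # one possible selection rule

mapping = {0: [0, 1], 1: [2, 3]}       # assumed frame-row -> image-space rows
cand = candidate_region(0, mapping, img_cols=4)   # 8 candidate pixels
region = fill_region(cand, unit_len=3)            # 3-pixel fill region
```

The column-based variant of claim 5 is symmetric: swap the roles of rows and columns in `candidate_region`.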
6. The method of claim 1, wherein:
each non-filling area comprises the same number of pixels; or
at least two of the plurality of non-filling areas comprise different numbers of pixels; or
for every two of the plurality of non-filling areas, the difference between their numbers of pixels is less than or equal to a first threshold.
7. A video transmission method, applied to a video receiving end, the method comprising:
receiving, through a preset video channel, smoothed video information corresponding to a video frame, wherein the video frame comprises a plurality of video units and each video unit comprises a preset number of pixels of the video frame;
determining a filling area of each video unit in the smoothed video information according to the pixel position of the video unit in the video frame and a mapping relationship, wherein the number of pixels in the smoothed video information is greater than the number of pixels in the video frame, a non-filling area is arranged between at least two of a plurality of filling areas in an image space, and the mapping relationship indicates a correspondence between video units in the video frame and pixel positions in the image space;
acquiring the plurality of video units from the smoothed video information according to the filling areas of the plurality of video units; and
determining the video frame from the plurality of video units;
wherein, if a video unit comprises one row of pixels of the video frame and the mapping relationship indicates a correspondence between that row of pixels and N rows of pixels in the image space, then for any such video unit the filling area is either an area within the S-th of the N rows whose number of pixels equals the number of pixels in the video unit, or an area composed of pixels drawn from different ones of the N rows whose total number of pixels equals the number of pixels in the video unit, N being an integer not less than 1 and S being an integer not less than 1 and not more than N;
wherein the image space comprises a plurality of filling areas and a plurality of non-filling areas; and
one non-filling area is arranged between every two adjacent filling areas; or
no non-filling area is arranged between at least two of the plurality of filling areas.
8. The method of claim 7, further comprising, before determining the filling area of each video unit in the smoothed video information:
acquiring the mapping relationship between the video frame and an image space corresponding to the preset video channel.
9. The method of claim 8, wherein, for any one video unit, determining the filling area of the video unit in the smoothed video information according to the pixel position of the video unit in the video frame and the mapping relationship comprises:
determining a candidate region in the smoothed video information according to the pixel position of the video unit in the video frame and the mapping relationship, wherein the number of pixels in the candidate region is greater than the number of pixels in the video unit; and
determining the filling area within the candidate region according to the number of pixels in the video unit, wherein the number of pixels in the filling area equals the number of pixels in the video unit.
10. The method of claim 9, wherein, if the video unit comprises one row of pixels of the video frame, the mapping relationship indicates a correspondence between row numbers in the video frame and row numbers in the image space; and
determining the candidate region in the smoothed video information according to the pixel position of the video unit in the video frame and the mapping relationship comprises:
determining at least one target row number in the smoothed video information according to the row number corresponding to the video unit and the mapping relationship; and
determining the area occupied by the row or rows corresponding to the at least one target row number in the smoothed video information as the candidate region.
11. The method of claim 9, wherein, if the video unit comprises one column of pixels of the video frame, the mapping relationship indicates a correspondence between column numbers in the video frame and column numbers in the image space; and
determining the candidate region in the smoothed video information according to the pixel position of the video unit in the video frame and the mapping relationship comprises:
determining at least one target column number in the image space according to the column number corresponding to the video unit and the mapping relationship; and
determining the area occupied by the column or columns corresponding to the at least one target column number in the smoothed video information as the candidate region.
12. The method of claim 7, wherein:
each non-filling area comprises the same number of pixels; or
at least two of the plurality of non-filling areas comprise different numbers of pixels; or
for every two of the plurality of non-filling areas, the difference between their numbers of pixels is less than or equal to a first threshold.
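The receiving-end method of claim 7 inverts the sender's padding: locate each unit's fill region via the mapping, read out the unit's pixels, and discard the preset non-filling pixels. A sketch under assumed conventions (one unit per frame row; frame row r stored in the first `frame_cols` pixels of image-space row r * n; all names are illustrative, not the patent's):

```python
# Receiver-side sketch of claim 7 (illustrative conventions as stated above).

def recover_frame(space, frame_rows, frame_cols, n=2):
    """Locate each unit's fill region via the assumed mapping and
    rebuild the frame, skipping the preset non-filling pixels."""
    return [space[r * n][:frame_cols] for r in range(frame_rows)]

# A 4x4 smoothed image space carrying a 2x3 frame (preset pixels are 0):
space = [[1, 2, 3, 0],
         [0, 0, 0, 0],
         [4, 5, 6, 0],
         [0, 0, 0, 0]]
frame = recover_frame(space, frame_rows=2, frame_cols=3, n=2)
# frame == [[1, 2, 3], [4, 5, 6]] -- the preset pixels never reach the output
```

Because the receiver applies the same mapping as the sender, the round trip is lossless as long as both ends agree on the mapping relationship.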
13. A video transmission apparatus, applied to a video transmitting end, the apparatus comprising:
a determining module, configured to determine a plurality of video units in a video frame, wherein each video unit comprises a preset number of pixels of the video frame;
the determining module being further configured to determine an image space corresponding to a preset video channel, wherein the number of pixels in the image space is greater than the number of pixels in the video frame;
a filling module, configured to determine a filling area of each video unit in the image space according to the pixel position of the video unit in the video frame and a mapping relationship, fill each filling area with its corresponding video unit, and fill preset information into at least one non-filling area to obtain smoothed video information, wherein one non-filling area is arranged between at least two of the plurality of filling areas of the image space, and the mapping relationship indicates a correspondence between video units in the video frame and pixel positions in the image space; and
a sending module, configured to send the smoothed video information through the preset video channel;
wherein, if a video unit comprises one row of pixels of the video frame and the mapping relationship indicates a correspondence between that row of pixels and N rows of pixels in the image space, then for any such video unit the filling area is either an area within the S-th of the N rows whose number of pixels equals the number of pixels in the video unit, or an area composed of pixels drawn from different ones of the N rows whose total number of pixels equals the number of pixels in the video unit, N being an integer not less than 1 and S being an integer not less than 1 and not more than N;
wherein the image space comprises a plurality of the filling areas and a plurality of the non-filling areas; and
one non-filling area is arranged between every two adjacent filling areas; or
no non-filling area is arranged between at least two of the plurality of filling areas.
14. The apparatus of claim 13, wherein the filling module is further configured to:
acquire the mapping relationship between the video frame and the image space;
determine a candidate region in the image space according to the pixel position of the video unit in the video frame and the mapping relationship, wherein the number of pixels in the candidate region is greater than the number of pixels in the video unit; and
determine the filling area within the candidate region according to the number of pixels in the video unit, wherein the number of pixels in the filling area equals the number of pixels in the video unit;
wherein, if the video unit comprises one row of pixels of the video frame, the mapping relationship indicates a correspondence between row numbers in the video frame and row numbers in the image space, and the filling module is configured to:
determine at least one target row number in the image space according to the row number corresponding to the video unit and the mapping relationship; and
determine the area occupied by the row or rows corresponding to the at least one target row number as the candidate region;
wherein, if the video unit comprises one column of pixels of the video frame, the mapping relationship indicates a correspondence between column numbers in the video frame and column numbers in the image space, and the filling module is configured to:
determine at least one target column number in the image space according to the column number corresponding to the video unit and the mapping relationship; and
determine the area occupied by the column or columns corresponding to the at least one target column number as the candidate region;
wherein each non-filling area comprises the same number of pixels; or
at least two of the plurality of non-filling areas comprise different numbers of pixels; or
for every two of the plurality of non-filling areas, the difference between their numbers of pixels is less than or equal to a first threshold.
15. A video transmission apparatus, applied to a video receiving end, the apparatus comprising:
a receiving module, configured to receive, through a preset video channel, smoothed video information corresponding to a video frame, wherein the video frame comprises a plurality of video units and each video unit comprises a preset number of pixels of the video frame;
a determining module, configured to determine a filling area of each video unit in the smoothed video information according to the pixel position of the video unit in the video frame and a mapping relationship, wherein the number of pixels in the smoothed video information is greater than the number of pixels in the video frame, a non-filling area is arranged between at least two of a plurality of filling areas in an image space, and the mapping relationship indicates a correspondence between video units in the video frame and pixel positions in the image space; and
an obtaining module, configured to obtain the plurality of video units from the smoothed video information according to the filling areas of the plurality of video units;
the determining module being further configured to determine the video frame from the plurality of video units;
wherein, if a video unit comprises one row of pixels of the video frame and the mapping relationship indicates a correspondence between that row of pixels and N rows of pixels in the image space, then for any such video unit the filling area is either an area within the S-th of the N rows whose number of pixels equals the number of pixels in the video unit, or an area composed of pixels drawn from different ones of the N rows whose total number of pixels equals the number of pixels in the video unit, N being an integer not less than 1 and S being an integer not less than 1 and not more than N;
wherein the image space comprises a plurality of the filling areas and a plurality of the non-filling areas; and
one non-filling area is arranged between every two adjacent filling areas; or
no non-filling area is arranged between at least two of the plurality of filling areas.
16. The apparatus of claim 15, wherein the determining module is further configured to:
acquire the mapping relationship between the video frame and an image space corresponding to the preset video channel;
determine a candidate region in the smoothed video information according to the pixel position of the video unit in the video frame and the mapping relationship, wherein the number of pixels in the candidate region is greater than the number of pixels in the video unit; and
determine the filling area within the candidate region according to the number of pixels in the video unit, wherein the number of pixels in the filling area equals the number of pixels in the video unit;
wherein, if the video unit comprises one row of pixels of the video frame, the mapping relationship indicates a correspondence between row numbers in the video frame and row numbers in the image space, and the determining module is configured to:
determine at least one target row number in the smoothed video information according to the row number corresponding to the video unit and the mapping relationship; and
determine the area occupied by the row or rows corresponding to the at least one target row number in the smoothed video information as the candidate region;
wherein, if the video unit comprises one column of pixels of the video frame, the mapping relationship indicates a correspondence between column numbers in the video frame and column numbers in the image space, and the determining module is configured to:
determine at least one target column number in the image space according to the column number corresponding to the video unit and the mapping relationship; and
determine the area occupied by the column or columns corresponding to the at least one target column number in the smoothed video information as the candidate region;
wherein each non-filling area comprises the same number of pixels; or
at least two of the plurality of non-filling areas comprise different numbers of pixels; or
for every two of the plurality of non-filling areas, the difference between their numbers of pixels is less than or equal to a first threshold.
17. A video transmission device, comprising:
a memory for storing a program;
a processor for executing the program stored in the memory, the processor being configured to perform the method of any one of claims 1 to 6 when the program is executed.
18. A video transmission device, comprising:
a memory for storing a program;
a processor for executing the program stored in the memory, the processor being configured to perform the method of any one of claims 7 to 12 when the program is executed.
19. A video transmission system, comprising: a video transmitting device and a video receiving device;
wherein the video transmitting device is configured to perform the method of any one of claims 1 to 6; and
the video receiving device is configured to perform the method of any of claims 7 to 12.
20. A computer-readable storage medium comprising instructions which, when executed on a computer, cause the computer to perform the method of any one of claims 1 to 6 or any one of claims 7 to 12.
CN202110797818.0A 2021-07-14 2021-07-14 Video transmission method and device Active CN113542805B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110797818.0A CN113542805B (en) 2021-07-14 2021-07-14 Video transmission method and device


Publications (2)

Publication Number Publication Date
CN113542805A CN113542805A (en) 2021-10-22
CN113542805B true CN113542805B (en) 2023-01-24

Family

ID=78099226


Country Status (1)

Country Link
CN (1) CN113542805B (en)

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102821305A (en) * 2011-06-06 2012-12-12 索尼公司 Signal transmission/reception apparatus and method and signal transmission system
CN104168438A (en) * 2014-08-23 2014-11-26 广州市奥威亚电子科技有限公司 Video transmission method for achieving point-by-point correspondence based on SDI
CN105594204A (en) * 2013-10-02 2016-05-18 杜比实验室特许公司 Transmitting display management metadata over HDMI
CN109076215A (en) * 2016-08-23 2018-12-21 深圳市大疆创新科技有限公司 System and method for improving the efficiency encoded/decoded to bending view video
CN109495707A (en) * 2018-12-26 2019-03-19 中国科学院西安光学精密机械研究所 High-speed video acquisition and transmission system and method


Also Published As

Publication number Publication date
CN113542805A (en) 2021-10-22

Similar Documents

Publication Publication Date Title
WO2021047429A1 (en) Image rendering method and device, apparatus, and storage medium
CN109120869B (en) Double-light image integration method, integration equipment and unmanned aerial vehicle
EP4002281A1 (en) Layer composition method and apparatus, electronic device, and storage medium
KR102617258B1 (en) Image processing method and apparatus
CN109992541B (en) Data carrying method, computing device and computer storage medium
CN111158619A (en) Picture processing method and device
CN114598893B (en) Text video realization method and system, electronic equipment and storage medium
US20170155890A1 (en) Method and device for stereoscopic image display processing
CN111464757A (en) Video processing method, device and system
CN105589667B (en) Method and device for capturing display image of display equipment
CN114040246A (en) Image format conversion method, device, equipment and storage medium of graphic processor
CN114466228B (en) Method, equipment and storage medium for improving smoothness of screen projection display
CN113559498B (en) Three-dimensional model display method and device, storage medium and electronic equipment
CN113542805B (en) Video transmission method and device
CN113286100B (en) Configuration method and device of video output interface and video output equipment
EP4050567A1 (en) Information processing device, 3d data generation method, and program
CN114554126B (en) Baseboard management control chip, video data transmission method and server
CN110460854A (en) Method for compressing image
CN114153408B (en) Image display control method and related equipment
CN113658049A (en) Image transposition method, equipment and computer readable storage medium
CN114625891A (en) Multimedia data processing method, device and system
CN115706725A (en) Method and device for acquiring decoding rendering configuration, electronic equipment and storage medium
CN109361956B (en) Time-based video cropping methods and related products
CN109274955B (en) Compression and synchronization method and system for light field video and depth map, and electronic equipment
CN113766315A (en) Display device and video information processing method

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant