CN112333469B - System based on mobile network and wifi video transmission enhancement method - Google Patents


Info

Publication number
CN112333469B
Authority
CN
China
Prior art keywords
module
video
slice
fragment
image data
Prior art date
Legal status
Active
Application number
CN202011159735.0A
Other languages
Chinese (zh)
Other versions
CN112333469A (en)
Inventor
陈尚武
李华松
倪仰
Current Assignee
Hangzhou Xujian Science And Technology Co ltd
Original Assignee
Hangzhou Xujian Science And Technology Co ltd
Priority date
Filing date
Publication date
Application filed by Hangzhou Xujian Science And Technology Co ltd
Priority to CN202011159735.0A
Publication of CN112333469A
Application granted
Publication of CN112333469B
Legal status: Active
Anticipated expiration

Classifications

    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N 21/00 Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N 21/40 Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N 21/43 Processing of content or additional data, e.g. demultiplexing additional data from a digital video stream; Elementary client operations, e.g. monitoring of home network or synchronising decoder's clock; Client middleware
    • H04N 21/44 Processing of video elementary streams, e.g. splicing a video clip retrieved from local storage with an incoming video stream or rendering scenes according to encoded video stream scene graphs
    • H04N 21/4402 Processing of video elementary streams, e.g. splicing a video clip retrieved from local storage with an incoming video stream or rendering scenes according to encoded video stream scene graphs involving reformatting operations of video signals for household redistribution, storage or real-time display
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N 21/00 Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N 21/20 Servers specifically adapted for the distribution of content, e.g. VOD servers; Operations thereof
    • H04N 21/23 Processing of content or additional data; Elementary server operations; Server middleware
    • H04N 21/234 Processing of video elementary streams, e.g. splicing of video streams or manipulating encoded video stream scene graphs
    • H04N 21/2343 Processing of video elementary streams, e.g. splicing of video streams or manipulating encoded video stream scene graphs involving reformatting operations of video signals for distribution or compliance with end-user requests or end-user device requirements
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N 21/00 Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N 21/40 Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N 21/41 Structure of client; Structure of client peripherals
    • H04N 21/4104 Peripherals receiving signals from specially adapted client devices
    • H04N 21/4122 Peripherals receiving signals from specially adapted client devices additional display device, e.g. video projector
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N 21/00 Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N 21/40 Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N 21/43 Processing of content or additional data, e.g. demultiplexing additional data from a digital video stream; Elementary client operations, e.g. monitoring of home network or synchronising decoder's clock; Client middleware
    • H04N 21/431 Generation of visual interfaces for content selection or interaction; Content or additional data rendering
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N 21/00 Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N 21/40 Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N 21/43 Processing of content or additional data, e.g. demultiplexing additional data from a digital video stream; Elementary client operations, e.g. monitoring of home network or synchronising decoder's clock; Client middleware
    • H04N 21/436 Interfacing a local distribution network, e.g. communicating with another STB or one or more peripheral devices inside the home
    • H04N 21/4363 Adapting the video stream to a specific local network, e.g. a Bluetooth® network
    • H04N 21/43637 Adapting the video stream to a specific local network, e.g. a Bluetooth® network involving a wireless protocol, e.g. Bluetooth, RF or wireless LAN [IEEE 802.11]
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N 21/00 Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N 21/80 Generation or processing of content or additional data by content creator independently of the distribution process; Content per se
    • H04N 21/83 Generation or processing of protective or descriptive data associated with content; Content structuring
    • H04N 21/845 Structuring of content, e.g. decomposing content into time segments
    • H04N 21/8456 Structuring of content, e.g. decomposing content into time segments by decomposing the content in the time domain, e.g. in time segments
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N 21/00 Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N 21/80 Generation or processing of content or additional data by content creator independently of the distribution process; Content per se
    • H04N 21/85 Assembly of content; Generation of multimedia applications
    • H04N 21/854 Content authoring
    • H04N 21/8547 Content authoring involving timestamps for synchronizing content
    • Y GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02 TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02D CLIMATE CHANGE MITIGATION TECHNOLOGIES IN INFORMATION AND COMMUNICATION TECHNOLOGIES [ICT], I.E. INFORMATION AND COMMUNICATION TECHNOLOGIES AIMING AT THE REDUCTION OF THEIR OWN ENERGY USE
    • Y02D 30/00 Reducing energy consumption in communication networks
    • Y02D 30/70 Reducing energy consumption in communication networks in wireless communication networks

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Computer Networks & Wireless Communication (AREA)
  • Computer Security & Cryptography (AREA)
  • Compression Or Coding Systems Of Tv Signals (AREA)
  • Two-Way Televisions, Distribution Of Moving Picture Or The Like (AREA)

Abstract

The invention discloses a system based on a mobile network and wifi video transmission enhancement method, comprising a video source module, a picture disassembling module, a slice one encoding module, a slice two encoding module, a motion detection module, a wifi network transmission module, a mobile network transmission module, a network receiving module, a network packing module, a slice one decoding module, a slice two decoding module, a picture merging module, a picture frame doubling module and a video display module. The invention transmits independent and complementary video encodings over wifi and the mobile network simultaneously and automatically selects high frame rate or high resolution according to image change: if the picture is static, the two channels jointly output a high-resolution image; if the picture is in motion, they jointly output a high-frame-rate video image; and if either the wifi or the mobile network channel fails, a video image with limited distortion or a low frame rate can still be output.

Description

System based on mobile network and wifi video transmission enhancement method
Technical Field
The invention relates to the field of network communication, in particular to a system based on a mobile network and a wifi video transmission enhancement method.
Background
At present, mobile devices (such as law enforcement recorders and mobile phones) generally support both wifi and a mobile network. In such devices wifi has a higher priority than the mobile network, and only one of the two connections can be used at a time: when the wifi network condition is poor, only the mobile network can be used, at a higher traffic cost; and when both the wifi and mobile network conditions are poor, the only option is to lower the requirements (for example, playing video at a low frame rate and low resolution). Therefore, a method is needed that uses wifi and the mobile network simultaneously to enhance video transmission, so as to increase the resolution or frame rate of the video and improve the reliability of the mobile device.
Disclosure of Invention
The invention aims to provide a system based on a mobile network and wifi video transmission enhancement method, in which wifi and the mobile network are used to transmit video images simultaneously. The technical scheme for realizing the purpose of the invention is as follows:
A system based on a mobile network and wifi video transmission enhancement method comprises a video source module, a picture disassembling module, a slice one encoding module, a slice two encoding module, a motion detection module, a wifi network transmission module, a mobile network transmission module, a network receiving module, a network packing module, a slice one decoding module, a slice two decoding module, a picture merging module, a picture frame doubling module and a video display module;
a video source module: generating a high-frame-rate and high-resolution YUV video stream, wherein each video frame carries a timestamp;
A picture disassembling module: receiving the YUV video stream of the video source module, ordering the high-frame-rate video stream by time, and splitting it by frame parity into two low-frame-rate video streams, the odd frames forming the slice one video stream and the even frames forming the slice two video stream; since human eyes are less sensitive to horizontal change than to vertical change, the picture of each frame of the slice one and slice two video streams is further split by columns into two pictures, one of odd pixel columns and one of even pixel columns, so that each slice yields low-frame-rate, low-resolution odd and even blocks; the video streams obtained by splitting and the timestamps are sent to the slice one encoding module and the slice two encoding module;
A slice one encoding module: receiving the slice one video stream from the picture disassembling module and the notification of whether the picture is moving from the motion detection module. If the picture is not moving, the picture content of the slice one and slice two video streams differs little, and a high-resolution strategy is formed by the complementation of slice one and slice two: the slice one encoding module discards the even block of the slice one video stream, performs video coding compression on the odd block, and sends the compressed video stream, the timestamp and a merging identifier to the network packing module. If the picture is moving, the picture content of the slice one and slice two video streams differs greatly and is not suitable for complementary resolution enhancement, so a high-frame-rate strategy is formed by the complementation of slice one and slice two: the information of both the even block and the odd block is retained, linear interpolation is performed on each pair of corresponding pixels of the even block and the odd block to obtain a linearly interpolated block, the block is coded and compressed, and the compressed video stream, the timestamp and a frame doubling identifier are sent to the network packing module;
A slice two encoding module: receiving the slice two video stream from the picture disassembling module and the notification of whether the picture is moving from the motion detection module. If the picture is not moving, the picture content of the slice one and slice two video streams differs little, and a high-resolution strategy is formed by the complementation of slice one and slice two data: the slice two encoding module discards the odd block of the slice two video stream, performs video coding compression on the even block, and sends the compressed video stream, the timestamp and a merging identifier to the network packing module. If the picture is moving, the picture content of the slice one and slice two video streams differs greatly, so a high-frame-rate strategy is formed by the complementation of slice one and slice two: the information of both the even block and the odd block is retained, linear interpolation is performed on each pair of corresponding pixels of the even block and the odd block to obtain a linearly interpolated block, the block is coded and compressed, and the compressed video stream, the timestamp and a frame doubling identifier are sent to the network packing module;
A motion detection module: receiving the YUV video stream of the video source module and subtracting the luma (Y) pixel data of each pair of consecutive frames pixel by pixel; a pixel is considered moving if the absolute value of the difference exceeds a threshold. The area covered by the moving pixels extracted from the image is divided by the area of the image to obtain a motion coefficient; if the motion coefficient is below a threshold the image is considered not moving, otherwise it is considered moving. The picture-not-moving or picture-moving notification is sent to the slice one encoding module and the slice two encoding module;
wifi network transmission module: receiving the video stream of the network packaging module, and sending the video compressed stream to a network receiving module through a wifi network by using a wifi transmission network;
a mobile network transmission module: receiving the video stream of the network packaging module, and sending the video compressed stream to the network receiving module through a mobile network by using a mobile transmission network;
a network receiving module: receiving a video network stream sent by a wifi network transmission module and a mobile network transmission module; sending the video stream of the first fragment to a first fragment decoding module, and sending the video stream of the second fragment to a second fragment decoding module;
A network packing module: receiving the video streams of the slice one encoding module and the slice two encoding module, performing RTP (Real-time Transport Protocol) packing on the video stream, writing the timestamp into the RTP header and the frame doubling identifier or merging identifier into the RTP additional data; the network packing module sends the slice one video stream to the wifi network transmission module and the slice two video stream to the mobile network transmission module;
A slice one decoding module: receiving the slice one video stream from the network receiving module, removing the RTP header to extract the compressed frame data, taking the RTP timestamp as the frame timestamp and the frame doubling identifier or merging identifier from the RTP additional data, and video-decoding the compressed frame data to obtain the slice one YUV image data; if the frame doubling identifier is present, the slice one YUV image data and the timestamp are sent to the picture frame doubling module; if the merging identifier is present, the slice one YUV image data and the timestamp are sent to the picture merging module;
A slice two decoding module: receiving the slice two video stream from the network receiving module, removing the RTP header to extract the compressed frame data, taking the RTP timestamp as the frame timestamp and the frame doubling identifier or merging identifier from the RTP additional data, and video-decoding the compressed frame data to obtain the slice two YUV image data; if the frame doubling identifier is present, the slice two YUV image data and the timestamp are sent to the picture frame doubling module; if the merging identifier is present, the slice two YUV image data and the timestamp are sent to the picture merging module;
A picture merging module: receiving the slice one YUV image data and timestamps from the slice one decoding module and the slice two YUV image data and timestamps from the slice two decoding module, and sorting the YUV image data by timestamp. If neither the slice one nor the slice two YUV image data is lost, the slice one and slice two YUV image data alternate; adjacent slice one and slice two YUV image data are merged, with the slice one YUV image data placed in the odd columns and the slice two YUV image data placed in the even columns, finally yielding high-resolution image data. If part of the data of either slice is lost, the remaining YUV image data is interpolated horizontally: each pixel of adjacent YUV columns is linearly interpolated and the new column is inserted between the two adjacent columns, the last two columns generate a middle column by the same linear interpolation, and a rightmost column is then generated as the last column of the output image, so that high-resolution image data with limited distortion is still restored. Because the wifi and mobile network channels simultaneously carry mutually complementary videos (either complementary stream can be restored on its own, and having both improves image quality), a limited-distortion video image can still be restored when one channel fails, and high-resolution image data is obtained when both channels work normally. The generated image data is sent to the video display module;
A picture frame doubling module: receiving the slice one YUV image data and timestamps from the slice one decoding module and the slice two YUV image data and timestamps from the slice two decoding module, and sorting the YUV image data by timestamp. The YUV image data is interpolated horizontally: each pixel of adjacent YUV columns is linearly interpolated to obtain a new column which is inserted between the two adjacent columns, the last two columns generate a middle column by the same linear interpolation, and a rightmost column is then generated as the last column of the output image, restoring high-resolution image data by this image stretching. Because the wifi and mobile network channels carry video data simultaneously, a low-frame-rate video image is obtained when one channel fails, and high-frame-rate image data is obtained when both channels work normally. The generated YUV image data is sent to the video display module;
A video display module: receiving and rendering the YUV image data of the picture merging module and the YUV image data of the picture frame doubling module; when the image of the video source module is static, the wifi and mobile network channels simultaneously transmit mutually complementary video and jointly output a high-resolution image; when the image of the video source module is in motion, the wifi and mobile network channels simultaneously transmit interleaved independent video frames and jointly output a high-frame-rate video image.
The invention has the beneficial effects that: the invention transmits independent and complementary video encodings over wifi and the mobile network simultaneously and automatically selects high frame rate or high resolution according to image change; if the picture is static, a high-resolution image is jointly output; if the picture is in motion, a high-frame-rate video image is jointly output; and if either the wifi or the mobile network channel fails, a video image with limited distortion or a low frame rate can still be output.
Drawings
Fig. 1 is a flowchart of a system based on a mobile network and wifi video transmission enhancement method according to the present invention.
Detailed Description
The embodiments of the present invention are described in detail below with reference to the drawings. The embodiments are exemplary, are intended to explain the invention, and should not be construed as limiting it.
As shown in fig. 1, a system based on a mobile network and wifi video transmission enhancement method includes a video source module 1, a picture disassembling module 2, a slice one encoding module 3, a slice two encoding module 4, a motion detection module 5, a wifi network transmission module 6, a mobile network transmission module 7, a network receiving module 8, a network packing module 9, a slice one decoding module 10, a slice two decoding module 11, a picture merging module 12, a picture frame doubling module 13, and a video display module 14.
1. The video source module 1 generates a high-frame-rate, high-resolution YUV video stream (60 frames per second at 1920 x 1080), each video frame carrying a timestamp.
2. The picture disassembling module 2 receives and processes the YUV video stream of the video source module 1;
2.1, the picture disassembling module 2 sorts the high-frame-rate (60 fps) video stream by time and splits it by frame parity into two low-frame-rate (30 fps) video streams, the odd frames forming the slice one video stream and the even frames forming the slice two video stream;
2.2 Since human eyes are less sensitive to horizontal changes than to vertical changes, the picture (1920 x 1080) of each frame of the slice one and slice two YUV video streams is split into two pictures (960 x 1080), one of odd pixel columns and one of even pixel columns, so that each slice yields low-frame-rate, low-resolution parity-block image data (30 fps at 960 x 1080 resolution);
2.3, the split image data and the timestamps are sent to the slice one encoding module 3 and the slice two encoding module 4 (a code sketch of this splitting follows below).
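The splitting in steps 2.1-2.3 can be illustrated with a minimal Python/NumPy sketch. This is not part of the original disclosure; the function and variable names are illustrative, and only the luma plane of each 1920 x 1080 frame is handled.

```python
# Illustrative sketch of the picture disassembly step (module 2), assuming
# each frame's luma plane is a NumPy array of shape (1080, 1920).
import numpy as np

def disassemble(frames, timestamps):
    """Split a 60 fps stream into two 30 fps slice streams by frame parity,
    then split each frame into odd-column and even-column blocks (960 wide)."""
    slice_one, slice_two = [], []
    for i, (frame, ts) in enumerate(zip(frames, timestamps)):
        entry = {
            "ts": ts,
            "odd": frame[:, 0::2],   # odd pixel columns (1st, 3rd, ...)
            "even": frame[:, 1::2],  # even pixel columns (2nd, 4th, ...)
        }
        # 1st, 3rd, ... frames form slice one; 2nd, 4th, ... frames slice two
        (slice_one if i % 2 == 0 else slice_two).append(entry)
    return slice_one, slice_two

# toy input: a few dummy 1920x1080 luma frames at 60 fps
frames = [np.random.randint(0, 256, (1080, 1920), dtype=np.uint8) for _ in range(8)]
timestamps = [n / 60.0 for n in range(8)]
s1, s2 = disassemble(frames, timestamps)  # two 30 fps slice streams
```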
3. The slice one encoding module 3 receives the slice one video stream from the picture disassembling module 2 and the notification of whether the picture is moving from the motion detection module 5;
3.1 If the picture is not moving, the picture content of the slice one and slice two video streams differs little; a high-resolution strategy is formed by the complementation of slice one and slice two data: the slice one encoding module 3 discards the even block of the slice one video stream, performs video coding compression on the odd block, and sends the compressed video stream, the timestamp and the merging identifier to the network packing module 9;
3.2 If the picture is moving, the picture content of the slice one and slice two video streams differs greatly and is not suitable for complementary resolution enhancement; a high-frame-rate strategy is formed by the complementation of slice one and slice two: the information of both the even block and the odd block is retained, linear interpolation is performed on each pair of corresponding pixels of the even block and the odd block to obtain a linearly interpolated block, the block is coded and compressed, and the compressed video stream, the timestamp and the frame doubling identifier are sent to the network packing module 9 (a code sketch of this strategy selection follows below).
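A minimal sketch of the strategy selection in steps 3.1-3.2 follows; the codec itself is not specified in the patent, so encode() below is a placeholder for any video encoder and the identifier strings are illustrative.

```python
# Illustrative strategy selection for the slice one encoding module (module 3).
import numpy as np

def encode(block):
    # placeholder for real video coding compression of one block
    return block.tobytes()

def slice_one_encode(entry, picture_is_moving):
    if not picture_is_moving:
        # high-resolution strategy: keep only the odd-column block
        return encode(entry["odd"]), entry["ts"], "merge"
    # high-frame-rate strategy: linearly interpolate odd and even blocks
    blended = ((entry["odd"].astype(np.uint16) +
                entry["even"].astype(np.uint16)) // 2).astype(np.uint8)
    return encode(blended), entry["ts"], "double_frame"
```

The slice two encoding module described in step 4 behaves symmetrically, keeping the even-column block instead of the odd-column block in the high-resolution case.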
4. The slice two encoding module 4 receives the slice two video stream from the picture disassembling module 2 and the notification of whether the picture is moving from the motion detection module 5;
4.1 If the picture is not moving, the picture content of the slice one and slice two video streams differs little; a high-resolution strategy is formed by the complementation of slice one and slice two: the slice two encoding module 4 discards the odd block of the slice two video stream, performs video coding compression on the even block, and sends the compressed video stream, the timestamp and the merging identifier to the network packing module 9;
4.2 If the picture is moving, the picture content of the slice one and slice two video streams differs greatly; a high-frame-rate strategy is formed by the complementation of slice one and slice two: the information of both the even block and the odd block is retained, linear interpolation (taking the midpoint value) is performed on each pair of corresponding pixels of the even block and the odd block to obtain a linearly interpolated block, the block is coded and compressed, and the compressed video stream, the timestamp and the frame doubling identifier are sent to the network packing module 9.
5. The motion detection module 5 receives the YUV video stream of the video source module 1 and subtracts the luma (Y) pixel data of each pair of consecutive frames pixel by pixel; a pixel is considered moving if the absolute value of the difference exceeds a threshold. The area covered by the moving pixels extracted from the image is divided by the area of the image to obtain a motion coefficient; if the motion coefficient is below a threshold the image is considered not moving, otherwise it is considered moving. The picture-not-moving or picture-moving notification is sent to the slice one encoding module 3 and the slice two encoding module 4 (a code sketch of this rule follows below).
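The motion rule in step 5 amounts to thresholded frame differencing; the sketch below shows it with illustrative threshold values, which the patent does not specify.

```python
# Illustrative motion detection (module 5): per-pixel absolute luma difference
# between consecutive frames, then the fraction of moving pixels is compared
# with a motion-coefficient threshold.
import numpy as np

PIXEL_DIFF_THRESHOLD = 25      # |Y_t - Y_(t-1)| above this -> pixel moves (assumed value)
MOTION_COEFF_THRESHOLD = 0.05  # moving-pixel fraction above this -> picture moves (assumed value)

def picture_is_moving(prev_y, curr_y):
    diff = np.abs(curr_y.astype(np.int16) - prev_y.astype(np.int16))
    moving_pixels = np.count_nonzero(diff > PIXEL_DIFF_THRESHOLD)
    motion_coeff = moving_pixels / curr_y.size   # motion area / image area
    return motion_coeff >= MOTION_COEFF_THRESHOLD
```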
6. The wifi network transmission module 6 receives the video stream of the network packing module 9 and uses the wifi transmission network to send the compressed video stream to the network receiving module 8 over the wifi network.
7. The mobile network transmission module 7 receives the video stream of the network packaging module 9, and uses the mobile transmission network to send the video compression stream to the network receiving module 8 through the mobile network.
8. The network receiving module 8 receives the video network stream sent by the wifi network transmission module 6 and the mobile network transmission module 7; the video stream of slice one is sent to the slice one decoding module 10, and the video stream of slice two is sent to the slice two decoding module 11.
9. The network packing module 9 receives the video streams of the slice one encoding module 3 and the slice two encoding module 4, performs RTP packing on the video stream, writes the timestamp into the RTP header and the frame doubling identifier or merging identifier into the RTP additional data; the network packing module 9 sends the slice one video stream to the wifi network transmission module 6 and the slice two video stream to the mobile network transmission module 7 (a simplified packing sketch follows below).
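A toy packing sketch for step 9: the frame timestamp goes into the RTP header timestamp field and the merge / frame doubling identifier travels with the payload. This is not a complete RFC 3550 implementation; in practice the identifier would be carried in an RTP header extension, and the payload type and SSRC below are placeholders.

```python
# Toy RTP-style packer (module 9); not a full RTP implementation.
import struct

def pack_rtp(payload: bytes, timestamp: int, identifier: str, seq: int,
             ssrc: int = 0x12345678, payload_type: int = 96) -> bytes:
    first_byte = 0x80                      # version 2, no padding/extension/CSRC
    second_byte = payload_type & 0x7F      # marker bit clear
    header = struct.pack("!BBHII", first_byte, second_byte,
                         seq & 0xFFFF, timestamp & 0xFFFFFFFF, ssrc)
    # carry the identifier as one extra byte in front of the payload
    ident = b"\x01" if identifier == "double_frame" else b"\x00"
    return header + ident + payload
```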
10. The slice one decoding module 10 receives the slice one video stream from the network receiving module 8, removes the RTP header to extract the compressed frame data, takes the RTP timestamp as the frame timestamp and the frame doubling identifier or merging identifier from the RTP additional data, and video-decodes the compressed frame data to obtain the slice one YUV image data; if the frame doubling identifier is present, the slice one YUV image data and the timestamp are sent to the picture frame doubling module 13; if the merging identifier is present, the slice one YUV image data and the timestamp are sent to the picture merging module 12.
11. The second fragment decoding module 11 receives the second fragment video stream of the network receiving module 8, removes the RTP header to take out the frame compressed data of the video stream, takes out the RTP timestamp as the frame timestamp, takes out the frame doubling identifier or merging identifier from the additional data of the RTP, and performs video decoding on the frame compressed data to obtain the YUV image data of the second fragment; if the frame doubling identifier exists, the YUV image data and the timestamp of the second slice are sent to the picture frame doubling module 13; if the merging identifier exists, the YUV image data and the timestamp of the slice two are sent to the picture merging module 12.
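The decoding side (steps 10 and 11) reverses that packing; the sketch below strips the toy 12-byte header produced by the packer above and hands the compressed frame on for video decoding.

```python
# Counterpart toy unpacker for the slice decoding modules (modules 10 and 11).
import struct

def unpack_rtp(packet: bytes):
    _, _, seq, timestamp, _ = struct.unpack("!BBHII", packet[:12])
    identifier = "double_frame" if packet[12:13] == b"\x01" else "merge"
    compressed_frame = packet[13:]          # would be passed to the video decoder
    return seq, timestamp, identifier, compressed_frame
```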
12. The picture merging module 12 receives the slice one YUV image data and timestamps from the slice one decoding module 10 and the slice two YUV image data and timestamps from the slice two decoding module 11;
12.1, sorting the YUV image data according to the time stamp, and if the YUV image data of the first slice and the YUV image data of the second slice are not lost, interlacing the YUV image data of the first slice and the YUV image data of the second slice;
12.2 adjacent YUV image data of the first slice and YUV image data of the second slice are merged, the YUV image data of the first slice is placed in an odd column, the YUV image data of the second slice is placed in an even column, and high-resolution (1920 x 1080) image data is finally obtained;
12.3 If part of the data of either slice is lost, the remaining YUV image data is interpolated horizontally: each pixel of adjacent YUV columns is linearly interpolated (taking the midpoint value), the new column is inserted between the two adjacent columns, the last two columns generate a middle column by the same linear interpolation, and a rightmost column is then generated as the last column of the output image, so that high-resolution (1920 x 1080) image data with limited distortion is still restored;
12.4, because the wifi and mobile network channels simultaneously carry two mutually complementary videos (either complementary stream can be restored on its own, and having both improves image quality), a limited-distortion video image can still be restored when one channel fails, and high-resolution image data is obtained when both channels work normally;
12.5 sends the generated image data to the video display module 14 (a code sketch of this merging path follows below).
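A minimal NumPy sketch of steps 12.1-12.4: interleaving the two 960-wide blocks into the odd and even columns of a 1920-wide frame when both slices arrive, and stretching a single surviving block back to full width by column interpolation when one slice is lost. Names are illustrative and the chroma planes are omitted.

```python
# Illustrative picture merging (module 12) on the luma plane.
import numpy as np

def merge_slices(slice_one_block, slice_two_block):
    """Interleave slice one into the odd columns and slice two into the even
    columns, giving a full-width (e.g. 1920-wide) frame."""
    h, w = slice_one_block.shape
    out = np.empty((h, 2 * w), dtype=slice_one_block.dtype)
    out[:, 0::2] = slice_one_block
    out[:, 1::2] = slice_two_block
    return out

def stretch_columns(block):
    """Rebuild a full-width frame from one surviving block: insert a linearly
    interpolated column between adjacent columns and repeat the last column."""
    h, w = block.shape
    out = np.empty((h, 2 * w), dtype=block.dtype)
    out[:, 0::2] = block
    out[:, 1:-1:2] = ((block[:, :-1].astype(np.uint16) +
                       block[:, 1:].astype(np.uint16)) // 2).astype(block.dtype)
    out[:, -1] = block[:, -1]   # rightmost generated column
    return out
```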
13. The picture frame doubling module 13 receives the YUV image data and the timestamp of the first slice decoding module 10, and receives the YUV image data and the timestamp of the second slice decoding module 11;
13.1, the YUV image data is sorted by timestamp and interpolated horizontally: each pixel of adjacent YUV columns is linearly interpolated (taking the midpoint value) to obtain a new column which is inserted between the two adjacent columns, the last two columns generate a middle column by the same linear interpolation, and a rightmost column is then generated as the last column of the output image, restoring high-resolution (1920 x 1080) image data by image stretching;
13.2, because the wifi and mobile network channels transmit video data simultaneously, a low-frame-rate (30 fps) video image is obtained when one channel fails, and high-frame-rate (60 fps) image data is obtained when both channels work normally;
13.3 generates YUV image data and sends it to the video display module 14 (a code sketch of this frame doubling path follows below).
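A minimal sketch of step 13: frames from both slices are ordered by timestamp and each interpolated 960-wide block is stretched back to full width, yielding a 60 fps stream when both channels arrive and a 30 fps stream otherwise. The helper repeats the column interpolation used in the merging sketch; names are illustrative.

```python
# Illustrative frame doubling path (module 13) on the luma plane.
import numpy as np

def widen(block):
    """Double the width by inserting a linearly interpolated column between
    adjacent columns and repeating the last column."""
    h, w = block.shape
    out = np.empty((h, 2 * w), dtype=block.dtype)
    out[:, 0::2] = block
    out[:, 1:-1:2] = ((block[:, :-1].astype(np.uint16) +
                       block[:, 1:].astype(np.uint16)) // 2).astype(block.dtype)
    out[:, -1] = block[:, -1]
    return out

def double_frames(slice_one_entries, slice_two_entries):
    """Interleave both slices by timestamp and stretch every decoded block."""
    merged = sorted(slice_one_entries + slice_two_entries, key=lambda e: e["ts"])
    return [(e["ts"], widen(e["block"])) for e in merged]
```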
14. The video display module 14 receives and renders the YUV image data of the picture merging module 12 and the YUV image data of the picture frame doubling module 13; when the image of the video source module 1 is static, the wifi and mobile network channels simultaneously transmit mutually complementary video and jointly output a high-resolution image, and when the image of the video source module 1 is in motion, the wifi and mobile network channels simultaneously transmit interleaved independent video frames and jointly output a high-frame-rate video image.

Claims (1)

1. A system based on a mobile network and wifi video transmission enhancement method, characterized by comprising a video source module (1), a picture disassembling module (2), a slice one encoding module (3), a slice two encoding module (4), a motion detection module (5), a wifi network transmission module (6), a mobile network transmission module (7), a network receiving module (8), a network packing module (9), a slice one decoding module (10), a slice two decoding module (11), a picture merging module (12), a picture frame doubling module (13) and a video display module (14);
video source module (1): generating a high-frame-rate and high-resolution YUV video stream, wherein each video frame carries a timestamp;
A picture disassembling module (2): receiving the YUV video stream of the video source module (1), ordering the high-frame-rate video stream by time and splitting it by frame parity into two low-frame-rate video streams, the odd frames forming the slice one video stream and the even frames forming the slice two video stream; since human eyes are less sensitive to horizontal change than to vertical change, the picture of each frame of the slice one and slice two YUV video streams is further split by columns into two pictures, one of odd pixel columns and one of even pixel columns, so that each slice yields low-frame-rate, low-resolution odd and even blocks; the video streams obtained by splitting and the timestamps are sent to the slice one encoding module (3) and the slice two encoding module (4);
Slice one encoding module (3): receiving the slice one video stream from the picture disassembling module (2) and the notification of whether the picture is moving from the motion detection module (5); if the picture is not moving, the picture content of the slice one and slice two video streams differs little, and a high-resolution strategy is formed by the complementation of slice one and slice two: the slice one encoding module (3) discards the even block of the slice one video stream, performs video coding compression on the odd block, and sends the compressed video stream, the timestamp and a merging identifier to the network packing module (9); if the picture is moving, the picture content of the slice one and slice two video streams differs greatly and is not suitable for complementary resolution enhancement, and a high-frame-rate strategy is formed by the complementation of slice one and slice two: the information of both the even block and the odd block is retained, linear interpolation is performed on each pair of corresponding pixels of the even block and the odd block of the video stream to obtain a linearly interpolated block, the block is coded and compressed, and the compressed video stream, the timestamp and a frame doubling identifier are sent to the network packing module (9);
Slice two encoding module (4): receiving the slice two video stream from the picture disassembling module (2) and the notification of whether the picture is moving from the motion detection module (5); if the picture is not moving, the picture content of the slice one and slice two video streams differs little, and a high-resolution strategy is formed by the complementation of slice one and slice two data: the slice two encoding module (4) discards the odd block of the slice two video stream, performs video coding compression on the even block, and sends the compressed video stream, the timestamp and a merging identifier to the network packing module (9); if the picture is moving, the picture content of the slice one and slice two video streams differs greatly, and a high-frame-rate strategy is formed by the complementation of slice one and slice two: the information of both the even block and the odd block is retained, linear interpolation is performed on each pair of corresponding pixels of the even block and the odd block of the video stream to obtain a linearly interpolated block, the block is coded and compressed, and the compressed video stream, the timestamp and a frame doubling identifier are sent to the network packing module (9);
Motion detection module (5): receiving the YUV video stream of the video source module (1) and subtracting the luma (Y) pixel data of each pair of consecutive frames pixel by pixel; a pixel is considered moving if the absolute value of the difference exceeds a threshold, and the area covered by the moving pixels extracted from the image is divided by the area of the image to obtain a motion coefficient; if the motion coefficient is below a threshold the image is considered not moving, otherwise it is considered moving; the picture-not-moving or picture-moving notification is sent to the slice one encoding module (3) and the slice two encoding module (4);
wifi network transfer module (6): receiving the video stream of the network packaging module (9), and sending the video compressed stream to the network receiving module (8) through a wifi network by using a wifi transmission network;
mobile network transfer module (7): receiving the video stream of the network packaging module (9), and sending the video compressed stream to the network receiving module (8) through a mobile network by using a mobile transmission network;
network reception module (8): receiving a video network stream sent by a wifi network transmission module (6) and a mobile network transmission module (7); sending the video stream of the first slice to a first slice decoding module (10), and sending the video stream of the second slice to a second slice decoding module (11);
Network packing module (9): receiving the video streams of the slice one encoding module (3) and the slice two encoding module (4), performing RTP (Real-time Transport Protocol) packing on the video stream, writing the timestamp into the RTP header and the frame doubling identifier or merging identifier into the RTP additional data; the network packing module (9) sends the slice one video stream to the wifi network transmission module (6) and the slice two video stream to the mobile network transmission module (7);
Slice one decoding module (10): receiving the slice one video stream from the network receiving module (8), removing the RTP header to extract the compressed frame data, taking the RTP timestamp as the frame timestamp and the frame doubling identifier or merging identifier from the RTP additional data, and video-decoding the compressed frame data to obtain the slice one YUV image data; if the frame doubling identifier is present, the slice one YUV image data and the timestamp are sent to the picture frame doubling module (13); if the merging identifier is present, the slice one YUV image data and the timestamp are sent to the picture merging module (12);
Slice two decoding module (11): receiving the slice two video stream from the network receiving module (8), removing the RTP header to extract the compressed frame data, taking the RTP timestamp as the frame timestamp and the frame doubling identifier or merging identifier from the RTP additional data, and video-decoding the compressed frame data to obtain the slice two YUV image data; if the frame doubling identifier is present, the slice two YUV image data and the timestamp are sent to the picture frame doubling module (13); if the merging identifier is present, the slice two YUV image data and the timestamp are sent to the picture merging module (12);
Picture merging module (12): receiving the slice one YUV image data and timestamps from the slice one decoding module (10) and the slice two YUV image data and timestamps from the slice two decoding module (11); sorting the YUV image data by timestamp; if neither the slice one nor the slice two YUV image data is lost, the slice one and slice two YUV image data alternate, and adjacent slice one and slice two YUV image data are merged with the slice one YUV image data placed in the odd columns and the slice two YUV image data placed in the even columns, finally yielding high-resolution image data; if part of the data of either slice is lost, the remaining YUV image data is interpolated horizontally: each pixel of adjacent YUV columns is linearly interpolated and the new column is inserted between the two adjacent columns, the last two columns generate a middle column by the same linear interpolation, and a rightmost column is then generated as the last column of the output image, so that high-resolution image data with limited distortion is still restored; because the wifi and mobile network channels simultaneously carry mutually complementary video, a limited-distortion video image can still be restored when one channel fails, and high-resolution image data is obtained when both channels work normally; the generated image data is sent to the video display module (14);
Picture frame doubling module (13): receiving the slice one YUV image data and timestamps from the slice one decoding module (10) and the slice two YUV image data and timestamps from the slice two decoding module (11); sorting the YUV image data by timestamp and interpolating it horizontally: each pixel of adjacent YUV columns is linearly interpolated to obtain a new column which is inserted between the two adjacent columns, the last two columns generate a middle column and then a rightmost column by the same linear interpolation, and high-resolution image data is restored by image stretching; because the wifi and mobile network channels transmit video data simultaneously, a low-frame-rate video image is obtained when one channel fails, and high-frame-rate image data is obtained when both channels work normally; the generated YUV image data is sent to the video display module (14);
Video display module (14): receiving and rendering the YUV image data of the picture merging module (12) and the YUV image data of the picture frame doubling module (13); when the image of the video source module (1) is static, the wifi and mobile network channels simultaneously transmit mutually complementary video and jointly output a high-resolution image; when the image of the video source module (1) is in motion, the wifi and mobile network channels simultaneously transmit interleaved independent video frames and jointly output a high-frame-rate video image.
Application CN202011159735.0A, filed 2020-10-27 (priority 2020-10-27), granted as CN112333469B (en) — System based on mobile network and wifi video transmission enhancement method — Active

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202011159735.0A CN112333469B (en) 2020-10-27 2020-10-27 System based on mobile network and wifi video transmission enhancement method

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202011159735.0A CN112333469B (en) 2020-10-27 2020-10-27 System based on mobile network and wifi video transmission enhancement method

Publications (2)

Publication Number Publication Date
CN112333469A CN112333469A (en) 2021-02-05
CN112333469B (en) 2022-07-08

Family

ID=74311976

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202011159735.0A Active CN112333469B (en) 2020-10-27 2020-10-27 System based on mobile network and wifi video transmission enhancement method

Country Status (1)

Country Link
CN (1) CN112333469B (en)

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US7369610B2 (en) * 2003-12-01 2008-05-06 Microsoft Corporation Enhancement layer switching for scalable video coding

Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101127918A (en) * 2007-09-25 2008-02-20 腾讯科技(深圳)有限公司 A video error tolerance control system and method
CN101621652A (en) * 2009-07-21 2010-01-06 上海华平信息技术股份有限公司 Method for transmitting interlaced picture in high quality and changing the interlaced picture into non-interlaced picture in picture transmission system
CN101626512A (en) * 2009-08-11 2010-01-13 北京交通大学 Method and device of multiple description video coding based on relevance optimization rule
CN103609111A (en) * 2011-04-19 2014-02-26 三星电子株式会社 Method and apparatus for video encoding using inter layer prediction with pre-filtering, and method and apparatus for video decoding using inter layer prediction with post-filtering
CN106488243A (en) * 2016-11-03 2017-03-08 河海大学 A kind of many description screen content method for video coding

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
Research on Key Technologies of Scalable Video Coding; Xiang Youjun (向友君); China Doctoral Dissertations Full-text Database; 2011-10-15; full text *

Also Published As

Publication number Publication date
CN112333469A (en) 2021-02-05


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant