WO2019144818A1 - Video frame transmission method, detector and user equipment - Google Patents

Video frame transmission method, detector and user equipment

Info

Publication number
WO2019144818A1
WO2019144818A1 (application PCT/CN2019/071352, CN2019071352W)
Authority
WO
WIPO (PCT)
Prior art keywords
video frame
buffer
frame
video
unit
Prior art date
Application number
PCT/CN2019/071352
Other languages
English (en)
French (fr)
Inventor
向建华
Original Assignee
深圳市道通科技股份有限公司
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 深圳市道通科技股份有限公司
Publication of WO2019144818A1

Links

Images

Classifications

    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N5/00 Details of television systems
    • H04N5/222 Studio circuitry; Studio devices; Studio equipment
    • H04N5/262 Studio circuits, e.g. for mixing, switching-over, change of character of image, other special effects; Cameras specially adapted for the electronic generation of special effects
    • H04N5/268 Signal distribution or switching
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N19/00 Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
    • H04N19/42 Methods or arrangements for coding, decoding, compressing or decompressing digital video signals characterised by implementation details or hardware specially adapted for video compression or decompression, e.g. dedicated software implementation
    • H04N19/423 Methods or arrangements for coding, decoding, compressing or decompressing digital video signals characterised by implementation details or hardware specially adapted for video compression or decompression, e.g. dedicated software implementation, characterised by memory arrangements
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N23/00 Cameras or camera modules comprising electronic image sensors; Control thereof
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N23/00 Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N23/50 Constructional details
    • H04N23/555 Constructional details for picking-up images in sites, inaccessible due to their dimensions or hazardous conditions, e.g. endoscopes or borescopes
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N7/00 Television systems
    • H04N7/18 Closed-circuit television [CCTV] systems, i.e. systems in which the video signal is not broadcast

Definitions

  • the present application relates to the field of industrial endoscope technology, and in particular, to a video frame transmission method, a detector, and a user equipment.
  • endoscopes have been applied to various aspects such as industrial production, industrial flaw detection, medical treatment, and even daily life.
  • Industrial endoscopes, also known as detectors, serve to observe parts that are invisible to the naked eye and enable non-destructive testing without disassembling the equipment, destroying its assembly, or stopping its operation.
  • Their basic principle is to capture the internal environment of the object under inspection through a lens and send the captured video stream to a user equipment, which displays the video stream.
  • However, a wireless industrial endoscope that transmits to the user equipment over a wireless connection is affected by the narrow bandwidth of the wireless network, is easily interfered with, and suffers from large fluctuations in network speed; as a result, it generally has a high bit error rate and low transmitted image quality.
  • The embodiments of the present invention provide a video frame transmission method, a detector, and a user equipment, which can improve the transmission quality of video frames and reduce their bit error rate.
  • In a first aspect, an embodiment of the present invention provides a video frame transmission method, including: encoding collected video data to obtain a video frame to be sent; storing the video frame in at least one buffer of at least two buffers according to the type of the video frame; and sending the video frame to a user equipment through the sending unit corresponding to each of the at least one buffer.
  • In a second aspect, an embodiment of the present invention provides a video frame transmission method, including: receiving, through at least one receiving unit of at least two receiving units, at least one video frame sent by a detector; wherein the video frame is sent by the detector through the sending unit corresponding to each of at least one buffer of at least two buffers.
  • an embodiment of the present invention provides a detector, including:
  • a video collection unit for collecting video data
  • a coding unit configured to encode video data collected by the video collection unit to obtain a video frame to be sent
  • a storage unit configured to store the video frame in at least one buffer of at least two buffers according to a type of the video frame
  • at least one sending unit, each of the at least one sending unit corresponding to one buffer of the at least one buffer, wherein the at least one sending unit is configured to send the video frames stored in the at least one buffer respectively corresponding to the at least one sending unit to the user equipment.
  • an embodiment of the present invention provides a user equipment, including:
  • at least two receiving units, at least one of which is configured to receive at least one video frame sent by the detector;
  • the video frame is sent by the detector through a corresponding sending unit of at least one of the at least two buffers.
  • In the embodiments of the present invention, the detector encodes the collected video data to obtain a video frame to be sent, stores the video frame in at least one buffer of at least two buffers according to the type of the video frame, and sends the video frame to the user equipment through the sending unit corresponding to each of the at least one buffer.
  • Correspondingly, the user equipment at the receiving end can receive the video frame through at least one receiving unit of at least two receiving units. In this manner, the quality of the transmitted video frames is improved and the bit error rate is reduced.
  • FIG. 1 is a first system architecture diagram according to an embodiment of the present invention.
  • FIG. 2 is a second system architecture diagram according to an embodiment of the present invention.
  • FIG. 3 is a schematic structural diagram of an embodiment of a detector according to an embodiment of the present invention.
  • FIG. 4 is a schematic structural diagram of an embodiment of a user equipment according to an embodiment of the present invention.
  • FIG. 5 is a flowchart of an embodiment of a video frame transmission method according to an embodiment of the present invention.
  • FIG. 6 is a system architecture diagram of another embodiment of a video frame transmission method according to an embodiment of the present invention.
  • the video capture unit may include an endoscope and a video capture sensor.
  • the endoscope consists of a bendable part, a light source and a set of lenses. When used, the endoscope hose is introduced into the pre-examined object, and the change of the relevant part can be directly observed.
  • the video capture sensor can capture the video signal of the internal environment of the pre-checked object through the endoscope.
  • the video collection unit may further include a processing unit for the video signal, etc., which is not limited herein.
  • the video signal includes at least one video frame, and the video frame can be understood as image data included in each frame of the video signal.
  • the video capture unit can transmit the video frames in the video signal to the communication unit.
  • the communication unit may include a receiving unit and/or a transmitting unit.
  • the communication unit may send the video frame to the user equipment through the sending unit, and the user equipment is used to display the video stream.
  • the detector and the user equipment can be connected wirelessly, for example via wireless fidelity Wi-Fi technology.
  • the detector can also be understood as a wireless detector. If the pre-inspection object is a component in a car, the detector may also be referred to as a car detector.
  • User Datagram Protocol (UDP): a connectionless transport-layer protocol in the Open System Interconnection (OSI) reference model that provides a simple, transaction-oriented, unreliable message delivery service.
  • H.264: a highly compressed digital video codec standard proposed by the Joint Video Team (JVT), formed jointly by the ITU-T Video Coding Experts Group (VCEG) and the ISO/IEC Moving Picture Experts Group (MPEG). The standard is commonly referred to as H.264/AVC (or AVC/H.264, H.264/MPEG-4 AVC, or MPEG-4/H.264 AVC).
  • Wireless industrial endoscopes based on wireless real-time video transmission are affected by the narrow bandwidth of wireless networks, are easily interfered with, and suffer from large fluctuations in network speed; as a result, they generally exhibit a high bit error rate, low image transmission quality, high transmission delay, and stuttering. In addition, some industrial endoscopes add two cameras to increase the observation range, which further increases the burden on network transmission.
  • FIG. 1 is a transmission system according to an embodiment of the present invention.
  • the transmission system includes a detector 100 and a user equipment 110.
  • the user equipment 110 may include a user terminal installed with a diagnostic program or a dedicated diagnostic device that communicates with the probe, and is not limited herein.
  • the detector 100 is configured to collect video data, and send the video frame encoded by the video data to the user equipment 110.
  • the user equipment 110 receives the video frame sent by the detector 100 and can further display it, allowing the user to observe the internal environment of the object under inspection and thereby achieve fault diagnosis.
  • In this transmission system, the communication unit in the detector 100 may only need to support a communication connection with the user equipment 110.
  • the communication unit in the detector 100 may be a built-in Wi-Fi module that can only communicate via Wi-Fi technology within a certain distance from the user equipment.
  • the built-in Wi-Fi module may be a module that is enhanced in communication performance compared to the above module, and is not limited herein.
  • another transmission system may include: a detector 200, a router 210, and a user equipment 220.
  • the detector 200 and the user equipment 220 in the embodiment of the present invention may both be connected to the router 210 and transmit video frames through it; specifically, they may connect to the router wirelessly.
  • the detector 200 is configured to collect video data and send the video frames obtained by encoding the video data to the user equipment 220 through the router 210.
  • the user equipment 220 receives, through the router 210, the video frames sent by the detector 200 and can further display them, allowing the user to observe the internal environment of the object under inspection and thereby achieve fault diagnosis.
  • the router can also connect to the Internet, for example to the cloud, so that data can be shared and remote assistance provided over the Internet. This design has a clear advantage over requiring the user equipment to connect to the detector's built-in WiFi module, which often cannot access the Internet.
  • Here, the router can serve as an external WiFi hotspot that establishes connections with the detector and the user equipment respectively.
  • Moreover, an external WiFi module can connect to a larger number of devices.
  • Through the external Wi-Fi module in the communication unit, the reliability of video frame transmission can be enhanced, thereby further reducing the bit error rate of video frame transmission.
  • the detector 300 includes a video collection unit 301, an encoding unit 302, a storage unit 303, and a transmitting unit 304.
  • the video capture unit 301 can be implemented by hardware such as an endoscope and/or a video capture sensor.
  • the video capture unit 301 can be disposed at the end of the hose so that it can capture video data of the interior of the vehicle, i.e., of the non-visible area.
  • the video collection unit 301 can be connected to other units through an electrical signal transmission line inside the hose, thereby transmitting the video data to other units through the electrical signal transmission line.
  • the encoding unit 302 can be implemented by a processor chip containing an encoding program, and the processor chip can be implemented by one or more Application Specific Integrated Circuits (ASIC), Digital Signal Processors (DSP), Digital Signal Processing Devices (DSPD), Programmable Logic Devices (PLD), Field-Programmable Gate Arrays (FPGA), microprocessors, Central Processing Units (CPU), or other electronic components.
  • the storage unit 303 can be implemented by any type of volatile or non-volatile storage device or a combination thereof, such as static random access memory (SRAM), electrically erasable programmable read-only memory (EEPROM), erasable programmable read-only memory (EPROM), programmable read-only memory (PROM), read-only memory (ROM), magnetic memory, flash memory, a magnetic disk, or an optical disc.
  • the storage unit 303 may include a buffer, and the buffer may be implemented by using a memory. Further, the storage unit 303 may include at least 2 buffers.
  • the storage unit 303 may further include a controller to determine the caching policy for video frames. For example, the controller in the storage unit 303 is configured to determine the type of a video frame and, according to that type, decide into which one or more buffers the video frame is cached.
  • the sending unit 304 can be implemented through a wireless communication interface or a wired communication interface. If the sending unit 304 is implemented by a wireless communication interface, the wireless communication interface may support WiFi communication technology, or another communication technology agreed upon with the user equipment, which is not limited herein. Further, if the wireless communication interface supports WiFi communication technology, it can be set as a private interface or an open interface. If the wireless communication interface is a private interface, it can only communicate with the user equipment via WiFi; if it is an open interface, it can also communicate with other devices via WiFi, for example, with the router described in the above system.
  • the number of transmitting units 304 can be the same as the number of buffers in the storage unit. That is, the transmitting unit 304 has a one-to-one correspondence with the buffer.
  • the video capturing unit 301 is configured to collect video data
  • the encoding unit 302 is configured to encode the collected video data to obtain a video frame
  • the storage unit 303 is configured to store the video frame in at least one buffer of at least two buffers according to the type of the video frame.
  • the sending unit 304 is configured to send the video frame in the corresponding buffer to the user equipment.
  • the detector in the embodiment of the present application may further include a receiving unit 305.
  • the receiving unit 305 is configured to receive information sent by the user equipment side, such as control information for the detector or reception response information for a transmitted video frame.
  • the receiving unit 305 can be implemented by using a wireless communication interface or a wired communication interface, and is similar to the foregoing sending unit 304, and details are not described herein again.
  • the detector in the embodiment of the present application may further include a servo unit 306; the servo unit may be implemented by a servo motor, a sensor, and the like; the servo unit is configured to control the video collection unit to move, thereby collecting video data of the object under inspection.
  • For example, the servo unit can control the endoscope to rotate through a certain angle, such as within a range of 360 degrees.
  • the detector may receive the control information sent by the user equipment to the servo unit 306 through the receiving unit 305, and then control the servo unit 306 according to the control information.
  • a servo control unit may be further included, configured to control the movement of the servo unit according to the received control information, such as controlling a rotation angle of the servo unit, moving the position, and the like.
  • the user equipment includes:
  • the receiving unit 401 is configured to receive a video frame sent by the probe.
  • the receiving unit 401 can be implemented by using a wireless communication interface or a wired communication interface.
  • its implementation corresponds to that of the sending unit 304 in the detector, i.e., the receiving unit 401 and the sending unit 304 support the same communication protocol.
  • the number of receiving units 401 is the same as the number of transmitting units 304, that is, each receiving unit 401 corresponds to one transmitting unit 304.
  • the corresponding receiving unit 401 and the transmitting unit 304 form a communication link according to the communication protocol, so that the transmitting unit 304 can transmit the video frame to the corresponding receiving unit 401.
  • the user equipment may further include at least one of the storage unit 402, the processing unit 403, and the sending unit 404.
  • the processing unit 403 may be implemented by one or more Application Specific Integrated Circuits (ASIC), Digital Signal Processors (DSP), Digital Signal Processing Devices (DSPD), Programmable Logic Devices (PLD), Field-Programmable Gate Arrays (FPGA), microprocessors, Central Processing Units (CPU), or other electronic components.
  • the implementation of the storage unit 402 is similar to that in the probe, and details are not described herein again.
  • the storage unit 402 is configured to store the received video frame
  • the processing unit 403 is configured to perform recovery processing on the received video frame
  • the sending unit 404 is configured to send, to the detector, reception success information for a received video frame, or a control signal for controlling the video collection unit to move to an acquisition position.
  • FIG. 5 is a flowchart of an embodiment of a video frame transmission method according to an embodiment of the present invention. As shown in FIG. 5, the video frame transmission method provided in this embodiment includes:
  • Step 501 The detector encodes the collected video data to obtain a video frame to be sent.
  • the video data collected by the video collection unit of the detector may be encoded by using the foregoing coding unit, which may be implemented in the following manner:
  • the video data collected by the video collection unit is encoded by the advanced video coding H.264 method to obtain a video frame;
  • the video frame includes two types, namely an I frame and a P frame.
  • H.264 encoding supports I frame and P frame
  • An I frame uses intra-frame coding. Its amount of data is large, typically tens of times that of a P frame, and its compression efficiency is low, but it needs no reference frame: an I frame itself carries all of the image information required for one frame of data, so the image information can be recovered independently and error propagation caused by packet loss can be effectively prevented.
  • A P frame uses the inter-frame coding mode, which relies on inter-frame prediction and motion compensation to eliminate redundant information. For example, in a static or slowly moving scene, the pictures transmitted in one second and in the next may share 90% of their data, so there is no need to transmit all of the data of every frame.
  • A P frame is based on exactly this idea: only the difference between the current frame and the reference frame (an I frame) is transmitted, so the compression efficiency is high, but a reference frame (I frame) is required and errors propagate easily. A minimal sketch of how the two frame types can be distinguished in an H.264 byte stream is given after this paragraph.
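  • The following illustrative Python sketch (not part of the original disclosure) shows one common way to tell an I frame from a P frame in an H.264 Annex-B byte stream, by inspecting the NAL unit type; the function name and the IDR-based heuristic are assumptions for illustration only.

      import enum

      class FrameType(enum.Enum):
          I = "I frame"
          P = "P frame"

      def classify_access_unit(au: bytes) -> FrameType:
          # Scan for Annex-B start codes (00 00 01); the 5 low bits of the byte
          # that follows form the nal_unit_type. Type 5 is an IDR (key) slice,
          # so the access unit is treated as an I frame; otherwise as a P frame.
          i = 0
          while i + 3 < len(au):
              if au[i:i + 3] == b"\x00\x00\x01":
                  if au[i + 3] & 0x1F == 5:
                      return FrameType.I
                  i += 3
              else:
                  i += 1
          return FrameType.P

      # usage: a tiny fabricated access unit whose first NAL is an IDR slice
      print(classify_access_unit(b"\x00\x00\x00\x01\x65\x88\x84"))   # -> FrameType.I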
  • Step 502 The detector stores the video frame in at least one of the at least two buffers according to the type of the video frame.
  • In this step, the storage unit includes at least two buffers; the buffers are used to cache the video frames to be sent, and each of the at least two buffers corresponds to one sending unit.
  • After the collected video data is encoded, the video frame to be sent is stored, according to its type, in at least one buffer of the at least two buffers in the storage unit.
  • Optionally, one implementation of step 502 is as follows:
  • if the type of the video frame is an I frame, determine whether the remaining capacity of each of the at least two buffers is greater than or equal to the size of the video frame;
  • if the remaining capacity of at least one buffer of the at least two buffers is greater than or equal to the size of the video frame, store the video frame in the at least one buffer.
  • Optionally, if the remaining capacity of each of the at least two buffers is smaller than the size of the video frame, the number of failed transmissions of the I frame is accumulated;
  • if the number of failed transmissions is greater than a preset threshold, abnormality prompt information is output.
  • the I frame plays a very important role in real-time video data transmission.
  • the loss of an I frame may make the subsequent P frame data completely unrecoverable, so a repeated transmission strategy may be adopted for I frames, that is, the same I frame can be stored in at least two buffers.
  • In this embodiment, two buffers are taken as an example; the I frame in each buffer is sent by the sending unit corresponding to that buffer, thereby enhancing the transmission reliability of the I frame.
  • the remaining capacity refers to the capacity of the remaining space of the buffer.
  • if the remaining capacity of at least one buffer is greater than or equal to the size of the I frame, the I frame is stored separately in each buffer of the at least one buffer and sent out through the sending unit corresponding to each of the at least one buffer, that is, at least one sending unit sends the same I frame.
  • if the remaining capacity of each of the at least two buffers is smaller than the size of the I frame, the I frame is discarded, that is, not transmitted, and the number of transmission failures of the I frame is accumulated so that an abnormality reminder can be given, that is, abnormality prompt information can be output.
  • the abnormal prompt information may be output through the detector, or the abnormal prompt information may be sent to the user equipment, and an abnormal reminder may be performed on the user equipment side.
  • reminders can be made by voice, or reminded by text, images, videos, etc., or a combination of the above.
  • In the above implementation, the video frame is stored in at least one buffer of at least two buffers and sent by the sending unit corresponding to each of the at least one buffer, which effectively mitigates packet loss caused by environmental interference, and the bit error rate is small. A minimal sketch of this I-frame storage policy is given below.
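  • The following is a minimal, illustrative Python sketch of the repeated-transmission storage policy for I frames described above; the Buffer class, the function name, and the failure threshold default of 3 are assumptions made for illustration only.

      from dataclasses import dataclass, field
      from typing import List, Tuple

      @dataclass
      class Buffer:
          capacity: int                                  # bytes
          frames: List[bytes] = field(default_factory=list)

          @property
          def remaining(self) -> int:
              return self.capacity - sum(len(f) for f in self.frames)

      def store_i_frame(frame: bytes, buffers: List[Buffer],
                        fail_count: int, fail_threshold: int = 3) -> Tuple[List[Buffer], int]:
          # Copy the same I frame into every buffer that still has room; if no
          # buffer has room, drop the frame and accumulate the failure count.
          roomy = [b for b in buffers if b.remaining >= len(frame)]
          if roomy:
              for b in roomy:
                  b.frames.append(frame)
              return roomy, 0                            # success resets the counter
          fail_count += 1
          if fail_count > fail_threshold:
              print("abnormality prompt: repeated I-frame transmission failures")
          return [], fail_count

      # usage
      bufs = [Buffer(100_000), Buffer(100_000)]
      chosen, fails = store_i_frame(b"\x00" * 30_000, bufs, fail_count=0)
      print(len(chosen), "buffer(s) hold the I frame;", fails, "failure(s)")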
  • Optionally, another implementation of step 502 is as follows:
  • if the type of the video frame is a P frame, determine whether the remaining capacity of each of the at least two buffers is greater than or equal to the size of the video frame;
  • if the remaining capacity of at least one buffer of the at least two buffers is greater than or equal to the size of the video frame, store the video frame in one of the at least one buffer.
  • Optionally, storing the video frame in one of the at least one buffer may be implemented as follows: one buffer is selected from the at least one buffer according to the identifiers of the at least one buffer and a preset selection strategy, and the video frame is stored in the selected buffer.
  • Specifically, based on the coding strategy of the above embodiment, a P frame uses the inter-frame coding mode and transmits only the difference information between the reference frame and the current frame, so a P frame may be sent by the sending unit corresponding to a single buffer. The storage and sending policy needs to be determined according to the current remaining capacity of the buffers.
  • the remaining capacity refers to the capacity of the remaining space of the buffer.
  • Therefore, in this embodiment, if the type of the video frame is a P frame, it is determined whether the remaining capacity of each of the at least two buffers is greater than or equal to the size of the P frame;
  • if the remaining capacity of at least one buffer is greater than or equal to the size of the P frame, the video frame is stored in one of the at least one buffer, where that buffer may be any one of the at least one buffer.
  • one of the at least one buffer may be selected according to the identifier of the at least one buffer and the preset selection policy.
  • the selection strategy may be alternately selected in turn, for example, the first P frame selects the buffer 1 for storage, the second P frame selects the buffer 2 for storage, and the third P frame selects the buffer 1 for storage.
  • the fourth P frame selects buffer 2 for storage, and so on.
  • Alternatively, the selection strategy may be to pick one buffer at random, or to pick the buffer with the larger remaining capacity.
  • Optionally, if the remaining capacity of each of the at least one buffer is smaller than the size of the video frame, the video frame is discarded.
  • Specifically, if the remaining capacity of every buffer is smaller than the size of the P frame, i.e., no buffer can store it, the P frame is discarded, that is, not transmitted. A minimal sketch of this single-path selection for P frames is given below.
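  • The following illustrative Python sketch shows the single-path storage policy for P frames with the alternating (round-robin) selection strategy mentioned above; the function name, the dictionary-based buffer bookkeeping, and the frame sizes in the usage example are assumptions for illustration.

      from typing import Dict, List, Optional

      def select_buffer_for_p_frame(frame_size: int,
                                    remaining: Dict[str, int],
                                    rotation: List[str]) -> Optional[str]:
          # Among the buffers whose remaining capacity can hold the frame, pick
          # one by round-robin over the buffer identifiers; return None if no
          # buffer has room, in which case the P frame is simply dropped.
          candidates = [bid for bid in rotation if remaining[bid] >= frame_size]
          if not candidates:
              return None
          chosen = candidates[0]
          rotation.remove(chosen)          # rotate so the next P frame prefers
          rotation.append(chosen)          # the other buffer
          return chosen

      # usage: alternating between buffer "1" and buffer "2"
      rotation = ["1", "2"]
      remaining = {"1": 50_000, "2": 50_000}
      for size in (4_000, 4_200, 3_900, 4_100):
          bid = select_buffer_for_p_frame(size, remaining, rotation)
          if bid is not None:
              remaining[bid] -= size
              print(f"P frame of {size} bytes stored in buffer {bid}")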
  • Step 503 The detector sends the video frame to the user equipment by using a corresponding sending unit of the at least one buffer.
  • the video frames stored in the at least one buffer are sent to the user equipment through the corresponding sending units of the at least one buffer.
  • Step 504 The user equipment receives at least one video frame sent by the probe by using at least one of the at least two receiving units.
  • the video frame is sent by the detector through a corresponding sending unit of at least one of the at least two buffers.
  • the user equipment side corresponding to the detector receives at least one video frame transmitted by the detector through at least one of the at least two receiving units.
  • the receiving unit corresponds to a transmitting unit in the detector.
  • In the above scheme, the transmitting end stores the video frame in at least one buffer of the at least two buffers and sends it to the user equipment through the sending unit corresponding to each of the at least one buffer.
  • Correspondingly, the user equipment at the receiving end can receive the video frame sent by the transmitting end through at least one of the at least two receiving units and decode it; the decoded image quality is high and the bit error rate is small.
  • Moreover, sending through the sending unit corresponding to each of the at least one buffer and receiving through at least one of the at least two receiving units increases the usable network bandwidth.
  • The following takes a storage unit containing two buffers as an example: the collected raw video data is encoded to obtain the video frames to be sent; if a video frame is an I frame, it is stored in both buffer 1 and buffer 2 and sent out by the sending units corresponding to buffer 1 and buffer 2 respectively, so that the bit error rate is small and the decoded image quality is high.
  • In the video frame transmission method of this embodiment, the collected video data is encoded to obtain the video frames to be sent; each video frame is stored, according to its type, in at least one buffer of at least two buffers; and the video frame is sent to the user equipment through the sending unit corresponding to each of the at least one buffer.
  • Because the transmitting end stores the video frame in at least one of the at least two buffers and sends it through the corresponding sending units, the user equipment at the receiving end can receive the video frame through at least one of the at least two receiving units; the decoded image quality is high and the error rate is small. A minimal sketch of sending the packets of one frame over two UDP paths is given below.
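  • Since the embodiments describe UDP-based transmission over per-buffer sending units, the following Python sketch illustrates how the packets of one frame could be pushed out on two UDP sockets; the addresses, ports, packet size, and header layout are purely illustrative assumptions, not taken from the original disclosure.

      import socket

      # Hypothetical endpoints: one UDP socket per sending unit.
      RECEIVERS = [("192.168.1.100", 40001), ("192.168.1.100", 40002)]
      PACKET_SIZE = 1400   # keep each datagram below a typical Wi-Fi MTU

      def send_frame(frame: bytes, frame_id: int, socks, receivers) -> None:
          # Split the frame into numbered datagrams and push every datagram out
          # on each given socket (repeated-transmission path used for I frames);
          # for a P frame the caller would pass a single socket.
          chunks = [frame[i:i + PACKET_SIZE] for i in range(0, len(frame), PACKET_SIZE)]
          for seq, chunk in enumerate(chunks):
              header = (frame_id.to_bytes(4, "big")
                        + seq.to_bytes(2, "big")
                        + len(chunks).to_bytes(2, "big"))
              for sock, addr in zip(socks, receivers):
                  sock.sendto(header + chunk, addr)

      socks = [socket.socket(socket.AF_INET, socket.SOCK_DGRAM) for _ in RECEIVERS]
      send_frame(b"\x00" * 5000, frame_id=1, socks=socks, receivers=RECEIVERS)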
  • On the basis of the above embodiments, optionally, the method on the user equipment side further includes:
  • the user equipment storing the at least one video frame in the buffer corresponding to each of the at least one receiving unit.
  • Optionally, after the video frames are stored in the buffers, the following operation can also be performed: performing recovery processing on the at least one video frame according to the type of the at least one video frame.
  • Optionally, the recovery processing of the at least one video frame may be implemented in the following manner:
  • if the type of the at least one video frame is an I frame, the data packets respectively included in the at least one video frame are combined.
  • Specifically, the user equipment side corresponds to the detector side and can likewise store video frames in multiple buffers. For example, as shown in FIG. 6, each receiving unit corresponds to one buffer, and each receiving unit stores the at least one video frame it receives in its corresponding buffer.
  • At least one video frame may be restored according to the type of the at least one video frame, for example, the received multiple I frames are combined, and the received P frames are subjected to error processing.
  • the recovery process can be implemented by the processing unit shown in FIG. 6.
  • the restored video frame can be decoded, and then the decoded video data is displayed.
  • the decoding can be implemented by the decoding unit shown in Fig. 6, and the display can be realized by the display shown in Fig. 6.
  • Because an I frame is large, it is generally divided into blocks for transmission, that is, the data packets obtained after blocking are sent, and each data packet carries data packet identification information so that the receiving end can perform recovery processing.
  • In some implementations, the user equipment may combine the received data packets of the I frame according to the data packet identification information and remove duplicate data packets to obtain a data packet set.
  • The data packet identification information includes the sequence number of the data packet.
  • Specifically, the receiving end first sorts and combines the received data. For example, as shown in FIG. 6, for an I frame sent over multiple paths, the processing unit of the user equipment recovers the I frame from the data packets of that I frame received in buffer A and buffer B;
  • that is, the data packets of the I frame received in buffer A and buffer B are sorted and combined, duplicate data packets are removed, and the I frame is recovered. A minimal sketch of this merge step is given below.
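  • The following illustrative Python sketch shows the sort-combine-deduplicate step for I-frame packets received over two paths; the tuple-based packet representation and the function name are assumptions for illustration.

      from typing import Dict, Iterable, List, Tuple

      Packet = Tuple[int, bytes]        # (sequence number, payload)

      def merge_i_frame_packets(from_buffer_a: Iterable[Packet],
                                from_buffer_b: Iterable[Packet]) -> Dict[int, bytes]:
          # Combine the I-frame packets received over the two paths, keeping the
          # first copy of each sequence number and dropping duplicates, so the
          # packet set can then be checked for missing sequence numbers.
          merged: Dict[int, bytes] = {}
          for seq, payload in list(from_buffer_a) + list(from_buffer_b):
              merged.setdefault(seq, payload)
          return dict(sorted(merged.items()))

      # usage: packets 0..3, where packet 2 only arrived on path B
      a = [(0, b"a0"), (1, b"a1"), (3, b"a3")]
      b = [(0, b"b0"), (2, b"b2"), (3, b"b3")]
      packets = merge_i_frame_packets(a, b)
      missing = [s for s in range(max(packets) + 1) if s not in packets]
      print(sorted(packets), "missing:", missing)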
  • a timer can be set in the user equipment, and the data packet that has not arrived within the specified time is considered to be lost. For the lost packet, a certain algorithm is used for error processing.
  • The error processing method for an I frame is as follows. Since an I frame contains complete image information, intra-frame compensation can be adopted, that is, missing pixels are replaced with neighboring pixels. Specifically, the sequence numbers of the received data packets are obtained from their data packet identification information; if the sequence numbers of the received data packets are not continuous, a data packet whose sequence number lies within a preset range around a missing sequence number (that is, a neighboring packet) is used in place of the data packet corresponding to the missing sequence number, and a complete I frame is recovered.
  • In some implementations, if a P frame is lost, it is replaced with the frame N frames before it, where N is a preset value.
  • Specifically, a timer can be set in the user equipment; a data packet that does not arrive within the specified time is considered lost, and a certain algorithm is used to perform error processing on lost packets.
  • The error processing method for a P frame is as follows: if a P frame is lost, previous-frame replacement is adopted, that is, the lost P frame is replaced with the frame N frames before it, where N is a preset value and an integer greater than 0. A minimal sketch of these concealment steps is given below.
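  • The following illustrative Python sketch covers both concealment steps just described: neighbor-packet substitution for a missing I-frame packet and previous-frame replacement for a lost P frame; the function names, the search window, and the default N = 1 are assumptions for illustration.

      from typing import Dict, List, Optional

      def conceal_lost_p_frame(decoded: List[Optional[bytes]], index: int, n: int = 1) -> None:
          # Previous-frame replacement: reuse the frame n positions earlier in
          # the decoded sequence (n is a preset value).
          if decoded[index] is None and index - n >= 0 and decoded[index - n] is not None:
              decoded[index] = decoded[index - n]

      def conceal_missing_i_packet(packets: Dict[int, bytes], missing_seq: int,
                                   window: int = 1) -> Optional[bytes]:
          # Intra-frame compensation at packet level: stand in for a missing
          # I-frame packet with a neighbouring packet whose sequence number lies
          # within a preset range around the missing one.
          for offset in range(1, window + 1):
              for candidate in (missing_seq - offset, missing_seq + offset):
                  if candidate in packets:
                      return packets[candidate]
          return None

      # usage
      decoded = [b"I0", b"P1", None, b"P3"]
      conceal_lost_p_frame(decoded, index=2, n=1)
      print(decoded[2])          # -> b"P1"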
  • the method further includes:
  • if the recovery processing succeeds, the user equipment sends reception success information for the video frame to the detector.
  • Upon receiving the reception success information for the video frame from the user equipment, the detector clears the video frame from the at least one buffer.
  • Specifically, to improve efficiency and reduce resource usage, data transmission in the embodiments of the present invention uses UDP rather than the Transmission Control Protocol (TCP).
  • UDP is, as is well known, an unreliable data link, so the user equipment at the receiving end can periodically feed the data reception status back to the detector at the transmitting end, that is, after a video frame is recovered successfully, reception success information for that video frame is sent to the detector. The detector adjusts the remaining capacity of the buffers in the storage unit according to the reception success information, that is, upon receiving the reception success information for the video frame from the user equipment, it clears the video frame from the at least one buffer. This feedback mechanism compensates for the unreliability of UDP.
  • the detector can also formulate a transmission strategy based on the size of the remaining capacity of the buffer.
  • For example, if the remaining capacity of the current buffer is L and an encoded I frame of size I1 is placed into it, the remaining capacity becomes L-I1. If the detector receives the reception success information for that I frame sent by the user equipment, indicating that the receiving end has received the I frame and recovered it successfully, the I frame is cleared from the buffer and the remaining capacity of the buffer returns to L. If no reception success information for the I frame is received from the user equipment, the remaining capacity of the buffer remains L-I1. A minimal sketch of this remaining-capacity bookkeeping is given below.
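  • The following illustrative Python sketch models the remaining-capacity bookkeeping tied to the UDP feedback described above; the class and method names, and the treatment of the feedback as a per-frame-identifier acknowledgement, are assumptions for illustration.

      from typing import Dict

      class SendBuffer:
          # Remaining-capacity accounting on the detector side.
          def __init__(self, capacity: int) -> None:
              self.capacity = capacity
              self.pending: Dict[int, int] = {}       # frame_id -> frame size

          @property
          def remaining(self) -> int:
              return self.capacity - sum(self.pending.values())

          def put(self, frame_id: int, size: int) -> bool:
              if self.remaining < size:
                  return False                        # caller drops or retries
              self.pending[frame_id] = size           # remaining becomes L - I1
              return True

          def on_reception_success(self, frame_id: int) -> None:
              # The UE reported successful recovery over the feedback path, so
              # the frame is cleared and its space reclaimed.
              self.pending.pop(frame_id, None)

      buf = SendBuffer(capacity=100_000)
      buf.put(frame_id=7, size=30_000)
      print(buf.remaining)            # 70000 (= L - I1)
      buf.on_reception_success(7)
      print(buf.remaining)            # 100000 (= L)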
  • the method further includes:
  • the user equipment sends a first control signal to the detector;
  • the detector receives the first control signal from the user equipment and, according to the first control signal, controls the video collection unit to move to an acquisition position, so that the video collection unit collects video data at that acquisition position.
  • Specifically, as shown in FIG. 3, FIG. 4 and FIG. 6, the video collection unit can be rotated under the control of the servo unit, for example through a rotation angle of 360 degrees or 180 degrees; the user equipment sends the first control signal through its sending unit,
  • and the servo unit can then control the rotation angle of the video collection unit and move it to the corresponding acquisition position to collect video data.
  • the servo unit can be controlled by a servo unit control circuit.
  • In the above implementation, because the video collection unit can be moved to the acquisition position required by the user equipment for collection, observation over 360 degrees with no blind angles can be realized, which greatly increases the observation range compared with a conventional endoscope;
  • and compared with an endoscope carrying two cameras, the amount of transmitted data is halved. A minimal sketch of the control-signal exchange is given below.
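  • The following illustrative Python sketch shows one possible shape of the first-control-signal exchange between the user equipment and the detector's servo control; the JSON message layout, the port number, and the rotate_servo callback are assumptions for illustration and are not specified in the original text.

      import json
      import socket

      CONTROL_PORT = 40100          # illustrative port for the control channel

      def send_first_control_signal(sock: socket.socket, detector_addr,
                                    angle_deg: float) -> None:
          # User-equipment side: ask the detector to move the video collection
          # unit to a given acquisition angle (message layout is hypothetical).
          msg = {"type": "first_control_signal", "target_angle_deg": angle_deg}
          sock.sendto(json.dumps(msg).encode(), detector_addr)

      def handle_control_signal(raw: bytes, rotate_servo) -> None:
          # Detector side: decode the control signal and drive the servo unit;
          # rotate_servo stands in for the servo-control circuit.
          msg = json.loads(raw.decode())
          if msg.get("type") == "first_control_signal":
              angle = max(0.0, min(360.0, float(msg["target_angle_deg"])))
              rotate_servo(angle)   # video collection unit moves to the position

      # usage
      handle_control_signal(
          json.dumps({"type": "first_control_signal", "target_angle_deg": 90}).encode(),
          rotate_servo=lambda a: print(f"rotating collection unit to {a} degrees"),
      )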
  • The following takes two buffers as an example. As shown in FIG. 6, assume the size of the video frame to be sent is BufferZ, the remaining capacity of buffer A is BufferA, and the remaining capacity of buffer B is BufferB; there are four cases.
  • Case 1: BufferZ <= BufferA and BufferZ <= BufferB. If the video frame is an I frame, the two-way repeated transmission strategy is adopted: the processor in the storage unit determines that the remaining capacities of buffer A and buffer B are both larger than the size of the video frame to be sent, and then stores the I frame in both buffer A and buffer B, that is, each buffer stores the same I frame, which is sent out through each corresponding sending unit, i.e., to the user equipment through sending unit A and sending unit B respectively.
  • The user equipment receives the frame through receiving unit A and receiving unit B respectively; the processor in its storage unit determines whether the remaining capacity of the buffer corresponding to each receiving unit is greater than the size of the video frame and, if both are, stores the copies in buffer A and buffer B corresponding to receiving unit A and receiving unit B respectively; the copies then undergo recovery processing and decoding by the processing unit and the decoding unit, and the video data is displayed on the display.
  • At the same time, reception success information can be sent to the detector through the sending unit of the user equipment.
  • If the video frame is a P frame, single-path storage and sending is used according to the buffer identifiers, that is, the P frames are stored in the two buffers in turn and sent through their respective sending units. When the processor in the storage unit determines that the remaining capacities of buffer A and buffer B are both larger than the size of the video frame to be sent, the P frames can be sent by the two sending units in turn: for example, the first P frame is stored in buffer A and sent through sending unit A, the second P frame is stored in buffer B and sent through sending unit B, and so on.
  • At the same time, on the user equipment side, receiving unit A and receiving unit B receive in turn; the processor in the storage unit of the user equipment determines whether the remaining capacity of the buffer corresponding to the receiving unit is larger than the size of the video frame and, if it is, stores the frame in buffer A or buffer B corresponding to receiving unit A or receiving unit B respectively; the frame then undergoes recovery processing and decoding by the decoding unit and the processing unit, and the video data is displayed on the display.
  • Case 2: BufferZ <= BufferA and BufferZ > BufferB. The processor in the storage unit determines that the remaining capacity of buffer A is larger than the size of the video frame to be sent while the remaining capacity of buffer B is smaller. For an I frame, the frame is stored in buffer A and sent through sending unit A corresponding to buffer A; before the next I frame finishes encoding, buffer B is kept under monitoring, and if its remaining capacity becomes larger than the size of the I frame, the I frame is also stored in buffer B and sent through sending unit B corresponding to buffer B. If the remaining capacity of buffer B is still smaller than the size of the I frame once the next I frame has been encoded, the repeated transmission of that I frame is abandoned, and it is stored only in buffer A and sent through the corresponding sending unit A.
  • For a P frame, it is stored directly in buffer A and sent through sending unit A corresponding to buffer A, without alternating transmission.
  • The user equipment side is handled similarly to the above and is not described again here. Case 3 (BufferZ > BufferA and BufferZ <= BufferB) is similar to Case 2 and is likewise not described again here.
  • Case 4: BufferZ > BufferA and BufferZ > BufferB. The processor in the storage unit determines that the remaining capacities of buffer A and buffer B are both smaller than the size of the video frame to be sent. A P frame is then actively discarded, that is, not transmitted.
  • For an I frame, an error counter is started; if a preset number of consecutive I frames (for example, three) fail to be sent, the detector outputs abnormality prompt information, such as a channel-blocking abnormality.
  • The abnormality prompt information may be output by the detector itself, or the detector may send the abnormality prompt information to the user equipment, which prompts the user. A minimal sketch of the four-case dispatch is given below.
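  • The following illustrative Python sketch condenses the four cases above into a decision table; it deliberately omits the buffer-B monitoring and the error counter for brevity, and the names and the turn parameter for the alternating P-frame path are assumptions for illustration.

      from enum import Enum

      class Action(Enum):
          SEND_BOTH = "store in A and B, send on both paths"
          SEND_A = "store in A, send through sending unit A"
          SEND_B = "store in B, send through sending unit B"
          DROP = "drop the frame"

      def dispatch(frame_type: str, frame_size: int,
                   remaining_a: int, remaining_b: int, turn: str = "A") -> Action:
          # Decision table for the four cases; `turn` names the buffer whose
          # turn it is on the alternating P-frame path.
          fits_a = frame_size <= remaining_a
          fits_b = frame_size <= remaining_b
          if fits_a and fits_b:                       # Case 1
              if frame_type == "I":
                  return Action.SEND_BOTH
              return Action.SEND_A if turn == "A" else Action.SEND_B
          if fits_a:                                  # Case 2: B has no room
              return Action.SEND_A
          if fits_b:                                  # Case 3: A has no room
              return Action.SEND_B
          return Action.DROP                          # Case 4: neither has room

      print(dispatch("I", 8_000, remaining_a=10_000, remaining_b=12_000))  # Case 1
      print(dispatch("P", 8_000, remaining_a=10_000, remaining_b=4_000))   # Case 2
      print(dispatch("I", 8_000, remaining_a=4_000,  remaining_b=4_000))   # Case 4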
  • the embodiment of the present application further provides a storage medium, where the storage medium stores computer instructions, which can be called by the foregoing unit to implement the video frame transmission method provided by the present application.
  • the embodiment of the present application provides a computer program product, including: a computer program, which is used to implement the video frame transmission method provided by the present application.

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Closed-Circuit Television Systems (AREA)

Abstract

The present invention provides a video frame transmission method, a detector, and a user equipment. The method includes: encoding collected video data to obtain a video frame to be sent; storing the video frame in at least one buffer of at least two buffers according to the type of the video frame; and sending the video frame to a user equipment through the sending unit corresponding to each of the at least one buffer. In the embodiments of the present invention, because the video frame to be sent is stored in at least one buffer of at least two buffers and sent to the user equipment through the sending unit corresponding to each of the at least one buffer, the corresponding receiving end can receive the video frame through one of at least two receiving units; the decoded image quality is high and the bit error rate is small.

Description

视频帧传输方法、探测器及用户设备
本申请要求于2018年1月24日提交中国专利局、申请号为201810067641.7、申请名称为“视频帧传输方法、探测器及用户设备”的中国专利申请的优先权,其全部内容通过引用结合在本申请中。
技术领域
本申请涉及工业内窥镜技术领域,尤其涉及一种视频帧传输方法、探测器及用户设备。
背景技术
随着工业的发展和科技的进步,内窥镜被应用到诸如工业生产,工业探伤,医疗,甚至日常生活的方方面面。
工业内窥镜,也可称为探测器,其功能用于观察肉眼不能观察到的部位,可在不需拆卸或破坏组装及设备停止运行的情况下,实现无损检测。其基本原理是通过镜头拍摄预检查对象的内部环境,并将拍摄的视频流发送给用户设备,用户设备用于显示该视频流。
然而,基于无线通信方式与用户设备进行传输的无线工业内窥镜,由于受无线网络带宽窄,易被干扰,网速波动性大的影响,普遍存在误码率大,传输的图像质量不高的问题。
发明内容
本发明实施例提供一种视频帧传输方法、探测器及用户设备,可以提升视频帧的传输质量,降低视频帧的误码率。
第一方面,本发明实施例提供一种视频帧传输方法,包括:
将采集的视频数据进行编码,得到待发送的视频帧;
根据所述视频帧的类型,将所述视频帧存储在至少两个缓冲区中的至少一个缓冲区中;
通过所述至少一个缓冲区各自对应的发送单元,将所述视频帧发送至用户设备。
第二方面,本发明实施例提供一种视频帧传输方法,包括:
通过至少两个接收单元中的至少一个接收单元接收探测器发送的至少一个视频帧;
其中,所述视频帧是所述探测器通过至少两个缓冲区中的至少一个缓冲区各自对应的发送单元发送的。
第三方面,本发明实施例提供一种探测器,包括:
视频采集单元,用于采集视频数据;
编码单元,用于将所述视频采集单元采集的视频数据进行编码,得到待发送的视频帧;
存储单元,用于根据所述视频帧的类型,将所述视频帧存储在至少两个缓冲区中的至少一个缓冲区中;
至少一个发送单元,所述至少一个发送单元中的每个发送单元与所述至少一个缓冲区中的一个缓冲区对应,其中,所述至少一个发送单元用于将与所述至少一个发送单元各自对应的所述至少一个缓冲区中存储的所述视频帧发送至用户设备。
第四方面,本发明实施例提供一种用户设备,包括:
至少两个接收单元,所述至少两个接收单元中的至少一个接收单元用于接收探测器发送的至少一个视频帧;
其中,所述视频帧是所述探测器通过至少两个缓冲区中的至少一个缓冲区各自对应的发送单元发送的。
本发明实施例中,探测器将采集的视频数据进行编码,得到待发送的视频帧;根据视频帧的类型,将视频帧存储在至少两个缓冲区中的至少一个缓冲区中;通过至少一个缓冲区各自对应的发送单元,将视频帧发送至用户设备,由于发送端通过至少两个缓冲区中的至少一个缓冲区存储视频帧,并通过至少一个缓冲区各自对应的发送单元发送视频帧至用户设备。相应的,接收端的用户设备可以通过至少两个接收单元中的至少一个接收单元接收视频帧。通过上述方式,使得传输的视频帧质量得到提高,降低了误码率。
附图说明
图1是本发明实施例涉及的系统架构图一;
图2是本发明实施例涉及的系统架构图二;
图3是本发明实施例提供的探测器一实施例的结构示意图;
图4是本发明实施例提供的用户设备一实施例的结构示意图;
图5是本发明实施例提供的视频帧传输方法一实施例的流程图;
图6是本发明实施例提供的视频帧传输方法另一实施例的系统架构图。
具体实施方式
这里将详细地对示例性实施例进行说明,其示例表示在附图中。下面的描述涉及附图时,除非另有表示,不同附图中的相同数字表示相同或相似的要素。以下示例性实施例中所描述的实施方式并不代表与本公开相一致的所有实施 方式。相反,它们仅是与如所附权利要求书中所详述的、本公开的一些方面相一致的方法和装置的例子。
本发明的说明书和权利要求书及所述附图中的术语“包括”和“具有”以及它们任何变形,意图在于覆盖不排他的包含。例如包含了一系列步骤或单元的过程、方法、系统、产品或设备没有限定于已列出的步骤或单元,而是可选地还包括没有列出的步骤或单元,或可选地还包括对于这些过程、方法、产品或设备固有的其它步骤或单元。
首先对本发明所涉及的名词进行解释:
探测器:也可以称为工业内窥镜或工业探测器,其包括视频采集单元以及通信单元。其中,视频采集单元可以包括内窥镜以及视频采集传感器等。内窥镜由可弯曲部分、光源及一组镜头组成,使用时将内窥镜软管导入预检查的对象,可直接窥视有关部位的变化。视频采集传感器可以通过内窥镜采集预检查对象的内部环境的视频信号。视频采集单元还可以包括对视频信号的处理单元等,在此不予限定。其中,视频信号包括至少1个视频帧,视频帧可以理解为视频信号中每帧图像包括的图像数据。
视频采集单元可以将视频信号中的视频帧发送至通信单元。其中,通信单元可以包括接收单元和/或发送单元。通信单元可以通过发送单元将视频帧发送给用户设备,用户设备用于显示该视频流。
探测器和用户设备之间可以通过无线方式连接,例如通过无线保真Wi-Fi技术实现无线连接。该探测器也可以理解为是无线探测器。若该预检查对象为汽车内的部件,则该探测器也可称为汽车探测器。
用户数据报协议(User Datagram Protocol,简称UDP):是开放式系统互联(Open System Interconnection,简称OSI)参考模型中一种无连接的传输层协议,提供面向事务的简单不可靠信息传送服务。
H.264:是由国际电信联盟远程通信标准化组织(ITU-T)视频编解码专家组(VCEG)和ISO/IEC动态图像专家组(MPEG)联合组成的联合视频组(Joint Video Team,简称JVT)提出的高度压缩数字视频编解码器标准。这个标准通常被称之为H.264/AVC(或者AVC/H.264或者H.264/MPEG-4AVC或MPEG-4/H.264AVC)
基于无线实时视频传输的无线工业内窥镜由于受无线网络带宽窄,易被干扰,网速波动性大影响,普遍存在误码率大,图像传输质量不高,传输延时高,卡顿的问题。同时,有些工业内窥镜为了增加观测范围而增加两路摄像头,这更加增加了网络传输的负担。
基于上述技术问题,下面介绍本申请实施例提供的方法及装置。
如图1所示,图1为本发明实施例涉及的一种传输系统。该传输系统包括探测器100及用户设备110。其中,探测器100可以参见上述具体描述,用户 设备110可以包括安装有诊断程序的用户终端或者与探测器进行通信的专用诊断设备等,在此不予限定。探测器100用于采集视频数据,并将视频数据编码后的视频帧发送给用户设备110,用户设备110接收探测器100发送的视频帧,进一步可以显示出来,供用户观察预检查对象的内部环境,从而实现故障诊断。在该传输系统中,探测器100中的通信单元的性能可以仅满足实现与用户设备110进行通信连接。例如,探测器100中的通信单元可以是内置Wi-Fi模块,该内置Wi-Fi模块可以仅实现与用户设备在一定距离范围内通过Wi-Fi技术进行通信。或者,该内置Wi-Fi模块可以为通信性能较上述模块增强的模块,在此不予限定。
进一步,如图2所示,本申请实施例涉及的另一种传输系统可以包括:探测器200,路由器210以及用户设备220。本发明实施例中的探测器200和用户设备220可以连接路由器210,通过路由器210传输视频帧,具体可以是通过无线方式连接路由器。探测器200用于采集视频数据,并将视频数据编码后的视频帧通过路由器210发送给用户设备220,用户设备220通过路由器210接收探测器200发送的视频帧,进一步可以显示出来,供用户观察预检查对象的内部环境,从而实现故障诊断。
路由器还可以连接Internet,例如云端,将数据通过连接到Internet实现数据分享和远程协助,相比于用户设备需要连接到探测器的内置WiFi模块,而内置WiFi模块往往不能联网的特点,此设计具有明显优势。在此,路由器可以作为外置WiFi热点,分别与探测器和用户设备建立连接。而且外置WiFi模块可以连接的设备数量较多。通过通信单元中的外置Wi-Fi模块,可以增强视频帧传输的可靠性,进而进一步降低视频帧传输的误码率。
基于上述系统,下面介绍本申请实施例提供的探测器及用户设备的结构示意图。
如图3所示,探测器300,包括:视频采集单元301、编码单元302、存储单元303和发送单元304。
其中,视频采集单元301可以通过内窥镜和/或视频采集传感器等硬件实现。
视频采集单元301可以设置在软管的端部,从而可以实现视频采集单元301可以采集车辆内部,即不可视区域的视频数据。视频采集单元301可以通过软管内部的电信号传输线与其他单元连接,进而将视频数据通过电信号传输线传输至其他单元。
编码单元302可以通过包含编码程序的处理器芯片实现,处理器芯片可以通过一个或多个应用专用集成电路(Application Specific Integrated Circuit,ASIC)、数字信号处理器(Digital Signal Processor,DSP)、数字信号处理设备(Digital Signal Processing Device,DSPD)、可编程逻辑器 件(Programmable Logic Device,PLD)、现场可编程门阵列(Field-Programmable Gate Array,FPGA)、微处理器、中央处理单元(Central Processing Unit,CPU)或其他电子元件等实现。
存储单元303可以由任何类型的易失性或非易失性存储设备或者它们的组合实现,如静态随机存取存储器(SRAM),电可擦除可编程只读存储器(EEPROM),可擦除可编程只读存储器(EPROM),可编程只读存储器(PROM),只读存储器(ROM),磁存储器,快闪存储器,磁盘或光盘。本申请实施例中,存储单元303可以包括缓冲区,该缓冲区可以通过内存实现。进一步地,存储单元303可以包括至少2个缓冲区。存储单元303还可以包括控制器,以确定视频帧的缓存策略,例如,存储单元303中的控制器用于判断视频帧的类型,并根据视频帧的类型,确定将该视频帧缓存至一个或多个缓冲区中。
发送单元304可以通过无线通信接口或有线通信接口实现。若发送单元304通过无线通信接口实现,该无线通信接口可以支持WiFi通信技术,或与用户设备协议的其他通信技术,在此不予限定。进一步地,若无线通信接口支持WiFi通信技术,其可被设置为私有接口或开放接口。若无线通信接口为私有接口,其仅能够与用户设备通过WiFi进行通信;若无线通信接口为开放接口,其可以其他设备通过WiFi进行通信,例如,与上述系统中描述的路由器进行通信。
相应地,发送单元304的数量可以与存储单元中缓冲区的数量相同。即发送单元304与缓冲区一一对应。
其中,视频采集单元301用于采集视频数据,编码单元302用于将采集的视频数据进行编码得到视频帧,存储单元303用于根据视频帧的类型,将视频帧存储在至少两个缓冲区中的至少一个缓冲区中,发送单元304用于将对应缓冲区中的视频帧发送至用户设备。
可选的,本申请实施例中的探测器还可以包括接收单元305;接收单元305用于接收用户设备侧发送的信息,例如对探测器的控制信息或针对发送的视频帧的接收响应信息等。本申请实施例中,接收单元305可以通过无线通信接口或有线通信接口实现,与上述发送单元304类似,此处不再赘述。
可选地,本申请实施例中的探测器还可以包括伺服单元306;伺服单元可以通过伺服电机、传感器等实现;伺服单元用于控制视频采集单元进行移动,从而对预检查对象进行采集视频数据。例如,其可控制内窥镜一定角度,如360度范围内旋转等。
一种实现方式中,探测器可以通过接收单元305接收到用户设备发送的对伺服单元306的控制信息,进而根据该控制信息对伺服单元306进行控制。
可选的,还可以包括伺服控制单元,用于根据接收到的控制信息控制伺服单元移动,例如控制伺服单元的旋转角度,移动位置等。
如图4所示,用户设备,包括:
接收单元401,用于接收探测器发送的视频帧。其中,接收单元401可以通过无线通信接口或有线通信接口实现。其实现方式与探测器中的发送单元304的实现方式相对应,即接收单元401与发送单元304支持有相同的通信协议。并且接收单元401的数量与发送单元304的数量相同,即每个接收单元401对应一个发送单元304。相对应的接收单元401与发送单元304根据通信协议构成通信链路,即可实现发送单元304将视频帧发送至对应的接收单元401中。
可选的,该用户设备还可以包括存储单元402、处理单元403和发送单元404中的至少1个。
其中,处理单元402可以是一个或多个应用专用集成电路(Application Specific Integrated Circuit,ASIC)、数字信号处理器(Digital Signal Processor,DSP)、数字信号处理设备(Digital Signal Processing Device,DSPD)、可编程逻辑器件(Programmable Logic Device,PLD)、现场可编程门阵列(Field-Programmable Gate Array,FPGA)、微处理器、中央处理单元(Central Processing Unit,CPU)或其他电子元件等。
存储单元402的实现方式与探测器中的类似,此处不再赘述。
具体地,存储单元402用于对接收到的视频帧进行存储,处理单元403用于对接收到的视频帧进行恢复处理,发送单元404用于向探测器发送针对接收到的视频帧的接收成功信息,或控制视频采集单元移动采集位置的控制信号。基于上述结构,下面介绍本申请方法实施例。通过上述实施例中描述的装置中包括的各单元所实现的方法,可以参见下述实施例。
图5是本发明实施例提供的视频帧传输方法一实施例的流程图。如图5所示,本实施例提供的视频帧传输方法,包括:
步骤501、探测器将采集的视频数据进行编码,得到待发送的视频帧。
本步骤中,可以通过上述编码单元将探测器的视频采集单元采集的视频数据进行编码,具体可以采用如下方式实现:
将通过视频采集单元采集的视频数据采用高级视频编码H.264方式进行编码,得到视频帧;视频帧包括两种类型,即I帧和P帧。
具体的,H.264编码支持I帧和P帧,I帧采用帧内编码,数据量大,通常是P帧的几十倍,编码率低,但无需参考帧,I帧自身携带一帧数据所需的全部图像信息,所以能独立恢复出图像信息且能有效阻止包丢失而引起的错误扩散。P帧则采用帧间编码模式,利用帧间预测和运动补偿来消除冗余信息,举一个例子,在一个静止或者非高速运动的场景里面,上一秒和下一秒传输的图像画面可能90%的数据是相同的,所以完全没必要在每一帧图像中把所有数据都传输一遍,P帧正是基于这样一种思想,只传输当前帧和参考帧(I帧) 的差异,因此编码率高,但需要参考帧(I帧)且容易导致错误扩散。
步骤502、探测器根据视频帧的类型,将视频帧存储在至少两个缓冲区中的至少一个缓冲区中。
本步骤中,在存储单元包括至少两个缓冲区,缓冲区用于缓存待发送的视频帧,至少两个缓冲区各自对应一个发送单元。对采集的视频数据编码后,根据视频帧的类型,通过存储单元中的至少两个缓冲区中的至少一个缓冲区,存储待发送的视频帧。
可选的,步骤502的一种实施方式如下:
若视频帧的类型为I帧,确定至少两个缓冲区各自的剩余容量是否大于或等于视频帧的大小;
若至少两个缓冲区中至少一个缓冲区各自的剩余空间的容量大于或等于视频帧的大小,将视频帧存储在至少一个缓冲区中。
可选的,若所述至少两个缓冲区各自的剩余容量均小于所述视频帧的大小,累计I帧的发送失败次数;
若所述发送失败次数大于预设阈值,输出异常提示信息。
具体的,基于上述的编码策略,I帧在实时视频数据传输中扮演了非常重要的角色,I帧的丢失可能导致后续P帧数据完全不能恢复,因此I帧可以采取重复发送的策略,即同一个I帧可以存储在至少2个缓冲区中,本申请实施例以2个缓冲区为例,再将每一个缓冲区中的I帧由其对应的发送单元进行发送,从而可以增强I帧的传输可靠性。
具体需要根据当前的缓冲区的剩余容量,确定存储以及发送策略。剩余容量指的是该缓冲区的剩余空间的容量。
若存在至少一个缓冲区的剩余容量大于或等于I帧的大小,则将该I帧分别存储至所述至少一个缓冲区中的每个缓冲区中,并通过至少一个缓冲区各自对应的发送单元发送出去,即至少一个发送单元发送相同的I帧。
若至少两个缓冲区各自的剩余容量均小于I帧的大小,则将I帧丢弃,即不传输该I帧,并累计I帧的发送失败次数,以便进行异常提醒,即可以输出异常提示信息。
在一些实施方式中,可以通过探测器输出异常提示信息,或将异常提示信息发送给用户设备,在用户设备侧进行异常提醒。
例如,可以通过语音进行提醒,或通过文字、图像、视频等进行提醒,或者采用上述方式的结合。
上述具体实施方式中,将视频帧存储至至少两个缓冲区中至少一个缓冲区,并通过至少一个缓冲区各自对应的发送单元发送,对于由环境干扰导致的丢包起到了很好的规避作用,误码率较小。
可选的,步骤502的另一种实施方式如下:
若视频帧的类型为P帧,确定至少两个缓冲区各自的剩余容量是否大于或等于视频帧的大小;
若至少两个缓冲区中至少一个缓冲区各自的剩余容量大于或等于视频帧的大小,则将视频帧存储在至少一个缓冲区中的其中一个缓冲区中。
可选的,将视频帧存储在至少一个缓冲区中的其中一个缓冲区中,具体可以采用如下方式实现:
根据至少一个缓冲区各自的标识以及预设的选取策略,选取至少一个缓冲区中的其中一个缓冲区;
将视频帧存储至其中一个缓冲区中。
具体的,基于上述实施例的编码策略,P帧采用帧间编码模式,传输的是参考帧和当前帧的差异信息,P帧可以仅采用一个缓冲区对应的发送单元进行发送。具体需要根据当前的缓冲区的剩余容量,确定存储以及发送策略。剩余容量指的是该缓冲区的剩余空间的容量。
因此在本实施例中若视频帧的类型为P帧,确定至少两个缓冲区各自的剩余容量是否大于或等于P帧的大小;
若存在至少一个缓冲区各自的剩余容量大于或等于P帧的大小,则将视频帧存储在至少一个缓冲区中的其中一个缓冲区中,其中一个缓冲区为至少一个缓冲区的任意一个缓冲区。
具体的,可以根据至少一个缓冲区各自的标识以及预设的选取策略,选取至少一个缓冲区中的其中一个缓冲区。
在一些实施方式中,选取策略可以是轮流交替选取,例如第一个P帧选取缓冲区1进行存储,第二个P帧选取缓冲区2进行存储,第三个P帧选取缓冲区1进行存储,第四个P帧选取缓冲区2进行存储,以此类推。
在一些实施方式中,选取策略可以是随机任意选取一个,或者选取剩余容量较大的缓冲区。
可选的,若至少一个缓冲区各自的剩余容量均小于视频帧的大小,丢弃视频帧。
具体的,如果至少一个缓冲区各自的剩余容量均小于P帧的大小,即任一个缓冲区的剩余容量都无法满足P帧的存储需求,则丢弃该P帧,即不传输该P帧。
步骤503、探测器通过至少一个缓冲区各自对应的发送单元,将视频帧发送至用户设备。
本步骤中,对存储在至少一个缓冲区中的视频帧,通过至少一个缓冲区各自对应的发送单元发送至用户设备。
步骤504、用户设备通过至少两个接收单元中的至少一个接收单元接收探测器发送的至少一个视频帧。
其中,视频帧是探测器通过至少两个缓冲区中的至少一个缓冲区各自对应的发送单元发送的。
与探测器对应的用户设备侧通过至少两个接收单元中的至少一个接收单元接收,探测器发送的至少一个视频帧。
接收单元与探测器中的发送单元对应。
由于上述方案中,发送端通过至少两个缓冲区中的至少一个缓冲区存储视频帧,并通过至少一个缓冲区各自对应的发送单元发送视频帧至用户设备,相应的,接收端的用户设备可以根据至少两个接收单元中的至少一个接收单元接收发送端发送的视频帧,并进行解码,解码后的图像质量较高,误码率较小。而且通过至少一个缓冲区各自对应的发送单元发送,并通过至少两个接收单元中的至少一个接收单元接收,增大了网络带宽。
以下以存储单元中包括两个缓冲区为例进行说明:
将采集的原始视频数据进行编码,编码后得到待发送的视频帧,若视频帧为I帧,则存储至缓冲区1和缓冲区2中,分别由缓冲区1和缓冲区2各自对应的发送单元将该I帧发送出去,使得误码率较小,解码后的图像质量较高。
本实施例的视频帧传输方法,将采集的视频数据进行编码,得到待发送的视频帧;根据视频帧的类型,将视频帧存储在至少两个缓冲区中的至少一个缓冲区中;通过至少一个缓冲区各自对应的发送单元,将视频帧发送至用户设备,由于发送端通过至少两个缓冲区中的至少一个缓冲区存储视频帧,并通过至少一个缓冲区各自对应的发送单元发送视频帧至用户设备,相应的,接收端的用户设备可以通过至少两个接收单元中的至少一个接收单元接收视频帧,解码后的图像质量较高,误码率较小。
在上述实施例的基础上,可选的,用户设备侧的方法还包括:
用户设备将至少一个视频帧分别存储在至少一个接收单元各自对应的缓冲区中。
可选的,在将视频帧存储至缓冲区中,还可以进行如下操作:
根据至少一个视频帧的类型,对至少一个视频帧进行恢复处理。
可选的,对至少一个视频帧进行恢复处理,具体可以采用如下方式实现:
若至少一个视频帧的类型为I帧,对至少一个视频帧各自包括的数据包进行合并处理。
具体的,用户设备侧与探测器侧对应,用户设备侧同样也可以多个缓冲区存储视频帧,例如如图6所示,每个接收单元对应一个缓冲区,接收单元将接收到的至少一个视频帧分别存储在各自对应的缓冲区中。
存储之后,还可以根据至少一个视频帧的类型,对至少一个视频帧进行恢复处理,例如将接收到的多个I帧进行合并处理,以及将接收到的P帧进行差 错处理。恢复处理可以通过图6中所示的处理单元实现。
进一步,可以对恢复处理后的视频帧进行解码,然后对解码后的视频数据进行显示。解码可以通过图6中所示的解码单元实现,显示可以通过图6中所示的显示器实现。
由于I帧较大,因此在发送时一般将I帧进行分块,即发送分块后的数据包,数据包中携带数据包标识信息,为了便于接收端进行恢复处理。
在一些实施方式中,用户设备可以根据数据包标识信息,将接收到的I帧的数据包进行合并,并去掉重复的数据包,得到数据包集合;
若数据包集合中的数据包的序号不连续,则利用缺失的序号的预设范围的序号对应的数据包代替缺失的序号对应的数据包,恢复出I帧;数据包标识信息包括数据包的序号。
具体的,接收端首先对接收到的数据进行排序合并,例如如图6所示,对于多路发送的I帧,用户设备的处理单元根据缓冲区A和缓冲区B中接收到的I帧的数据包,恢复出I帧。即对缓冲区A和缓冲区B中接收到的I帧的数据包进行排序合并,并去掉重复的数据包,恢复出I帧。用户设备中可以设置一个计时器,在规定时间内没有到达的数据包,则认为丢失,对于丢失的包,采用一定的算法进行差错处理。
对于I帧的差错处理方式如下,由于I帧包含完整的图像信息,所以可采用帧内补偿的方法,即用临近像素替代丢失的像素,即根据接收到的数据包的数据包标识信息,得到数据包的序号,若接收到的数据包的序号不连续,则利用缺失的序号的预设范围的序号对应的数据包代替所述缺失的序号对应的数据包,恢复出完整的I帧,即采用缺失的序号的周围的序号对应的数据包代替所述缺失的序号对应的数据包。
在一些实施方式中,若存在丢失的P帧,则利用丢失的P帧的前N帧代替丢失的P帧;N为预设值。
具体的,用户设备中可以设置一个计时器,在规定时间内没有到达的数据包,则认为丢失,对于丢失的包,采用一定的算法进行差错处理。
对于P帧的差错处理方式如下,若存在丢失的P帧,则采用前帧替代的方式,即利用丢失的P帧的前N帧代替丢失的P帧;所述N为预设值。N为大于0的整数。
在上述实施例的基础上,可选的,所述方法还包括:
若恢复处理成功,用户设备向探测器发送针对视频帧的接收成功信息。
当接收到来自用户设备的针对视频帧的接收成功信息时,探测器从至少一个缓冲区中清除视频帧。
具体的,为了提高效率和降低资源使用率,本发明实施例中数据发送采用UDP而非传输控制协议(Transmission Control Protocol,简称TCP),众所 周知UDP是不可靠的数据链路,所以接收端的用户设备可以定时将数据接收情况反馈给发送端的探测器,即在视频帧恢复处理成功后向探测器发送针对视频帧的接收成功信息,探测器根据接收成功信息调节存储单元中缓冲区的剩余容量,即在接收到来自用户设备的针对视频帧的接收成功信息时,从至少一个缓冲区中清除该视频帧。通过上述反馈机制,能够弥补UDP协议的不可靠性。探测器还可以根据缓冲区剩余容量的大小制定发送策略。
例如当前缓冲区中的剩余容量为L,放入编码后的I帧(大小为I1),则剩余容量为L-I1,若探测器接收到用户设备发送的针对该I帧的接收成功信息,该信息指示接收端接收到I帧并恢复处理成功,则此时将I帧从该缓冲区中清除,此时缓冲区的剩余容量为L。若没有接收到用户设备发送的针对该I帧的接收成功信息,则此时缓冲区的剩余容量依旧为L-I1。
可选的,所述方法,还包括:
用户设备向探测器发送第一控制信息;
探测器接收用户设备的第一控制信号,根据第一控制信号控制视频采集单元移动至采集位置,以控制视频采集单元在采集位置采集视频数据。
具体的,如图3、图4、图6所示,视频采集单元可以通过伺服单元的控制进行旋转,例如旋转角度为360度或180度等,用户设备通过发送单元发送第一控制信号,从而通过伺服单元可以控制视频采集单元的旋转角度,移动至对应的采集位置采集视频数据。
在一些实施方式中,伺服单元可以通过伺服单元控制电路控制。
上述具体实施方式中,由于视频采集单元可以移动至用户设备所需的采集位置进行采集,可以实现360无死角观测,相比传统内窥镜大大增加了观测范围,相比携带两路摄像头的内窥镜传输数据量小了一倍。
以下以两个缓冲区为例进行说明:
如图6所示,假设当前待发送的视频帧大小为BufferZ,缓冲区A的剩余容量大小为BufferA,缓冲区B的剩余容量大小为BufferB,存在下面四种情况:
Case1:BufferZ<=BufferA且BufferZ<=BufferB;
若视频帧为I帧,则采取两路重复发送策略,即通过存储单元中的处理器判断出缓冲区A和缓冲区B的剩余容量均大于待发送的视频帧大小,则将I帧存储至缓冲区A和缓冲区B中,即每个缓冲区都存储相同的I帧,并通过各个对应的发送单元发送出去,即分别通过发送单元A和发送单元B发送至用户设备,用户设备分别通过接收单元A和接收单元B接收,并通过存储单元中的处理器判断接收单元对应的缓冲区的剩余容量是否大于视频帧的大小,若均大于,则存储至与接收单元A和接收单元B分别对应的缓冲区A和缓冲区B中, 然后经过解码单元以及处理单元进行恢复处理和解码,并通过显示器进行显示视频数据。同时可以通过用户设备的发送单元向探测器发送接收成功信息。
若视频帧为P帧,则根据缓冲区的标识进行单路存储并发送,即轮流通过两个缓冲区存储并通过各自对应的发送单元发送。即在存储单元中的处理器判断出缓冲区A和缓冲区B的剩余容量均大于待发送的视频帧大小,P帧可以通过两个发送单元轮流发送,例如第一个P帧存储至缓冲区A,并通过发送单元A发送,第二个P帧存储至缓冲区B,并通过发送单元B,依次类推;同时,在用户设备侧,分别通过接收单元A和接收单元B轮流接收,并通过存储单元中的处理器判断接收单元对应的缓冲区的剩余容量是否大于视频帧的大小,若大于,则存储至与接收单元A和接收单元B分别对应的缓冲区A和缓冲区B中,然后经过解码单元以及处理单元进行恢复处理和解码,并通过显示器进行显示视频数据。
Case2:BufferZ<=BufferA且BufferZ>BufferB;
通过存储单元中的处理器判断出缓冲区A的剩余容量大于待发送的视频帧大小,缓冲区B的剩余容量小于待发送的视频帧大小,则对于I帧来说,存储至缓冲区A并通过缓冲区A对应的发送单元A发送,在下一个I帧编码完成前,保持对缓冲区B的监测,若剩余容量大于I帧的大小,则将该I帧存储至缓冲区B,并通过缓冲区B对应的发送单元B发送,若下一个I帧编码完成后缓冲区B的剩余容量仍小于I帧的大小,则放弃该I帧的重复发送,仅通过缓冲区A进行存储并通过缓冲区A对应的发送单元A发送。
对于P帧来说,直接存储至缓冲区A并通过缓冲区A对应的发送单元A发送,不进行轮流发送。用户设备侧与上述类似,此处不再赘述。
Case3:BufferZ>BufferA且BufferZ<=BufferB,
与Case2类似,此处不再赘述。
Case4:BufferZ>BufferA且BufferZ>BufferB,
通过存储单元中的处理器判断出缓冲区A的剩余容量均小于待发送的视频帧大小,则对于P帧来说,主动丢弃,即不传输该P帧。
对于I帧来说,启动错误计数器,若连续预设个(例如3个)I帧发送失败,则探测器输出异常提示信息,例如信道阻塞异常。其中,可以通过探测器输出异常提示信息,或者,探测器将该输出异常提示信息发送给用户设备,通过用户设备提示用户。
本申请实施例还提供一种存储介质,所述存储介质存储有计算机指令,该计算机指令可以被上述单元调用以实现本申请提供的视频帧传输方法。
本申请实施例提供一种计算机程序产品,包括:计算机程序,所述计算机程序用于实现本申请提供的视频帧传输方法。
应当理解的是,本公开并不局限于上面已经描述并在附图中示出的精确结构,并且可以在不脱离其范围进行各种修改和改变。本公开的范围仅由所附的权利要求书来限制。

Claims (26)

  1. 一种视频帧传输方法,其特征在于,应用于探测器,所述探测器用于采集车辆非可视区域内的视频数据,包括:
    将采集的视频数据进行编码,得到待发送的视频帧;
    根据所述视频帧的类型,将所述视频帧存储在至少两个缓冲区中的至少一个缓冲区中;
    通过所述至少一个缓冲区各自对应的发送单元,将所述视频帧发送至用户设备。
  2. 根据权利要求1所述的方法,其特征在于,所述根据所述视频帧的类型,将所述视频帧存储在至少两个缓冲区中的至少一个缓冲区中,包括:
    若所述视频帧的类型为I帧,确定所述至少两个缓冲区各自的剩余容量是否大于或等于所述视频帧的大小;
    若所述至少两个缓冲区中至少一个缓冲区各自的剩余容量大于或等于所述视频帧的大小,将所述视频帧存储在所述至少一个缓冲区中。
  3. 根据权利要求2所述的方法,其特征在于,所述方法还包括:
    若所述至少两个缓冲区各自的剩余容量均小于所述视频帧的大小,累计I帧的发送失败次数;
    若所述发送失败次数大于预设阈值,输出异常提示信息。
  4. 根据权利要求1所述的方法,其特征在于,所述根据所述视频帧的类型,将所述视频帧存储在至少两个缓冲区中的至少一个缓冲区中,包括:
    若所述视频帧的类型为P帧,确定所述至少两个缓冲区各自的剩余容量是否大于或等于所述视频帧的大小;
    若所述至少两个缓冲区中至少一个缓冲区各自的剩余容量大于或等于所述视频帧的大小,则将所述视频帧存储在所述至少一个缓冲区中的其中一个缓冲区中。
  5. 根据权利要求4所述的方法,其特征在于,所述将所述视频帧存储在所述至少一个缓冲区中的其中一个缓冲区中,包括:
    根据所述至少一个缓冲区各自的标识以及预设的选取策略,选取所述至少一个缓冲区中的其中一个缓冲区;
    将所述视频帧存储至所述其中一个缓冲区中。
  6. 根据权利要求4所述的方法,其特征在于,所述方法还包括:
    若所述至少一个缓冲区各自的剩余容量均小于所述视频帧的大小,丢弃所述视频帧。
  7. 根据权利要求2-6任一项所述的方法,其特征在于,所述方法还包括:
    当接收到来自所述用户设备的针对所述视频帧的接收成功信息时,从所述 至少一个缓冲区中清除所述视频帧。
  8. 根据权利要求1-6任一项所述的方法,其特征在于,所述方法还包括:
    接收所述用户设备的第一控制信号,根据所述第一控制信号控制视频采集单元移动至采集位置,以控制所述视频采集单元在所述采集位置采集视频数据。
  9. 一种视频帧传输方法,其特征在于,包括:
    通过至少两个接收单元中的至少一个接收单元接收探测器发送的至少一个视频帧;
    其中,所述视频帧是所述探测器通过至少两个缓冲区中的至少一个缓冲区各自对应的发送单元发送的。
  10. 根据权利要求9所述的方法,其特征在于,所述方法还包括:
    将所述至少一个视频帧分别存储在所述至少一个接收单元各自对应的缓冲区中。
  11. 根据权利要求9或10所述的方法,其特征在于,所述方法还包括:
    根据所述至少一个视频帧的类型,对所述至少一个视频帧进行恢复处理。
  12. 根据权利要求11所述的方法,其特征在于,所述根据所述至少一个视频帧的类型,对所述至少一个视频帧进行恢复处理,包括:
    若所述至少一个视频帧的类型为I帧,对所述至少一个视频帧各自包括的数据包进行合并处理。
  13. 根据权利要求11所述的方法,其特征在于,所述方法还包括:
    若所述恢复处理成功,向所述探测器发送针对所述视频帧的接收成功信息。
  14. 一种探测器,其特征在于,包括:
    视频采集单元,用于采集视频数据;
    编码单元,用于将所述视频采集单元采集的视频数据进行编码,得到待发送的视频帧;
    存储单元,用于根据所述视频帧的类型,将所述视频帧存储在至少两个缓冲区中的至少一个缓冲区中;
    至少一个发送单元,所述至少一个发送单元中的每个发送单元与所述至少一个缓冲区中的一个缓冲区对应,其中,所述至少一个发送单元用于将与所述至少一个发送单元各自对应的所述至少一个缓冲区中存储的所述视频帧发送至用户设备。
  15. 根据权利要求14所述的探测器,其特征在于,所述存储单元还用于:
    若所述视频帧的类型为I帧,确定所述至少两个缓冲区各自的剩余容量是否大于或等于所述视频帧的大小;
    若所述至少两个缓冲区中至少一个缓冲区各自的剩余容量大于或等于所 述视频帧的大小,将所述视频帧存储在所述至少一个缓冲区中。
  16. 根据权利要求15所述的探测器,其特征在于,所述存储单元,还用于:
    若所述至少两个缓冲区各自的剩余容量均小于所述视频帧的大小,累计I帧的发送失败次数;
    若所述发送失败次数大于预设阈值,输出异常提示信息。
  17. 根据权利要求14所述的探测器,其特征在于,所述存储单元还用于:
    若所述视频帧的类型为P帧,确定所述至少两个缓冲区各自的剩余容量是否大于或等于所述视频帧的大小;
    若所述至少两个缓冲区中至少一个缓冲区各自的剩余容量大于或等于所述视频帧的大小,则将所述视频帧存储在所述至少一个缓冲区中的其中一个缓冲区中。
  18. 根据权利要求17所述的探测器,其特征在于,所述存储单元还用于:
    根据所述至少一个缓冲区各自的标识以及预设的选取策略,选取所述至少一个缓冲区中的其中一个缓冲区;
    将所述视频帧存储至所述其中一个缓冲区中。
  19. 根据权利要求17所述的探测器,其特征在于,所述存储单元还用于:
    若所述至少一个缓冲区各自的剩余容量均小于所述视频帧的大小,丢弃所述视频帧。
  20. 根据权利要求14-19任一项所述的探测器,其特征在于,所述存储单元还用于:
    当接收到来自所述用户设备的针对所述视频帧的接收成功信息时,从所述至少一个缓冲区中清除所述视频帧。
  21. 根据权利要求14-19任一项所述的探测器,其特征在于,还包括:
    接收单元,用于接收所述用户终端的第一控制信号;
    伺服单元,用于根据所述第一控制信号控制所述视频采集单元移动至采集位置,以控制所述视频采集单元在所述采集位置采集视频数据。
  22. 一种用户设备,其特征在于,包括:
    至少两个接收单元,所述至少两个接收单元中的至少一个接收单元用于接收探测器发送的至少一个视频帧;
    其中,所述视频帧是所述探测器通过至少两个缓冲区中的至少一个缓冲区各自对应的发送单元发送的。
  23. 根据权利要求22所述的用户设备,其特征在于,还包括:
    存储单元,用于将所述至少一个视频帧分别存储在所述至少一个接收单元各自对应的缓冲区中。
  24. 根据权利要求22或23所述的用户设备,其特征在于,还包括:
    处理单元,用于根据所述至少一个视频帧的类型,对所述至少一个视频帧进行恢复处理。
  25. 根据权利要求24所述的用户设备,其特征在于,所述处理单元还用于:
    若所述至少一个视频帧的类型为I帧,对所述至少一个视频帧各自包括的数据包进行合并处理。
  26. 根据权利要求24所述的用户设备,其特征在于,还包括:
    发送单元,用于若所述恢复处理成功,向所述探测器发送针对所述视频帧的接收成功信息。
PCT/CN2019/071352 2018-01-24 2019-01-11 视频帧传输方法、探测器及用户设备 WO2019144818A1 (zh)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN201810067641.7A CN108282657A (zh) 2018-01-24 2018-01-24 视频帧传输方法、探测器及用户设备
CN201810067641.7 2018-01-24

Publications (1)

Publication Number Publication Date
WO2019144818A1 true WO2019144818A1 (zh) 2019-08-01

Family

ID=62804895

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2019/071352 WO2019144818A1 (zh) 2018-01-24 2019-01-11 视频帧传输方法、探测器及用户设备

Country Status (2)

Country Link
CN (1) CN108282657A (zh)
WO (1) WO2019144818A1 (zh)

Cited By (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111246284A (zh) * 2020-03-09 2020-06-05 深圳创维-Rgb电子有限公司 视频流播放方法、系统、终端及存储介质
CN113541832A (zh) * 2021-06-24 2021-10-22 青岛海信移动通信技术股份有限公司 一种终端、网络传输质量检测方法及存储介质
CN113873293A (zh) * 2021-10-09 2021-12-31 兰州乐智教育科技有限责任公司 动态调整视频帧率自适应网络的方法及相关设备
CN115189810A (zh) * 2022-07-07 2022-10-14 福州大学 一种面向低时延实时视频fec编码传输控制方法

Families Citing this family (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108282657A (zh) * 2018-01-24 2018-07-13 深圳市道通科技股份有限公司 视频帧传输方法、探测器及用户设备
US11677902B2 (en) 2018-10-09 2023-06-13 Shenzhen Corerain Technologies Co., Ltd. Data processing method and related product
CN112653893B (zh) * 2020-12-22 2022-09-20 展讯通信(上海)有限公司 视频流解码的处理方法、装置及电子设备
CN113596515A (zh) * 2021-08-10 2021-11-02 伟乐视讯科技股份有限公司 一种非压缩数据无缝输出方法及装置
CN113747063B (zh) * 2021-08-27 2023-08-04 深圳市芯中芯科技有限公司 一种视频传输的方法、装置、电子设备及可读存储介质

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101330609A (zh) * 2008-07-31 2008-12-24 南京大学 一种多路径无线视频传输方法和系统
WO2015034123A1 (ko) * 2013-09-04 2015-03-12 주식회사 모브릭 여러 대의 액션캠 영상을 재생하기 위한 버퍼관리 방법 및 그에 따른 재생장치
CN108282657A (zh) * 2018-01-24 2018-07-13 深圳市道通科技股份有限公司 视频帧传输方法、探测器及用户设备

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101141615B (zh) * 2007-10-16 2010-07-14 中兴通讯股份有限公司 会议电视终端支持双流的外置实现方法

Patent Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101330609A (zh) * 2008-07-31 2008-12-24 南京大学 一种多路径无线视频传输方法和系统
WO2015034123A1 (ko) * 2013-09-04 2015-03-12 주식회사 모브릭 여러 대의 액션캠 영상을 재생하기 위한 버퍼관리 방법 및 그에 따른 재생장치
CN108282657A (zh) * 2018-01-24 2018-07-13 深圳市道通科技股份有限公司 视频帧传输方法、探测器及用户设备

Cited By (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111246284A (zh) * 2020-03-09 2020-06-05 深圳创维-Rgb电子有限公司 视频流播放方法、系统、终端及存储介质
CN113541832A (zh) * 2021-06-24 2021-10-22 青岛海信移动通信技术股份有限公司 一种终端、网络传输质量检测方法及存储介质
CN113541832B (zh) * 2021-06-24 2023-11-03 青岛海信移动通信技术有限公司 一种终端、网络传输质量检测方法及存储介质
CN113873293A (zh) * 2021-10-09 2021-12-31 兰州乐智教育科技有限责任公司 动态调整视频帧率自适应网络的方法及相关设备
CN115189810A (zh) * 2022-07-07 2022-10-14 福州大学 一种面向低时延实时视频fec编码传输控制方法
CN115189810B (zh) * 2022-07-07 2024-04-16 福州大学 一种面向低时延实时视频fec编码传输控制方法

Also Published As

Publication number Publication date
CN108282657A (zh) 2018-07-13

Similar Documents

Publication Publication Date Title
WO2019144818A1 (zh) 视频帧传输方法、探测器及用户设备
US11190570B2 (en) Video encoding using starve mode
US10009630B2 (en) System and method for encoding video content using virtual intra-frames
JP5731672B2 (ja) 暗黙基準フレームを用いる動画像符号化システム
US8089514B2 (en) Moving image communication device, moving image communication system and semiconductor integrated circuit used for communication of moving image
JPWO2006085500A1 (ja) 監視カメラ装置、それを用いた監視システムおよび監視画像伝送方法
US9306987B2 (en) Content message for video conferencing
US9264737B2 (en) Error resilient transmission of random access frames and global coding parameters
US9948903B2 (en) Method for configuration of video stream output from a digital video camera
KR101029466B1 (ko) 복수 영상신호 송수신 시스템 및 그 운용 방법
JP2022064307A (ja) 画像処理デバイス、カメラ、およびビデオ画像のシーケンスをエンコードするための方法
KR102546764B1 (ko) 영상 제공 장치 및 방법
EP4383729A1 (en) Video failover recording
KR20140072668A (ko) 네트워크 카메라 서버 및 그의 비디오 스트림 처리 방법
US20230034162A1 (en) Transmission apparatus and transmission method
JP5884076B2 (ja) 無線伝送端末及び無線伝送方法、それに用いる符号化装置及び符号化方法、並びにコンピュータ・プログラム
Rosu et al. Real time adaptive video streaming
Vohra Streaming low-bandwidth real-time video using video super-resolution
US9578327B2 (en) Encoding apparatus, encoding method, and non-transitory computer-readable storage medium
JP2006238121A (ja) 情報処理装置
JP2013229911A (ja) データ伝送システム

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 19744344

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 19744344

Country of ref document: EP

Kind code of ref document: A1