WO2023103954A1 - Video reading method and apparatus, electronic device, and storage medium - Google Patents

Video reading method and apparatus, electronic device, and storage medium

Info

Publication number
WO2023103954A1
Authority
WO
WIPO (PCT)
Prior art keywords
video
cache block
cache
eye
video frame
Application number
PCT/CN2022/136545
Other languages
English (en)
French (fr)
Inventor
王迎智
高倩
马晓忠
Original Assignee
极限人工智能有限公司
Application filed by 极限人工智能有限公司
Publication of WO2023103954A1

Classifications

    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N 13/00 Stereoscopic video systems; Multi-view video systems; Details thereof
    • H04N 13/10 Processing, recording or transmission of stereoscopic or multi-view image signals
    • H04N 13/106 Processing image signals
    • H04N 13/167 Synchronising or controlling image signals
    • H04N 13/30 Image reproducers
    • H04N 13/398 Synchronisation thereof; Control thereof

Definitions

  • the present disclosure relates to the technical field of three-dimensional display, and in particular to a video reading method, apparatus, electronic device and storage medium.
  • 3D display technology has been applied to the field of medical imaging, where the typical way of viewing 3D medical images is realized by users wearing a 3D endoscope.
  • the viewing principle of the 3D endoscope is that a person's left and right eyes respectively receive left-eye and right-eye video images played in frame order, and the brain then fuses the two sets of images to produce a three-dimensional effect.
  • when viewing, the left and right eyes receive the video images through the 3D endoscope, so that the left-eye video frames of the left-eye video stream composed of left-eye images are received only by the left eye, and the right-eye video frames of the right-eye video stream composed of right-eye images are received only by the right eye.
  • the embodiments of the present disclosure propose a video reading method, device, electronic equipment and storage medium to solve the technical problem that related technologies cannot meet the real-time requirements of 3D endoscope image and video processing.
  • the first aspect of the present disclosure provides a video reading method, the method comprising:
  • when at least two video streams are played simultaneously, the cache block number of the video to be played is obtained, wherein the at least two video streams respectively correspond to at least two buffers, the cache blocks in the at least two buffers correspond one-to-one, and the cache blocks corresponding to each other in the at least two buffers have the same cache block number;
  • the cache block address of each video stream corresponding to the cache block number of the video frame to be played is queried in a cache block address list;
  • for each video stream, in the buffer corresponding to that video stream, the video frame is read from the cache block indicated by each cache block address and output, wherein the video frames stored in the cache blocks corresponding to a single cache block number have the same timing.
  • a video reading device comprising:
  • the first obtaining module is configured to obtain the cache block number of the video to be played when at least two video streams are played simultaneously, wherein the at least two video streams respectively correspond to at least two buffers, the cache blocks in the at least two buffers correspond one-to-one, and the cache blocks corresponding to each other in the at least two buffers have the same cache block number;
  • a query module is configured to query, in a cache block address list, the cache block address of each video stream corresponding to the cache block number of the video frame to be played;
  • a reading module is configured to, for each video stream, read the video frame from the cache block indicated by each cache block address in the buffer corresponding to that video stream and output it, wherein the video frames stored in the cache blocks corresponding to a single cache block number have the same timing.
  • a readable storage medium stores a program or instructions which, when executed by a processor, implement the video reading method described in the first aspect above.
  • an electronic device includes a processor, a memory, and a program or instructions stored in the memory and executable on the processor; the program or instructions, when executed by the processor, implement the video reading method described in the first aspect above.
  • the present disclosure has the following advantages:
  • in the video reading method, device, electronic device and storage medium provided by the present disclosure, when at least two video streams are played simultaneously, the cache block number of the video to be played is obtained, wherein the at least two video streams respectively correspond to at least two buffers, the cache blocks in the at least two buffers correspond one-to-one, and the cache blocks corresponding to each other in the at least two buffers have the same cache block number.
  • FIG. 1 is a flowchart of the steps of a video reading method provided by an embodiment of the present disclosure;
  • FIG. 2 is a schematic diagram of a video reading structure provided by an embodiment of the present disclosure;
  • FIG. 3 is a schematic diagram of a cache block address list provided by an embodiment of the present disclosure;
  • FIG. 4 is a schematic diagram of another video reading structure provided by an embodiment of the present disclosure;
  • FIG. 5 is a flowchart of the steps of another video reading method provided by an embodiment of the present disclosure;
  • FIG. 6 is a schematic diagram of cache block division provided by an embodiment of the present disclosure;
  • FIG. 7 is a flowchart of the steps of a further video reading method provided by an embodiment of the present disclosure;
  • FIG. 8 is a schematic diagram of cache block address boundaries provided by an embodiment of the present disclosure;
  • FIG. 9 is a flowchart of the steps of yet another video reading method provided by an embodiment of the present disclosure;
  • FIG. 10 is a structural block diagram of a video reading device provided by an embodiment of the present disclosure;
  • FIG. 11 is a structural block diagram of an electronic device provided by an embodiment of the present disclosure.
  • the terms “first”, “second” and the like in the specification and claims of the present disclosure are used to distinguish similar objects and are not used to describe a specific order or sequence. It should be understood that terms so used are interchangeable under appropriate circumstances, so that the embodiments of the present disclosure can be practiced in sequences other than those illustrated or described herein; objects distinguished by “first”, “second”, etc. are generally of one type, and the number of such objects is not limited, for example there may be one or more first objects.
  • “and/or” in the specification and claims means at least one of the connected objects, and the character “/” generally indicates that the related objects are in an “or” relationship.
  • 3D display technology is a new type of display technology. Compared with an ordinary 2D (Two Dimensions) screen, 3D technology can make the picture appear three-dimensional and lifelike; the image is no longer confined to the plane of the screen and seems to come out of it, giving the audience an immersive feeling.
  • 3D technology can be subdivided into three main types: color-difference (anaglyph), polarized-light and active-shutter. Among them, polarized-light 3D technology offers excellent image quality at a low cost of use and is the most widely applied.
  • 3D display technology has been applied to the field of medical imaging.
  • Medical 3D endoscopes are used to collect images of patients in vivo, and 3D images are presented on monitors, display screens and other display devices, which are more realistic than 2D images.
  • the observation of lesions is clearer and more layered, and remarkable results have been achieved in improving diagnostic efficiency and accuracy.
  • polarized 3D uses the principle that light has a “vibration direction” to decompose the original image.
  • the image is split into two groups, one carried by vertically polarized light and one by horizontally polarized light, and the left and right sides of the 3D glasses use polarizing lenses with different polarization directions, so that a person's left and right eyes each receive one set of pictures, which the brain then fuses into a three-dimensional image.
  • two video streams need to be collected and sent to the display after video processing. Since the two channels of video acquisition and processing are relatively independent, there is a certain time deviation between them. Before sending to the display, the two channels of video need to be synchronized to ensure that the time deviation is within the required range.
  • this disclosure proposes a video reading method, device, electronic equipment and storage medium, which avoids the large amount of calculation required by related solutions that periodically compute a video progress difference; it can improve the real-time performance of video synchronization, reduce processing complexity, and improve video synchronization accuracy.
  • a video reading method provided by an embodiment of the present disclosure, the method includes:
  • Step S101 acquiring at least two video streams.
  • the at least two video streams may include one main video stream and at least one secondary video stream
  • the main video stream may be a video stream randomly selected from the at least two video streams
  • the secondary video stream may be randomly selected from the at least two video streams, that is, the video streams other than the main video stream; the main video stream may also be a fixed one of the video streams. This can be determined according to actual needs and is not limited here.
  • the controller may acquire at least two video streams through the acquisition device.
  • At least two video streams are acquired through a collection device such as a camera or lens of a 3D endoscope, and the controller detects video frames of the at least two video streams through a video detection module.
  • the video frame of the main channel video stream detected by the controller through the video detection module is the main channel video frame
  • the video frame of the secondary channel video stream detected by the controller through the video detection module is the secondary channel video frame
  • Step S102 Store video frames with the same time sequence in at least two video streams in at least two cache blocks corresponding to the same cache block number.
  • the video frames with the same timing in the at least two video streams refer to the video frames with the same time stamp in the at least two video streams, or the video frames received in the same order in the at least two video streams; that is, “the same timing” means either the same time stamp or the same receiving order, which can be determined according to actual needs and is not limited here.
  • the cache block is a storage area whose capacity and address range have been determined. The capacity is determined according to the resolution and color depth of the captured video image, and each cache block can store at least one video frame.
  • the start address and end address of each cache block are written into the cache block address list by the application processor (Application Processor Unit, APU), establishing a one-to-one correspondence between the cache block number and the cache block address; putting the start and end addresses that bound each cache block into the cache block address list enables fast conversion between a cache block number and the cache addresses.
  • each cache block is given a number, that is, the cache block number, and the cache blocks are divided as shown in FIG. 3, for example cache block 0, cache block 1, ..., cache block n, cache block n+1.
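  • A minimal sketch in C of such a cache block address list is given below, assuming a fixed number of equal-sized blocks; the block count, frame size and all helper names are illustrative assumptions, not values taken from the disclosure.

```c
#include <stdint.h>

#define NUM_CACHE_BLOCKS  8                      /* assumed number of blocks per buffer      */
#define FRAME_BYTES       (1920u * 1080u * 3u)   /* assumed resolution x color depth (bytes) */

/* One entry of the cache block address list: block number -> address boundaries. */
typedef struct {
    uint32_t start_addr;   /* first address of the cache block */
    uint32_t end_addr;     /* last address of the cache block  */
} cache_block_entry_t;

/* The APU would fill this table once during initialization. */
static cache_block_entry_t block_addr_list[NUM_CACHE_BLOCKS];

static void init_block_addr_list(uint32_t buffer_base)
{
    for (uint32_t n = 0; n < NUM_CACHE_BLOCKS; n++) {
        block_addr_list[n].start_addr = buffer_base + n * FRAME_BYTES;
        block_addr_list[n].end_addr   = block_addr_list[n].start_addr + FRAME_BYTES - 1u;
    }
}

/* Fast conversion from a cache block number to its start/end addresses. */
static cache_block_entry_t lookup_block(uint32_t block_number)
{
    return block_addr_list[block_number % NUM_CACHE_BLOCKS];
}
```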
  • the at least two video streams may respectively correspond to at least two buffers, and the cache blocks in the at least two buffers correspond one-to-one, wherein the cache blocks corresponding to each other in the at least two buffers have the same cache block number.
  • the controller detects the video frames of the at least two video streams through the video detection module, and then stores the video frames with the same time stamp or the same receiving order in the at least two video streams into at least two cache blocks corresponding to the same cache block number.
  • for example, after the configuration file is loaded successfully, the application processor (APU) starts working normally, the cache management (master) end and the cache management (slave) end are initialized and configured, and the memory space is divided into at least two video data buffers that respectively store the at least two video streams. The main video stream among the at least two video streams may be video stream 1 and the secondary video stream may be video stream 2; the buffer management module corresponding to video stream 1 serves as the cache management (master) end, and the buffer management module corresponding to video stream 2 serves as the cache management (slave) end. Video stream 1 and video stream 2 are acquired through the cameras or lenses of the 3D endoscope, and the controller detects their video frames through the video detection module; then, among the video frames with the same time stamp or the same receiving order, the main-channel video frame is stored into the video stream 1 data buffer through the cache management (master) end, and the secondary-channel video frame is stored into the video stream 2 data buffer through the cache management (slave) end.
  • Step S103 when at least two video streams are played simultaneously, acquire the cache block number of the video to be played.
  • the video to be played may refer to the video obtained by synchronizing the 3D endoscope image video, for example the synchronized video stream 1 and synchronized video stream 2 shown in FIG. 4; this may be determined according to actual needs and is not limited here.
  • the controller may acquire the cache block number of the video to be played.
  • for example, the video obtained by synchronizing the 3D endoscope image video includes synchronized video stream 1 and synchronized video stream 2; when synchronized video stream 1 and synchronized video stream 2 are played simultaneously on the display device of the 3D endoscope, the controller obtains the number of the cache block storing synchronized video stream 1 and synchronized video stream 2.
  • Step S104 query the cache block address of each video stream corresponding to the cache block number of the video frame to be played in the cache block address list.
  • the application processor (APU) needs to configure the cache block address of each cache block into the cache block address list as an initialization parameter, and at the same time establish a one-to-one mapping between the cache block number and the start address and end address contained in the cache block address, so that after obtaining the cache block number of the video to be played, the controller can extract the start address and end address corresponding to that cache block number and thereby obtain the cache block address at which each video stream corresponding to that number is stored.
  • the controller extracts, through the address extraction module, the start address and end address corresponding to the cache block number of the video frame to be played from the cache block address list, to obtain the cache block address of each video stream corresponding to that cache block number.
  • Step S105: for each video stream, in the buffer corresponding to that video stream, read the video frame from the cache block indicated by each cache block address and output it, wherein the video frames stored in the cache blocks corresponding to a single cache block number have the same timing.
  • step S105 is performed synchronously for each of the at least two video streams.
  • the cache block address includes a start address and an end address.
  • the controller reads from the start address of a single cache block to the end address of that cache block, according to the positions indicated by the start address and end address contained in each cache block address, obtains the video frame of each video stream and outputs it.
  • for example, the controller extracts the start address and end address corresponding to the cache block number; the read-operation bus timing control submodule obtains the right to use the interface bus through arbitration, and after obtaining it, reads and outputs the video frame data stored in the cache block corresponding to the cache block number according to the start address and end address.
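  • A sketch of this read operation in C is given below; it reuses the hypothetical address-list helpers from the earlier sketch, and the bus and output functions are assumed names standing in for the interface-bus arbitration and the output interface, not an API defined by the disclosure.

```c
/* Hypothetical interface-bus and output-interface helpers. */
extern int      bus_acquire(void);                        /* returns nonzero when arbitration grants the bus */
extern void     bus_release(void);
extern uint32_t bus_read_word(uint32_t addr);             /* read one 32-bit word over the interface bus     */
extern void     output_write_word(int channel, uint32_t word);

/* Read the whole frame stored in the block with this number and push it to one output channel. */
static void read_and_output_frame(int channel, uint32_t block_number)
{
    cache_block_entry_t blk = lookup_block(block_number);

    while (!bus_acquire())                                /* wait for the right to use the interface bus */
        ;
    for (uint32_t addr = blk.start_addr; addr <= blk.end_addr; addr += 4u)
        output_write_word(channel, bus_read_word(addr));
    bus_release();
}
```

  • Because both video streams would call this read with the same cache block number, the frames they output have the same timing, which is the synchronization property the method relies on.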
  • in the video reading method provided by the present disclosure, when at least two video streams are played simultaneously, the cache block number of the video to be played is obtained; since the video frames stored in the cache blocks corresponding to a single cache block number have the same timing, obtaining the cache block number allows the cache block address of each video stream corresponding to that number to be queried directly in the cache block address list, and reading from the cache blocks indicated by those addresses completes the synchronous processing of the at least two video streams, yielding video frames of each stream with the same timing for output.
  • the method uses the cache block number as the synchronization signal of the at least two video streams and does not need to analyze the video stream data or compute a synchronization deviation: the synchronization of the video stream data and video frames is converted into the synchronization of cache read and write operations, so the amount of calculation is small. When synchronizing at least two video streams, a large amount of calculation is therefore avoided, processing complexity is reduced and the synchronization procedure is simplified; each read and write operation is controlled by the same cache block number, no additional calculation is required, processing delay is minimized and real-time performance is good. In this way, when the two video streams of the 3D endoscope are processed synchronously, the real-time performance of video synchronization is improved, the real-time requirements of 3D endoscope image and video processing are met, and the video synchronization accuracy is improved.
  • processing at least two video streams synchronously is converted into synchronous read and write operations on cache blocks, which removes the complicated calculations of existing schemes, improves the real-time performance of the processing, and is very suitable for a Field-Programmable Gate Array (FPGA) platform.
  • a video reading method provided by an embodiment of the present disclosure, the method includes:
  • Step S201 acquiring at least two video streams.
  • for this step, reference may be made to the detailed description of step S101, which will not be repeated here.
  • Step S202 the at least two video streams include a main video stream and a secondary video stream, and the main video frame of the main video stream is stored in a first cache block.
  • the first cache block is used to store the main-channel video frame of the main-channel video stream; the first cache block is located in the first buffer, the first buffer corresponds to the cache management (master) end, and the first buffer may be located in the first memory.
  • the main video frame is stored in the first cache block of the first memory.
  • the main video stream among the at least two video streams may be video stream 1, and the first buffer may be a video stream 1 data buffer.
  • after the controller detects the main-channel video frame of video stream 1 through the video detection module, the main-channel video frame is stored in the video stream 1 data buffer.
  • step S202 may include:
  • Sub-step S2021 obtain the current available cache block number.
  • the number of the available cache block is the number of the cache block in the memory where data can be written.
  • the memory may include a first memory and a second memory, a first buffer is set in the first memory, and a second buffer is set in the second memory, that is, the buffers of the two video stream data adopt two independent memories;
  • alternatively, the first buffer and the second buffer can both be set directly in one memory, that is, the buffers of the two channels of video stream data share a memory; each buffer is divided into several cache blocks, and each cache block has a fixed number and is used to store at least one video frame of image data.
  • when two independent memories are used, the synchronous reading of video frames at the cache management (master) end and the cache management (slave) end depends on the bandwidth of the two physical hardware interfaces; using the same chip and the same physical hardware interface design can minimize the synchronization deviation between the cache management (master) end and the cache management (slave) end. When one memory is shared, video frame reading at the cache management (master) end and the cache management (slave) end must share a single physical hardware interface (the memory interface); since the physical bandwidth of the memory interface is much higher than the video stream reading bandwidth, a time-division multiplexing mechanism can be used to read a fixed length of data each time and switch between the master and slave channels, realizing memory interface sharing. Either approach can ensure synchronized reading of the at least two video streams.
  • the controller searches in the order of the cache block numbers and takes the number of the first free, unoccupied cache block found as the available cache block number to be used for this write operation; the number so determined is the available cache block number described in this embodiment.
  • each buffer is divided into several cache blocks by the application software developer, and each cache block has a fixed cache block number, for example: cache block 0, cache block 1... cache block n, cache block n+1.
  • the controller obtains the number of a cache block into which a write operation can be performed as the available cache block number, and notifies the write-operation bus timing control submodule to extract the available cache block number.
  • Sub-step S2022 in the cache block address list, search for the writing start address corresponding to the available cache block number.
  • the write start address refers to the start address of the available cache block
  • according to the available cache block number, the controller can look up, in the cache block address list, the start address corresponding to the available cache block number as the write start address.
  • each cache block has a start address and an end address; these two addresses are the two boundaries of the cache block and may be physical addresses of the memory chip or relative addresses.
  • the cache management module can perform data read and write operations on a cache block according to the start address and end address, and the cache block address boundaries are divided as shown in Figure 8.
  • the controller configures the start address and end address of each cache block into the cache block address list, and after obtaining the available cache block number it can look up, in the cache block address list, the start address corresponding to that number as the write start address.
  • Sub-step S2023 writing the main channel video frame data of the main channel video stream from the starting address to the cache block corresponding to the available cache block number, wherein the cache block corresponding to the available cache block number is the first cache block.
  • after the controller detects a main-channel video frame of the main-channel video stream through the video detection module, the data of that main-channel video frame is written, starting from the write start address, into the available cache block; the cache block corresponding to the available cache block number is the first cache block.
  • for example, the cache management (master) end receives one main-channel video frame of data, extracts from the cache block address list the start address of the cache block in the first buffer corresponding to the available cache block number (that is, the first cache block), and writes the main-channel video frame data into the first cache block through the interface bus.
  • in the present disclosure, the cache block into which data can currently be written is determined by obtaining the available cache block number, so that the write start address corresponding to that number can be found in the cache block address list and one main-channel video frame of the main-channel video stream can be written from the write start address into the cache block corresponding to the available cache block number (that is, the first cache block); in this way, cache blocks that already hold main-channel video data are not overwritten, which would otherwise corrupt or lose the main-channel video stream data.
  • Sub-step S2024 detecting the frame end mark of the main channel video frame.
  • a video frame includes a frame header and a frame end; the frame-end marker is a flag that identifies the end of the video frame.
  • by detecting the frame-end marker of the main-channel video frame, the controller can determine, once the marker is detected, that the main-channel video frame has been completely stored.
  • the controller detects the frame-header marker and the frame-end marker of a main-channel video frame, and when a valid frame-end marker is detected it can determine that the main-channel video frame has been completely stored.
  • Sub-step S2025 when the end-of-frame flag is detected, add one to the number of the available cache block.
  • when the controller detects the frame-end marker of the main-channel video frame, it can determine that the main-channel video frame has been completely stored in the currently available cache block; the currently available cache block then becomes an occupied cache block, and adding one to the available cache block number yields the cache block number of the cache block that will store the next main-channel video frame.
  • for example, the controller detects the frame-header marker and the frame-end marker of a main-channel video frame; when a valid frame-end marker is detected, it can determine that the main-channel video frame has been completely stored in the currently available cache block, and adds 1 to the available cache block number to obtain the cache block number of the cache block that will store the next main-channel video frame.
  • in the present disclosure, the frame-end marker of the main-channel video frame data is detected; when the marker is detected, the available cache block number is increased by 1, yielding an updated available cache block number, and the cache block with that number can store the next main-channel video frame data.
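  • A sketch in C of this write path (sub-steps S2021 to S2025) is given below, again reusing the hypothetical address-list helpers from the earlier sketch; the word-oriented bus interface and the frame-end detector are assumed names, not functions defined by the disclosure.

```c
/* Hypothetical write-side helpers. */
extern void bus_write_word(uint32_t addr, uint32_t word);  /* write one word over the interface bus     */
extern int  is_frame_end(uint32_t word);                   /* assumed detector for the frame-end marker */

static uint32_t available_block = 0;   /* number of the cache block that may currently be written */
static uint32_t write_addr;            /* current write pointer inside that block                  */
static int      in_frame = 0;

/* Called for every incoming word of the main-channel video stream. */
static void on_main_video_word(uint32_t word)
{
    if (!in_frame) {
        /* S2021/S2022: take the available block number and look up its write start address. */
        write_addr = lookup_block(available_block).start_addr;
        in_frame   = 1;
    }

    /* S2023: write the main-channel frame data into the first cache block. */
    bus_write_word(write_addr, word);
    write_addr += 4u;

    /* S2024/S2025: on the frame-end marker, advance the available block number by one. */
    if (is_frame_end(word)) {
        available_block = (available_block + 1u) % NUM_CACHE_BLOCKS;
        in_frame = 0;
    }
}
```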
  • Step S203 acquiring the target cache block number of the first cache block.
  • the target cache block number is the cache block number of the cache block in which the secondary-channel video frame whose receiving time stamp or receiving order is synchronized with that of the main-channel video frame is to be stored.
  • the controller acquires the cache block number of the first cache block, and uses the cache block number as the target cache block number.
  • the cache block number used by the cache management (slave) end is provided by the cache management (master) end; the controller obtains the target cache block number of the first cache block and synchronizes it to the cache management (slave) end in real time, to ensure that the main-channel video frame and the secondary-channel video frame whose receiving time stamps or receiving order are synchronized are written into two cache blocks with the same cache block number.
  • Step S204: store the secondary video frame of the secondary video stream in the second cache block corresponding to the target cache block number, wherein the at least two cache blocks include the first cache block and the second cache block.
  • the second cache block is used to store the secondary video frame of the secondary video stream, and the second cache block is located in the second buffer, and the second buffer corresponds to the cache management (slave) end.
  • the controller looks up, in the cache block address list, the start address corresponding to the target cache block number as the target write start address, and writes the secondary video frame of the secondary video stream from the target write start address into the second cache block corresponding to the target cache block number.
  • in this disclosure, the main-channel video frame of the main-channel video stream is stored in the first cache block and the target cache block number of the first cache block is then obtained, so that the target cache block number of the cache management (master) end can be provided to the cache management (slave) end and the secondary video frame of the secondary video stream is stored in the second cache block corresponding to the target cache block number; this ensures that the main-channel video frame and the secondary-channel video frame whose time stamps or receiving order are synchronized are written into two cache blocks with the same cache block number.
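  • The sketch below illustrates, under the same assumptions as the earlier write-path sketch, how the master side might publish the target cache block number to the slave side so that frames with synchronized timing land in equal-numbered blocks; for simplicity both sides index the same address list, whereas in the two-memory variant each side would use the address list of its own buffer.

```c
/* Published by the master side, read by the slave side (step S203). */
static volatile uint32_t target_block;

/* Master side: store one main-channel frame, then publish the block number it used. */
static void master_store_frame(const uint32_t *frame, uint32_t words)
{
    uint32_t addr = lookup_block(available_block).start_addr;
    for (uint32_t i = 0; i < words; i++)
        bus_write_word(addr + 4u * i, frame[i]);

    target_block    = available_block;                        /* hand the number to the slave side */
    available_block = (available_block + 1u) % NUM_CACHE_BLOCKS;
}

/* Slave side: it never chooses its own number, it mirrors the master's (step S204). */
static void slave_store_frame(const uint32_t *frame, uint32_t words)
{
    uint32_t addr = lookup_block(target_block).start_addr;    /* same block number as the master */
    for (uint32_t i = 0; i < words; i++)
        bus_write_word(addr + 4u * i, frame[i]);
}
```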
  • Step S205: the at least two video streams include a plurality of secondary video streams, and the at least two cache blocks include multiple second cache blocks; the secondary video frames of the multiple secondary video streams are respectively stored in different second cache blocks corresponding to the target cache block number, with the secondary video frames of a single secondary video stream stored in a single second cache block.
  • the multiple second cache blocks may all be set in the same memory, or they may be set in different memories respectively, which may be determined according to actual requirements, and is not limited here.
  • when the at least two video streams include multiple secondary video streams, the controller respectively stores the secondary video frames of each of the multiple secondary video streams in different second cache blocks corresponding to the target cache block number, with the secondary video frames of a single secondary video stream stored in a single second cache block.
  • when the at least two video streams include multiple secondary video streams, there is one cache management (slave) end corresponding to each secondary video stream; after the controller has the cache management (master) end provide the target cache block number to each cache management (slave) end, each cache management (slave) end writes the secondary video frame of its secondary video stream, starting from the start address corresponding to the target cache block number, into its second cache block corresponding to that number.
  • in this way, when the at least two video streams include multiple secondary video streams, storing the secondary video frames of the multiple secondary video streams in different second cache blocks corresponding to the target cache block number ensures that all secondary-channel video frames whose receiving time stamp or receiving order is synchronized are written into cache blocks with the same cache block number.
  • Step S206 when at least two video streams are played simultaneously, acquire the cache block number of the video to be played.
  • for this step, reference may be made to the detailed description of step S103, which will not be repeated here.
  • Step S207 query the cache block address of each video stream corresponding to the cache block number in the cache block address list.
  • for this step, reference may be made to the detailed description of step S104, which will not be repeated here.
  • Step S208 detecting status signals of output interfaces corresponding to at least two video streams.
  • a single output interface is used to output the video frames of a single video stream, and the output interface corresponding to any video stream can be used to output the video frames of that video stream; the status signal indicates whether the output interface is in an idle state.
  • the controller detects whether the corresponding output interfaces of the at least two video streams are in an idle state, so as to determine whether to read the video frame of the video to be played from the cache block indicated by the cache block address.
  • the output interface may include a first output interface and a second output interface; the first output interface is connected to the cache management (master) end of the cache management module and the second output interface is connected to the cache management (slave) end, and the cache management (master) end and the cache management (slave) end each monitor, through its status signal, whether its output interface is idle.
  • Step S209: when the status signals of the output interfaces corresponding to the at least two video streams are all idle status signals, read, in the buffer corresponding to each video stream, the video frame of that video stream from the cache block indicated by each cache block address, and output the obtained video frame of each single video stream through a single output interface.
  • when the status signals are all idle status signals, the controller reads from the start address of a single cache block to the end address of that cache block, according to the positions indicated by the start address and end address contained in each cache block address, obtains the video frame of each video stream, and outputs the video frame of each single video stream through a single output interface.
  • for example, the cache management (master) end looks up, in the cache block address list, the start address corresponding to the cache block number and reads one video frame of data from the video stream 1 data buffer through the interface bus, obtaining one frame of synchronized video stream 1, which is output through the first output interface. The cache management (master) end synchronizes the cache block number to the cache management (slave) end in real time, so the cache management (master) end and the cache management (slave) end start a video frame read operation synchronously: the start address of the storage block is first extracted from the cache block address list according to the cache block number, the corresponding video frame data is then read from the video stream 2 data buffer, and one frame of synchronized video stream 2 is obtained and output through the second output interface.
  • by checking the status signal of each output interface and reading from the cache blocks indicated by the cache block addresses only when the status signals are idle, the validity of the main-channel and secondary-channel video frame data that is read is ensured; reading main-channel and secondary-channel video frame data while the first cache block or the second cache block is still occupied, which would lose video frame data and produce erroneous synchronization results, is avoided.
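  • A sketch of this idle-gated, per-block-number synchronized read is given below; the status-signal helper is an assumed name, and the per-channel read reuses read_and_output_frame from the earlier sketch.

```c
/* Hypothetical status signal of an output interface: nonzero when idle. */
extern int output_is_idle(int channel);

/* Play the pair of frames stored under one cache block number (steps S208/S209). */
static void play_synchronised_frame(uint32_t block_number)
{
    /* Wait until both the master-side (0) and slave-side (1) output interfaces are idle. */
    while (!(output_is_idle(0) && output_is_idle(1)))
        ;
    read_and_output_frame(0, block_number);   /* synchronized video stream 1 */
    read_and_output_frame(1, block_number);   /* synchronized video stream 2 */
}
```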
  • step S209 may include:
  • Sub-step S2091 sequentially read the data of the target length from the cache block indicated by each cache block address until the video frame is completely read, and output the video frame of the single video stream through a single output interface.
  • the target length can be set by the user based on actual experience, or can be a default value of the 3D endoscope; it can be determined according to actual needs and is not limited here.
  • when the at least two buffers are located in the same memory, the at least two buffers transmit the video frame data of each video stream through the same interface bus.
  • a time-division multiplexing mechanism is adopted to control each read so that video frame data of a fixed target length is read at a time until the video frame is completely read, and the video frame of each single video stream is output through a single output interface.
  • for example, the at least two buffers are connected to the cache management (master) end and the cache management (slave) end through the same interface bus, so the cache management (master) end and the cache management (slave) end must share one physical memory interface when reading a video frame. Because the physical bandwidth of the memory interface is much higher than the video stream reading bandwidth, a time-division multiplexing mechanism can be used to read a fixed length of data each time and switch between the master and slave channels, realizing memory interface sharing; the main-channel video frame is then output through the first output interface connected to the cache management (master) end, and the secondary-channel video frame is output through the second output interface.
  • by reading data of the target length in turn from the cache blocks indicated by the cache block addresses until the video frames are completely read, and outputting the video frame of each single video stream through a single output interface, the interface bus is time-division multiplexed for the whole time the main-channel and secondary-channel video frame data are being transferred, which improves the utilization of the interface bus.
  • substep S2091 may include:
  • Sub-step A: read video frame data of the target length in turn from the cache block indicated by each cache block address.
  • the controller adopts a time-division multiplexing mechanism to control each read, so that video frame data of a fixed target length is read from the cache block indicated by each cache block address at a time.
  • for example, the read-operation bus timing control submodule obtains the right to use the interface bus through arbitration and, after obtaining it, reads video frame data of a fixed target length from the cache block indicated by each cache block address on each turn.
  • Sub-step B: detect the frame-end marker of the video frame.
  • the controller determines that the video frame data has been completely read by detecting the frame-end marker of the video frame in the cache block indicated by each cache block address.
  • the controller detects the frame-header marker and the frame-end marker of the video frame in the cache block indicated by each cache block address; when a valid frame-end marker is detected, it can determine that the video frame data has been completely read.
  • Sub-step C: when a number of frame-end markers equal to the number of video frames has been detected, output the video frame of each single video stream through a single output interface and update the cache block number.
  • the controller detects the frame-end markers of the video frames in the cache block indicated by each cache block address and, when frame-end markers equal in number to the video frames are detected, outputs the video frame of each single video stream and adds one to the cache block number to update it.
  • for example, the controller detects the frame-header marker and the frame-end marker of the video frame in the cache block indicated by each cache block address; when a valid frame-end marker is detected, it can determine that the video frame data in the current cache block has been completely read, outputs the video frame of the single video stream through a single output interface, and adds 1 to the cache block number to obtain the cache block number for reading the next video frame.
  • each time the cache management (master) end and the cache management (slave) end finish reading a video frame, they extract a new cache block number and read the next video frame data. In an extreme case, because the idle states of the output interfaces of the cache management (master) end and the cache management (slave) end may differ and the video reading rates may differ, one end may finish reading a frame while the other end's reading is still in progress; the video synchronization deviation between the cache management (master) end and the cache management (slave) end then approaches the duration of one video frame, which is still greatly reduced compared with existing solutions.
  • in the present disclosure, reading video frame data of the target length in turn from the cache block indicated by each cache block address time-division multiplexes the interface bus and improves its utilization; the frame-end marker of the video frame is then detected, and when frame-end markers equal in number to the video frames are detected, the video frame of the single video stream is output through a single output interface and the cache block number is updated, so that the controller can continue reading the subsequent video frame data.
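  • The sketch below shows sub-steps A to C over a shared, time-division-multiplexed memory interface: each turn reads a fixed chunk, the master and slave channels alternate, and each side finishes when it sees the frame-end marker. The chunk size TARGET_WORDS is an assumed value, and for simplicity both channels use the same address list (a shared-memory, shared-address-space view), which is not required by the disclosure.

```c
#define TARGET_WORDS 256u          /* assumed fixed target length, in 32-bit words */

typedef struct {
    uint32_t addr;                 /* current read pointer                    */
    uint32_t end_addr;             /* end address of the block being read     */
    int      done;                 /* set once the frame-end marker was seen  */
} read_ctx_t;

/* Sub-steps A and B: read one fixed-length chunk and watch for the frame-end marker. */
static void read_chunk(int channel, read_ctx_t *ctx)
{
    for (uint32_t i = 0; i < TARGET_WORDS && !ctx->done && ctx->addr <= ctx->end_addr; i++) {
        uint32_t word = bus_read_word(ctx->addr);
        output_write_word(channel, word);
        ctx->addr += 4u;
        if (is_frame_end(word))
            ctx->done = 1;
    }
}

/* Alternate master (0) and slave (1) reads of the frames stored under one block number. */
static void tdm_read_frame_pair(uint32_t block_number)
{
    cache_block_entry_t blk = lookup_block(block_number);
    read_ctx_t m = { blk.start_addr, blk.end_addr, 0 };
    read_ctx_t s = m;              /* the slave block has the same number and boundaries here */

    while (!(m.done && s.done)) {  /* time-division multiplexing: switch channels each turn */
        if (!m.done) read_chunk(0, &m);
        if (!s.done) read_chunk(1, &s);
    }
    /* Sub-step C: both frames are output; the caller then updates the cache block number. */
}
```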
  • in this method, the main-channel video frame of the main-channel video stream is stored in the first cache block and the target cache block number of the first cache block is then acquired, so that the cache management (master) end can provide the target cache block number to the cache management (slave) end and the secondary-channel video frame of the secondary video stream is stored in the second cache block corresponding to that number; the main-channel video frame and the secondary-channel video frame whose receiving time stamps or receiving order are synchronized are thus written into two cache blocks with the same cache block number. Then, when the at least two video streams are played simultaneously, the cache block number of the video to be played is obtained, the cache block address of each video stream corresponding to that number is queried directly in the cache block address list, and the video frames are read from the cache blocks indicated by those addresses and output, completing the synchronous processing of the at least two video streams.
  • another video reading method provided by an embodiment of the present disclosure includes:
  • Step S301 acquiring at least two video streams.
  • for this step, reference may be made to the detailed description of step S101, which will not be repeated here.
  • Step S302 storing the left-eye video frame data in the left-eye buffer block, and storing the right-eye video frame data in the right-eye buffer block.
  • the cache block numbers of the left-eye cache block and the right-eye cache block are the same.
  • the controller detects the video frames of the two video streams through the video detection module, and then the left-eye video frame data with the same time stamp or the same receiving order in the two video streams is stored in the left-eye cache block and the right-eye video frame data is stored in the right-eye cache block.
  • the main video stream in the two video streams can be video stream 1, and the secondary video stream in the two video streams can be video stream 2.
  • the two video streams, video stream 1 and video stream 2, collected by the lenses enter the video detection module at the cache management (master) end; after a main-channel video frame is detected, the write-operation bus timing control submodule is notified to extract the number of the currently available cache block and send it to the cache block address management submodule, which obtains from the cache block address list the start address of the cache block corresponding to that number.
  • the write-operation bus timing control submodule writes the current main-channel video frame into the corresponding cache block starting from the start address; at the same time, it detects the frame-header and frame-end markers of the main-channel video frame and adds 1 to the cache block number when a valid frame-end marker is detected.
  • Step S303 when at least two video streams are played simultaneously, acquire the cache block number of the video to be played.
  • for this step, reference may be made to the detailed description of step S103, which will not be repeated here.
  • Step S304 query the cache block address of each video stream corresponding to the cache block number in the cache block address list.
  • for this step, reference may be made to the detailed description of step S104, which will not be repeated here.
  • Step S305: read the left-eye video frame from the left-eye cache block indicated by each cache block address, obtain the video frame of the left-eye video stream and output it; and read the right-eye video frame from the right-eye cache block indicated by each cache block address, obtain the video frame of the right-eye video stream and output it.
  • the controller reads the left-eye video frame in the left-eye cache block indicated by each cache block address according to the position indicated by the start address and the end address included in each cache block address, The video frame of the left-eye video stream is obtained and output, and the right-eye video frame is read in the right-eye buffer block indicated by the address of each buffer block, and the video frame of the right-eye video stream is obtained and output.
  • after the read-operation bus timing control submodule finishes reading the previous video frame, it obtains the cache block number updated by adding 1, sends it into the cache block address list, and extracts the start address and end address of the cache block corresponding to that number; the read-operation bus timing control submodule then obtains the right to use the interface bus through arbitration, and reads and outputs the corresponding video frame data according to the start address and end address of the cache block.
  • in this further video reading method provided by the present disclosure, the two video streams collected by the 3D endoscope are acquired, the left-eye video frame data is stored in the left-eye cache block and the right-eye video frame data is stored in the right-eye cache block, wherein the cache block numbers of the left-eye cache block and the right-eye cache block are the same; when the 3D endoscope plays the two video streams simultaneously, the cache block number of the video to be played is obtained.
  • since the timing of the video frames stored in the corresponding cache blocks is the same, the cache block address of each video stream corresponding to that cache block number can be queried directly in the cache block address list; the left-eye video frame is then read from the left-eye cache block indicated by each cache block address to obtain the video frame of the left-eye video stream, and the right-eye video frame is read from the right-eye cache block indicated by each cache block address to obtain the video frame of the right-eye video stream, so a large amount of calculation is avoided when synchronizing the two video streams.
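  • As a usage sketch under the same assumptions as the earlier examples, the left-eye and right-eye buffers share cache block numbers, so 3D playback reduces to the per-number synchronized read in a loop; here channel 0 stands for the left eye and channel 1 for the right eye, and write-side/read-side flow control (waiting for a block to be filled) is omitted.

```c
/* Play the left-eye and right-eye streams in lockstep by stepping through block numbers. */
static void play_3d_video(void)
{
    uint32_t play_block = 0;
    for (;;) {
        play_synchronised_frame(play_block);              /* same number -> same frame timing */
        play_block = (play_block + 1u) % NUM_CACHE_BLOCKS;
    }
}
```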
  • the embodiment of the present disclosure also provides a video reading device 400 including:
  • the first obtaining module 401 is used to obtain the cache block number of the video to be played when at least two video streams are played simultaneously;
  • the query module 402 is used to query the buffer block address of each video stream corresponding to the buffer block number in the buffer block address list;
  • the reading module 403 is configured to read from the cache block indicated by each cache block address, obtain the video frame of each video stream and output it, wherein the timing of the video frames stored in the cache blocks corresponding to a single cache block number is the same.
  • the reading module 403 is also used for:
  • detect the status signal of each output interface, where a single output interface is used to output the video frames of a single video stream; when the status signals are all idle status signals, read from the cache block indicated by each cache block address to obtain the video frame of each video stream, and output the video frame of each single video stream through a single output interface.
  • the reading module 403 is also used to:
  • the data of the target length is sequentially read from the cache block indicated by each cache block address until the video frame is completely read, and the video frame of the single video stream is output through a single output interface.
  • the reading module 403 is also used for:
  • the cache block address includes a start address and an end address; the reading module 403 is further configured to read from the start address of a single cache block to the end address of that cache block, according to the positions indicated by the start address and end address contained in each cache block address, to obtain and output the video frame of each video stream.
  • the video reading device 400 also includes:
  • the second acquiring module 404 is configured to acquire at least two video streams
  • the storage module 405 is configured to store video frames with the same time sequence in at least two video streams in at least two cache blocks corresponding to the same cache block number.
  • the at least two video streams include a main video stream and a secondary video stream; the storage module 405 is further configured to store the main-channel video frame of the main video stream in a first cache block, acquire the target cache block number of the first cache block, and store the secondary video frame of the secondary video stream in the second cache block corresponding to the target cache block number, wherein the at least two cache blocks include the first cache block and the second cache block.
  • At least two video streams include multiple secondary video streams, and at least two cache blocks include multiple second cache blocks; the storage module 405 is also used for:
  • the secondary video frames of multiple secondary video streams are respectively stored in different second cache blocks corresponding to the target cache block numbers, and the secondary video frames of a single secondary video stream are stored in a single second cache block.
  • the storage module 405 is also used for:
  • obtain the currently available cache block number, where the available cache block number is the number of the cache block in the memory into which data can be written; look up, in the cache block address list, the write start address corresponding to the available cache block number; and write the main-channel video frame data of the main-channel video stream from the write start address into the cache block corresponding to the available cache block number, wherein the cache block corresponding to the available cache block number is the first cache block.
  • the at least two video streams include a left-eye video stream and a right-eye video stream captured by a three-dimensional display endoscope; the video frames include left-eye video frames of the left-eye video stream and right-eye video frames of the right-eye video stream; and the cache blocks include a left-eye cache block and a right-eye cache block, the left-eye video frames being stored in the left-eye cache block and the right-eye video frames in the right-eye cache block.
  • the storage module 405 is further configured to store the left-eye video frame data in the left-eye cache block and the right-eye video frame data in the right-eye cache block, wherein the cache block numbers of the left-eye cache block and the right-eye cache block are the same.
  • the reading module 403 is further configured to read the left-eye video frame from the left-eye cache block indicated by each cache block address, obtain and output the video frame of the left-eye video stream, and read the right-eye video frame from the right-eye cache block indicated by each cache block address, obtain and output the video frame of the right-eye video stream.
  • the video reading device obtains the cache block number of the video to be played when at least two video streams are played simultaneously; since the timing of the video frames stored in the cache blocks corresponding to a single cache block number is the same, obtaining the cache block number allows the cache block address of each video stream corresponding to that number to be queried directly in the cache block address list, and reading from the cache blocks indicated by those addresses completes the synchronous processing of the at least two video streams, yielding video frames of each stream with the same timing for output.
  • an embodiment of the present disclosure also provides an electronic device 500, including a processor 501, a memory 502, and a program or instruction stored in the memory 502 and operable on the processor 501.
  • when the program or instructions are executed by the processor 501, each process of the above video reading method embodiment can be realized with the same technical effect; to avoid repetition, details are not repeated here.
  • the embodiment of the present disclosure also provides a readable storage medium, on which a program or instruction is stored, and when the program or instruction is executed by a processor, each process of the above video reading method embodiment can be achieved, and the same Technical effects, in order to avoid repetition, will not be repeated here.
  • a readable storage medium includes a computer-readable storage medium, such as a computer read-only memory (Read-Only Memory, ROM), a random access memory (Random Access Memory, RAM), a magnetic disk or an optical disk, and the like.
  • the terms “comprises”, “comprising” or any other variation thereof are intended to cover a non-exclusive inclusion, such that a process, method, article or apparatus comprising a set of elements includes not only those elements but also other elements not expressly listed, or elements inherent in the process, method, article or apparatus. Without further limitation, an element defined by the phrase “comprising a ...” does not preclude the presence of additional identical elements in the process, method, article or apparatus comprising that element.
  • The scope of the methods and apparatus in the disclosed embodiments is not limited to performing functions in the order shown or discussed; depending on the functions involved, functions may also be performed in a substantially simultaneous manner or in reverse order. For example, the described methods may be performed in an order different from that described, and steps may also be added, omitted, or combined. Additionally, features described with reference to certain examples may be combined in other examples.

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Controls And Circuits For Display Device (AREA)
  • Two-Way Televisions, Distribution Of Moving Picture Or The Like (AREA)

Abstract

The present disclosure provides a video reading method and apparatus, an electronic device, and a storage medium, applied to the technical field of three-dimensional display. The method includes: when at least two video streams are played simultaneously, obtaining the cache block number of the video to be played, wherein the at least two video streams correspond to at least two buffers respectively, the cache blocks in the at least two buffers correspond to one another one-to-one, and mutually corresponding cache blocks in the at least two buffers have the same cache block number; querying, in a cache block address list, the cache block address of each video stream corresponding to the cache block number; and, in the buffer corresponding to each video stream, reading the video frame from the cache block indicated by each cache block address and outputting it, the video frames stored in the cache blocks corresponding to a single cache block number having the same timing. In this way, when the at least two video streams are synchronized, a large amount of computation is avoided, processing complexity is reduced, the synchronization process is simplified, and the real-time performance and alignment accuracy of video synchronization are improved.

Description

视频读取方法、装置、电子设备及存储介质
相关申请的交叉引用
本申请要求在2021年12月06日提交中国专利局、申请号202111474863.9、发明名称为“视频读取方法、装置、电子设备及存储介质”的中国专利申请的优先权,其全部内容通过引用结合在本申请中。
技术领域
本公开应用于三维显示技术领域,特别是涉及一种视频读取方法、装置、电子设备及存储介质。
背景技术
随着3D(Three Dimensions,三维立体显示)显示技术的日渐成熟,3D显示技术被应用到医疗影像领域,相关的观看3D医疗影像的方式是用户通过佩戴3D内窥镜来实现的。
3D内窥镜的观看原理,是由人的左右眼分别接收按照帧顺序播放的左右眼视频图像,再经过大脑将左右眼视频图像予以合成,产生立体效果。观影时,左右眼需通过3D内窥镜来接收视频图像,以使左眼图像组成的左眼视频流的左眼视频帧只能被左眼接收,右眼图像组成的右眼视频流的右眼视频帧只能被右眼接收。
这样，需要采集左右眼两路视频图像组成的两路视频流，并完成两路视频流的同步处理后发送给显示器。目前，往往是通过计算视频进度差值或视频片段同步补偿值进行同步处理，由于这两种方法都需要大量的计算，实现起来比较复杂，因此无法满足3D内窥镜影像视频处理在实时性上的要求。
发明内容
有鉴于此,本公开实施例提出一种视频读取方法、装置、电子设备及存储介质,用于解决相关技术无法满足3D内窥镜影像视频处理实时性上的要求的技术问题。
本公开第一方面提供一种视频读取方法,所述方法包括:
在至少两路视频流同时播放时,获取待播放视频的缓存块编号,其中,所述至少两路视频流分别对应至少两个缓冲区,所述至少两个缓冲区中的缓存块一一对应,并且所述至少两个缓冲区中相互对应的缓存块具有相同的缓存块编号;
在缓存块地址列表中查询与所述待播放视频帧的缓存块编号相对应的每路所述视频流的缓存块地址;
针对每路视频流,在所述每路视频流对应的缓冲区中,从每个所述缓存块地址所指示的缓存块中读取视频帧并输出,其中,与单个所述缓存块编号相对应的各缓存块中存 储的视频帧的时序相同。
依据本公开第二方面,提供一种视频读取装置,所述装置包括:
第一获取模块,用于在至少两路视频流同时播放时,获取待播放视频的缓存块编号,其中,所述至少两路视频流分别对应至少两个缓冲区,所述至少两个缓冲区中的缓存块一一对应,并且所述至少两个缓冲区中相互对应的缓存块具有相同的缓存块编号;
查询模块,用于在缓存块地址列表中查询与所述待播放视频帧的缓存块编号相对应的每路所述视频流的缓存块地址;
读取模块,用于针对每路视频流,在所述每路视频流对应的缓冲区中,从每个所述缓存块地址所指示的缓存块中读取视频帧并输出,其中,与单个所述缓存块编号相对应的各缓存块中存储的视频帧的时序相同。
依据本公开第三方面,提供一种可读存储介质,所述可读存储介质上存储有程序或指令,所述有程序或指令被处理器执行时实现上述第一方面所述的视频读取方法。
依据本公开第四方面,提供一种电子设备,包括处理器,存储器及存储在所述存储器上并可在所述处理器上运行的程序或指令,所述程序或指令被所述处理器执行时实现上述第一方面所述的视频读取方法。
针对相关技术,本公开具备如下优点:
本公开提供的一种视频读取方法、装置、电子设备及存储介质,在至少两路视频流同时播放时,获取待播放视频的缓存块编号,其中,至少两路视频流分别对应至少两个缓冲区,至少两个缓冲区中的缓存块一一对应,并且至少两个缓冲区中相互对应的缓存块具有相同的缓存块编号。由于与单个缓存块编号相对应的各缓存块中存储的视频帧的时序相同,这样通过获取缓存块编号,就可以直接在缓存块地址列表中查询与该待播放视频帧的缓存块编号相对应的每路视频流的缓存块地址;然后在每个缓存块地址所指示的缓存块中分别进行读取,以完成至少两路视频流的同步处理,得到每路视频流的时序相同的视频帧并输出,因此在对至少两路视频流进行同步处理时,可以避免进行大量的计算,减少了处理复杂度,简化了同步处理过程,提高了视频同步的实时性和对齐精度。
上述说明仅是本公开技术方案的概述，为了能够更清楚了解本公开的技术手段，而可依照说明书的内容予以实施，并且为了让本公开的上述和其它目的、特征和优点能够更明显易懂，以下特举本公开的具体实施方式。
附图说明
为了更清楚地说明本公开实施例或相关技术中的技术方案,下面将对实施例或相关技术描述中所需要使用的附图作一简单地介绍,显而易见地,下面描述中的附图是本公开的一些实施例,对于本领域普通技术人员来讲,在不付出创造性劳动的前提下,还可 以根据这些附图获得其他的附图。而且在整个附图中,用相同的参考符号表示相同的部件。在附图中:
图1是本公开实施例提供的一种视频读取方法的步骤流程图;
图2是本公开实施例提供的一种视频读取结构示意图;
图3是本公开实施例提供的一种缓存块地址列表示意图;
图4是本公开实施例提供的另一种视频读取结构示意图;
图5是本公开实施例提供的另一种视频读取方法的步骤流程图;
图6是本公开实施例提供的一种缓存块划分示意图；
图7是本公开实施例提供的再一种视频读取方法的步骤流程图;
图8是本公开实施例提供的一种缓存块地址边界示意图;
图9是本公开实施例提供的又一种视频读取方法的步骤流程图;
图10是本公开实施例提供的一种视频读取装置的结构框图;
图11是本公开实施例提供的一种电子设备的结构框图。
具体实施例
为使本公开实施例的目的、技术方案和优点更加清楚,下面将结合本公开实施例中的附图,对本公开实施例中的技术方案进行清楚、完整地描述,显然,所描述的实施例是本公开一部分实施例,而不是全部的实施例。基于本公开中的实施例,本领域普通技术人员在没有作出创造性劳动前提下所获得的所有其他实施例,都属于本公开保护的范围。
下面将结合本公开实施例中的附图,对本公开实施例中的技术方案进行清楚地描述,显然,所描述的实施例是本公开一部分实施例,而不是全部的实施例。基于本公开中的实施例,本领域普通技术人员获得的所有其他实施例,都属于本公开保护的范围。
本公开的说明书和权利要求书中的术语“第一”、“第二”等是用于区别类似的对象,而不用于描述特定的顺序或先后次序。应该理解这样使用的数据在适当情况下可以互换,以便本公开的实施例能够以除了在这里图示或描述的那些以外的顺序实施,且“第一”、“第二”等所区分的对象通常为一类,并不限定对象的个数,例如第一对象可以是一个,也可以是多个。此外,说明书以及权利要求中“和/或”表示所连接对象的至少其中之一,字符“/”,一般表示前后关联对象是一种“或”的关系。
下面结合附图,通过具体的实施例及其应用场景对本公开实施例提供的视频读 取方法、装置、电子设备及存储介质进行详细地说明。
3D显示技术,是一种新型显示技术,与普通2D(Two Dimensions,二维立体显示)画面显示相比,3D技术可以使画面变得立体逼真,图像不再局限于屏幕的平面上,仿佛能够走出屏幕外面,让观众有身临其境的感觉,在眼镜式3D技术中,可以细分出三种主要的类型:色差式、偏光式和主动快门式,其中偏光式3D技术以其较好的图像效果和较低的使用成本,应用最为广泛。
随着新技术的发展,3D显示技术被应用到医疗影像领域,通过医用3D内窥镜采集病人体内图像画面,在监视器、显示屏等显示设备上呈现3D影像,相对于2D影像画面更加逼真,病灶部位观察更加清晰,更有层次感,在提供诊断效率及准确性方面取得显著效果。
偏光式3D是利用光线有“振动方向”的原理来分解原始图像的,先通过把图像分为垂直向偏振光和水平向偏振光两组画面,然后3D眼镜左右分别采用不同偏振方向的偏光镜片,这样人的左右眼就能接收两组画面,再经过大脑合成立体影像。
根据偏光式3D显示技术,需要采集两路视频流,完成视频处理后发送给显示器。由于两路视频采集与处理过程相对独立,相互之间存在一定时间偏差,发送到显示器之前,需要对两路视频完成同步处理,保证时间偏差在要求范围内。
目前常用的视频同步方法有以下两种,一种是比较视频进度差值,通过时间戳来纠正同步偏差;另一种方法是根据视频片段计算同步补偿值,根据同步补偿值,实现两路视频同步。两种方法都需要大量的计算,根据计算结果调整视频同步的偏差,实现起来比较复杂,不适合3D内窥镜影像视频处理实时性上的要求。而且通过上述两种方案,得到视频数据同步结果精度不高,完全依赖于处理的性能,同步效果与视频参数的计算处理时间相关。
为了解决上述问题,本公开提出一种视频读取方法、装置、电子设备及存储介质,解决了相关方案通过周期性计算视频进度差值需要进行大量的计算的问题,可以提高视频同步的实时性,减少处理复杂度,提升视频同步精度。
如图1所示,本公开实施例提供的一种视频读取方法,该方法包括:
步骤S101,获取至少两路视频流。
在本公开实施例中,至少两路视频流可以包括一路主路视频流和至少一路从路视频流,主路视频流可以是从至少两路视频流中随机选取的一路的视频流,而从路视频流可以是除主路视频流之外的其他路的视频流;主路视频流也可以是固定的一路的视频流,而从路视频流可以是除主路视频流之外的其他路的视频流,具体可以根据实际需求确定,此处不做限定。
在本公开实施例中,控制器可以通过采集设备获取至少两路视频流。
示例性地,参见图2所示,通过3D内窥镜的摄像头或镜片等采集设备获取至少两路视频流,控制器通过视频检测模块检测该至少两路视频流的视频帧。
在本公开实施例中,控制器通过视频检测模块检测到的主路视频流的视频帧是主路视频帧,控制器通过视频检测模块检测到的从路视频流的视频帧是从路视频帧。
步骤S102,将至少两路视频流中时序相同的视频帧分别存储在与同一缓存块编号相对应的至少两个缓存块中。
在本公开实施例中,至少两路视频流中时序相同的视频帧是指至少两路视频流中时间戳相同的各路视频流的视频帧,或者指至少两路视频流中接收次序相同的各路视频流的视频帧;即时序相同是指时间戳相同或者指接收次序相同,具体可以根据实际需求确定,此处不做限定。
在本公开实施例中,缓存块是容量、地址区间均已确定的存储区域,容量大小根据采集视频图像的分辨率、色深等来确定,每个缓存块可以存放至少一个视频帧,地址区间是由应用软件开发者划分后,通过应用处理器(Application Processor Unit,APU)将各个缓存块的起始地址和结束地址,下发到缓存块地址列表中,建立缓存块编号与缓存块地址一一对应关系,将缓存块首尾的起始地址和结束地址放入缓存块地址列表,实现缓存块编号与缓存地址的快速转换。每个缓存块设定一个编号,即缓存块编号,缓存块划分如图3所示,可以为缓存块0、缓存块1…缓存块n、缓存块n+1。
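As an illustrative sketch only (the equal-sized blocks, the `block_addr_t` layout, and the `init_addr_list` helper are assumptions; the disclosure only states that the APU writes each block's start and end address into the address list), building the number-to-address mapping once at initialization reduces the later number-to-address conversion to a single table lookup:

```c
#include <stdint.h>

/* Hypothetical address-list entry for one cache block. */
typedef struct {
    uint32_t start;
    uint32_t end;
} block_addr_t;

/* Divide the buffer into equal-sized blocks and map each block number to a
 * fixed start/end address pair, as in the APU configuration step. */
static void init_addr_list(block_addr_t *list, int num_blocks,
                           uint32_t base_addr, uint32_t block_size)
{
    for (int i = 0; i < num_blocks; ++i) {
        list[i].start = base_addr + (uint32_t)i * block_size;  /* block i begins here   */
        list[i].end   = list[i].start + block_size - 1u;       /* inclusive end address */
    }
}
```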
值得说明的是,本公开实施例中,所述至少两路视频流可以分别对应至少两个缓冲区,至少两个缓冲区中的缓存块一一对应,其中,至少两个缓冲区中相互对应的缓存块具有相同的缓存块编号。
在本公开实施例中,通过3D内窥镜的摄像头或镜片等采集设备获取至少两路视频流后,控制器通过视频检测模块检测该至少两路视频流的视频帧后,然后将至少两路视频流中时间戳相同或接收次序相同的各路视频流的视频帧分别存储在与同一缓存块编号相对应的至少两个缓存块中。
示例性地,参见图4所示,系统上电后,配置文件加载成功,应用处理器(APU)开始正常工作,对缓存管理(主)端和缓存管理(从)端进行初始化配置,将存储器空间划分为至少两个视频数据缓冲区,分别用于至少两路视频数据流的存储,其中至少两路视频流中的主路视频流可以是视频流1,至少两路视频流中的从路视频流可以是视频流2,将视频流1对应的缓存管理模块作为缓存管理(主)端,视频流2对应的缓存管理模块作为缓存管理(从)端;通过3D内窥镜的摄像头或镜片等采集 设备获取视频流1和视频流2后,控制器通过视频检测模块检测视频流1和视频流2的视频帧,然后将时间戳相同或接收次序相同的视频帧中的主路视频帧,通过缓存管理(主)端存入到视频流1数据缓冲区,再获取视频流1数据缓冲区中该主路视频帧所存储的缓存块的缓存块编号,将缓存管理(主)端的缓存块编号实时的同步到缓存管理(从)端,保证两个缓存管理模块写操作的缓存块编号相同,使得该缓存管理(从)端将与该主路视频帧时间戳相同或接收次序相同的视频帧中的从路视频帧,存储在视频流2数据缓冲区中的该缓存块编号对应的缓存块中。
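A minimal sketch of the master/slave write coupling described above, under stated assumptions: `write_frame_to_block` and `sync_block_no_to_slave` are hypothetical helpers standing in for the cache-management (master) and (slave) modules, and the buffer identifiers are illustrative. What it shows is that frames of the same timing always land in blocks sharing one number:

```c
#include <stddef.h>
#include <stdint.h>

extern void write_frame_to_block(int buffer_id, uint32_t block_no,
                                 const uint8_t *frame, size_t len);
extern void sync_block_no_to_slave(uint32_t block_no);

/* The master buffer manager stores a master-stream frame, then forwards the
 * block number it used so the slave manager writes the same-timing slave-stream
 * frame into the block with the identical number in its own buffer. */
static void store_synchronized_pair(uint32_t block_no,
                                    const uint8_t *master_frame, size_t m_len,
                                    const uint8_t *slave_frame,  size_t s_len)
{
    write_frame_to_block(/*buffer_id=*/0, block_no, master_frame, m_len); /* master buffer      */
    sync_block_no_to_slave(block_no);                                     /* real-time number sync */
    write_frame_to_block(/*buffer_id=*/1, block_no, slave_frame, s_len);  /* slave buffer        */
}
```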
步骤S103,在至少两路视频流同时播放时,获取待播放视频的缓存块编号。
在本公开实施例中,待播放视频可以指3D内窥镜影像视频进行同步处理得到的视频,例如:如图4所示的同步视频流1和同步视频流2,具体可以根据实际需求确定,此处不做限定。
在本公开实施例中,在3D内窥镜的显示设备同时播放至少两路视频流时,控制器可以获取待播放视频的缓存块编号。
示例性地,参见图4所示,3D内窥镜影像视频进行同步处理得到的视频包括:同步视频流1和同步视频流2,在3D内窥镜的显示设备同时播放同步视频流1和同步视频流2时,获取存储同步视频流1和同步视频流2的缓存块编号。
步骤S104,在缓存块地址列表中查询与待播放视频帧的缓存块编号相对应的每路视频流的缓存块地址。
在本公开实施例中,为了实现对不同的缓存块的读写操作,应用处理器(APU)需要把每个缓存块的缓存块地址作为初始化参数,配置到缓存块地址列表中,同时建立缓存块编号与缓存块地址包括的起始地址和结束地址的一一映射关系,从而在获取待播放视频的缓存块编号后,控制器可以根据缓存块编号提取该缓存块编号对应的起始地址和结束地址,从而得到与该缓存块编号相对应的每路视频流所存储的缓存块地址。
示例性地,参见图5所示,在获取待播放视频的缓存块编号后,控制器通过地址提取模块,从缓存块地址列表中,提取该待播放视频帧的缓存块编号对应的起始地址和结束地址,以得到与该待播放视频帧的缓存块编号相对应的每路视频流所存储的缓存块地址。
步骤S105,针对每路视频流,在该路视频流对应的缓冲区中,从每个缓存块地址所指示的缓存块中读取视频帧并输出,其中,与单个缓存块编号相对应的各缓存块中存储的视频帧的时序相同。
其中,针对至少两路视频流中的每一者执行S105是同步进行的。
在本公开实施例中,缓存块地址包括起始地址和结束地址。
在本公开实施例中,控制器根据每个缓存块地址所包括的起始地址和结束地址所指示的位置,分别从单个缓存块的起始地址读取到缓存块的结束地址,得到每路视频流的视频帧并输出。
示例性地,参见图2所示,缓存块编号被送入缓存块地址管理的缓存块地址列表中后,控制器提取该缓存块编号对应的起始地址和结束地址;读操作总线时序控制子模块通过仲裁获取接口总线的使用权限,在获取到使用权限后根据该起始地址和结束地址的地址信息,读取该缓存块编号对应的缓存块中存储的视频帧数据并输出。
本公开提供的一种视频读取方法,在至少两路视频流同时播放时,获取待播放视频的缓存块编号;由于与单个缓存块编号相对应的各缓存块中存储的视频帧的时序相同,这样通过获取缓存块编号,就可以直接在缓存块地址列表中查询与该缓存块编号相对应的每路视频流的缓存块地址;然后在每个缓存块地址所指示的缓存块中分别进行读取,以完成至少两路视频流的同步处理,得到每路视频流的时序相同的视频帧并输出,方法采用缓存块编号作为至少两路视频流的同步信号,不需要分析视频流数据的同步偏差,将两路视频流数据视频帧同步转化为缓存读写操作的同步,计算量少;因此在对至少两路视频流进行同步处理时,可以避免进行大量的计算,减少了处理复杂度,简化了同步处理过程,每次读写操作采用同一个缓存块编号控制,不需要进行额外的计算,把处理延迟降到最低,实时性好,从而在同步处理3D内窥镜的两路视频流时,可以提高视频同步的实时性,从而确保可以满足3D内窥镜影像视频处理实时性上的要求,提升了视频同步精度,效果好。
在本公开实施例中,将至少两路视频流同步处理,转换成对缓存块的同步读写操作,去掉了已有方案中复杂的计算,提高了处理的实时性,非常适合在现场可编程门阵列(Field-Programmable Gate Array,FPGA)平台中实现。
如图6所示,本公开实施例提供的一种视频读取方法,该方法包括:
步骤S201,获取至少两路视频流。
该步骤可参照步骤S101的详细描述,此处不再赘述。
步骤S202,至少两路视频流包括主路视频流和从路视频流,将主路视频流的主路视频帧存储在第一缓存块中。
在本公开实施例中,第一缓存块用于存储主路视频流的主路视频帧,第一缓存块位于第一缓冲区中,第一缓冲区与所述缓存管理(主)端相对应,第一缓冲区可以位于第一存储器中。
在本公开实施例中,控制器通过视频检测模块检测到主路视频流的主路视频帧后,将该主路视频帧存储在第一存储器的第一缓存块中。
示例性地,参见图2所示,至少两路视频流中的主路视频流可以是视频流1,第一缓冲区可以是视频流1数据缓冲区。控制器通过视频检测模块检测视频流1的主路视频帧后,将该主路视频帧存储在视频流1数据缓冲区中。
可选地,如图7所示,步骤S202,可以包括:
子步骤S2021,获取当前的可用缓存块编号。
在本公开实施例中,可用缓存块编号是存储器中可以写入数据的缓存块的编号。存储器可以包括第一存储器和第二存储器,在第一存储器中设置第一缓冲区,在第二存储器中设置第二缓冲区,即两路视频流数据的缓冲区采用两个独立的存储器;也可以直接在存储器中设置第一缓冲区和第二缓冲区,即两路视频流数据的缓冲区共用一个存储器,每个缓冲区划分成若干个缓存块,每个缓存块具有一个固定的编号,用于存储至少一个视频帧图像数据。当采用两个独立的存储器时,缓存管理(主)端和缓存管理(从)端视频帧同步读取与两路物理硬件接口的带宽相关,采用相同的芯片及物理硬件接口设计可以最大限度减少缓存管理(主)端和缓存管理(从)端同步偏差;当共用一个存储器时,缓存管理(主)端和缓存管理(从)端两端视频帧读取需要共用一个物理硬件接口(存储器接口),存储器接口物理带宽远高于视频流读取带宽,可以采用时分复用的机制,控制每次读取固定长度数据,切换主从通道,实现存储器接口共用;两种方式均可保证至少两路视频流读取的同步。
在本公开实施例中,控制器依据缓存块编号顺序进行查询,将缓存块编号最靠前且未被占用的空闲缓存块的编号确定为可用缓存块编号,以供本次写入操作进行使用。相应地,所确定的可用缓存块的编号即为本实施例中描述的可用缓存块编号。
示例性地,参见图3所示,每个缓冲区被应用软件开发者划分成若干个缓存块,每个缓存块具有一个固定的缓存块编号,例如:缓存块0、缓存块1…缓存块n、缓存块n+1。参见图2所示,控制器获取可以执行写操作的缓存块的编号作为可用缓存块编号,以便控制器通知写操作总线时序控制子模块提取该可用缓存块编号。
子步骤S2022,在缓存块地址列表中,查找可用缓存块编号对应的写入起始地址。
在本公开实施例中,写入起始地址是指可用缓存块的起始地址,控制器可以根据可用缓存块编号,在缓存块地址列表中,查找该可用缓存块编号对应的起始地址作为写入起始地址。
示例性地,每个缓存块都有一个起始地址和结束地址,这两个地址是缓存块的 两个边界,可以是存储芯片的物理地址,也可以是相对地址。缓存管理模块根据起始地址和结束地址可以对某个缓存块进行数据读写操作,缓存块地址边界划分如图8所示。参见图5所示,控制器将每个缓存块的起始地址和结束地址的地址信息配置到缓存块地址列表中,在获取可用缓存块编号后,可以根据该可用缓存块编号,在缓存块地址列表中,查找该可用缓存块编号对应的起始地址作为写入起始地址。
子步骤S2023,将主路视频流的主路视频帧数据从写入起始地址,写入可用缓存块编号对应的缓存块中,其中,可用缓存块编号对应的缓存块为第一缓存块。
在本公开实施例中,在控制器通过视频检测模块检测到主路视频流的主路视频帧后,将主路视频流的主路视频帧数据从写入起始地址,写入可用缓存块编号对应的缓存块中,其中,可用缓存块编号对应的缓存块即为第一缓存块。
示例性地,缓存管理模块主端接收到一个主路视频帧数据,根据可用缓存块编号,在缓存块地址列表中,提取该可用缓存块编号对应的第一缓冲区中缓存块(即,第一缓存块)的起始地址信息,通过接口总线将一个主路视频帧数据写入该第一缓存块中。
本公开通过获取可用缓存块编号,可以确定存储器中当前可以写入数据的缓存块,从而可以根据该可用缓存块编号,在缓存块地址列表中,查找该可用缓存块编号对应的写入起始地址,进而将主路视频流的一个主路视频帧数据,从写入起始地址写入该可用缓存块编号对应的缓存块(即,第一缓存块)中,以免在将主路视频流的一个主路视频帧数据写入存储器时,覆盖写入已经写入数据的缓存块,造成主路视频流数据的损坏和丢失。
子步骤S2024,检测主路视频帧的帧尾标志。
在本公开实施例中,一个视频帧包括帧头和帧尾,帧尾标志是指标识视频帧的帧尾的标志。
在本公开实施例中,控制器通过检测主路视频帧的帧尾标志,在检测到帧尾标志时,可以确定完整存储了该主路视频帧。
示例性地,控制器检测一个主路视频帧的帧头标志和帧尾标志,当检测到有效的帧尾标志时,可以确定完整存储了该主路视频帧。
子步骤S2025,在检测到帧尾标志时,给可用缓存块编号加一。
在本公开实施例中,控制器通过检测主路视频帧的帧尾标志,在检测到帧尾标志时,可以确定在当前的可用缓存块中完整存储了该主路视频帧,此时当前的可用缓存块就变为占用缓存块,给可用缓存块编号加一可以得到存储下一个主路视频帧的缓存块的缓存块编号。
示例性地,控制器检测一个主路视频帧的帧头标志和帧尾标志,当检测到有效的帧尾标志时,可以确定当前的可用缓存块中完整存储了该主路视频帧,将可用缓存块编号加1得到存储下一个主路视频帧的缓存块的缓存块编号。
本公开通过检测主路视频帧数据的帧尾标志;可以在检测到帧尾标志时,将可用缓存块编号加1,从而得到更新后的可用缓存块编号,具有该可用缓存块编号的缓存块可以存储下一个主路视频帧数据。
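The write-side bookkeeping of sub-steps S2024 and S2025 amounts to advancing the available cache block number whenever a valid frame-tail flag is detected. A brief sketch, assuming a hypothetical `detect_frame_tail` helper and a wrap-around over `NUM_BLOCKS` blocks (the wrap-around is an assumption the disclosure does not spell out):

```c
#include <stddef.h>
#include <stdint.h>

#define NUM_BLOCKS 8u   /* assumed number of cache blocks per buffer */

extern int detect_frame_tail(const uint8_t *data, size_t len);  /* 1 if a valid tail flag is present */

/* After writing `data` into the current block, advance the available block
 * number only when a complete frame (valid frame-tail flag) has been stored. */
static uint32_t advance_if_frame_complete(uint32_t avail_block_no,
                                          const uint8_t *data, size_t len)
{
    if (detect_frame_tail(data, len))
        avail_block_no = (avail_block_no + 1u) % NUM_BLOCKS;  /* next frame goes to the next block */
    return avail_block_no;
}
```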
步骤S203,获取第一缓存块的目标缓存块编号。
在本公开实施例中,目标缓存块编号是指与主路视频帧接收时间戳或接收次序同步的从路视频帧所要存储的缓存块的缓存块编号。
在本公开实施例中,控制器获取第一缓存块的缓存块编号,将该缓存块编号作为目标缓存块编号。
示例性地,参见图4所示,缓存管理(从)端的缓存块编号由缓存管理(主)端提供,则控制器获取第一缓存块的目标缓存块编号,并将该目标缓存块编号实时同步给缓存管理(从)端,以保证接收时间戳或接收次序同步的主路视频帧和从路视频帧,写入缓存块编号相同的两个缓存块中。
步骤S204,将从路视频流的从路视频帧存储在与目标缓存块编号相对应的第二缓存块中,至少两个缓存块包括第一缓存块和第二缓存块。
在本公开实施例中,第二缓存块用于存储从路视频流的从路视频帧,第二缓存块位于第二缓冲区中,第二缓冲区与缓存管理(从)端相对应。
在本公开实施例中,控制器根据目标缓存块编号,在缓存块地址列表中,查找该目标缓存块编号对应的起始地址作为目标写入起始地址,将从路视频流的从路视频帧从该目标写入起始地址,写入目标缓存块编号对应的第二缓存块中。
示例性地,参见图4所示,控制器根据目标缓存块编号,在缓存块地址列表中,查找该目标缓存块编号对应的起始地址作为目标写入起始地址,然后通过接口总线将一个从路视频帧从该目标写入起始地址,写入目标缓存块编号对应的第二缓存块中。
本公开通过将主路视频流的主路视频帧存储在第一缓存块中,再获取第一缓存块的目标缓存块编号,可以使得能够将缓存管理(主)端的目标缓存块编号提供给缓存管理(从)端,以便将从路视频流的从路视频帧存储在与目标缓存块编号相对应的第二缓存块中,从而保证接收时间戳或接收次序同步的主路视频帧和从路视频帧,写入两个缓存块编号相同的缓存块中。
步骤S205,至少两路视频流包括多个从路视频流,至少两个缓存块包括多个第 二缓存块;分别将多个从路视频流的从路视频帧存储在与目标缓存块编号相对应的不同第二缓存块中,单个从路视频流的从路视频帧存储在单个第二缓存块中。
在本公开实施例中,多个第二缓存块可以均设置在同一存储器中,也可以分别设置在不同的存储器中,具体可以根据实际需求确定,此处不做限定。
在本公开实施例中,在至少两路视频流中包括多个从路视频流时,控制器分别将多个从路视频流中的各个从路视频流的从路视频帧,存储在与目标缓存块编号相对应的不同第二缓存块中,单个从路视频流的从路视频帧存储在单个第二缓存块中。
示例性地,参见图4所示,在至少两路视频流中包括多个从路视频流时,存在与各个从路视频流对应的多个缓存管理(从)端,在控制器控制缓存管理(主)端将目标缓存块编号提供给各个缓存管理(从)端后,各个缓存管理(从)端将各个从路视频流的各个从路视频帧,从目标缓存块编号对应的起始地址处开始,写入目标缓存块编号对应的各个第二缓存块中。
本公开通过在至少两路视频流中包括多个从路视频流时,控制器分别将多个从路视频流的从路视频帧,存储在与目标缓存块编号相对应的不同第二缓存块中,可以使得接收时间戳或接收次序同步的各个从路视频帧,写入缓存块编号相同的缓存块中。
步骤S206,在至少两路视频流同时播放时,获取待播放视频的缓存块编号。
该步骤可参照步骤S103的详细描述,此处不再赘述。
步骤S207,在缓存块地址列表中查询与缓存块编号相对应的每路视频流的缓存块地址。
该步骤可参照步骤S104的详细描述,此处不再赘述。
步骤S208,检测至少两路视频流各自对应的输出接口的状态信号。
在本公开实施例中,单个输出接口用于输出单路视频流的视频帧,任意一路视频流对应的输出接口可以用于输出该路视频流的视频帧;状态信号是指输出接口是否处于空闲状态。
在本公开实施例中,控制器检测所述至少两路视频流各自对应的输出接口是否处于空闲状态,以确定是否得从缓存块地址所指示的缓存块中读取待播放视频的视频帧。
示例性地,输出接口可以包括第一输出接口和第二输出接口,参见图2所示,视频帧回读操作同样在缓存管理模块实现,第一输出接口与缓存管理模块的缓存管理(主)端连接,第二输出接口与缓存管理模块的缓存管理(从)端连接,缓存管理(主)端和缓存管理(从)端分别监测各输出接口是否处于空闲状态的状态信号。
步骤S209,在至少两路视频流各自对应的输出接口的状态信号均为空闲状态信号时,在每路视频流对应的缓冲区中,从每个缓存块地址所指示的缓存块中分别进行读取,得到该路视频流的视频帧,并将得到的视频帧通过单个输出接口输出单路视频流的视频帧。
在本公开实施例中,在状态信号均为空闲状态信号时,控制器根据每个缓存块地址所包括的起始地址和结束地址所指示的位置,分别从单个缓存块的起始地址读取到缓存块的结束地址,得到每路视频流的视频帧,并通过单个输出接口输出单路视频流的视频帧。
示例性地,输出接口可以包括第一输出接口和第二输出接口,参见图2所示,当与缓存管理(主)端连接的第一输出接口和与缓存管理(从)端连接的第二输出接口均处于空闲状态(例如Ready)时,根据当前的缓存块编号,缓存管理(主)端从缓存块地址列表查找缓存块编号对应的起始地址,通过接口总线读取视频流1数据缓冲区中的一个视频帧数据,得到同步视频流1的一个视频帧,将该视频帧通过第一输出接口输出,缓存管理(主)端将缓存块编号实时同步到缓存管理(从)端,缓存管理(主)端和缓存管理(从)端两端同步启动一个视频帧读操作,先根据缓存块编号从缓存块地址列表中提取该存储块起始地址,再读取视频流2数据缓冲区中对应的一个视频帧数据,得到同步视频流2的一个视频帧,将该视频帧通过第二输出接口输出。
本公开通过检测每个输出接口的状态信号,在状态信号均为空闲状态信号时,才在每个缓存块地址所指示的缓存块中分别进行读取,得到每路视频流的视频帧,并通过输出接口输出该视频帧,可以保证读取的主路视频帧数据和从路视频帧数据的有效性,避免在第一缓存块或第二缓存块被占用时进行读取主路视频帧数据和从路视频帧数据,造成视频帧数据的丢失,导致所得到的同步处理结果错误。
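A compact sketch of this gating, assuming a hypothetical `output_ready` status query and `read_and_output_frame` helper: a synchronized read is issued only when every stream's output interface reports the idle (Ready) state, so no stream is read while another stream's interface is still busy:

```c
#include <stdint.h>

extern int  output_ready(int interface_id);                   /* 1 when the interface is idle */
extern void read_and_output_frame(int stream, uint32_t block_no);

/* Start a synchronized read of block `block_no` only when every stream's
 * output interface signals the idle state. Returns 1 if the read was issued. */
static int try_synchronized_output(uint32_t block_no, int num_streams)
{
    for (int s = 0; s < num_streams; ++s)
        if (!output_ready(s))
            return 0;                                /* wait: at least one interface is busy */
    for (int s = 0; s < num_streams; ++s)
        read_and_output_frame(s, block_no);          /* same block number for every stream */
    return 1;
}
```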
可选地,步骤S209,可以包括:
子步骤S2091,依次从每个缓存块地址所指示的缓存块中读取目标长度的数据,直至完整读取视频帧,并通过单个输出接口输出单路视频流的视频帧。
在本公开实施例中,目标长度可以是用户基于实际经验进行设置,也可以是3D内窥默认的数值,具体可以根据实际需求确定,此处不做限定。
在本公开实施例中,在至少两个缓冲区位于同一存储器时,所述至少两个缓冲区通过同一接口总线传输每路视频流的视频帧数据,此时采用分时复用的机制,控制每次读取固定的目标长度视频帧数据,直至完整读取视频帧,并通过单个输出接口输出单路视频流的视频帧。
示例性地,参见图4所示,在至少两个缓冲区均位于同一存储器时,所述至少两个缓冲区通过同一接口总线分别与缓存管理(主)端和缓存管理(从)端连接,缓存管理(主)端和缓存管理(从)端读取一个视频帧需要共用一个物理的存储器接口,存储器接口物理带宽远高于视频流读取带宽,可以采用时分复用的机制,控制每次读取固定长度数据,切换主从通道,实现存储器接口共用,然后再通过与缓存管理(主)端连接的第一输出接口输出主路视频帧,通过与缓存管理(从)端连接的第二输出接口输出从路视频帧。
本公开实施例中,依次从每个缓存块地址所指示的缓存块中读取目标长度的数据,直至完整读取视频帧,并通过单个输出接口输出单路视频流的视频帧,可以使得在传输主路视频帧数据和从路视频帧数据的整个时间中分时复用在接口总线,提高了接口总线的利用率。
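Where the buffers share one memory, the time-division multiplexing described above can be sketched as alternating fixed-length reads between the master and slave channels. The chunk length, the `read_chunk` helper, and the simple round-robin alternation are assumptions for illustration; bus arbitration details are omitted:

```c
#include <stdint.h>

#define CHUNK_LEN 4096u   /* assumed fixed read length per bus grant */

extern void read_chunk(int channel, uint32_t addr, uint32_t len, uint8_t *dst);

/* Alternate the master (0) and slave (1) channels on the shared memory
 * interface, reading a fixed-length chunk per turn until both frames are done. */
static void tdm_read_frames(const uint32_t start[2], const uint32_t end[2],
                            uint8_t *dst[2])
{
    uint32_t offset[2] = {0, 0};
    int remaining = 2;
    while (remaining > 0) {
        for (int ch = 0; ch < 2; ++ch) {
            uint32_t total = end[ch] - start[ch] + 1u;   /* end address is inclusive */
            if (offset[ch] >= total)
                continue;                                /* this channel is already done */
            uint32_t len = total - offset[ch];
            if (len > CHUNK_LEN)
                len = CHUNK_LEN;
            read_chunk(ch, start[ch] + offset[ch], len, dst[ch] + offset[ch]);
            offset[ch] += len;
            if (offset[ch] >= total)
                --remaining;
        }
    }
}
```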
可选地,子步骤S2091,可以包括:
子步骤A、依次从每个缓存块地址所指示的缓存块中读取目标长度的视频帧;
在本公开实施例中,控制器采用分时复用的机制,控制每次从每个缓存块地址所指示的缓存块中,读取固定的目标长度视频帧数据。
示例性地,参见图4所示,读操作总线时序控制子模块通过仲裁获取接口总线使用权限,获取到权限后每次从每个缓存块地址所指示的缓存块中,读取固定的目标长度视频帧数据。
子步骤B、检测视频帧的帧尾标志;
在本公开实施例中,控制器通过检测每个缓存块地址所指示的缓存块中的视频帧的帧尾标志,以确定完整读取了该视频帧数据。
示例性地,控制器检测每个缓存块地址所指示的缓存块中的视频帧的帧头标志和帧尾标志,当检测到有效的帧尾标志时,可以确定完整读取了该视频帧数据。
子步骤C、在检测到与视频帧的数目相同的帧尾标志时,通过单个输出接口输出单路视频流的视频帧,并更新缓存块编号。
在本公开实施例中,控制器通过检测每个缓存块地址所指示的缓存块中的视频帧的帧尾标志,在检测到与视频帧的数目相同的帧尾标志时,通过单个输出接口输出单路视频流的视频帧,并给缓存块编号加一,更新缓存块编号。
示例性地,控制器检测每个缓存块地址所指示的缓存块中的视频帧的帧头标志和帧尾标志,当检测到有效的帧尾标志时,可以确定当前的缓存块中视频帧数据已被完整读取,通过单个输出接口输出单路视频流的视频帧,并将缓存块编号加1,可以得到存储下一个视频帧的缓存块编号。
在本公开实施例中,缓存管理模块(主)端和缓存管理模块(从)端每完成一个视频帧读取,会提取新的缓存块编号,读取下一个视频帧数据;在极端情况下,由于缓存管理模块(主)端和缓存管理模块(从)端的输出接口空闲状态可能不同,视频读取速率可能不同,会出现其中一端先读取完一帧的情况,当其中一端完成一帧读取,另一端读取进度较慢时,缓存管理模块(主)端和缓存管理模块(从)端视频同步偏差会接近一个视频帧的时间,同步偏差相对于已有方案大幅降低。
本公开通过依次从每个缓存块地址所指示的缓存块中读取目标长度的视频帧,可以分时复用在接口总线,提高了接口总线的利用率;接着检测视频帧的帧尾标志;在检测到与视频帧的数目相同的帧尾标志时,通过单个输出接口输出单路视频流的视频帧,并更新缓存块编号,可以使得控制器可以继续读取后续视频帧数据。
本公开提供的另一种视频读取方法,在获取至少两路视频流后,通过将主路视频流的主路视频帧存储在第一缓存块中,再获取第一缓存块的目标缓存块编号,可以使得能够将缓存管理(主)端的目标缓存块编号提供给缓存管理(从)端,以便将从路视频流的从路视频帧存储在与目标缓存块编号相对应的第二缓存块中,从而保证接收时间戳或接收次序同步的主路视频帧和从路视频帧,写入两个缓存块编号相同的缓存块中;然后在至少两路视频流同时播放时,获取待播放视频的缓存块编号;就可以直接在缓存块地址列表中查询与该缓存块编号相对应的每路视频流的缓存块地址;然后在每个缓存块地址所指示的缓存块中分别进行读取,以完成至少两路视频流的同步处理,得到每路视频流的时序相同的视频帧,因此可以避免对至少两路视频流进行时同步处理时,进行大量的计算,减少了处理复杂度,简化了至少两路视频流的同步处理过程,从而在同步处理3D内窥镜的两路视频流时,提高视频同步的实时性,从而确保可以满足3D内窥镜影像视频处理实时性上的要求,提升了视频同步精度。
如图9所示,本公开实施例提供的另一种视频读取方法,该方法包括:
步骤S301,获取至少两路视频流。
该步骤可参照步骤S101的详细描述,此处不再赘述。
步骤S302,将左眼视频帧数据存储在左眼缓存块中,并将右眼视频帧数据存储在右眼缓存块中。
在本公开实施例中,左眼缓存块和右眼缓存块的缓存块编号相同。
在本公开实施例中,3D内窥镜的两个镜片获取两路视频流后,控制器通过视频检测模块检测该两路视频流的视频帧后,然后将该两路视频流中时间戳相同或接收次序相同的左眼视频帧数据存储在左眼缓存块中,并将右眼视频帧数据存储在右眼 缓存块中。
示例性地,参见图4所示,两路视频流中的主路视频流可以是视频流1,两路视频流中的从路视频流可以是视频流2,通过3D内窥镜的左右两个镜片采集的两路视频流1和视频流2进入缓存管理(主)端的视频检测模块,检测到主路视频帧后通知写操作总线时序控制子模块提取当前可用缓存块的缓存块编号,送入缓存块地址管理子模块,从缓存块地址列表中获取缓存块编号对应的缓存块的起始地址。写操作总线时序控制子模块根据该起始地址,把当前的主路视频帧写入对应的缓存块中;同时,检测主路视频帧的帧头标志、帧尾标志,当检测到有效的帧尾标志时,将缓存块编号加1。
步骤S303,在至少两路视频流同时播放时,获取待播放视频的缓存块编号。
该步骤可参照步骤S103的详细描述,此处不再赘述。
步骤S304,在缓存块地址列表中查询与缓存块编号相对应的每路视频流的缓存块地址。
该步骤可参照步骤S104的详细描述,此处不再赘述。
步骤S305,在每个缓存块地址所指示的左眼缓存块中读取左眼视频帧,得到左眼视频流的视频帧并输出,并在每个缓存块地址所指示的右眼缓存块中读取右眼视频帧,得到右眼视频流的视频帧并输出。
在本公开实施例中,控制器根据每个缓存块地址所包括的起始地址和结束地址所指示的位置,在每个缓存块地址所指示的左眼缓存块中读取左眼视频帧,得到左眼视频流的视频帧并输出,并在每个缓存块地址所指示的右眼缓存块中读取右眼视频帧,得到右眼视频流的视频帧并输出。
示例性地,参见图2所示,读操作总线时序控制子模块完成上一个视频帧读后,获取加1更新后的缓存块编号,送入缓存块地址列表中,提取该缓存块编号对应的缓存块的起始地址和结束地址;读操作总线时序控制子模块通过仲裁获取接口总线使用权限,根据缓存块的起始地址和结束地址的地址信息,读取对应的视频帧数据并输出。
本公开提供的另一种视频读取方法,通过获取3D内窥镜采集的两路视频流,将左眼视频帧数据存储在左眼缓存块中,并将右眼视频帧数据存储在右眼缓存块中,其中,左眼缓存块和右眼缓存块的缓存块编号相同,在3D内窥镜同时播放该两路视频流时,获取待播放视频的缓存块编号;由于与单个缓存块编号相对应的各缓存块中存储的视频帧的时序相同,这样通过获取缓存块编号,就可以直接在缓存块地址列表中查询与该缓存块编号相对应的每路视频流的缓存块地址;然后在每个缓存块 地址所指示的左眼缓存块中读取左眼视频帧,得到左眼视频流的视频帧,并在每个缓存块地址所指示的右眼缓存块中读取右眼视频帧,得到右眼视频流的视频帧,因此可以避免对3D内窥镜采集的两路视频流进行时同步处理时,进行大量的计算,减少了处理复杂度,简化了同步处理过程,从而提高视频同步的实时性,从而确保可以满足3D内窥镜影像视频处理实时性上的要求,提升了视频同步精度。
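Specialized to the 3D endoscope case, the read step reduces to indexing the left-eye and right-eye buffers with one and the same block number; a brief sketch with a hypothetical `read_block` helper (only the left/right pairing itself comes from the disclosure):

```c
#include <stdint.h>

extern void read_block(int eye_buffer, uint32_t start, uint32_t end, uint8_t *dst);

/* Left (buffer 0) and right (buffer 1) eye buffers are indexed by the same
 * block number, so the two frames handed to the display have identical timing. */
static void read_stereo_pair(const uint32_t l_start[], const uint32_t l_end[],
                             const uint32_t r_start[], const uint32_t r_end[],
                             uint32_t block_no,
                             uint8_t *left_frame, uint8_t *right_frame)
{
    read_block(0, l_start[block_no], l_end[block_no], left_frame);   /* left-eye block  */
    read_block(1, r_start[block_no], r_end[block_no], right_frame);  /* right-eye block */
}
```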
如图10所示,本公开实施例还提供一种视频读取装置400包括:
第一获取模块401,用于在至少两路视频流同时播放时,获取待播放视频的缓存块编号;
查询模块402,用于在缓存块地址列表中查询与缓存块编号相对应的每路视频流的缓存块地址;
读取模块403,用于在每个缓存块地址所指示的缓存块中分别进行读取,得到每路视频流的视频帧并输出,与单个缓存块编号相对应的各缓存块中存储的视频帧的时序相同。
可选地,读取模块403,还用于:
检测每个输出接口的状态信号;单个输出接口用于输出单路视频流的视频帧;在状态信号均为空闲状态信号时,在每个缓存块地址所指示的缓存块中分别进行读取,得到每路视频流的视频帧,并通过单个输出接口输出单路视频流的视频帧。
可选地,在缓存块均位于同一存储器时,读取模块403,还用于:
依次从每个缓存块地址所指示的缓存块中读取目标长度的数据,直至完整读取视频帧,并通过单个输出接口输出单路视频流的视频帧。
可选地,读取模块403,还用于:
依次从每个缓存块地址所指示的缓存块中读取目标长度的视频帧;检测视频帧的帧尾标志;在检测到与视频帧的数目相同的帧尾标志时,通过单个输出接口输出单路视频流的视频帧,并更新缓存块编号。
可选地,缓存块地址包括起始地址和结束地址;读取模块403,还用于:
分别从单个缓存块的起始地址读取到缓存块的结束地址,得到每路视频流的视频帧,并通过单个输出接口输出单路视频流的视频帧。
可选地,视频读取装置400还包括:
第二获取模块404,用于获取至少两路视频流;
存储模块405,用于将至少两路视频流中时序相同的视频帧分别存储在与同一缓存块编号相对应的至少两个缓存块中。
可选地,至少两路视频流包括:主路视频流和从路视频流;存储模块405,还 用于:
将主路视频流的主路视频帧存储在第一缓存块中;获取第一缓存块的目标缓存块编号;将从路视频流的从路视频帧存储在与目标缓存块编号相对应的第二缓存块中,至少两个缓存块包括第一缓存块和第二缓存块。
可选地,至少两路视频流包括多个从路视频流,至少两个缓存块包括多个第二缓存块;存储模块405,还用于:
分别将多个从路视频流的从路视频帧存储在与目标缓存块编号相对应的不同第二缓存块中,单个从路视频流的从路视频帧存储在单个第二缓存块中。
可选地,存储模块405,还用于:
获取当前的可用缓存块编号;可用缓存块编码是存储器中可以写入数据的缓存块的编号;在缓存块地址列表中,查找可用缓存块编号对应的写入起始地址;将主路视频流的主路视频帧数据从写入起始地址,写入可用缓存块编号对应的缓存块中,其中,可用缓存块编号对应的缓存块为第一缓存块。
可选地,存储模块405,还用于:
检测主路视频帧的帧尾标志;在检测到帧尾标志时,给缓存块编号加一。
可选地,至少两路视频流包括三维立体显示内窥镜采集的左眼视频流和右眼视频流,视频帧包括:左眼视频流的左眼视频帧和右眼视频流的右眼视频帧,缓存块包括:左眼缓存块和右眼缓存块,左眼缓存块中存储有左眼视频帧,右眼缓存块中存储有右眼视频帧;
存储模块405,还用于:将左眼视频帧数据存储在左眼缓存块中,并将右眼视频帧数据存储在右眼缓存块中,其中,左眼缓存块和右眼缓存块的缓存块编号相同。
可选地,读取模块403,还用于:在每个缓存块地址所指示的左眼缓存块中读取左眼视频帧,得到左眼视频流的视频帧并输出,并在每个缓存块地址所指示的右眼缓存块中读取右眼视频帧,得到右眼视频流的视频帧并输出。
本公开提供的一种视频读取装置,在至少两路视频流同时播放时,获取待播放视频的缓存块编号;由于与单个缓存块编号相对应的各缓存块中存储的视频帧的时序相同,这样通过获取缓存块编号,就可以直接在缓存块地址列表中查询与该缓存块编号相对应的每路视频流的缓存块地址;然后在每个缓存块地址所指示的缓存块中分别进行读取,以完成至少两路视频流的同步处理,得到每路视频流的时序相同的视频帧并输出,因此在对至少两路视频流进行时同步处理时,可以避免进行大量的计算,减少了处理复杂度,简化了同步处理过程,从而在同步处理3D内窥镜的两路视频流时,提高视频同步的实时性,从而确保可以满足3D内窥镜影像视频处理实 时性上的要求,提升了视频同步精度。
如图11所示,本公开实施例还提供一种电子设备500,包括处理器501,存储器502,存储在存储器502上并可在处理器501上运行的程序或指令,该程序或指令被处理器501执行时实现上述视频读取方法实施例的各个过程,且能达到相同的技术效果,为避免重复,这里不再赘述。
本公开实施例还提供一种可读存储介质,可读存储介质上存储有程序或指令,该程序或指令被处理器执行时实现上述视频读取方法实施例的各个过程,且能达到相同的技术效果,为避免重复,这里不再赘述。
其中,处理器为上述实施例中的电子设备中的处理器。可读存储介质,包括计算机可读存储介质,如计算机只读存储器(Read-Only Memory,ROM)、随机存取存储器(Random Access Memory,RAM)、磁碟或者光盘等。
需要说明的是,在本文中,术语“包括”、“包含”或者其任何其他变体意在涵盖非排他性的包含,从而使得包括一系列要素的过程、方法、物品或者装置不仅包括那些要素,而且还包括没有明确列出的其他要素,或者是还包括为这种过程、方法、物品或者装置所固有的要素。在没有更多限制的情况下,由语句“包括一个……”限定的要素,并不排除在包括该要素的过程、方法、物品或者装置中还存在另外的相同要素。此外,需要指出的是,本公开实施方式中的方法和装置的范围不限按示出或讨论的顺序来执行功能,还可包括根据所涉及的功能按基本同时的方式或按相反的顺序来执行功能,例如,可以按不同于所描述的次序来执行所描述的方法,并且还可以添加、省去、或组合各种步骤。另外,参照某些示例所描述的特征可在其他示例中被组合。
通过以上的实施方式的描述,本领域的技术人员可以清楚地了解到上述实施例方法可借助软件加必需的通用硬件平台的方式来实现,当然也可以通过硬件,但很多情况下前者是更佳的实施方式。基于这样的理解,本公开的技术方案本质上或者说对现有技术做出贡献的部分可以以计算机软件产品的形式体现出来,该计算机软件产品存储在一个存储介质(如ROM/RAM、磁碟、光盘)中,包括若干指令用以使得一台终端(可以是手机,计算机,服务器,网络设备,嵌入式设备,或手术机器人等)执行本公开各个实施例所述的方法。
上面结合附图对本公开的实施例进行了描述,但是本公开并不局限于上述的具体实施方式,上述的具体实施方式仅仅是示意性的,而不是限制性的,本领域的普通技术人员在本公开的启示下,在不脱离本公开宗旨和权利要求所保护的范围情况下,还可做出很多形式,均属于本公开的保护之内。

Claims (20)

  1. 一种视频读取方法,其中,所述方法包括:
    在至少两路视频流同时播放时,获取待播放视频的缓存块编号,其中,所述至少两路视频流分别对应至少两个缓冲区,所述至少两个缓冲区中的缓存块一一对应,并且所述至少两个缓冲区中相互对应的缓存块具有相同的缓存块编号;
    在缓存块地址列表中查询与所述待播放视频帧的缓存块编号相对应的每路所述视频流的缓存块地址;
    针对每路视频流,在所述每路视频流对应的缓冲区中,从每个所述缓存块地址所指示的缓存块中读取视频帧并输出,其中,与单个所述缓存块编号相对应的各缓存块中存储的视频帧的时序相同。
  2. 根据权利要求1所述的方法,其中,所述在所述每路视频流对应的缓冲区中,从每个所述缓存块地址所指示的缓存块中读取视频帧并输出,包括:
    检测所述至少两路视频流各自对应的输出接口的状态信号,其中,任意一路视频流对应的输出接口用于输出所述任意一路视频流的视频帧;
    在所述至少两路视频流各自对应的输出接口的状态信号均为空闲状态信号时,在所述每路视频流对应的缓冲区中,从每个所述缓存块地址所指示的缓存块中分别进行读取,得到所述每路视频流的视频帧,并将得到的视频帧通过所述输出接口输出。
  3. 根据权利要求2所述的方法,其中,所述在所述至少两路视频流各自对应的输出接口的状态信号均为空闲状态信号时,在所述每路视频流对应的缓冲区中,从每个所述缓存块地址所指示的缓存块中分别进行读取,得到所述每路视频流的视频帧,并将得到的视频帧通过所述输出接口输出,包括:
    依次从每个所述缓存块地址所指示的缓存块中读取目标长度的数据,直至完整读取所述视频帧,并通过所述输出接口输出所述视频帧。
  4. 根据权利要求3所述的方法,其中,所述依次从每个所述缓存块地址所指示的缓存块中读取目标长度的数据,直至完整读取所述视频帧,并通过所述输出接口输出所述视频帧,包括:
    依次从每个所述缓存块地址所指示的缓存块中读取目标长度的所述视频帧;
    检测所述视频帧的帧尾标志;
    在检测到与所述视频帧的数目相同的所述帧尾标志时,通过所述输出接口输出所述视频帧,并更新所述待播放视频帧的缓存块编号。
  5. 根据权利要求2所述的方法,其中,所述缓存块地址包括起始地址和结束地址;所述在所述至少两路视频流各自对应的输出接口的状态信号均为空闲状态信号时,在所 述每路视频流对应的缓冲区中,从每个所述缓存块地址所指示的缓存块中分别进行读取,得到所述每路视频流的视频帧,并将得到的视频帧通过所述输出接口输出,包括:
    分别从单个所述缓存块的起始地址读取到所述缓存块的结束地址,得到每路所述视频流的视频帧,并通过所述输出接口输出所述视频帧。
  6. 根据权利要求1所述的方法,其中,在所述在至少两路视频流同时播放时,获取待播放视频的缓存块编号之前,所述方法还包括:
    获取至少两路视频流;
    将所述至少两路视频流中时序相同的视频帧分别存储在与同一缓存块编号相对应的至少两个缓存块中。
  7. 根据权利要求6所述的方法,其中,所述至少两路视频流包括:主路视频流和从路视频流;所述将所述至少两路视频流中时序相同的视频帧分别存储在与同一缓存块编号相对应的至少两个缓存块中,包括:
    将所述主路视频流的主路视频帧存储在第一缓存块中;
    获取所述第一缓存块的目标缓存块编号;
    将所述从路视频流的从路视频帧存储在与所述目标缓存块编号相对应的第二缓存块中,所述至少两个缓存块包括所述第一缓存块和所述第二缓存块。
  8. 根据权利要求7所述的方法,其中,所述至少两路视频流包括多个所述从路视频流,所述至少两个缓存块包括多个所述第二缓存块;所述将所述从路视频流的从路视频帧存储在与所述目标缓存块编号相对应的第二缓存块中,包括:
    分别将多个所述从路视频流的从路视频帧存储在与所述目标缓存块编号相对应的不同第二缓存块中,单个从路视频流的从路视频帧存储在单个第二缓存块中。
  9. 根据权利要求7所述的方法,其中,所述将所述主路视频流的主路视频帧存储在第一缓存块中,包括:
    获取当前的可用缓存块编号,所述可用缓存块编号是存储器中当前可以写入数据的缓存块的编号;
    在缓存块地址列表中,查找所述可用缓存块编号对应的写入起始地址;
    将所述主路视频流的主路视频帧数据从所述写入起始地址,写入所述可用缓存块编号对应的缓存块中,其中,所述可用缓存块编号对应的缓存块为所述第一缓存块。
  10. 根据权利要求9所述的方法,其中,在所述将所述主路视频流的主路视频帧数据从所述写入起始地址,写入所述可用缓存块编号对应的缓存块中之后,所述方法还包括:
    检测所述主路视频帧的帧尾标志;
    在检测到所述帧尾标志时,给所述可用缓存块编号加一。
  11. 根据权利要求6所述的方法,其中,所述至少两路视频流包括三维立体显示内窥镜采集的左眼视频流和右眼视频流,所述视频帧包括:所述左眼视频流的左眼视频帧和所述右眼视频流的右眼视频帧,所述缓存块包括:左眼缓存块和右眼缓存块,所述左眼缓存块中存储有所述左眼视频帧,所述右眼缓存块中存储有所述右眼视频帧;
    所述将所述至少两路视频流中时序相同的视频帧分别存储在同一缓存块编号相对应的不同缓存块中,包括:
    将所述左眼视频帧数据存储在所述左眼缓存块中,并将所述右眼视频帧数据存储在所述右眼缓存块中,其中,所述左眼缓存块和所述右眼缓存块的缓存块编号相同。
  12. 根据权利要求11所述的方法,其中,所述针对每路视频流,在所述每路视频流对应的缓冲区中,从每个所述缓存块地址所指示的缓存块中读取视频帧并输出,包括:
    在每个所述缓存块地址所指示的所述左眼缓存块中读取所述左眼视频帧,得到所述左眼视频流的视频帧并输出,并在每个所述缓存块地址所指示的所述右眼缓存块中读取所述右眼视频帧,得到所述右眼视频流的视频帧并输出。
  13. 一种视频读取装置,其中,所述装置包括:
    第一获取模块,用于在至少两路视频流同时播放时,获取待播放视频的缓存块编号,其中,所述至少两路视频流分别对应至少两个缓冲区,所述至少两个缓冲区中的缓存块一一对应,并且所述至少两个缓冲区中相互对应的缓存块具有相同的缓存块编号;
    查询模块,用于在缓存块地址列表中查询与所述待播放视频帧的缓存块编号相对应的每路所述视频流的缓存块地址;
    读取模块,用于针对每路视频流,在所述每路视频流对应的缓冲区中,从每个所述缓存块地址所指示的缓存块中读取视频帧并输出,其中,与单个所述缓存块编号相对应的各缓存块中存储的视频帧的时序相同。
  14. 根据权利要求13所述的装置,其中,所述读取模块,还用于:
    检测所述至少两路视频流各自对应的输出接口的状态信号,其中任意一路视频流对应的输出接口用于输出所述任意一路视频流的视频帧;
    在所述至少两路视频流各自对应的输出接口的状态信号均为空闲状态信号时,在所述每路视频流对应的缓冲区中,从每个所述缓存块地址所指示的缓存块中分别进行读取,得到所述每路视频流的视频帧,并将得到的视频帧通过所述输出接口输出。
  15. 根据权利要求14所述的装置,其中,在所述缓存块均位于同一存储器时,所述读取模块,还用于:
    依次从每个所述缓存块地址所指示的缓存块中读取目标长度的数据,直至完整读取 所述视频帧,并通过所述输出接口输出所述视频帧。
  16. 根据权利要求13所述的装置,其中,所述装置还包括:
    第二获取模块,用于获取至少两路视频流;
    存储模块,用于将所述至少两路视频流中时序相同的视频帧分别存储在与同一缓存块编号相对应的至少两个缓存块中。
  17. 根据权利要求16所述的装置,其中,所述至少两路视频流包括三维立体显示内窥镜采集的左眼视频流和右眼视频流,所述视频帧包括:所述左眼视频流的左眼视频帧和所述右眼视频流的右眼视频帧,所述缓存块包括:左眼缓存块和右眼缓存块,所述左眼缓存块中存储有所述左眼视频帧,所述右眼缓存块中存储有所述右眼视频帧;
    所述存储模块,还用于:
    将所述左眼视频帧数据存储在所述左眼缓存块中,并将所述右眼视频帧数据存储在所述右眼缓存块中,其中,所述左眼缓存块和所述右眼缓存块的缓存块编号相同。
  18. 根据权利要求17所述的装置,其中,所述读取模块,还用于:
    在每个所述缓存块地址所指示的所述左眼缓存块中读取所述左眼视频帧,得到所述左眼视频流的视频帧并输出,并在每个所述缓存块地址所指示的所述右眼缓存块中读取所述右眼视频帧,得到所述右眼视频流的视频帧并输出。
  19. 一种可读存储介质,其中,所述可读存储介质上存储有程序或指令,所述有程序或指令被处理器执行时实现权利要求1至12中任一所述的视频读取方法。
  20. 一种电子设备,其中,包括存储器、处理器及存储在所述存储器上并可在所述处理器上运行的计算机程序,所述处理器执行所述计算机程序时实现权利要求1至12中任一所述的视频读取方法。
PCT/CN2022/136545 2021-12-06 2022-12-05 视频读取方法、装置、电子设备及存储介质 WO2023103954A1 (zh)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN202111474863.9 2021-12-06
CN202111474863.9A CN113923432B (zh) 2021-12-06 2021-12-06 视频读取方法、装置、电子设备及存储介质

Publications (1)

Publication Number Publication Date
WO2023103954A1 true WO2023103954A1 (zh) 2023-06-15

Family

ID=79248762

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2022/136545 WO2023103954A1 (zh) 2021-12-06 2022-12-05 视频读取方法、装置、电子设备及存储介质

Country Status (2)

Country Link
CN (1) CN113923432B (zh)
WO (1) WO2023103954A1 (zh)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN116647713A (zh) * 2023-07-27 2023-08-25 北京睿芯高通量科技有限公司 一种多路视频写读优化方法

Families Citing this family (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113923432B (zh) * 2021-12-06 2022-02-11 极限人工智能有限公司 视频读取方法、装置、电子设备及存储介质
CN116668764B (zh) * 2022-11-10 2024-04-19 荣耀终端有限公司 处理视频的方法及装置

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2011166615A (ja) * 2010-02-12 2011-08-25 Nippon Telegr & Teleph Corp <Ntt> 映像同期装置、映像表示装置、映像同期方法及びプログラム
CN102789804A (zh) * 2011-05-17 2012-11-21 华为软件技术有限公司 视频播放方法、播放器、监控平台及视频播放系统
CN105549933A (zh) * 2015-12-16 2016-05-04 广东威创视讯科技股份有限公司 显卡信号同步方法和系统
CN113923432A (zh) * 2021-12-06 2022-01-11 极限人工智能有限公司 视频读取方法、装置、电子设备及存储介质

Family Cites Families (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN1997160A (zh) * 2006-01-06 2007-07-11 腾讯科技(深圳)有限公司 一种多路节目接收显示系统和方法
US10304421B2 (en) * 2017-04-07 2019-05-28 Intel Corporation Apparatus and method for remote display and content protection in a virtualized graphics processing environment
CN107277595B (zh) * 2017-07-28 2019-11-29 京东方科技集团股份有限公司 一种多路视频同步方法及装置
US10672098B1 (en) * 2018-04-05 2020-06-02 Xilinx, Inc. Synchronizing access to buffered data in a shared buffer


Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN116647713A (zh) * 2023-07-27 2023-08-25 北京睿芯高通量科技有限公司 一种多路视频写读优化方法
CN116647713B (zh) * 2023-07-27 2023-09-26 北京睿芯高通量科技有限公司 一种多路视频写读优化方法

Also Published As

Publication number Publication date
CN113923432A (zh) 2022-01-11
CN113923432B (zh) 2022-02-11

Similar Documents

Publication Publication Date Title
WO2023103954A1 (zh) 视频读取方法、装置、电子设备及存储介质
KR100475060B1 (ko) 다시점 3차원 동영상에 대한 사용자 요구가 반영된 다중화장치 및 방법
CN102226852B (zh) 一种数码体视显微镜的成像系统
US8937647B2 (en) Stereoscopic imaging system, recording control method, stereoscopic image reproduction system, and reproduction control method
WO2009151249A2 (ko) 이동 기기용 입체영상생성칩 및 이를 이용한 입체영상표시방법
KR101750047B1 (ko) 3차원 영상 제공 및 처리 방법과 3차원 영상 제공 및 처리 장치
JP2019050451A (ja) 画像処理装置及びその制御方法及びプログラム及び画像処理システム
JP2019022151A (ja) 情報処理装置、画像処理システム、制御方法、及び、プログラム
US20160373725A1 (en) Mobile device with 4 cameras to take 360°x360° stereoscopic images and videos
CN112015264B (zh) 虚拟现实显示方法、虚拟现实显示装置及虚拟现实设备
KR20180052255A (ko) 스트리밍 컨텐츠 제공 방법, 및 이를 위한 장치
JP2006128816A (ja) 立体映像・立体音響対応記録プログラム、再生プログラム、記録装置、再生装置及び記録メディア
TW201340686A (zh) 三維影像產生方法及裝置
CN103051866B (zh) 网络3d 视频监控系统、方法和视频处理平台
WO2017098586A1 (ja) 動画撮影指示端末、動画撮影システム、動画撮影指示方法、およびプログラム
JP2019140483A (ja) 画像処理システム、画像処理システムの制御方法、伝送装置、伝送方法及びプログラム
KR101396008B1 (ko) 다시점 영상 획득을 위한 방법 및 장치
TWI520577B (zh) 立體影像輸出裝置與相關的立體影像輸出方法
JP3913076B2 (ja) 画像合成処理装置
CN113099212A (zh) 3d显示方法、装置、计算机设备和存储介质
KR100703713B1 (ko) 3차원 영상 획득 및 디스플레이가 가능한 3차원 모바일 장치
US20230239447A1 (en) Smart wearable device for vision enhancement and method for realizing stereoscopic vision transposition
NO337022B1 (no) Styring for stereoprojeksjon
JP4423416B2 (ja) 映像合成処理システム
CN111183394B (zh) 一种分时还原光场的方法及还原装置

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 22903376

Country of ref document: EP

Kind code of ref document: A1