WO2021073054A1 - Method, apparatus, device and storage medium for data processing - Google Patents

Method, apparatus, device and storage medium for data processing

Info

Publication number
WO2021073054A1
WO2021073054A1 · PCT/CN2020/083597 · CN2020083597W
Authority
WO
WIPO (PCT)
Prior art keywords
data
video
identifier
video capture
data stream
Prior art date
Application number
PCT/CN2020/083597
Other languages
English (en)
French (fr)
Inventor
刘慧
Original Assignee
北京百度网讯科技有限公司
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 北京百度网讯科技有限公司
Priority to EP20876718.6A priority Critical patent/EP3896964A4/en
Priority to JP2021538335A priority patent/JP7273975B2/ja
Priority to EP23190861.7A priority patent/EP4246965A3/en
Publication of WO2021073054A1 publication Critical patent/WO2021073054A1/zh
Priority to US17/374,981 priority patent/US11671678B2/en

Links

Images

Classifications

    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/60Network structure or processes for video distribution between server and client or between remote clients; Control signalling between clients, server and network components; Transmission of management data between server and client, e.g. sending from server to client commands for recording incoming content stream; Communication details between server and client 
    • H04N21/63Control signaling related to video distribution between client, server and network components; Network processes for video distribution between server and clients or between remote clients, e.g. transmitting basic layer and enhancement layers over different transmission paths, setting up a peer-to-peer communication via Internet between remote STB's; Communication protocols; Addressing
    • H04N21/647Control signaling between network components and server or clients; Network processes for video distribution between server and clients, e.g. controlling the quality of the video stream, by dropping packets, protecting content from unauthorised alteration within the network, monitoring of network load, bridging between two different networks, e.g. between IP and wireless
    • H04N21/64784Data processing by the network
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N7/00Television systems
    • H04N7/18Closed-circuit television [CCTV] systems, i.e. systems in which the video signal is not broadcast
    • H04N7/181Closed-circuit television [CCTV] systems, i.e. systems in which the video signal is not broadcast for receiving images from a plurality of remote sources
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/20Servers specifically adapted for the distribution of content, e.g. VOD servers; Operations thereof
    • H04N21/21Server components or server architectures
    • H04N21/218Source of audio or video content, e.g. local disk arrays
    • H04N21/21805Source of audio or video content, e.g. local disk arrays enabling multiple viewpoints, e.g. using a plurality of cameras
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/20Servers specifically adapted for the distribution of content, e.g. VOD servers; Operations thereof
    • H04N21/23Processing of content or additional data; Elementary server operations; Server middleware
    • H04N21/236Assembling of a multiplex stream, e.g. transport stream, by combining a video stream with other content or additional data, e.g. inserting a URL [Uniform Resource Locator] into a video stream, multiplexing software data into a video stream; Remultiplexing of multiplex streams; Insertion of stuffing bits into the multiplex stream, e.g. to obtain a constant bit-rate; Assembling of a packetised elementary stream
    • H04N21/2365Multiplexing of several video streams
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/40Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N21/43Processing of content or additional data, e.g. demultiplexing additional data from a digital video stream; Elementary client operations, e.g. monitoring of home network or synchronising decoder's clock; Client middleware
    • H04N21/4302Content synchronisation processes, e.g. decoder synchronisation
    • H04N21/4307Synchronising the rendering of multiple content streams or additional data on devices, e.g. synchronisation of audio on a mobile phone with the video output on the TV screen
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/40Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N21/43Processing of content or additional data, e.g. demultiplexing additional data from a digital video stream; Elementary client operations, e.g. monitoring of home network or synchronising decoder's clock; Client middleware
    • H04N21/434Disassembling of a multiplex stream, e.g. demultiplexing audio and video streams, extraction of additional data from a video stream; Remultiplexing of multiplex streams; Extraction or processing of SI; Disassembling of packetised elementary stream
    • H04N21/4347Demultiplexing of several video streams
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/80Generation or processing of content or additional data by content creator independently of the distribution process; Content per se
    • H04N21/83Generation or processing of protective or descriptive data associated with content; Content structuring
    • H04N21/835Generation of protective data, e.g. certificates
    • H04N21/8352Generation of protective data, e.g. certificates involving content or source identification data, e.g. Unique Material Identifier [UMID]

Definitions

  • the embodiments of the present disclosure mainly relate to the field of data processing, and more specifically, to methods, apparatuses, devices, and computer-readable storage media for data processing.
  • Video capture and processing technology has important application value in scenarios such as assisted driving.
  • For example, a car can use sensors and video capture devices to sense the surrounding environment and collect various kinds of data while the car is driving, thereby enabling assisted driving.
  • Conventionally, video capture and processing devices are not flexible enough in controlling multiple video capture devices, and it is difficult to start and stop individual video capture devices independently.
  • In addition, as the number of video capture devices increases, these devices often require more hardware resources to transmit multiple videos, and the synchronization accuracy between the videos is also low.
  • a data processing solution is provided.
  • A data processing method is provided. The method includes receiving a single aggregated data stream from a data aggregation device, the aggregated data stream including a plurality of data packets respectively collected by a plurality of video capture devices, each data packet having an identifier of the video capture device that collected it.
  • the method also includes determining a plurality of videos associated with the plurality of video capture devices from the aggregated data stream based on the identifier, each video including a data packet with the same identifier.
  • An apparatus for data processing is provided. The apparatus includes a data receiving module configured to receive a single aggregated data stream from a data aggregation device.
  • The aggregated data stream includes a plurality of data packets respectively collected by a plurality of video capture devices, and each data packet has an identifier of the video capture device that collected it.
  • The apparatus further includes a video determining module configured to determine, based on the identifier, a plurality of videos associated with the plurality of video capture devices from the aggregated data stream, each video including the data packets with the same identifier.
  • An electronic device is provided, including one or more processors; and a storage device for storing one or more programs which, when executed by the one or more processors, cause the one or more processors to implement the method according to the first aspect of the present disclosure.
  • A computer-readable storage medium is provided, having a computer program stored thereon which, when executed by a processor, implements the method according to the first aspect of the present disclosure.
  • FIG. 1 shows a schematic diagram of an example environment in which multiple embodiments of the present disclosure can be implemented;
  • FIG. 2 shows a flowchart of a process of data processing according to an embodiment of the present disclosure;
  • FIG. 3 shows a schematic diagram of data processing according to an embodiment of the present disclosure;
  • FIG. 4 shows a schematic block diagram of an apparatus for data processing according to an embodiment of the present disclosure; and
  • FIG. 5 shows a block diagram of a computing device capable of implementing various embodiments of the present disclosure.
  • As used herein, the term "data stream" refers to data that is obtained from a video capture device and received by a data aggregation device in the form of a stream.
  • the data stream may include at least one data packet, and the data packet may include at least one frame of image.
  • As used herein, "aggregation" refers to combining the data packets of multiple data streams, end to end or interleaved, into a single data stream.
  • As used herein, an "identifier" indicates which video capture device a data packet comes from. An identifier can be a number, a letter, a symbol, or a combination of one or more of these.
  • Conventionally, video capture and processing devices are not flexible enough in controlling multiple video capture devices, and it is difficult to start and stop individual video capture devices independently.
  • In addition, these devices often require more hardware resources to transmit multiple videos, and the synchronization accuracy between the videos is also low.
  • According to embodiments of the present disclosure, a data processing solution for obtaining videos from multiple video capture devices is provided.
  • a single aggregated data stream is received from a data aggregation device.
  • The aggregated data stream includes multiple data packets collected by multiple video capture devices, and each data packet has an identifier of the video capture device that collected it. Based on the identifiers, multiple videos associated with the multiple video capture devices are determined from the aggregated data stream, each video including the data packets with the same identifier.
  • In this way, data streams from multiple video capture devices are aggregated into a single aggregated data stream for transmission, and the single aggregated data stream can be resolved back into multiple videos based on the identifiers for the user to use. Therefore, the solution of the present disclosure can stably and efficiently transmit videos from multiple video capture devices while reducing the cost of data transmission.
  • Figure 1 shows a schematic diagram of an example environment 100 in which multiple embodiments of the present disclosure can be implemented.
  • the data aggregation device 130 receives the data streams 120-1 and 120-2 from the video capture devices 110-1 and 110-2.
  • the data stream 120-1 may include data packets 121-1, 122-1, and 123-1
  • the data stream 120-2 may include 121-2, 122-2, and 123-2.
  • The video capture device may include a camera, a video camera, or any other device capable of capturing video.
  • the data aggregation device 130 may aggregate the data streams 120-1 and 120-2 from the video capture devices 110-1 and 110-2 into a single aggregate data stream 140.
  • the data aggregation device 130 may include a serializer or a deserializer, and may also include any other device capable of aggregating multiple data streams into a single data stream.
  • the aggregate data stream 140 includes data packets 121-1, 122-1, 123-1, 121-2, 122-2, and 123-2.
  • The data packets 121-1, 122-1, and 123-1 have the identifier 141 of the video capture device 110-1, and the data packets 121-2, 122-2, and 123-2 have the identifier 142 of the video capture device 110-2, to indicate the video capture device associated with these packets.
  • Although Figure 1 shows these data packets arranged in an end-to-end order such as 121-1, 122-1, 123-1, 121-2, 122-2, and 123-2, it should be understood that the packets can also be arranged in an interleaved order, for example 121-1, 122-2, 123-1, 121-2, 122-1, and 123-2, which is not limited in the present disclosure.
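  • The aggregation step described here — tagging each data packet with its source-device identifier and merging the streams either end to end or interleaved — can be sketched as follows. This is a minimal illustration in Python under our own assumptions; the `Packet` type and `aggregate` helper are hypothetical and not part of the disclosure, which targets hardware serializers:

```python
from dataclasses import dataclass
from itertools import chain
from typing import Dict, List


@dataclass
class Packet:
    identifier: int  # identifier of the source video capture device (e.g. 141, 142)
    payload: bytes   # encoded video data, e.g. one frame


def aggregate(streams: Dict[int, List[bytes]], interleave: bool = True) -> List[Packet]:
    """Tag every payload with its device identifier and merge all streams
    into a single aggregated stream, interleaved (round-robin) or end to end."""
    tagged = {dev: [Packet(dev, p) for p in payloads]
              for dev, payloads in streams.items()}
    if interleave:
        # Round-robin: one packet from each device in turn
        # (assumes equal-length streams for simplicity).
        return list(chain.from_iterable(zip(*tagged.values())))
    # End to end: all packets of one device, then the next.
    return list(chain.from_iterable(tagged.values()))
```

  • With streams from two devices 141 and 142, the interleaved result alternates identifiers 141, 142, 141, 142, matching the interleaved ordering described above, while `interleave=False` yields the end-to-end ordering.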
  • The computing device 150 receives the aggregated data stream 140. Since the data packets 121-1, 122-1, 123-1, 121-2, 122-2, and 123-2 carry the identifier 141 or 142 indicating the video capture device 110-1 or 110-2, the computing device 150 can, based on these identifiers, determine from the aggregated data stream 140 the video 160-1 including the data packets 121-1, 122-1, and 123-1 and the video 160-2 including the data packets 121-2, 122-2, and 123-2, for users to use.
  • the computing device 150 may be a stationary computing device, such as a server, a desktop computer, a development board, etc., or a portable computing device, such as a mobile phone, a tablet computer, and the like. It should be understood that the above are only some examples of the computing device 150, which may also be other appropriate systems or devices such as a distributed computing system.
  • Although FIG. 1 only shows the data aggregation device 130 receiving data streams from the two video capture devices 110-1 and 110-2, it should be understood that the data aggregation device 130 may also receive data streams from more video capture devices, or aggregate more data streams into a single aggregated data stream. It should also be understood that multiple data aggregation devices may be used to receive data streams from multiple video capture devices, and multiple computing devices may also be used to determine the multiple videos, which is not limited in the present disclosure.
  • FIG. 2 shows a flowchart of a process 200 of data processing according to an embodiment of the present disclosure.
  • the process 200 may be implemented by the computing device 150 of FIG. 1 or by a distributed computing system including the computing device 150. To facilitate discussion, the process 200 will be described in conjunction with FIG. 1.
  • the computing device 150 receives a single aggregated data stream 140 from the data aggregation device 130.
  • The aggregated data stream 140 includes the data packets 121-1, 122-1, 123-1, 121-2, 122-2, and 123-2 collected by the video capture devices 110-1 and 110-2.
  • the data packets 121-1, 122-1, and 123-1 have an identifier 141 that indicates the video capture device 110-1
  • The data packets 121-2, 122-2, and 123-2 have an identifier 142 that indicates the video capture device 110-2.
  • the identifiers 141 and 142 are added to each data packet by the data aggregation device 130 when the data streams 120-1 and 120-2 are aggregated. In this way, the computing device 150 only needs to receive a single aggregated data stream from the data aggregation device 130 to receive data packets from multiple video capture devices, which significantly reduces the hardware resources required for data packet transmission.
  • the computing device 150 may receive a user's request for access to at least one of the video capture devices 110-1 or 110-2. In response to receiving the access request, the computing device 150 may send an instruction for starting the at least one video capture device to the data aggregation device 130. The data aggregation device 130 may activate the at least one video capture device according to the instruction. The computing device 150 may receive the aggregated data stream 140 from the data aggregation device 130, and the aggregated data stream 140 at least includes data packets collected by the at least one video capture device. In this way, users can start individual video capture devices in a targeted manner as needed, thereby reducing the overall power consumption of the device.
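  • The on-demand start flow in this bullet — an access request arrives, the computing device instructs the aggregation device to start only the requested capture devices, and only those devices then contribute packets — can be sketched roughly as below. `FakeAggregator` and its `start`/`stream` methods are invented stand-ins for the data aggregation device's interface, not an API from the disclosure:

```python
class FakeAggregator:
    """Stand-in for the data aggregation device: only capture devices
    that have been started contribute packets to the aggregated stream."""

    def __init__(self):
        self.active = set()

    def start(self, device_id):
        # Corresponds to the "start this capture device" instruction.
        self.active.add(device_id)

    def stream(self):
        # A minimal aggregated stream: (identifier, payload) pairs
        # from the active devices only.
        return [(dev, f"packet-from-{dev}") for dev in sorted(self.active)]


def handle_access_request(requested_ids, aggregator):
    """On a user's access request, start only the requested capture
    devices, then receive the aggregated stream from the aggregator."""
    for device_id in requested_ids:
        aggregator.start(device_id)
    return aggregator.stream()
```

  • Starting only device 142, for instance, leaves device 141 idle — which is the targeted start-up and power saving the text describes.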
  • the computing device 150 determines the videos 160-1 and 160-2 associated with the video capture devices 110-1 and 110-2 from the aggregated data stream 140 based on the identifiers 141 and 142.
  • the video 160-1 includes data packets 121-1, 122-1, and 123-1
  • The video 160-2 includes the data packets 121-2, 122-2, and 123-2. In this way, the computing device 150 can resolve the received single aggregated data stream 140 into the videos collected by the video capture devices 110-1 and 110-2, respectively, thereby improving data transmission efficiency and reducing the consumption of transmission resources.
  • the computing device 150 may determine that the data packets 121-1, 122-1, 123-1, 121-2, 122-2, and 123-2 contained in the aggregated data stream 140 have identifiers 141 and 142.
  • The computing device 150 determines the data packets 121-1, 122-1, and 123-1 with the identifier 141 as a first data set, and determines the data packets 121-2, 122-2, and 123-2 with the identifier 142 as a second data set. In this way, the computing device 150 can quickly determine the first data set from the first video capture device 110-1 and the second data set from the second video capture device 110-2, improving the efficiency of obtaining the videos.
  • The computing device 150 can then determine the video 160-1 from the first video capture device 110-1 and the video 160-2 from the second video capture device 110-2 based on the first data set and the second data set. In this way, the computing device 150 can use the identifiers to obtain videos from multiple video capture devices out of the aggregated data stream 140, simplifying the data transmission process.
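  • The grouping described in the preceding bullets — collecting the packets that share an identifier into per-device data sets — amounts to a demultiplexing step. A minimal sketch, assuming packets are represented as (identifier, payload) pairs (our representation, not the disclosure's):

```python
from collections import defaultdict
from typing import Dict, Iterable, List, Tuple


def demultiplex(aggregated_stream: Iterable[Tuple[int, bytes]]) -> Dict[int, List[bytes]]:
    """Group the packets of a single aggregated stream by device
    identifier; each resulting group is the video of one capture device."""
    videos = defaultdict(list)
    for identifier, payload in aggregated_stream:
        videos[identifier].append(payload)
    return dict(videos)
```

  • The order within each group preserves arrival order, so the same code handles both the end-to-end and the interleaved packet arrangements mentioned earlier.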
  • the computing device 150 may obtain the identifier of the at least one video from the access request. According to the identifier, the computing device 150 can provide the user with a video matching the identifier for the user to use. For example, if the user currently needs to obtain the video 160-1, the user may send an access request for the video 160-1 to the computing device 150, and the request includes the identifier 141. The computing device 150 can obtain the identifier 141 from the access request. In some embodiments, the computing device 150 may provide the user with a video 160-1 matching the identifier 141 according to the identifier 141 for the user to use. In this way, the computing device 150 can selectively provide the user with the video required by the user, thereby improving the operating efficiency of the device and reducing the power consumption.
  • FIG. 3 shows a schematic diagram 300 of data processing according to an embodiment of the present disclosure.
  • For clarity, the transmission and conversion of the data streams and data packets are not shown in FIG. 3; those skilled in the art should understand that the transmission and conversion of the data streams and data packets involved in FIG. 3 proceed in the manner described with reference to FIG. 1 and FIG. 2.
  • The computing device 150 may receive a single aggregated data stream from each of the two data aggregation devices 130 and 230, and then determine from these aggregated data streams the videos 160-1, 160-2, 260-1, and 260-2 captured by the video capture devices 110-1, 110-2, 210-1, and 210-2. In this way, the computing device 150 can obtain videos collected from more video capture devices, expanding the applicability of the devices.
  • the computing device 150 may send trigger signals 310 and 320 with predetermined frequencies to the data aggregation devices 130 and 230, respectively, where the trigger signals 310 and 320 have the same frequency.
  • the data aggregation devices 130 and 230 can synchronously acquire the data packets collected by the video capture devices 110-1, 110-2, 210-1, and 210-2 at the predetermined frequency.
  • For example, the computing device 150 synchronously sends 25 Hz trigger signals 310 and 320 to the data aggregation devices 130 and 230, and the data aggregation devices 130 and 230 can then synchronously obtain data packets from the video capture devices 110-1, 110-2, 210-1, and 210-2 at an interval of 40 milliseconds.
  • Since the trigger signals 310 and 320 are sent to the data aggregation devices 130 and 230 synchronously, the data packets obtained from the video capture devices 110-1, 110-2, 210-1, and 210-2 are also synchronized, and the resulting videos 160-1, 160-2, 260-1, and 260-2 are synchronized as well. In this way, the synchronization accuracy of videos obtained from multiple video capture devices can be improved.
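  • The relation between trigger frequency and capture interval, and why a shared trigger keeps both aggregation devices on the same time grid, can be illustrated with a small sketch (the function names are ours, not from the disclosure):

```python
def trigger_period_ms(frequency_hz: float) -> float:
    """Interval between consecutive trigger pulses; 25 Hz -> 40 ms."""
    return 1000.0 / frequency_hz


def capture_timestamps_ms(frequency_hz: float, n_frames: int, start_ms: float = 0.0):
    """Timestamps at which an aggregation device samples its capture
    devices when driven by the trigger signal."""
    period = trigger_period_ms(frequency_hz)
    return [start_ms + i * period for i in range(n_frames)]
```

  • Two aggregation devices given the same 25 Hz trigger and the same start time produce identical timestamp grids, so their frames line up pairwise — which is the synchronization property the text relies on.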
  • It should be understood that although four video capture devices and two data aggregation devices are shown in FIG. 3, more video capture devices and data aggregation devices can be provided according to actual needs. It should also be understood that although only one computing device 150 is shown in FIG. 3 receiving the aggregated data streams from the data aggregation devices 130 and 230, an additional computing device may be provided to receive the aggregated data streams from the data aggregation devices 130 and 230. The additional computing device can also obtain the videos 160-1, 160-2, 260-1, and 260-2. In this way, the required video can be obtained on different computing devices, so that users can independently perform various kinds of processing on the videos.
  • In some cases, the obtained image suffers from tearing: part of the displayed image belongs to one frame and the rest to another. This is caused by the processor processing a frame more slowly than new frames are stored into the buffer. For example, after a frame is stored in the buffer, the processor begins processing it; if, for some reason, the processing is slow, the frame in the buffer is overwritten by the next frame before processing finishes, so that part of the image the user obtains shows the previous frame and the other part shows the next frame. This reduces the quality of image processing and greatly degrades the user experience.
  • the computing device 150 may allocate n low-level buffers for each video, where n is an integer greater than or equal to 2.
  • The computing device 150 additionally allocates a temporary buffer, into which the frame currently to be processed is read from a low-level buffer for processing by the computing device 150.
  • For example, when the computing device 150 processes the first frame, it first reads the first frame from the first buffer into the temporary buffer and processes it there. Assuming the interval at which frames are stored into the low-level buffers is T, then after the first frame is stored in the first buffer, the second frame is stored in the second buffer after time T, the third frame is stored in the third buffer after another T, the fourth frame is stored in the first buffer after another T, and the fifth frame is then stored in the second buffer after another T.
  • While the second, third, and fourth frames are being stored in the second, third, and first buffers respectively, the frame in the temporary buffer is unaffected. If the computing device 150 reads the second frame from the second buffer after it finishes processing the first frame, it is sufficient to ensure that the second frame has not yet been overwritten by the fifth frame. Therefore, as long as the processing time per frame of the computing device 150 is less than (n+1)*T, such abnormal situations can be avoided. In this way, by adjusting the number of low-level buffers allocated to a video according to actual needs, image tearing and frame loss can be effectively avoided, improving the quality of image processing.
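  • The buffering scheme above — n low-level buffers written round-robin every interval T, with the frame to be processed copied into a temporary buffer first — can be sketched as a simplified single-threaded model. The class and method names are ours; a real implementation would involve DMA and locking not shown here:

```python
class FrameRing:
    """n low-level buffers written in round-robin order by the capture
    side, plus a temporary buffer on the processing side: a frame is
    copied out before processing, so a later overwrite of its slot
    cannot tear the image being processed."""

    def __init__(self, n: int):
        assert n >= 2  # the scheme requires at least two low-level buffers
        self.slots = [None] * n
        self.writes = 0

    def store(self, frame: bytes):
        # Capture side: every interval T, overwrite the oldest slot.
        self.slots[self.writes % len(self.slots)] = frame
        self.writes += 1

    def read_into_temp(self, slot: int) -> bytes:
        # Processing side: copy the frame into the temporary buffer; the
        # copy stays valid even if the slot is overwritten afterwards.
        return bytes(self.slots[slot])
```

  • With n = 3, the second frame (slot 1) is only overwritten when the fifth frame arrives, which is what gives the processor the (n+1)*T window discussed above.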
  • FIG. 4 shows a schematic block diagram of an apparatus 400 for data processing according to an embodiment of the present disclosure.
  • the apparatus 400 may be included in the computing device 150 of FIG. 1 or implemented as the computing device 150.
  • The apparatus 400 includes a data receiving module 410 configured to receive a single aggregated data stream from a data aggregation device.
  • The aggregated data stream includes multiple data packets collected by multiple video capture devices, and each data packet has the identifier of the video capture device that collected it.
  • the device 400 further includes a video determining module 420 configured to determine multiple videos associated with multiple video capture devices from the aggregated data stream based on the identifier, each video including a data packet with the same identifier.
  • In some embodiments, the data receiving module 410 includes an instruction sending module configured to, in response to receiving an access request for at least one of the multiple video capture devices, send to the data aggregation device an instruction for activating the at least one video capture device.
  • In some embodiments, the video determining module 420 includes: an identifier determining module configured to determine the identifiers of the multiple data packets included in the aggregated data stream; a data set determining module configured to determine the data packets with the same identifier among the multiple data packets as a data set; and an identifier utilization module configured to determine, based on the data set, the video associated with the video capture device corresponding to the identifier.
  • In some embodiments, the apparatus 400 further includes a trigger module configured to synchronously send a trigger signal with a predetermined frequency to the data aggregation device and another data aggregation device, so that the data aggregation device and the other data aggregation device synchronously obtain, at the predetermined frequency, the data packets collected by different video capture devices.
  • In some embodiments, the apparatus 400 further includes: an identifier obtaining module configured to, in response to receiving an access request for at least one video of the multiple videos, obtain the identifier of the at least one video from the access request; and a video providing module configured to provide, among the multiple videos, the video matching the obtained identifier.
  • the data aggregation device 130 includes a serializer and a deserializer.
  • FIG. 5 shows a schematic block diagram of an example device 500 that can be used to implement embodiments of the present disclosure.
  • the device 500 may be used to implement the computing device 150 of FIG. 1.
  • The device 500 includes a central processing unit (CPU) 501, which can perform various appropriate actions and processing according to computer program instructions stored in a read-only memory (ROM) 502 or computer program instructions loaded from a storage unit 508 into a random access memory (RAM) 503.
  • In the RAM 503, various programs and data required for the operation of the device 500 can also be stored.
  • the CPU 501, the ROM 502, and the RAM 503 are connected to each other through a bus 504.
  • An input/output (I/O) interface 505 is also connected to the bus 504.
  • Multiple components in the device 500 are connected to the I/O interface 505, including: an input unit 506, such as a keyboard, a mouse, etc.; an output unit 507, such as various types of displays, speakers, etc.; a storage unit 508, such as a magnetic disk, an optical disk, etc.; and a communication unit 509, such as a network card, a modem, a wireless communication transceiver, and so on.
  • the communication unit 509 allows the device 500 to exchange information/data with other devices through a computer network such as the Internet and/or various telecommunication networks.
  • the processing unit 501 executes the various methods and processes described above, such as the process 200.
  • the process 200 may be implemented as a computer software program, which is tangibly contained in a machine-readable medium, such as the storage unit 508.
  • part or all of the computer program may be loaded and/or installed on the device 500 via the ROM 502 and/or the communication unit 509.
  • the CPU 501 may be configured to execute the process 200 in any other suitable manner (for example, by means of firmware).
  • Exemplary types of hardware logic components that can be used include: field programmable gate arrays (FPGA), application specific integrated circuits (ASIC), application specific standard products (ASSP), systems on chip (SOC), complex programmable logic devices (CPLD), and so on.
  • The program code for implementing the methods of the present disclosure can be written in any combination of one or more programming languages. The program code can be provided to a processor or controller of a general-purpose computer, a special-purpose computer, or another programmable data processing device, so that when the program code is executed by the processor or controller, the functions/operations specified in the flowcharts and/or block diagrams are implemented.
  • The program code can be executed entirely on the machine, partly on the machine, partly on the machine and partly on a remote machine as a stand-alone software package, or entirely on the remote machine or server.
  • a machine-readable medium may be a tangible medium, which may contain or store a program for use by the instruction execution system, apparatus, or device or in combination with the instruction execution system, apparatus, or device.
  • the machine-readable medium may be a machine-readable signal medium or a machine-readable storage medium.
  • The machine-readable medium may include, but is not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any suitable combination of the foregoing.
  • More specific examples of a machine-readable storage medium would include an electrical connection based on one or more wires, a portable computer disk, a hard disk, a random access memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disk read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing.

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Computer Security & Cryptography (AREA)
  • Computer Networks & Wireless Communication (AREA)
  • Databases & Information Systems (AREA)
  • Two-Way Televisions, Distribution Of Moving Picture Or The Like (AREA)
  • Studio Devices (AREA)
  • Closed-Circuit Television Systems (AREA)

Abstract

The present application discloses a data processing method, apparatus, device, and computer-readable storage medium, relating to the field of data processing. The method includes receiving a single aggregated data stream from a data aggregation apparatus, the aggregated data stream including a plurality of data packets respectively collected by a plurality of video capture apparatuses, each data packet carrying an identifier of the video capture apparatus that collected the data packet. The method further includes determining, based on the identifiers, a plurality of videos associated with the plurality of video capture apparatuses from the aggregated data stream, each video including data packets carrying the same identifier. In this way, videos from multiple video capture apparatuses can be transmitted stably and efficiently, while reducing the resources occupied by the transmission process. This solution is applicable to the field of artificial intelligence, and in particular to autonomous driving (including autonomous parking).

Description

Method, Apparatus, Device, and Storage Medium for Data Processing

Technical Field
Embodiments of the present disclosure relate generally to the field of data processing, and more particularly to a method, apparatus, device, and computer-readable storage medium for data processing.
Background
With the development of computer technology, more and more scenarios require video to be captured and processed. For example, video capture and processing technology has important application value in scenarios such as driver assistance. In general, a vehicle can use sensors and video capture apparatuses to sense the surrounding environment and collect various data while the vehicle is driving, thereby enabling assisted driving.
Typically, a video capture and processing device controls multiple video capture apparatuses inflexibly, making it difficult to start or stop individual video capture apparatuses on demand. Moreover, as the number of video capture apparatuses increases, these devices tend to occupy substantial hardware resources to transmit multiple videos, and the synchronization accuracy between those videos is low.
Summary
According to example embodiments of the present disclosure, a data processing solution is provided.
In a first aspect of the present disclosure, a data processing method is provided. The method includes receiving a single aggregated data stream from a data aggregation apparatus, the aggregated data stream including a plurality of data packets respectively collected by a plurality of video capture apparatuses, each data packet carrying an identifier of the video capture apparatus that collected the data packet. The method further includes determining, based on the identifiers, a plurality of videos associated with the plurality of video capture apparatuses from the aggregated data stream, each video including data packets carrying the same identifier.
In a second aspect of the present disclosure, an apparatus for data processing is provided. The apparatus includes a data receiving module configured to receive a single aggregated data stream from a data aggregation apparatus, the aggregated data stream including a plurality of data packets respectively collected by a plurality of video capture apparatuses, each data packet carrying an identifier of the video capture apparatus that collected the data packet. The apparatus further includes a video determination module configured to determine, based on the identifiers, a plurality of videos associated with the plurality of video capture apparatuses from the aggregated data stream, each video including data packets carrying the same identifier.
In a third aspect of the present disclosure, an electronic device is provided, including one or more processors and a storage apparatus for storing one or more programs which, when executed by the one or more processors, cause the one or more processors to implement the method according to the first aspect of the present disclosure.
In a fourth aspect of the present disclosure, a computer-readable storage medium is provided, on which a computer program is stored, the program, when executed by a processor, implementing the method according to the first aspect of the present disclosure.
It should be understood that the content described in this Summary is not intended to identify key or essential features of the embodiments of the present disclosure, nor to limit the scope of the present disclosure. Other features of the present disclosure will become readily understood from the following description.
Brief Description of the Drawings
The above and other features, advantages, and aspects of the embodiments of the present disclosure will become more apparent with reference to the following detailed description taken in conjunction with the accompanying drawings, in which identical or similar reference numerals denote identical or similar elements:
Fig. 1 shows a schematic diagram of an example environment in which embodiments of the present disclosure can be implemented;
Fig. 2 shows a flowchart of a data processing process according to an embodiment of the present disclosure;
Fig. 3 shows a schematic diagram of data processing according to an embodiment of the present disclosure;
Fig. 4 shows a schematic block diagram of an apparatus for data processing according to an embodiment of the present disclosure; and
Fig. 5 shows a block diagram of a computing device capable of implementing embodiments of the present disclosure.
Detailed Description
Embodiments of the present disclosure will be described in more detail below with reference to the accompanying drawings. Although certain embodiments of the present disclosure are shown in the drawings, it should be understood that the present disclosure can be implemented in various forms and should not be construed as limited to the embodiments set forth herein; rather, these embodiments are provided for a more thorough and complete understanding of the present disclosure. It should be understood that the drawings and embodiments of the present disclosure are for illustrative purposes only and are not intended to limit the scope of protection of the present disclosure.
In the description of the embodiments of the present disclosure, the term "include" and its variants should be understood as open-ended inclusion, that is, "including but not limited to". The term "based on" should be understood as "at least partially based on". The terms "one embodiment" or "the embodiment" should be understood as "at least one embodiment". The terms "first", "second", and the like may refer to different or identical objects. Other explicit and implicit definitions may also be included below.
In the description of the embodiments of the present disclosure, as understood by those skilled in the art, the term "data stream" refers to data obtained from a video capture apparatus and received by the data aggregation apparatus in the form of a stream. A data stream may include at least one data packet, and a data packet may include at least one frame of image.
In the description of the embodiments of the present disclosure, as understood by those skilled in the art, the term "aggregation" refers to combining the data packets of multiple data streams into a single data stream, either end to end or interleaved with one another, so that multiple videos can be transmitted over a single transmission channel.
In the description of the embodiments of the present disclosure, as understood by those skilled in the art, the term "identifier" refers to a mark indicating which video capture apparatus a data packet comes from. An identifier may be a number, a letter, a symbol, or a combination of one or more of these.
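The three terms defined above can be illustrated with a minimal sketch. The field names and the Python representation are illustrative assumptions; the disclosure does not prescribe any particular packet layout.

```python
from dataclasses import dataclass

@dataclass
class DataPacket:
    # Identifier of the video capture apparatus that produced this packet;
    # per the definition above it may be a number, a letter, a symbol,
    # or a combination thereof.
    source_id: str
    # One or more image frames carried by the packet.
    frames: list

# A "data stream" is then simply a sequence of such packets
# originating from one video capture apparatus.
stream = [DataPacket(source_id="141", frames=[b"frame-1"]),
          DataPacket(source_id="141", frames=[b"frame-2"])]
```

Every packet in a single stream carries the same identifier, which is what later makes demultiplexing of an aggregated stream possible.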
Conventionally, as mentioned above, video capture and processing devices control multiple video capture apparatuses inflexibly, making it difficult to start or stop individual video capture apparatuses on demand. Moreover, as the number of video capture apparatuses increases, these devices tend to occupy substantial hardware resources to transmit multiple videos, and the synchronization accuracy between those videos is low.
According to embodiments of the present disclosure, a data processing solution is proposed for obtaining videos from multiple video capture apparatuses. In this solution, a single aggregated data stream is received from a data aggregation apparatus, the aggregated data stream including a plurality of data packets respectively collected by a plurality of video capture apparatuses, each data packet carrying an identifier of the video capture apparatus that collected it. Based on the identifiers, a plurality of videos associated with the plurality of video capture apparatuses are determined from the aggregated data stream, each video including the data packets that carry the same identifier. In this way, the data streams from multiple video capture apparatuses are aggregated into a single aggregated data stream for transmission, and the single aggregated data stream can in turn be separated into multiple videos according to the identifiers for use by a user. The solution of the present disclosure thus transmits videos from multiple video capture apparatuses stably and efficiently while reducing the cost of data transmission.
Embodiments of the present disclosure are described in detail below with reference to the accompanying drawings.
Fig. 1 shows a schematic diagram of an example environment 100 in which embodiments of the present disclosure can be implemented. In the example environment 100, a data aggregation apparatus 130 receives data streams 120-1 and 120-2 from video capture apparatuses 110-1 and 110-2. The data stream 120-1 may include data packets 121-1, 122-1, and 123-1, and the data stream 120-2 may include data packets 121-2, 122-2, and 123-2. In some embodiments, a video capture apparatus may include a camera, a webcam, or any other device capable of capturing video.
The data aggregation apparatus 130 may aggregate the data streams 120-1 and 120-2 from the video capture apparatuses 110-1 and 110-2 into a single aggregated data stream 140. In some embodiments, the data aggregation apparatus 130 may include a serializer or a deserializer, or any other device capable of aggregating multiple data streams into a single data stream. The aggregated data stream 140 includes the data packets 121-1, 122-1, 123-1, 121-2, 122-2, and 123-2. The packets 121-1, 122-1, and 123-1 carry an identifier 141 of the video capture apparatus 110-1, and the packets 121-2, 122-2, and 123-2 carry an identifier 142 of the video capture apparatus 110-2, indicating the video capture apparatus associated with each packet. Although Fig. 1 shows these packets arranged end to end in the order 121-1, 122-1, 123-1, 121-2, 122-2, 123-2, it should be understood that they may also be interleaved, for example in the order 121-1, 122-2, 123-1, 121-2, 122-1, 123-2; the present disclosure places no limitation on this.
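The aggregation described above can be sketched as follows. This is illustrative only: a real serializer/deserializer operates at the hardware link layer, and the function and variable names here are assumptions, not terminology from the disclosure.

```python
from itertools import chain, zip_longest

def aggregate(streams, identifiers, interleave=False):
    """Merge several per-apparatus packet streams into one stream,
    tagging each packet with its apparatus's identifier."""
    tagged = [[(ident, pkt) for pkt in stream]
              for ident, stream in zip(identifiers, streams)]
    if interleave:
        # Round-robin: one packet from each stream in turn.
        return [p for group in zip_longest(*tagged)
                for p in group if p is not None]
    # End to end: all of stream 1, then all of stream 2, and so on.
    return list(chain.from_iterable(tagged))

stream_1 = ["121-1", "122-1", "123-1"]   # packets from apparatus 110-1
stream_2 = ["121-2", "122-2", "123-2"]   # packets from apparatus 110-2
agg = aggregate([stream_1, stream_2], identifiers=[141, 142])
# → [(141, '121-1'), (141, '122-1'), (141, '123-1'),
#    (142, '121-2'), (142, '122-2'), (142, '123-2')]
```

Passing `interleave=True` instead produces the interwoven ordering that the paragraph above also permits; either way, each packet keeps its identifier, so the two orderings are equivalent for the receiver.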
A computing device 150 receives the aggregated data stream 140. Since the packets 121-1, 122-1, 123-1, 121-2, 122-2, and 123-2 carry the identifier 141 or 142 indicating the video capture apparatus 110-1 or 110-2, the computing device 150 can determine, from the aggregated data stream 140 and according to these identifiers, a video 160-1 comprising the packets 121-1, 122-1, and 123-1 and a video 160-2 comprising the packets 121-2, 122-2, and 123-2, for use by a user. The computing device 150 may be a stationary computing device, such as a server, a desktop computer, or a development board, or a portable computing device, such as a mobile phone or a tablet computer. It should be understood that these are merely examples of the computing device 150, which may also be another appropriate system or device such as a distributed computing system.
It should be understood that the environment shown in Fig. 1 is merely exemplary. Although Fig. 1 shows the data aggregation apparatus 130 receiving data streams from only two video capture apparatuses 110-1 and 110-2, the data aggregation apparatus 130 may also receive data streams from more video capture apparatuses and may aggregate more data streams into a single aggregated data stream. It should also be understood that multiple data aggregation apparatuses may be used to receive data streams from multiple video capture apparatuses, and multiple computing devices may be used to determine multiple videos; the present disclosure places no limitation on this.
For a clearer understanding of the data processing solution provided by embodiments of the present disclosure, embodiments are further described with reference to Fig. 2. Fig. 2 shows a flowchart of a data processing process 200 according to an embodiment of the present disclosure. The process 200 can be implemented by the computing device 150 of Fig. 1 or by a distributed computing system that includes the computing device 150. For ease of discussion, the process 200 is described in conjunction with Fig. 1.
At block 210, the computing device 150 receives a single aggregated data stream 140 from the data aggregation apparatus 130. The aggregated data stream 140 includes the data packets 121-1, 122-1, 123-1, 121-2, 122-2, and 123-2 collected by the video capture apparatuses 110-1 and 110-2. The packets 121-1, 122-1, and 123-1 carry the identifier 141 indicating the video capture apparatus 110-1, and the packets 121-2, 122-2, and 123-2 carry the identifier 142 indicating the video capture apparatus 110-2. The identifiers 141 and 142 are added to each packet by the data aggregation apparatus 130 when it aggregates the data streams 120-1 and 120-2. In this way, the computing device 150 receives the packets from multiple video capture apparatuses over a single aggregated data stream from the data aggregation apparatus 130, significantly reducing the hardware resources required for packet transmission.
In some embodiments, the computing device 150 may receive an access request from a user for at least one of the video capture apparatuses 110-1 and 110-2. In response to receiving the access request, the computing device 150 may send the data aggregation apparatus 130 an instruction to start the at least one video capture apparatus. The data aggregation apparatus 130 may start the at least one video capture apparatus according to the instruction. The computing device 150 may then receive from the data aggregation apparatus 130 the aggregated data stream 140, which contains at least the data packets collected by the at least one video capture apparatus. In this way, the user can selectively start individual video capture apparatuses as needed, reducing the overall power consumption of the device.
At block 220, the computing device 150 determines, based on the identifiers 141 and 142, the videos 160-1 and 160-2 associated with the video capture apparatuses 110-1 and 110-2 from the aggregated data stream 140. The video 160-1 includes the packets 121-1, 122-1, and 123-1, and the video 160-2 includes the packets 121-2, 122-2, and 123-2. In this way, the computing device 150 can convert the single received aggregated data stream 140 into the videos captured by the video capture apparatuses 110-1 and 110-2 respectively, improving data transmission efficiency and reducing the consumption of transmission resources.
In some embodiments, the computing device 150 may determine the identifiers 141 and 142 carried by the packets 121-1, 122-1, 123-1, 121-2, 122-2, and 123-2 contained in the aggregated data stream 140. The computing device 150 determines the packets 121-1, 122-1, and 123-1 carrying the identifier 141 as a first data set, and the packets 121-2, 122-2, and 123-2 carrying the same identifier 142 as a second data set. In this way, the computing device 150 can quickly determine the first data set from the first video capture apparatus 110-1 and the second data set from the second video capture apparatus 110-2, improving the efficiency of obtaining the videos. In some embodiments, the computing device 150 may determine, based on the first and second data sets, the video 160-1 from the first video capture apparatus 110-1 and the video 160-2 from the second video capture apparatus 110-2. In this way, the computing device 150 can use the identifiers to obtain the videos from multiple video capture apparatuses out of the aggregated data stream 140, simplifying the data transmission process.
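The grouping of packets into per-apparatus data sets at block 220 can be sketched as follows; this is a minimal illustration, and the function name is an assumption rather than an API defined by the disclosure.

```python
from collections import defaultdict

def split_by_identifier(aggregated_stream):
    """Group the packets of one aggregated stream into per-apparatus
    videos: each video collects the packets carrying the same
    identifier, preserving their arrival order."""
    videos = defaultdict(list)
    for identifier, packet in aggregated_stream:
        videos[identifier].append(packet)
    return dict(videos)

# Works equally for end-to-end or interleaved packet orderings.
aggregated = [(141, "121-1"), (142, "121-2"), (141, "122-1"),
              (142, "122-2"), (141, "123-1"), (142, "123-2")]
videos = split_by_identifier(aggregated)
# videos[141] → ['121-1', '122-1', '123-1']  (video 160-1)
# videos[142] → ['121-2', '122-2', '123-2']  (video 160-2)
```

Because each packet carries its source identifier, a single pass over the aggregated stream suffices to recover every per-apparatus video.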
In some embodiments, after the computing device 150 receives an access request for at least one of the videos 160-1 and 160-2, the computing device 150 may obtain the identifier of the at least one video from the access request. According to the identifier, the computing device 150 may provide the user with the video matching that identifier, for use by the user. For example, if the user currently needs to obtain the video 160-1, the user may issue to the computing device 150 an access request for the video 160-1, the request containing the identifier 141. The computing device 150 may obtain the identifier 141 from the access request. In some embodiments, the computing device 150 may then provide the user with the video 160-1 matching the identifier 141. In this way, the computing device 150 can selectively provide the user with the videos the user needs, improving operating efficiency and reducing power consumption.
Fig. 3 shows a schematic diagram 300 of data processing according to an embodiment of the present disclosure. To simplify the figure, the transmission and conversion of data streams and data packets are not shown in Fig. 3; those skilled in the art will understand that they proceed in the manner described with reference to Figs. 1 and 2. In the example of Fig. 3, the computing device 150 may receive a single aggregated data stream from each of two data aggregation apparatuses 130 and 230, and then determine from these aggregated streams the videos 160-1, 160-2, 260-1, and 260-2 captured by the video capture apparatuses 110-1, 110-2, 210-1, and 210-2. In this way, the computing device 150 can obtain videos captured by more video capture apparatuses, extending the applicability of the device.
Still taking Fig. 3 as an example, the computing device 150 may send trigger signals 310 and 320 having a predetermined frequency to the data aggregation apparatuses 130 and 230 respectively, the trigger signals 310 and 320 having the same frequency. After receiving the trigger signals 310 and 320, the data aggregation apparatuses 130 and 230 may synchronously acquire, at the predetermined frequency, the data packets collected by the video capture apparatuses 110-1, 110-2, 210-1, and 210-2. For example, if the computing device 150 synchronously sends 25 Hz trigger signals 310 and 320 to the data aggregation apparatuses 130 and 230, the two apparatuses can synchronously acquire packets from the video capture apparatuses 110-1, 110-2, 210-1, and 210-2 at 40-millisecond intervals. Since the trigger signals 310 and 320 are sent to the data aggregation apparatuses 130 and 230 synchronously, the packets acquired from the video capture apparatuses 110-1, 110-2, 210-1, and 210-2 are also synchronized, and so are the resulting videos 160-1, 160-2, 260-1, and 260-2. In this way, the synchronization accuracy of videos obtained from multiple video capture apparatuses can be improved.
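The capture interval in the example above is simply the reciprocal of the trigger frequency; a minimal sketch (the function name is assumed for illustration):

```python
def trigger_interval_ms(frequency_hz):
    """Interval, in milliseconds, between successive pulses of a
    periodic trigger signal with the given frequency."""
    return 1000.0 / frequency_hz

# A 25 Hz trigger sent synchronously to both aggregation apparatuses
# makes them fetch packets every 40 ms, in lockstep.
assert trigger_interval_ms(25) == 40.0
```

Any common frequency works; what matters for synchronization is that both apparatuses receive the same trigger at the same time.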
It should be understood that although Fig. 3 shows four video capture apparatuses and two data aggregation apparatuses, more video capture apparatuses and data aggregation apparatuses may be provided as actual needs require. It should also be understood that although Fig. 3 shows only one computing device 150 receiving the aggregated data streams from the data aggregation apparatuses 130 and 230, an additional computing device may also be provided to receive the aggregated data streams from the data aggregation apparatuses 130 and 230, and that additional computing device can likewise obtain the videos 160-1, 160-2, 260-1, and 260-2. In this way, the required videos can be obtained on different computing devices, allowing users to perform various processing on the videos independently.
Typically, when a user processes images, the obtained image often suffers from frame tearing, which occurs because the processor processes a frame more slowly than frames are stored into the buffer. For example, after a frame is stored into the buffer, the processor processes that frame; but if, for some reason, the processor processes the frame slowly, the frame in the buffer is overwritten by the next frame before processing completes. As a result, part of the image obtained by the user shows the previous frame while the other part shows the next frame, degrading image-processing quality and severely harming the user experience.
In some embodiments, the computing device 150 may allocate n low-level buffers for each video, where n is an integer greater than or equal to 2. When the user needs to process a video, each frame of the video can be stored cyclically, in order, into the low-level buffers. For example, with n = 3, the computing device 150 may store the 1st frame of the video into the 1st buffer, the 2nd frame into the 2nd buffer, and the 3rd frame into the 3rd buffer, then return to the 1st buffer, store the 4th frame into the 1st buffer, the 5th frame into the 2nd buffer, and so on. The computing device 150 additionally allocates a temporary buffer into which the frame currently to be processed is read from the low-level buffers, for processing by the computing device 150.
For example, when processing the 1st frame, the computing device 150 first reads the 1st frame from the 1st buffer into the temporary buffer and processes it there. Suppose the interval at which the computing device 150 stores one frame into the low-level buffers is T. After the 1st frame is stored into the 1st buffer, the 2nd frame is stored into the 2nd buffer after time T, the 3rd frame into the 3rd buffer after another T, the 4th frame into the 1st buffer after another T, and the 5th frame into the 2nd buffer after yet another T. It can be seen that, since the 2nd, 3rd, and 4th frames are stored in the 2nd, 3rd, and 1st buffers respectively, they have no effect whatsoever on the frame being processed in the temporary buffer; it is only necessary to ensure that, when the computing device 150 reads the 2nd frame from the 2nd buffer after finishing the 1st frame, the 2nd frame has not yet been overwritten by the 5th frame. Therefore, as long as the time the computing device 150 takes to process a frame is less than (n+1)*T, this abnormal situation can be avoided. In this way, by adjusting the number of low-level buffers allocated to a video as actual needs require, frame tearing and frame loss can be effectively avoided, improving image-processing quality.
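The cyclic low-level-buffer scheme above can be sketched as follows. This is a simplified single-threaded model with illustrative names; in the real device, storing and processing run concurrently, and safety rests on the per-frame processing time staying below (n+1)*T as explained above.

```python
class FrameRing:
    """n low-level buffers filled cyclically by the producer, plus one
    temporary buffer into which the consumer copies the frame it is
    currently processing, so later writes cannot tear it."""

    def __init__(self, n=3):
        assert n >= 2
        self.buffers = [None] * n     # the n low-level buffers
        self.write_count = 0          # frames stored so far

    def store(self, frame):
        # Frame k goes into buffer k mod n: 1st -> buf 0, 2nd -> buf 1, ...
        self.buffers[self.write_count % len(self.buffers)] = frame
        self.write_count += 1

    def read_for_processing(self, frame_index):
        # Copy the frame out to a temporary buffer before processing it.
        temp_buffer = self.buffers[frame_index % len(self.buffers)]
        return temp_buffer

ring = FrameRing(n=3)
for k in range(5):                    # store frames 0..4
    ring.store(f"frame-{k}")
# Frame 3 overwrote frame 0 and frame 4 overwrote frame 1, but
# frame 2 is still intact in buffer 2 and can be read safely.
assert ring.read_for_processing(2) == "frame-2"
```

Increasing n widens the window before a frame is overwritten, which is exactly why the paragraph above suggests tuning the number of low-level buffers to the actual processing speed.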
Fig. 4 shows a schematic block diagram of an apparatus 400 for data processing according to an embodiment of the present disclosure. The apparatus 400 may be included in or implemented as the computing device 150 of Fig. 1. As shown in Fig. 4, the apparatus 400 includes a data receiving module 410 configured to receive a single aggregated data stream from a data aggregation apparatus, the aggregated data stream including a plurality of data packets respectively collected by a plurality of video capture apparatuses, each data packet carrying an identifier of the video capture apparatus that collected the data packet. The apparatus 400 further includes a video determination module 420 configured to determine, based on the identifiers, a plurality of videos associated with the plurality of video capture apparatuses from the aggregated data stream, each video including data packets carrying the same identifier.
In some embodiments, the data receiving module 410 includes: an instruction sending module configured to, in response to receiving an access request for at least one of the plurality of video capture apparatuses, send the data aggregation apparatus an instruction to start the at least one video capture apparatus; and a data stream receiving module configured to receive the single aggregated data stream from the data aggregation apparatus, the single aggregated data stream containing at least data packets collected by the at least one video capture apparatus.
In some embodiments, the video determination module 420 includes: an identifier determination module configured to determine the identifiers respectively carried by the plurality of data packets included in the aggregated data stream; a data set determination module configured to determine, as one data set, the data packets of the plurality of data packets that carry the same identifier; and an identifier utilization module configured to determine, based on the data set, the video associated with the video capture apparatus corresponding to the identifier.
In some embodiments, the apparatus 400 further includes: a trigger module configured to send trigger signals having a predetermined frequency simultaneously to the data aggregation apparatus and another data aggregation apparatus, so that the data aggregation apparatus and the other data aggregation apparatus synchronously acquire, at the predetermined frequency, data packets collected by different video capture apparatuses.
In some embodiments, the apparatus 400 further includes: an identifier obtaining module configured to, in response to receiving an access request for at least one of the plurality of videos, obtain the identifier of the at least one video from the access request; and a video providing module configured to provide, among the plurality of videos, the video matching the obtained identifier.
In some embodiments, the data aggregation apparatus 130 includes a serializer and a deserializer.
Fig. 5 shows a schematic block diagram of an example device 500 that can be used to implement embodiments of the present disclosure. The device 500 can be used to implement the computing device 150 of Fig. 1. As shown, the device 500 includes a central processing unit (CPU) 501, which can perform various appropriate actions and processing according to computer program instructions stored in a read-only memory (ROM) 502 or loaded from a storage unit 508 into a random access memory (RAM) 503. The RAM 503 may also store the various programs and data required for the operation of the device 500. The CPU 501, the ROM 502, and the RAM 503 are connected to one another via a bus 504. An input/output (I/O) interface 505 is also connected to the bus 504.
Multiple components of the device 500 are connected to the I/O interface 505, including: an input unit 506 such as a keyboard or a mouse; an output unit 507 such as various types of displays and speakers; a storage unit 508 such as a magnetic disk or an optical disc; and a communication unit 509 such as a network card, a modem, or a wireless communication transceiver. The communication unit 509 allows the device 500 to exchange information/data with other devices over a computer network such as the Internet and/or various telecommunication networks.
The processing unit 501 performs the various methods and processes described above, such as the process 200. For example, in some embodiments, the process 200 may be implemented as a computer software program tangibly embodied in a machine-readable medium, such as the storage unit 508. In some embodiments, part or all of the computer program may be loaded and/or installed onto the device 500 via the ROM 502 and/or the communication unit 509. When the computer program is loaded into the RAM 503 and executed by the CPU 501, one or more steps of any of the processes 200 described above can be performed. Alternatively, in other embodiments, the CPU 501 may be configured to perform the process 200 in any other suitable manner (for example, by means of firmware).
The functions described herein above may be performed at least in part by one or more hardware logic components. For example, and without limitation, exemplary types of hardware logic components that may be used include field-programmable gate arrays (FPGA), application-specific integrated circuits (ASIC), application-specific standard products (ASSP), systems on a chip (SOC), complex programmable logic devices (CPLD), and so on.
Program code for implementing the method of the present disclosure may be written in any combination of one or more programming languages. The program code may be provided to a processor or controller of a general-purpose computer, a special-purpose computer, or another programmable data processing apparatus, such that, when executed by the processor or controller, the program code causes the functions/operations specified in the flowcharts and/or block diagrams to be implemented. The program code may execute entirely on a machine, partly on a machine, partly on a machine and partly on a remote machine as a standalone software package, or entirely on a remote machine or server.
In the context of the present disclosure, a machine-readable medium may be a tangible medium that can contain or store a program for use by or in connection with an instruction execution system, apparatus, or device. The machine-readable medium may be a machine-readable signal medium or a machine-readable storage medium. A machine-readable medium may include, but is not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any suitable combination of the foregoing. More specific examples of a machine-readable storage medium would include an electrical connection based on one or more wires, a portable computer diskette, a hard disk, random access memory (RAM), read-only memory (ROM), erasable programmable read-only memory (EPROM or flash memory), optical fiber, portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing.
Furthermore, although operations are depicted in a particular order, this should not be understood as requiring that such operations be performed in the particular order shown or in sequential order, or that all illustrated operations be performed, to achieve desirable results. In certain circumstances, multitasking and parallel processing may be advantageous. Likewise, although several specific implementation details are included in the above discussion, these should not be construed as limiting the scope of the present disclosure. Certain features that are described in the context of separate embodiments can also be implemented in combination in a single implementation. Conversely, various features that are described in the context of a single implementation can also be implemented in multiple implementations, separately or in any suitable subcombination.
Although the subject matter has been described in language specific to structural features and/or methodological acts, it should be understood that the subject matter defined in the appended claims is not necessarily limited to the specific features or acts described above. Rather, the specific features and acts described above are merely example forms of implementing the claims.

Claims (14)

  1. A data processing method, comprising:
    receiving a single aggregated data stream from a data aggregation apparatus, the aggregated data stream including a plurality of data packets respectively collected by a plurality of video capture apparatuses, each data packet carrying an identifier of the video capture apparatus that collected the data packet; and
    determining, based on the identifiers, a plurality of videos associated with the plurality of video capture apparatuses from the aggregated data stream, each video including data packets carrying the same identifier.
  2. The method of claim 1, wherein receiving the single aggregated data stream from the data aggregation apparatus comprises:
    in response to receiving an access request for at least one video capture apparatus of the plurality of video capture apparatuses, sending the data aggregation apparatus an instruction to start the at least one video capture apparatus; and
    receiving the single aggregated data stream from the data aggregation apparatus, the single aggregated data stream containing at least data packets collected by the at least one video capture apparatus.
  3. The method of claim 1, wherein determining the plurality of videos from the aggregated data stream comprises:
    determining the identifiers respectively carried by the plurality of data packets included in the aggregated data stream;
    determining, as one data set, the data packets of the plurality of data packets that carry the same identifier; and
    determining, based on the data set, the video associated with the video capture apparatus corresponding to the identifier.
  4. The method of claim 1, further comprising:
    sending trigger signals having a predetermined frequency simultaneously to the data aggregation apparatus and another data aggregation apparatus, so that the data aggregation apparatus and the other data aggregation apparatus synchronously acquire, at the predetermined frequency, data packets collected by different video capture apparatuses.
  5. The method of claim 1, further comprising:
    in response to receiving an access request for at least one video of the plurality of videos, obtaining an identifier of the at least one video from the access request; and
    providing, among the plurality of videos, the video matching the obtained identifier.
  6. The method of claim 1, wherein the data aggregation apparatus includes a serializer and a deserializer.
  7. An apparatus for data processing, comprising:
    a data receiving module configured to receive a single aggregated data stream from a data aggregation apparatus, the aggregated data stream including a plurality of data packets respectively collected by a plurality of video capture apparatuses, each data packet carrying an identifier of the video capture apparatus that collected the data packet; and
    a video determination module configured to determine, based on the identifiers, a plurality of videos associated with the plurality of video capture apparatuses from the aggregated data stream, each video including data packets carrying the same identifier.
  8. The apparatus of claim 7, wherein the data receiving module comprises:
    an instruction sending module configured to, in response to receiving an access request for at least one video capture apparatus of the plurality of video capture apparatuses, send the data aggregation apparatus an instruction to start the at least one video capture apparatus; and
    a data stream receiving module configured to receive the single aggregated data stream from the data aggregation apparatus, the single aggregated data stream containing at least data packets collected by the at least one video capture apparatus.
  9. The apparatus of claim 7, wherein the video determination module comprises:
    an identifier determination module configured to determine the identifiers respectively carried by the plurality of data packets included in the aggregated data stream;
    a data set determination module configured to determine, as one data set, the data packets of the plurality of data packets that carry the same identifier; and
    an identifier utilization module configured to determine, based on the data set, the video associated with the video capture apparatus corresponding to the identifier.
  10. The apparatus of claim 7, further comprising:
    a trigger module configured to send trigger signals having a predetermined frequency simultaneously to the data aggregation apparatus and another data aggregation apparatus, so that the data aggregation apparatus and the other data aggregation apparatus synchronously acquire, at the predetermined frequency, data packets collected by different video capture apparatuses.
  11. The apparatus of claim 7, further comprising:
    an identifier obtaining module configured to, in response to receiving an access request for at least one video of the plurality of videos, obtain an identifier of the at least one video from the access request; and
    a video providing module configured to provide, among the plurality of videos, the video matching the obtained identifier.
  12. The apparatus of claim 7, wherein the data aggregation apparatus includes a serializer and a deserializer.
  13. An electronic device, comprising:
    one or more processors; and
    a storage apparatus for storing one or more programs which, when executed by the one or more processors, cause the one or more processors to implement the method of any one of claims 1-6.
  14. A computer-readable storage medium on which a computer program is stored, the program, when executed by a processor, implementing the method of any one of claims 1-6.
PCT/CN2020/083597 2019-10-15 2020-04-07 数据处理的方法、装置、设备和存储介质 WO2021073054A1 (zh)

Priority Applications (4)

Application Number Priority Date Filing Date Title
EP20876718.6A EP3896964A4 (en) 2019-10-15 2020-04-07 DATA PROCESSING METHOD, APPARATUS AND DEVICE, AND STORAGE MEDIA
JP2021538335A JP7273975B2 (ja) 2019-10-15 2020-04-07 データ処理の方法、装置、機器及び記憶媒体
EP23190861.7A EP4246965A3 (en) 2019-10-15 2020-04-07 Method and device, equipment, and storage medium for data processing
US17/374,981 US11671678B2 (en) 2019-10-15 2021-07-13 Method and device, equipment, and storage medium for data processing

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN201910980311.1A CN110677623B (zh) 2019-10-15 2019-10-15 数据处理的方法、装置、设备和存储介质
CN201910980311.1 2019-10-15

Related Child Applications (1)

Application Number Title Priority Date Filing Date
US17/374,981 Continuation US11671678B2 (en) 2019-10-15 2021-07-13 Method and device, equipment, and storage medium for data processing

Publications (1)

Publication Number Publication Date
WO2021073054A1 true WO2021073054A1 (zh) 2021-04-22

Family

ID=69082688

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2020/083597 WO2021073054A1 (zh) 2019-10-15 2020-04-07 数据处理的方法、装置、设备和存储介质

Country Status (5)

Country Link
US (1) US11671678B2 (zh)
EP (2) EP3896964A4 (zh)
JP (1) JP7273975B2 (zh)
CN (1) CN110677623B (zh)
WO (1) WO2021073054A1 (zh)

Families Citing this family (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110677623B (zh) * 2019-10-15 2021-09-10 北京百度网讯科技有限公司 数据处理的方法、装置、设备和存储介质
CN112866640A (zh) * 2021-01-08 2021-05-28 珠海研果科技有限公司 数据存储方法和装置
WO2024011293A1 (en) * 2022-07-14 2024-01-18 Harvest Technology Pty Ltd Systems, methods, and storage media for transmitting multiple data streams over a communications network for remote inspection

Citations (6)

Publication number Priority date Publication date Assignee Title
CN103927364A (zh) * 2014-04-18 2014-07-16 苏州科达科技股份有限公司 一种视频摘要数据的存储方法、系统及展示系统
CN105335387A (zh) * 2014-07-04 2016-02-17 杭州海康威视系统技术有限公司 一种视频云存储系统的检索方法
US20160359760A1 (en) * 2015-06-04 2016-12-08 At&T Intellectual Property I, L.P. Apparatus and method to improve compression and storage data
CN106331673A (zh) * 2016-08-22 2017-01-11 上嘉(天津)文化传播有限公司 一种基于分散控制系统的vr视频数据控制方法
CN109937408A (zh) * 2016-11-14 2019-06-25 深圳市大疆创新科技有限公司 处理器间的数据流调度
CN110677623A (zh) * 2019-10-15 2020-01-10 北京百度网讯科技有限公司 数据处理的方法、装置、设备和存储介质

Family Cites Families (33)

Publication number Priority date Publication date Assignee Title
US5602858A (en) * 1993-09-20 1997-02-11 Kabushiki Kaisha Toshiba Digital signal decoding apparatus having a plurality of correlation tables and a method thereof
JPH11164289A (ja) * 1997-11-25 1999-06-18 Matsushita Electric Ind Co Ltd 映像多重装置と映像監視装置
JP2995703B1 (ja) * 1998-10-08 1999-12-27 コナミ株式会社 画像作成装置、画像作成装置における表示場面切替方法、画像作成装置における表示場面切替プログラムが記録された可読記録媒体及びビデオゲーム装置
US20080106597A1 (en) * 1999-10-12 2008-05-08 Vigilos, Inc. System and method for storing and remotely retrieving surveillance video images
WO2005027518A1 (ja) 2003-09-11 2005-03-24 Fujitsu Limited 画像伝送装置及び画像伝送システム
JP4577768B2 (ja) 2005-01-27 2010-11-10 株式会社コルグ 映像信号切り換え装置
JP4849297B2 (ja) 2005-04-26 2012-01-11 ソニー株式会社 符号化装置および方法、復号装置および方法、並びにプログラム
US9182228B2 (en) * 2006-02-13 2015-11-10 Sony Corporation Multi-lens array system and method
WO2008015781A1 (en) * 2006-08-01 2008-02-07 Nikon Corporation Image processing device and electronic camera
US8719309B2 (en) * 2009-04-14 2014-05-06 Apple Inc. Method and apparatus for media data transmission
JP5492736B2 (ja) 2010-11-01 2014-05-14 日本電信電話株式会社 映像配信システム、映像配信方法、および映像配信プログラム
WO2012061980A1 (zh) * 2010-11-09 2012-05-18 华为技术有限公司 数据包的传输方法及装置
CN102480600B (zh) * 2010-11-22 2014-07-02 上海银晨智能识别科技有限公司 双路监控视频信息融合方法及系统
US8537195B2 (en) * 2011-02-09 2013-09-17 Polycom, Inc. Automatic video layouts for multi-stream multi-site telepresence conferencing system
JP5940999B2 (ja) 2013-03-12 2016-06-29 日本電信電話株式会社 映像再生装置、映像配信装置、映像再生方法、映像配信方法及びプログラム
JP6364838B2 (ja) * 2014-03-14 2018-08-01 日本電気株式会社 映像処理装置および映像処理方法
US10044987B1 (en) * 2014-12-10 2018-08-07 Amazon Technologies, Inc. Image processing system
JP2016171382A (ja) 2015-03-11 2016-09-23 パナソニックIpマネジメント株式会社 サーバ装置、自動撮影システム及び自動撮影方法
CN106851183B (zh) 2015-12-04 2020-08-21 宁波舜宇光电信息有限公司 基于fpga的多路视频处理系统及其方法
CN106162075A (zh) 2016-06-17 2016-11-23 浙江万朋教育科技股份有限公司 一种单台pc实现多路视频输入的稳定解决方法
US10193944B2 (en) 2016-06-17 2019-01-29 Q Technologies Inc. Systems and methods for multi-device media broadcasting or recording with active control
WO2018058358A1 (en) * 2016-09-28 2018-04-05 Covidien Lp System and method for parallelization of cpu and gpu processing for ultrasound imaging devices
CN106454236B (zh) 2016-10-09 2019-09-17 珠海全志科技股份有限公司 一种提高多路视频采集前端处理效率的方法和系统
US10560609B2 (en) 2016-11-04 2020-02-11 Karl Storz Endoscopy-America, Inc. System and related method for synchronized capture of data by multiple network-connected capture devices
CN108063746B (zh) * 2016-11-08 2020-05-15 北京国双科技有限公司 数据的处理方法、客户端、服务器及系统
IL284864B (en) * 2017-05-15 2022-09-01 T Worx Holdings Llc A system and method for communication between devices connected to weapons
CN108200345A (zh) 2018-01-24 2018-06-22 华东师范大学 高速实时多路视频采集处理装置
CN108924477B (zh) * 2018-06-01 2021-06-01 北京图森智途科技有限公司 一种远距离视频处理方法和系统、视频处理设备
CN108986253B (zh) * 2018-06-29 2022-08-30 百度在线网络技术(北京)有限公司 用于多线程并行处理的存储数据方法和装置
CN109392018B (zh) * 2018-11-23 2021-04-16 Oppo广东移动通信有限公司 数据传输方法及相关装置
US10694167B1 (en) * 2018-12-12 2020-06-23 Verizon Patent And Licensing Inc. Camera array including camera modules
CN109949203B (zh) 2019-03-19 2023-01-03 广东紫旭科技有限公司 一种异构cpu多路4k超高清视频处理装置与控制方法
CN110278418B (zh) * 2019-07-29 2021-09-03 广州小鹏汽车科技有限公司 视频数据的处理系统、方法和车辆

Also Published As

Publication number Publication date
US20210345009A1 (en) 2021-11-04
EP3896964A1 (en) 2021-10-20
EP4246965A3 (en) 2023-11-15
US11671678B2 (en) 2023-06-06
CN110677623A (zh) 2020-01-10
CN110677623B (zh) 2021-09-10
EP4246965A2 (en) 2023-09-20
JP2022516534A (ja) 2022-02-28
EP3896964A4 (en) 2022-07-20
JP7273975B2 (ja) 2023-05-15

Similar Documents

Publication Publication Date Title
WO2021073054A1 (zh) 数据处理的方法、装置、设备和存储介质
US20220394316A1 (en) Message sending method and device, readable medium and electronic device
US8520563B2 (en) Interface device, communications system, non-volatile storage device, communication mode switching method and integrated circuit
WO2022068488A1 (zh) 消息发送的控制方法、装置、电子设备及计算机可读存储介质
CN111818632B (zh) 一种设备同步的方法、装置、设备及存储介质
WO2020026018A1 (zh) 文件的下载方法、装置、设备/终端/服务器及存储介质
CN114265713A (zh) Rdma事件管理方法、装置、计算机设备及存储介质
CN110719233B (zh) 用于发送信息的方法及装置
CN111817830B (zh) 传输、接收控制方法、终端及网络侧设备
CN108289165B (zh) 一种基于手机控制相机的实现方法、装置及终端设备
CN108462679B (zh) 数据传输方法及装置
CN112954449B (zh) 视频流处理方法、系统、电子装置和存储介质
CN108337285B (zh) 一种通信系统及通信方法
US20230188439A1 (en) Traffic Monitoring Device, Traffic Monitoring Method, and Traffic Monitoring Program
CN116627495A (zh) 一种信息交互方法、系统、装置、设备及介质
CN116260747A (zh) 终端测试设备的监测方法、装置及电子设备
CN108306836B (zh) 数据传输装置、智能交互平板及数据传输方法
CN108107750B (zh) 一种电力系统仿真的实时io数据处理方法及系统
CN117675720B (zh) 消息报文传输方法、装置、电子设备和存储介质
CN111614443A (zh) 终端能力上报、信息接收方法、终端及网络设备
CN211180818U (zh) 视频处理设备
US7007086B2 (en) Method and apparatus for measuring multi-connection performance of a server
CN112637027B (zh) 基于uart的帧边界界定装置及发送方法和接收方法
US20090129355A1 (en) Apparatus of transmitting packets of wireless local network and method for using the same
CN112306610A (zh) 终端控制方法、装置和电子设备

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 20876718

Country of ref document: EP

Kind code of ref document: A1

ENP Entry into the national phase

Ref document number: 2021538335

Country of ref document: JP

Kind code of ref document: A

ENP Entry into the national phase

Ref document number: 2020876718

Country of ref document: EP

Effective date: 20210714

NENP Non-entry into the national phase

Ref country code: DE