CN111316643A - Video coding method, device and movable platform - Google Patents
Video coding method, device and movable platform
- Publication number
- CN111316643A (Application No. CN201980005411.0A)
- Authority
- CN
- China
- Prior art keywords
- video data
- encoding
- core
- data
- processor
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Classifications
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N19/00—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
- H04N19/13—Adaptive entropy coding, e.g. adaptive variable length coding [AVLC] or context adaptive binary arithmetic coding [CABAC]
- H04N19/117—Filters, e.g. for pre-processing or post-processing
- H04N19/423—Implementation details or hardware specially adapted for video compression or decompression, characterised by memory arrangements
Abstract
A video encoding method, apparatus and movable platform. The method comprises: acquiring video data to be currently encoded (S201); determining, from a multi-core processor, a currently idle processor core for encoding the video data (S202); acquiring, according to the video data to be currently encoded, a storage address of the encoding reference data of that video data (S203); and sending the storage address to the determined processor core, so that the processor core acquires the encoding reference data according to the storage address and encodes the video data according to the encoding reference data (S204). Because the storage address of the encoding reference data of the video data to be currently encoded is sent to a processor core, each processor core can obtain its encoding reference data and encode its video data independently; the processor cores can therefore encode video data in parallel, meeting the high-resolution and real-time requirements of video.
Description
Technical Field
The embodiment of the invention relates to the technical field of video coding, in particular to a video coding method, video coding equipment and a movable platform.
Background
As demand for ultra-high-resolution, ultra-high-frame-rate video keeps growing, video transmission and storage require a large amount of bandwidth. To save bandwidth while guaranteeing video quality, video needs to be encoded before transmission. To achieve real-time encoding, a hardware codec is used to encode the video, at resolutions from 64x64 to 8192x8192 or higher and frame rates from 25 to hundreds of frames per second. Most prior-art video coding methods use a single-core processor for serial coding, but as requirements on video resolution and real-time performance rise further, serial encoding on a single-core processor can no longer meet the growing demands for high resolution and real-time performance of video signals.
Disclosure of Invention
The embodiment of the invention provides a video coding method, video coding equipment and a movable platform, which are used for realizing parallel coding of video data so as to meet the requirements on high resolution and real-time performance of videos.
In a first aspect, an embodiment of the present invention provides a video encoding method, including:
acquiring video data to be coded currently;
determining a currently idle processor core for encoding the video data from a multi-core processor;
acquiring a storage address of coding reference data of the current video data to be coded according to the current video data to be coded;
and sending the storage address to the determined processor core so that the processor core can acquire the coding reference data according to the storage address and code the video data according to the coding reference data.
In a second aspect, an embodiment of the present invention provides a video encoding apparatus, including: a multi-core scheduling device and a plurality of processor cores;
the multi-core scheduling device is used for acquiring the current video data to be coded; and determining a currently idle processor core for encoding the video data from the plurality of processor cores;
acquiring a storage address of coding reference data of the current video data to be coded according to the current video data to be coded;
and sending the storage address to the determined processor core so that the processor core can acquire the coding reference data according to the storage address and code the video data according to the coding reference data.
In a third aspect, an embodiment of the present invention provides a movable platform, including: imaging means and a video encoding apparatus according to an embodiment of the present invention as described in the second aspect; the imaging device is used for acquiring video data.
In a fourth aspect, an embodiment of the present invention provides a chip, including: a memory and a processor;
the memory to store program instructions; the processor is configured to call program instructions in the memory to perform the video encoding method according to the first aspect.
In a fifth aspect, an embodiment of the present invention provides a readable storage medium, on which a computer program is stored; when executed, implement a video encoding method as described in the first aspect in an embodiment of the present invention.
In a sixth aspect, an embodiment of the present invention provides a computer program, which is used to implement the video encoding method according to the first aspect when the computer program is executed by a computer.
According to the video coding method, the video coding device and the movable platform, the video data to be currently encoded is obtained; a currently idle processor core for encoding the video data is determined from a multi-core processor; a storage address of the encoding reference data of the video data to be currently encoded is acquired according to that video data; and the storage address is sent to the determined processor core, so that the processor core acquires the encoding reference data according to the storage address and encodes the video data according to the encoding reference data. In this embodiment, because the storage address of the encoding reference data of the video data to be currently encoded is sent to a processor core, each processor core can acquire its encoding reference data and encode its video data accordingly; the processor cores can therefore encode video data in parallel, meeting the high-resolution and real-time requirements of the video signal.
Drawings
In order to more clearly illustrate the embodiments of the present invention or the technical solutions in the prior art, the drawings used in the description of the embodiments or the prior art will be briefly described below, and it is obvious that the drawings in the following description are some embodiments of the present invention, and those skilled in the art can also obtain other drawings according to the drawings without creative efforts.
FIG. 1 is a schematic architectural diagram of an unmanned flight system according to an embodiment of the invention;
fig. 2 is a flowchart of a video encoding method according to an embodiment of the present invention;
FIG. 3 is a diagram of an encoding architecture according to an embodiment of the present invention;
fig. 4 is a schematic structural diagram of a video encoding apparatus according to an embodiment of the present invention;
fig. 5 is a schematic structural diagram of a movable platform according to an embodiment of the present invention.
Detailed Description
In order to make the objects, technical solutions and advantages of the embodiments of the present invention clearer, the technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the drawings in the embodiments of the present invention, and it is obvious that the described embodiments are some, but not all, embodiments of the present invention. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
It will be understood that when an element is referred to as being "secured to" another element, it can be directly on the other element or intervening elements may also be present. When a component is referred to as being "connected" to another component, it can be directly connected to the other component or intervening components may also be present.
Unless defined otherwise, all technical and scientific terms used herein have the same meaning as commonly understood by one of ordinary skill in the art to which this invention belongs. The terminology used in the description of the invention herein is for the purpose of describing particular embodiments only and is not intended to be limiting of the invention. As used herein, the term "and/or" includes any and all combinations of one or more of the associated listed items.
Some embodiments of the invention are described in detail below with reference to the accompanying drawings. The embodiments described below and the features of the embodiments can be combined with each other without conflict.
The embodiment of the invention provides a video coding method, a video coding device and a movable platform. The movable platform may be, for example, a drone, an automobile, a robot, a handheld electronic device, or the like. The handheld electronic device is, for example, a mobile phone, a tablet computer, a notebook computer, a wearable device, or another terminal device. The drone may be, for example, a rotorcraft, such as a multi-rotor aircraft propelled through the air by a plurality of propulsion devices; embodiments of the invention are not limited in this regard.
FIG. 1 is a schematic architectural diagram of an unmanned flight system according to an embodiment of the invention. The present embodiment is described by taking a rotor unmanned aerial vehicle as an example.
The unmanned flight system 100 can include a drone 110, a display device 130, and a control terminal 140. The drone 110 may include, among other things, a power system 150, a flight control system 160, a frame, and a pan-tilt 120 carried on the frame. The drone 110 may be in wireless communication with the control terminal 140 and the display device 130.
The airframe may include a fuselage and a foot rest (also referred to as a landing gear). The fuselage may include a central frame and one or more arms connected to the central frame, the one or more arms extending radially from the central frame. The foot rest is connected to the fuselage and supports the drone 110 when it lands.
The power system 150 may include one or more electronic governors (i.e., electronic speed controllers) 151, one or more propellers 153, and one or more motors 152 corresponding to the one or more propellers 153, wherein a motor 152 is connected between an electronic governor 151 and a propeller 153, and the motors 152 and propellers 153 are disposed on the arms of the drone 110. The electronic governor 151 is configured to receive a drive signal generated by the flight control system 160 and provide a drive current to the motor 152 based on the drive signal to control the rotational speed of the motor 152. The motors 152 drive the propellers in rotation, thereby providing power for the flight of the drone 110; this power enables the drone 110 to achieve one or more degrees of freedom of motion. In certain embodiments, the drone 110 may rotate about one or more axes of rotation. For example, these rotation axes may include a roll axis (Roll), a yaw axis (Yaw) and a pitch axis (Pitch). It should be understood that the motor 152 may be a DC motor or an AC motor, and may be a brushless motor or a brushed motor.
The pan/tilt head 120 may include a motor 122 and is used to carry the photographing device 123. The flight controller 161 may control the movement of the pan/tilt head 120 via the motor 122. Optionally, as another embodiment, the pan/tilt head 120 may further include a controller for controlling its movement by controlling the motor 122. It should be understood that the pan/tilt head 120 may be separate from the drone 110 or may be part of the drone 110, and that the motor 122 may be a DC motor or an AC motor, brushless or brushed. It should also be understood that the pan/tilt head may be located at the top of the drone or at the bottom of the drone.
The photographing device 123 may be, for example, a device for capturing images such as a camera or a video camera; it may communicate with the flight controller and perform photographing under the control of the flight controller. The photographing device 123 of this embodiment at least includes a photosensitive element, such as a Complementary Metal Oxide Semiconductor (CMOS) sensor or a Charge-Coupled Device (CCD) sensor. It can be understood that the photographing device 123 may also be fixed directly to the drone 110, in which case the pan/tilt head 120 may be omitted.
The display device 130 is located at the ground end of the unmanned aerial vehicle system 100, can communicate with the unmanned aerial vehicle 110 in a wireless manner, and can be used for displaying attitude information of the unmanned aerial vehicle 110. In addition, an image photographed by the photographing device may also be displayed on the display apparatus 130. It should be understood that the display device 130 may be a stand-alone device or may be integrated into the control terminal 140.
The control terminal 140 is located at the ground end of the unmanned aerial vehicle system 100, and can communicate with the unmanned aerial vehicle 110 in a wireless manner, so as to remotely control the unmanned aerial vehicle 110.
It should be understood that the above-mentioned nomenclature for the components of the unmanned flight system is for identification purposes only, and should not be construed as limiting embodiments of the present invention.
Fig. 2 is a flowchart of a video encoding method according to an embodiment of the present invention, and as shown in fig. 2, the method of this embodiment may include:
s201, obtaining the current video data to be coded.
In this embodiment, video data to be currently encoded is obtained, where the video data may be video data acquired by an imaging device.
S202, determining a current idle processor core for encoding the video data from a multi-core processor.
The multi-core processor in this embodiment may be configured to encode video data. The multi-core processor includes a plurality of processor cores, each of which may be configured to encode video data, and more than one processor core may encode video data at the same time. At any given moment, some of the processor cores may be in an occupied state while others are in an idle state. Therefore, to ensure encoding efficiency, a processor core in the idle state is used to encode the video data to be encoded; the currently idle processor core for encoding the video data is thus determined from the plurality of processor cores of the multi-core processor. Optionally, if there are multiple currently idle processor cores, one of them may be chosen at random to encode the video data. Or, optionally, each of the plurality of processor cores is provided with an identifier, and if there are multiple currently idle processor cores, the processor core whose identifier has the smallest value may be selected from among them.
Optionally, in this embodiment, the idle indication of each processor core may be obtained: if the idle indication of a processor core is 1, that processor core is currently idle; if the idle indication is 0, that processor core is currently occupied.
Optionally, after the currently idle processor core for encoding the video data is determined from the multi-core processor, the determined processor core is set to an occupied state, for example by setting its idle indication to 0, so that the processor core is not allocated to other video data for encoding and encoding conflicts are avoided.
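The idle-indication scheme described above can be sketched in a few lines. This is an illustrative model only; the function and variable names are not from the patent:

```python
# Minimal sketch of idle-core selection: each core exposes an idle flag
# (1 = idle, 0 = occupied); the scheduler picks the idle core with the
# smallest identifier and immediately marks it occupied so the same core
# is not allocated to other video data.

def pick_idle_core(idle_flags):
    """idle_flags: dict mapping core id -> 1 (idle) or 0 (occupied).
    Returns the chosen core id, or None if every core is busy."""
    idle = [core_id for core_id, flag in sorted(idle_flags.items()) if flag == 1]
    if not idle:
        return None                 # no core available; caller must wait
    chosen = idle[0]                # smallest identifier among idle cores
    idle_flags[chosen] = 0          # mark occupied before dispatching work
    return chosen

flags = {0: 0, 1: 1, 2: 1}
print(pick_idle_core(flags))        # core 1 (smallest idle id)
print(flags)                        # {0: 0, 1: 0, 2: 1}
```

Marking the core occupied inside the selection step, before any work is dispatched, is what prevents the double-allocation conflict the paragraph above warns about.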
S203, according to the current video data to be coded, obtaining a storage address of coding reference data of the current video data to be coded.
In this embodiment, after the video data to be currently encoded is obtained, since a plurality of processor cores encode video data in parallel, other data needs to be referred to when the video data to be currently encoded is encoded; this data is referred to as the encoding reference data of the video data to be currently encoded. Therefore, the present embodiment obtains the storage address of the encoding reference data of the video data to be currently encoded according to the video data to be currently encoded.
And S204, sending the storage address to the determined processor core so that the processor core can acquire the coding reference data according to the storage address and code the video data according to the coding reference data.
In this embodiment, after the storage address of the encoding reference data is obtained, the storage address is sent to the processor core determined in step S202. After the processor core receives the storage address, it acquires the encoding reference data of the video data to be currently encoded according to the storage address, and then encodes the video data according to the encoding reference data. Optionally, after the processor core finishes encoding the video data according to the encoding reference data, the processor core is set to an idle state, for example by setting its idle indication to 1, so that the processor core can promptly be allocated to encode other video data.
In this embodiment, the video data to be currently encoded is acquired; a currently idle processor core for encoding the video data is determined from the multi-core processor; a storage address of the encoding reference data of the video data to be currently encoded is acquired according to that video data; and the storage address is sent to the determined processor core, so that the processor core acquires the encoding reference data according to the storage address and encodes the video data according to the encoding reference data. Because the storage address of the encoding reference data is sent to a processor core, each processor core can acquire its encoding reference data and encode its video data accordingly; the processor cores thus encode video data in parallel, meeting the high-resolution and real-time requirements of the video signal.
In some embodiments, the encoding reference data of the video data to be currently encoded comprises: encoded data of video data adjacent to the video data to be currently encoded. The processor core of this embodiment can encode the video data to be currently encoded according to the encoded data of its adjacent video data, which ensures the accuracy of parallel encoding of the video data. For example, if the video data to be currently encoded is the nth frame, the adjacent video data is the (n-1)th frame.
In some embodiments, after the processor core encodes the video data according to the encoding reference data, a storage address of the encoded data of that video data is obtained, and the encoded data is used as the encoding reference data of other video data. After the processor core encodes the video data to be currently encoded according to the encoding reference data, the encoded data of the video data is stored in a memory; this embodiment can then acquire the storage address of the encoded data and use the encoded data of the video data as the encoding reference data of another piece of video data. For example, if the video data is the nth frame, then after the processor core encodes the nth frame, the encoded data of the nth frame is stored in a memory and its storage address is acquired. When the (n+1)th frame needs to be encoded, the encoded data of the nth frame serves as the encoding reference data of the (n+1)th frame, since encoding the (n+1)th frame needs to refer to it.
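The chaining of reference data described above (frame n's output becomes frame n+1's reference) amounts to keeping a table from frame index to storage address. The sketch below is purely illustrative; the class name and the example address are hypothetical:

```python
# Sketch of a frame-index -> storage-address table: after a core finishes
# frame n and reports where it wrote the encoded/reconstructed data, that
# address is handed to the core that encodes frame n+1 as its reference.

class ReferenceTable:
    def __init__(self):
        self._addr = {}                      # frame index -> storage address

    def record(self, frame_idx, address):
        """Called after a core finishes frame_idx and reports where it wrote."""
        self._addr[frame_idx] = address

    def reference_address(self, frame_idx):
        """Address of the reference data for frame_idx (the previous frame)."""
        if frame_idx == 0:
            return None                      # first frame: no reference needed
        return self._addr.get(frame_idx - 1) # None if reference not ready yet

refs = ReferenceTable()
refs.record(0, 0x80000000)                   # frame 0 written out (address is made up)
print(hex(refs.reference_address(1)))        # 0x80000000
```

Returning `None` when the previous frame's address has not been recorded yet mirrors the patent's point that a core must wait until its reference data is actually available before encoding.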
The storage address of the encoded data of the video data may be an address in off-chip storage. After obtaining the encoded data of the video data, the processor core stores it in an off-chip memory, which may be, for example, a Synchronous Dynamic Random Access Memory (SDRAM). Alternatively,
the storage address of the encoded data of the video data may be: an address stored in a memory within a processor core that encodes the video data. After acquiring the encoded Data of the video Data, the processor core stores the encoded Data of the video Data into a Memory in the processor core, where the Memory in the processor core may be, for example, a Static Random-Access Memory (SRAM) or a Double Data Rate (DDR SRAM).
In some embodiments, the encoding reference data comprises: reference data that each processor core in the multi-core processor needs when encoding video data. This reference data is used by every processor core when encoding video data and is shared among the processor cores.
Optionally, the storage address of the reference data is an address in a memory shared by the plurality of processor cores. The reference data may be stored in a memory shared by all processor cores in the multi-core processor; this memory is external to the processor cores. Optionally, the memory shared by the plurality of processor cores is an SRAM located outside the plurality of processor cores.
In some embodiments, the encoding cooperation mode among the processor cores in the multi-core processor can also be acquired, and the encoding configuration parameters corresponding to that cooperation mode are sent to the processor core, so that the processor core encodes the video data according to the encoding configuration parameters and the encoding reference data. In this embodiment there are various encoding cooperation modes. For example, one cooperation mode indicates that the multi-core processor encodes multiple frames of video data; another indicates that the multi-core processor encodes multiple vertical stripes within the same frame of video data; and still another indicates that the multi-core processor encodes multiple horizontal stripes within the same frame of video data. This embodiment is not limited to these modes. After the current encoding cooperation mode among the processor cores in the multi-core processor is obtained, the encoding configuration parameters corresponding to the current cooperation mode are sent to each processor core in the multi-core processor; each processor core can then encode its video data according to the encoding configuration parameters and the encoding reference data of that video data.
In some embodiments, if the coding cooperation mode indicates that the multi-core processor is used for coding multiple frames of video data, the video data to be currently coded is a frame of video data. The neighboring video data of the video data to be currently encoded is, for example, video data of a neighboring frame.
In some embodiments, if the encoding cooperation mode indicates that the multi-core processor is used to encode multiple vertical slices in the same frame of video data, the video data to be currently encoded is a vertical slice in a frame of video data. The neighboring video data of the video data to be currently encoded is, for example, neighboring vertical slices in the same frame of video data.
In some embodiments, if the encoding cooperation mode indicates that the multi-core processor is used to encode multiple horizontal slices in the same frame of video data, the video data to be currently encoded is a horizontal slice in a frame of video data. The neighboring video data of the video data to be currently encoded is, for example, a neighboring horizontal slice in the same frame of video data.
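The three cooperation modes above differ only in how a work unit is cut out of the video. A toy sketch of that split follows; the enum names and the even tiling are illustrative assumptions, not the patent's implementation:

```python
# Hypothetical work-splitting for the three cooperation modes: whole frames,
# vertical stripes of one frame, or horizontal stripes of one frame.
from enum import Enum

class CoopMode(Enum):
    MULTI_FRAME = 1        # each core encodes a whole frame
    VERTICAL_STRIPES = 2   # each core encodes a vertical stripe of one frame
    HORIZONTAL_STRIPES = 3 # each core encodes a horizontal stripe of one frame

def work_units(mode, width, height, n_cores):
    """Return one (x, y, w, h) region per core for a width x height frame."""
    if mode is CoopMode.MULTI_FRAME:
        return [(0, 0, width, height)] * n_cores   # one full frame per core
    if mode is CoopMode.VERTICAL_STRIPES:
        w = width // n_cores                       # assumes width divides evenly
        return [(i * w, 0, w, height) for i in range(n_cores)]
    if mode is CoopMode.HORIZONTAL_STRIPES:
        h = height // n_cores                      # assumes height divides evenly
        return [(0, i * h, width, h) for i in range(n_cores)]

print(work_units(CoopMode.VERTICAL_STRIPES, 1920, 1080, 3))
# [(0, 0, 640, 1080), (640, 0, 640, 1080), (1280, 0, 640, 1080)]
```

In a real encoder the stripe boundaries would align to coding-block (CTB) boundaries rather than arbitrary pixel columns; the even split here is only to make the mode distinction concrete.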
The scheme of the embodiment of the present invention is described below by taking a High Efficiency Video Coding (HEVC) encoder as an example. As shown in fig. 3, the multi-core encoder (Encoder multi-core wrapper) includes a multi-core scheduler (Multicore Scheduler) and 3 encoder cores (i.e., the processor cores in the above embodiments); 2 or more cores may be instantiated, and the number of encoder cores is not limited in this embodiment. Cooperation among the multiple encoder cores is synchronized and scheduled by the multi-core scheduling device.
Each encoder core internally comprises: a functional logic circuit (LOGIC CORE), a bus access module (an AXI interface), a configuration module (Configuration, consisting of SW REG (software registers) and an AHB slave interface), and an internal storage unit (e.g., SRAM, i.e., the memory inside the processor core in the above embodiments).
The multi-core scheduling device comprises the following modules: a Synchronizer, a Scheduler, a Shared Controller, and a shared memory (e.g., SRAM, i.e., the memory shared by the plurality of processor cores in the above embodiments). Each module is described below.
Synchronizer: data that multiple encoder cores need to access from one another can be written to an off-chip memory (the SDRAM in fig. 3), and the coordinates or address information of the written data is passed through the synchronizer to the encoder core that needs to access it. The function of the synchronizer is to synchronize coordinate or address information among multiple encoder cores, such as the coordinates of encoded reference frame data and encoded reference motion vector data (i.e., the storage address of the encoding reference data in the above embodiments). The synchronizer can be connected to any encoder core through valid & ready standard handshake interfaces, and internally any two encoder cores are gated through a MUX and DEMUX for interactive communication. The exchanged data is buffered in a FIFO so that none is lost. Under valid & ready handshake timing, data is stored into the FIFO or output to the downstream module when vld and rdy are simultaneously active (i.e., active high). For example, if the coded sequence is a P sequence, encoder core 0 encodes the nth frame and encoder core 1 encodes the (n+1)th frame, and the reconstructed data written by encoder core 0 serves as the reference data for motion estimation in encoder core 1. Encoder core 0 must therefore send, through the synchronizer, the off-chip position to which it wrote the reconstructed data to encoder core 1, guaranteeing that the reference data read by encoder core 1 has always already been written to the off-chip memory by encoder core 0. After receiving the position of the reconstructed data (i.e., the position of the reference data) through the synchronizer, encoder core 1 checks it and starts motion estimation only after the required reference data is ready.
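The valid & ready handshake can be modelled cycle by cycle in software. The following is a toy model for illustration only (the cycle list and data values are made up), not a description of the actual hardware:

```python
# Toy model of the synchronizer's valid & ready handshake: a word moves
# into the FIFO only on cycles where the sender asserts vld and the
# receiver asserts rdy at the same time (both active high).
from collections import deque

def handshake_transfer(events, fifo_depth=4):
    """events: list of (vld, rdy, data) per cycle. Returns the FIFO
    contents after replaying the cycles."""
    fifo = deque()
    for vld, rdy, data in events:
        if vld and rdy and len(fifo) < fifo_depth:
            fifo.append(data)        # both sides active: word is accepted
        # otherwise the sender must hold vld and data until rdy is seen
    return list(fifo)

cycles = [(1, 0, "addr0"),  # receiver not ready: no transfer
          (1, 1, "addr0"),  # handshake completes, addr0 enters the FIFO
          (0, 1, None),     # nothing valid this cycle
          (1, 1, "addr1")]  # second transfer
print(handshake_transfer(cycles))    # ['addr0', 'addr1']
```

Buffering in the FIFO is what lets the producing core move on while the consuming core is still busy, which is exactly why the synchronizer stores the exchanged coordinates in a FIFO.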
Scheduler: schedules cooperative encoding among multiple encoder cores. Software only needs to write the encoding configuration parameters of the frame to be encoded into the chip and select an encoding cooperation mode; the scheduler forwards the encoding configuration parameters to the configuration modules in the encoder cores according to the cooperation mode and starts encoding.
Encoding cooperation mode 1: multiple encoder cores encode multiple frames simultaneously, one frame per core. Dependencies on reference data and motion vectors exist among the frames, and the synchronization of reference data and motion vectors among the encoder cores (i.e., among the frames) is completed through the synchronizer.
Encoding cooperation mode 2: multiple encoder cores simultaneously encode one frame, each core encoding one horizontal stripe (tile) or vertical stripe (slice). Among the encoder cores (i.e., among the slices or tiles) there are dependencies on adjacent data for intra prediction and on adjacent data for filtering; this data is shared among the encoder cores (the slices or tiles) through the shared controller. Entropy coding is performed independently to generate independent streams.
Encoding cooperation mode 3: multiple encoder cores encode one frame simultaneously, each encoding a row of CTBs; the data dependencies between CTB encodings (as in cooperation mode 2) are shared through the shared controller.
Scheduler operation is not limited to the above three cooperation modes. Internally, the scheduler mainly manages: distributing and controlling software configuration data, establishing synchronization information channels among multiple encoder cores, and controlling the encoder cores to start and end encoding. The scheduler is controlled by different state machines according to the different cooperation modes.
Taking encoding cooperation mode 1 as an example, the scheduler has the following variables:
core_amount: the number of encoder cores, which is 3 in this embodiment;
core_x_idle: the idle indication of encoder core x; when it is 1, encoder core x is idle;
core_x_curr_buf_id: the buffer id of the frame to be encoded by encoder core x;
core_x_ref_buf_id: the reference frame buffer id of encoder core x.
An example of the encoding process is:
1. frame n: prepare the parameters of frame n (the nth frame) to be encoded;
2. wait frame end interrupt: wait for a frame-end encoding interrupt;
3. core_x_idle determination: find an idle encoder core; when several cores are idle, select the core with the smallest x, and set the selected core's core_x_idle to 0 after selection;
4. core_x_curr_buf_id: set core_x_curr_buf_id to n;
5. core_x_ref_buf_id: set core_x_ref_buf_id to n-1 (or 0), selecting the reconstructed frame of the previous frame as the reference;
6. setup AHB channel to core x: establish the AHB write channel to encoder core x;
7. frame n parameter write into core x: configure the parameters to encoder core x;
8. start core x encoder: start encoder core x and begin encoding;
9. setup core x and y synchronization channel: establish the synchronization information channel between encoder core x, which encodes the current frame, and encoder core y, which encoded the reference frame;
10. core x frame n encoder finish: encoder core x finishes encoding frame n;
11. core_x_idle set: set core_x_idle to 1 and report the frame-end interrupt at the same time;
12. frame n done: encoding of frame n ends.
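Steps 3-5 above can be sketched in Python as follows. The function names are hypothetical, and plain dictionaries stand in for the core_x_idle, core_x_curr_buf_id, and core_x_ref_buf_id registers:

```python
def pick_idle_core(idle):
    """Step 3: choose the idle core with the smallest index and mark it busy.

    `idle` maps core index x -> bool (True means core_x_idle == 1)."""
    for x in sorted(idle):
        if idle[x]:
            idle[x] = False   # core_x_idle = 0 after selection
            return x
    return None               # no idle core

def schedule_frame(n, idle, curr_buf, ref_buf):
    """Steps 3-5 of the mode-1 scheduling flow for frame n."""
    x = pick_idle_core(idle)
    if x is None:
        return None                       # all cores busy: wait for frame-end interrupt
    curr_buf[x] = n                       # step 4: core_x_curr_buf_id = n
    ref_buf[x] = n - 1 if n > 0 else 0    # step 5: reference the previous reconstructed frame
    return x
```

After this, the scheduler would set up the AHB channel, write the parameters, and start core x (steps 6-8).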
Shared controller: data that multiple encoder cores need to access from one another can be written into the off-chip memory, with the coordinate information of the written data delivered through the synchronizer to the encoder cores that need to access it. The shared controller can also be used for direct access between encoder cores. In encoding cooperation mode 2, multiple encoder cores simultaneously encode different tiles/slices of one frame, and during tile/slice encoding both deblocking and SAO filtering need to reference the pixels adjacent to the tile/slice boundary. Therefore, with the shared controller, after encoder core 0 finishes encoding tile 0, it buffers the data on the right edge of tile 0 in the SRAM inside encoder core 0 and sends a ready signal to the shared controller. Encoder core 1 encodes tile 1, which is adjacent to tile 0; when encoding the pixels on the left side of tile 1 (during deblocking and SAO filtering, or during intra prediction), encoder core 1 needs to reference the pixels of tile 0, so it sends a request signal to the shared controller. After receiving the ready signal from encoder core 0 and the request signal from encoder core 1, the shared controller connects the SRAM read control signals of encoder core 0 to encoder core 1, establishing a direct connection, and sends an establishment-success signal to encoder core 0 and encoder core 1. After the two cores receive the connection-success signal, encoder core 1 directly accesses the SRAM in encoder core 0. When encoder core 1 has finished accessing the SRAM of encoder core 0, it signals the shared controller, which cancels the direct connection and releases the SRAM of encoder core 0.
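The ready/request handshake above can be modeled as a small state machine. This is an illustrative Python sketch of the protocol's bookkeeping, not the hardware design; the class and method names are assumptions:

```python
class SharedController:
    """Lends one core's internal SRAM to another via a ready/request handshake."""

    def __init__(self):
        self.ready = set()     # cores whose boundary data is ready in their SRAM
        self.requests = {}     # requester core -> owner core it wants to read
        self.links = {}        # requester -> owner, while a direct connection is up

    def signal_ready(self, owner):
        """Owner (e.g. core 0 after tile 0) announces its edge data is buffered."""
        self.ready.add(owner)
        self._try_connect()

    def request(self, requester, owner):
        """Requester (e.g. core 1 for tile 1's left edge) asks to read owner's SRAM."""
        self.requests[requester] = owner
        self._try_connect()

    def _try_connect(self):
        # connect only once both the ready and the request signals have arrived
        for requester, owner in list(self.requests.items()):
            if owner in self.ready:
                self.links[requester] = owner   # "connection success" to both cores
                del self.requests[requester]

    def done(self, requester):
        """Requester finished reading: tear down the link, release the owner's SRAM."""
        owner = self.links.pop(requester)
        self.ready.discard(owner)
```

Note the handshake is order-independent: whichever of the ready and request signals arrives first simply waits for the other.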
Shared memory: besides sharing data through the off-chip memory or through a direct connection (i.e., the SRAM inside an encoder core), data can also be shared through a shared memory, which is integrated into the multi-core scheduling device. Data that multiple encoder cores need to access can be placed in the shared memory during encoding.
Taking the rate-control parameters as an example, software stores the rate-control parameters directly in the shared memory, and the encoder cores access the shared memory through the access arbiter.
The shared memory access arbiter arbitrates in a request-and-response manner as follows: when multiple encoder cores need to access the rate-control parameters, each encoder core sends an access request to the shared memory access arbiter; the arbiter receives the requests, orders them on a first-come, first-served basis, and responds to the earliest request first. An encoder core may access the shared memory once it receives the response. After an encoder core finishes accessing the shared memory, it sends a request-end signal to the arbiter; on receiving the request-end signal, the arbiter revokes that core's access right and then responds to the next request.
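The first-come, first-served arbitration can be sketched with a simple queue. `SharedMemoryArbiter` and its method names are hypothetical illustrations of the request/grant/release cycle described above:

```python
from collections import deque

class SharedMemoryArbiter:
    """First-come, first-served arbiter for the shared (rate-control) memory."""

    def __init__(self):
        self.queue = deque()   # pending cores, in arrival order
        self.owner = None      # core currently granted access, if any

    def request(self, core):
        """A core asks to access the shared memory."""
        self.queue.append(core)
        self._grant()

    def release(self, core):
        """The granted core sends its request-end signal."""
        assert core == self.owner, "only the granted core may end its request"
        self.owner = None      # revoke the access right
        self._grant()          # then respond to the next queued request

    def _grant(self):
        # respond to the earliest request if the memory is free
        if self.owner is None and self.queue:
            self.owner = self.queue.popleft()
```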
Therefore, based on the above scheme, embodiments of the present invention provide a general multi-core implementation architecture, a data-dependency synchronization method and architecture under that multi-core architecture, a method for sharing storage resources under that multi-core architecture, and a general multi-core scheduling control method and architecture.
An embodiment of the present invention further provides a computer storage medium in which program instructions are stored; when the program is executed, some or all of the steps of the video encoding method in fig. 2 and its corresponding embodiments may be performed.
Fig. 4 is a schematic structural diagram of a video encoding apparatus according to an embodiment of the present invention. As shown in fig. 4, the video encoding apparatus 400 of this embodiment may include a multi-core scheduling device 401 and a plurality of processor cores 402.
The multi-core scheduling device 401 is configured to: obtain the video data currently to be encoded; determine, from the plurality of processor cores 402, a currently idle processor core 402 for encoding the video data;
obtain, according to the video data currently to be encoded, the storage address of the encoding reference data of the video data currently to be encoded;
and send the storage address to the determined processor core 402, so that the processor core 402 obtains the encoding reference data according to the storage address and encodes the video data according to the encoding reference data.
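The dispatch flow of the scheduling device, i.e. pick an idle core, resolve the reference-data storage address, hand both to the core, can be sketched as follows; `encode_one` and its parameter names are hypothetical:

```python
def encode_one(item, idle_cores, ref_addr_of):
    """One pass of the dispatch flow.

    item       -- the unit of video data currently to be encoded
    idle_cores -- dict: core index -> True if that core is idle
    ref_addr_of-- callable resolving the storage address of the item's
                  encoding reference data (hypothetical lookup)"""
    free = [c for c, is_free in idle_cores.items() if is_free]
    if not free:
        return None                       # no idle core: the caller waits
    core = min(free)                      # pick the idle core with the smallest index
    idle_cores[core] = False              # mark the chosen core occupied
    # the core would fetch the reference data from this address and encode
    return {"core": core, "item": item, "ref_addr": ref_addr_of(item)}
```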
In some embodiments, the encoding reference data comprises: encoded data of video data adjacent to the video data currently to be encoded.
In some embodiments, the processor core 402 is further configured to store the encoded data of the video data after encoding the video data according to the encoding reference data;
the multi-core scheduling device 401 is further configured to obtain the storage address of the encoded data of the video data and use the encoded data of the video data as the encoding reference data of other video data.
In some embodiments, the processor core 402 includes a memory 4021, and the storage address of the encoded data of the video data is an address in the memory 4021 within the processor core 402 that encodes the video data; alternatively,
the storage address of the encoded data of the video data is an address in the off-chip memory 403. The off-chip memory 403 may be a memory outside the video encoding apparatus 400 or a memory inside the video encoding apparatus 400. For example, if the video encoding apparatus 400 is a chip or a multi-core processor of a chip, the off-chip memory 403 is a memory outside the video encoding apparatus 400. Fig. 4 illustrates the case where the off-chip memory 403 is a memory outside the video encoding apparatus 400, but this embodiment is not limited thereto.
In some embodiments, the off-chip memory 403 is SDRAM.
In some embodiments, the memory 4021 within the processor core 402 is an SRAM.
In some embodiments, the encoding reference data comprises: reference data needed by each processor core in the plurality of processor cores when encoding video data.
In some embodiments, the video encoding apparatus 400 further comprises: a memory 404 shared by the plurality of processor cores 402;
the storage address of the reference data is an address in the memory shared by the plurality of processor cores 402.
The memory 404 shared by the plurality of processor cores 402 may be a memory outside the multi-core scheduling device 401 or a memory inside the multi-core scheduling device 401, and fig. 4 illustrates an example in which the memory 404 shared by the plurality of processor cores 402 is a memory inside the multi-core scheduling device 401, which is not limited in this embodiment.
In some embodiments, the memory 404 shared by the plurality of processor cores 402 is SRAM located outside the plurality of processor cores 402.
In some embodiments, the multi-core scheduling device 401 is further configured to: obtain the encoding cooperation mode among the processor cores 402 in the plurality of processor cores 402, and send the encoding configuration parameters corresponding to the encoding cooperation mode to the processor cores 402;
the processor core 402 is configured to encode the video data according to the encoding configuration parameters and the encoding reference data.
In some embodiments, the video data currently to be encoded is a frame of video data if the encoding cooperation mode indicates that multiple processor cores 402 are used to encode multiple frames of video data.
In some embodiments, if the encoding cooperation mode indicates that multiple processor cores 402 are used to encode multiple vertical slices in the same frame of video data, then the video data currently to be encoded is a vertical slice in a frame of video data.
In some embodiments, the video data currently to be encoded is a horizontal slice in a frame of video data if the encoding cooperation mode indicates that multiple processor cores 402 are used to encode multiple horizontal slices in the same frame of video data.
In some embodiments, the multi-core scheduling device 401 is further configured to set a processor core to the occupied state after determining it from the plurality of processor cores 402 as the currently idle processor core for encoding the video data, and to set the processor core 402 back to the idle state after the processor core 402 finishes encoding the video data according to the encoding reference data.
The video encoding device of this embodiment may be configured to execute the technical solutions in the above method embodiments, and the implementation principle and the technical effect are similar, which are not described herein again.
Fig. 5 is a schematic structural diagram of a movable platform according to an embodiment of the present invention, and as shown in fig. 5, the movable platform 500 of the embodiment may include: an imaging device 501 and a video encoding apparatus 502. The imaging device 501 is configured to acquire video data. The video encoding apparatus 502 is used to encode video data collected by the imaging device 501. For example, the video encoding device 502 may adopt the structure of the embodiment shown in fig. 4, and accordingly, the technical solutions in the above method embodiments may be implemented, and the implementation principle and the technical effect are similar, which are not described herein again.
Those of ordinary skill in the art will understand that all or part of the steps of the method embodiments may be performed by program instructions executing on relevant hardware. The program may be stored in a computer-readable storage medium and, when executed, performs the steps of the method embodiments. The aforementioned storage medium includes various media capable of storing program code, such as read-only memory (ROM), random access memory (RAM), magnetic disks, and optical disks.
Finally, it should be noted that the above embodiments are only used to illustrate the technical solutions of the present invention, not to limit them. Although the invention has been described in detail with reference to the foregoing embodiments, those skilled in the art will understand that the technical solutions described in the foregoing embodiments may still be modified, or some or all of their technical features may be equivalently replaced, and such modifications or substitutions do not cause the essence of the corresponding technical solutions to depart from the scope of the technical solutions of the embodiments of the present invention.
Claims (30)
1. A video encoding method, comprising:
acquiring video data currently to be encoded;
determining a currently idle processor core for encoding the video data from a multi-core processor;
acquiring, according to the video data currently to be encoded, a storage address of encoding reference data of the video data currently to be encoded;
and sending the storage address to the determined processor core, so that the processor core acquires the encoding reference data according to the storage address and encodes the video data according to the encoding reference data.
2. The method of claim 1, wherein the encoding reference data comprises: encoded data of video data adjacent to the video data currently to be encoded.
3. The method of claim 1 or 2, further comprising:
after the processor core encodes the video data according to the encoding reference data, acquiring a storage address of the encoded data of the video data, and using the encoded data of the video data as encoding reference data of another video data.
4. The method of claim 2 or 3, wherein the storage address of the encoded data of the video data is: an address in an off-chip memory, or an address in a memory within the processor core that encodes the video data.
5. The method of claim 4, wherein the off-chip memory is a Synchronous Dynamic Random Access Memory (SDRAM).
6. The method of claim 4, wherein the memory within the processor core is a Static Random Access Memory (SRAM).
7. The method according to any of claims 1-6, wherein the encoding reference data comprises: reference data needed by each processor core in the multi-core processor when encoding video data.
8. The method of claim 7, wherein the storage address of the reference data is an address in a memory shared by the plurality of processor cores.
9. The method of claim 8, wherein the memory shared by the plurality of processor cores is an SRAM located outside of the plurality of processor cores.
10. The method according to any one of claims 1-9, further comprising:
acquiring a coding cooperation mode among processor cores in the multi-core processor;
and sending the encoding configuration parameters corresponding to the encoding cooperation mode to the processor core, so that the processor core encodes the video data according to the encoding configuration parameters and the encoding reference data.
11. The method of claim 10, wherein the video data to be currently encoded is a frame of video data if the encoding collaboration mode indicates that the multi-core processor is configured to encode multiple frames of video data.
12. The method of claim 10, wherein if the encoding collaboration mode indicates that the multi-core processor is configured to encode multiple vertical slices in the same frame of video data, the video data to be currently encoded is a vertical slice in a frame of video data.
13. The method of claim 10, wherein the video data to be currently encoded is a horizontal slice in a frame of video data if the encoding collaboration mode indicates that the multi-core processor is used to encode multiple horizontal slices in the same frame of video data.
14. The method of any of claims 1-13, wherein after determining a currently idle processor core for encoding the video data from the multi-core processor, further comprising:
setting the processor core to an occupied state;
and after the processor core finishes encoding the video data according to the encoding reference data, setting the processor core in an idle state.
15. A video encoding device, comprising: a multi-core scheduling device and a plurality of processor cores;
the multi-core scheduling device is used for acquiring the current video data to be coded; and determining a currently idle processor core for encoding the video data from the plurality of processor cores;
acquiring, according to the video data currently to be encoded, a storage address of encoding reference data of the video data currently to be encoded;
and sending the storage address to the determined processor core, so that the processor core acquires the encoding reference data according to the storage address and encodes the video data according to the encoding reference data.
16. The apparatus of claim 15, wherein the encoding reference data comprises: encoded data of video data adjacent to the video data currently to be encoded.
17. The apparatus according to claim 15 or 16, wherein the processor core is further configured to store encoded data of the video data after encoding the video data according to the encoding reference data;
the multi-core scheduling device is further configured to acquire the storage address of the encoded data of the video data and use the encoded data of the video data as encoding reference data of another video data.
18. The apparatus according to claim 16 or 17, wherein the storage address of the encoded data of the video data is: an address in a memory within the processor core that encodes the video data; or an address in an off-chip memory.
19. The apparatus of claim 18, wherein the off-chip memory is a Synchronous Dynamic Random Access Memory (SDRAM).
20. The apparatus of claim 18, wherein the memory within the processor core is a Static Random Access Memory (SRAM).
21. The apparatus according to any of claims 15-20, wherein the encoding reference data comprises: reference data needed by each processor core in the plurality of processor cores when encoding video data.
22. The apparatus of claim 21, further comprising: a memory shared by the plurality of processor cores;
the memory address of the reference data is an address stored in a memory shared by the plurality of processor cores.
23. The apparatus of claim 22, wherein the memory shared by the plurality of processor cores is an SRAM located outside the plurality of processor cores.
24. The apparatus according to any of claims 15-22, wherein the multi-core scheduling device is further configured to: acquiring a coding cooperation mode among processor cores in the plurality of processor cores; sending the coding configuration parameters corresponding to the coding cooperation mode to the processor core;
and the processor core is used for encoding the video data according to the encoding configuration parameters and the encoding reference data.
25. The apparatus of claim 24, wherein the video data currently to be encoded is a frame of video data if the encoding cooperation mode indicates that multiple processor cores are used to encode multiple frames of video data.
26. The apparatus of claim 24, wherein the video data currently to be encoded is a vertical slice in a frame of video data if the encoding cooperation mode indicates that multiple processor cores are used to encode multiple vertical slices in the same frame of video data.
27. The apparatus of claim 24, wherein the video data currently to be encoded is a horizontal slice in a frame of video data if the encoding cooperation mode indicates that multiple processor cores are used to encode multiple horizontal slices in the same frame of video data.
28. The apparatus according to any of claims 15-27, wherein the multi-core scheduler is further configured to place the processor core in an occupied state after determining a currently idle processor core from a plurality of processor cores for encoding the video data; and setting the processor core in an idle state after the processor core finishes encoding the video data according to the encoding reference data.
29. A movable platform, comprising: an imaging device and a video encoding apparatus as claimed in any one of claims 15 to 28;
the imaging device is used for acquiring video data.
30. A readable storage medium, characterized in that the readable storage medium has stored thereon a computer program; the computer program, when executed, implementing a video encoding method as claimed in any one of claims 1-14.
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
PCT/CN2019/080681 WO2020199050A1 (en) | 2019-03-29 | 2019-03-29 | Video encoding method and device, and movable platform |
Publications (1)
Publication Number | Publication Date |
---|---|
CN111316643A true CN111316643A (en) | 2020-06-19 |
Family
ID=71159512
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201980005411.0A Pending CN111316643A (en) | 2019-03-29 | 2019-03-29 | Video coding method, device and movable platform |
Country Status (2)
Country | Link |
---|---|
CN (1) | CN111316643A (en) |
WO (1) | WO2020199050A1 (en) |
Cited By (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN112235579A (en) * | 2020-09-28 | 2021-01-15 | 深圳市洲明科技股份有限公司 | Video processing method, computer-readable storage medium and electronic device |
CN117395437A (en) * | 2023-12-11 | 2024-01-12 | 沐曦集成电路(南京)有限公司 | Video coding and decoding method, device, equipment and medium based on heterogeneous computation |
Citations (11)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN101267564A (en) * | 2008-04-16 | 2008-09-17 | 中国科学院计算技术研究所 | A multi-processor video coding chip device and method |
CN101466041A (en) * | 2009-01-16 | 2009-06-24 | 清华大学 | Task scheduling method for multi-eyepoint video encode of multi-nuclear processor |
CN103227919A (en) * | 2013-03-29 | 2013-07-31 | 苏州皓泰视频技术有限公司 | Scalable video coding (SVC) method based on multi-core processor Tilera |
CN103634604A (en) * | 2013-12-01 | 2014-03-12 | 北京航空航天大学 | Multi-core DSP (digital signal processor) motion estimation-oriented data prefetching method |
CN104539972A (en) * | 2014-12-08 | 2015-04-22 | 中安消技术有限公司 | Method and device for controlling video parallel decoding in multi-core processor |
US9148670B2 (en) * | 2011-11-30 | 2015-09-29 | Freescale Semiconductor, Inc. | Multi-core decompression of block coded video data |
CN105005546A (en) * | 2015-06-23 | 2015-10-28 | 中国兵器工业集团第二一四研究所苏州研发中心 | Asynchronous AXI bus structure with built-in cross point queue |
CN105263022A (en) * | 2015-09-21 | 2016-01-20 | 山东大学 | Multi-core hybrid storage management method for high efficiency video coding (HEVC) process |
CN106921862A (en) * | 2014-04-22 | 2017-07-04 | 联发科技股份有限公司 | Multi-core decoder system and video encoding/decoding method |
CN108540797A (en) * | 2018-03-23 | 2018-09-14 | 南京邮电大学 | HEVC based on multi-core platform combines WPP coding methods within the frame/frames |
CN109271333A (en) * | 2017-07-17 | 2019-01-25 | 深圳市中兴微电子技术有限公司 | A kind of SRAM control method and controller, control system |
Family Cites Families (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN101272497B (en) * | 2008-05-07 | 2011-06-08 | 北京数码视讯科技股份有限公司 | Video encoding method |
CN103873874B (en) * | 2014-02-19 | 2017-06-06 | 同观科技(深圳)有限公司 | A kind of full search method for estimating based on programmable parallel processor |
- 2019-03-29: WO PCT/CN2019/080681, published as WO2020199050A1, active (Application Filing)
- 2019-03-29: CN 201980005411.0A, published as CN111316643A, active (Pending)
Cited By (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN112235579A (en) * | 2020-09-28 | 2021-01-15 | 深圳市洲明科技股份有限公司 | Video processing method, computer-readable storage medium and electronic device |
CN112235579B (en) * | 2020-09-28 | 2022-09-06 | 深圳市洲明科技股份有限公司 | Video processing method, computer-readable storage medium and electronic device |
CN117395437A (en) * | 2023-12-11 | 2024-01-12 | 沐曦集成电路(南京)有限公司 | Video coding and decoding method, device, equipment and medium based on heterogeneous computation |
CN117395437B (en) * | 2023-12-11 | 2024-04-05 | 沐曦集成电路(南京)有限公司 | Video coding and decoding method, device, equipment and medium based on heterogeneous computation |
Also Published As
Publication number | Publication date |
---|---|
WO2020199050A1 (en) | 2020-10-08 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
JP6885485B2 (en) | Systems and methods for capturing still and / or moving scenes using multiple camera networks | |
US20220247898A1 (en) | Replaceable gimbal camera, aircraft, aircraft system, and gimbal replacement method for aircraft | |
EP3443749B1 (en) | Systems and methods for video processing and display | |
CN107079102B (en) | Focusing method, photographic device and unmanned plane | |
US11140332B2 (en) | Imaging control method, imaging device and unmanned aerial vehicle | |
CN108924520B (en) | Transmission control method, device, controller, shooting equipment and aircraft | |
CN113259717B (en) | Video stream processing method, device, equipment and computer readable storage medium | |
CN111316643A (en) | Video coding method, device and movable platform | |
EP4040783A1 (en) | Imaging device, camera-equipped drone, and mode control method, and program | |
WO2020181494A1 (en) | Parameter synchronization method, image capture apparatus, and movable platform | |
US20190138241A1 (en) | Method and system for storing images | |
CN110945452A (en) | Cloud deck, unmanned aerial vehicle control method, cloud deck and unmanned aerial vehicle | |
CN110383810B (en) | Information processing apparatus, information processing method, and information processing program | |
CN112672133A (en) | Three-dimensional imaging method and device based on unmanned aerial vehicle and computer readable storage medium | |
US20210181769A1 (en) | Movable platform control method, movable platform, terminal device, and system | |
WO2018086131A1 (en) | Data flow scheduling between processors | |
CN113791640A (en) | Image acquisition method and device, aircraft and storage medium | |
CN110770699A (en) | Data instruction processing method, storage chip, storage system and movable platform | |
JP2023157917A (en) | Imaging method | |
US20200319818A1 (en) | System and method for supporting low latency in a movable platform environment | |
CN113632037A (en) | Control method and device for movable platform | |
WO2020237429A1 (en) | Control method for remote control device, and remote control device | |
CN111316576A (en) | Unmanned aerial vehicle communication method and unmanned aerial vehicle | |
CN110786006A (en) | Color temperature adjusting method, control terminal and movable platform | |
US20200334192A1 (en) | Communication method, device, and system |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
RJ01 | Rejection of invention patent application after publication | ||
Application publication date: 20200619 |