CN113573069A - Video encoding and decoding method, device and system and electronic equipment - Google Patents

Video encoding and decoding method, device and system and electronic equipment

Info

Publication number: CN113573069A
Application number: CN202010355957.3A
Authority: CN (China)
Prior art keywords: frame image, video, encoding, image, code stream
Legal status: Pending (the legal status is an assumption and is not a legal conclusion; Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed)
Other languages: Chinese (zh)
Inventor: 林中松
Current assignee: Alibaba Group Holding Ltd (the listed assignee may be inaccurate; Google has not performed a legal analysis)
Original assignee: Alibaba Group Holding Ltd
Application filed by Alibaba Group Holding Ltd
Priority to CN202010355957.3A
Publication of CN113573069A

Classifications

    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N19/00 Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
    • H04N19/10 Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding
    • H04N19/169 Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the coding unit, i.e. the structural portion or semantic portion of the video signal being the object or the subject of the adaptive coding
    • H04N19/17 Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the coding unit, the unit being an image region, e.g. an object
    • H04N19/176 Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the coding unit, the unit being an image region, the region being a block, e.g. a macroblock
    • H04N19/50 Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using predictive coding
    • H04N19/90 Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using coding techniques not provided for in groups H04N19/10-H04N19/85, e.g. fractals
    • H04N19/91 Entropy coding, e.g. variable length coding [VLC] or arithmetic coding
    • H04N21/00 Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/20 Servers specifically adapted for the distribution of content, e.g. VOD servers; Operations thereof
    • H04N21/23 Processing of content or additional data; Elementary server operations; Server middleware
    • H04N21/234 Processing of video elementary streams, e.g. splicing of video streams or manipulating encoded video stream scene graphs
    • H04N21/25 Management operations performed by the server for facilitating the content distribution or administrating data related to end-users or client devices, e.g. end-user or client device authentication, learning user preferences for recommending movies
    • H04N21/262 Content or additional data distribution scheduling, e.g. sending additional data at off-peak times, updating software modules, calculating the carousel transmission frequency, delaying a video stream transmission, generating play-lists
    • H04N21/26291 Content or additional data distribution scheduling for providing content or additional data updates, e.g. updating software modules stored at the client
    • H04N21/40 Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N21/43 Processing of content or additional data, e.g. demultiplexing additional data from a digital video stream; Elementary client operations, e.g. monitoring of home network or synchronising decoder's clock; Client middleware
    • H04N21/44 Processing of video elementary streams, e.g. splicing a video clip retrieved from local storage with an incoming video stream or rendering scenes according to encoded video stream scene graphs
    • H04N21/60 Network structure or processes for video distribution between server and client or between remote clients; Control signalling between clients, server and network components; Transmission of management data between server and client, e.g. sending from server to client commands for recording incoming content stream; Communication details between server and client
    • H04N21/63 Control signaling related to video distribution between client, server and network components; Network processes for video distribution between server and clients or between remote clients, e.g. transmitting basic layer and enhancement layers over different transmission paths, setting up a peer-to-peer communication via Internet between remote STB's; Communication protocols; Addressing
    • H04N21/643 Communication protocols

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Databases & Information Systems (AREA)
  • Compression Or Coding Systems Of Tv Signals (AREA)

Abstract

The present disclosure provides a video encoding and decoding method, device, system, and electronic device. The method comprises: acquiring update information of the current frame image of a video relative to the previous frame image; setting the previous frame image as the reference frame image for encoding the current frame image when the update information indicates that no update has occurred; and encoding the current frame image using the reference frame image to obtain an encoded code stream.

Description

Video encoding and decoding method, device and system and electronic equipment
Technical Field
Embodiments of the present disclosure relate to the field of video processing technologies, and in particular, to a video encoding method, a video decoding method, a video encoding apparatus, a video decoding apparatus, a video playing system, an electronic device, and a computer-readable storage medium.
Background
At present, the complexity of the Android system prevents it from running on thin clients with very limited hardware resources. To solve this problem, the Android system and its applications may be run in containers on a cloud server. For example, an Android video application may run in a container on the cloud server, and the resulting image data and audio data are sent to the client through the Web Real-Time Communication (WebRTC) protocol so that the client can play the video.
Generally, when the cloud server sends the image data and the audio data to the client, it needs to encode them separately and send the encoded audio/video code streams to the client, which then decodes and plays the video. In the prior art, however, image data passes through the conventional video encoding process regardless of whether the current frame image has been updated relative to the previous frame image. Conventional video encoding includes a motion prediction step that searches for a reference frame image with which the frame image to be encoded can be encoded; this step consumes a large amount of computation and therefore increases the encoding cost. In addition, because the number of reference frame images in the reference frame image library is limited, the reference frame images found in this way suffer from low coding efficiency.
Disclosure of Invention
The embodiments of this specification provide a new technical solution for video encoding.
According to a first aspect of the present description, there is provided an embodiment of a video encoding method, the method comprising:
acquiring the update information of the current frame image of the video relative to the previous frame image;
setting the previous frame image as a reference frame image for encoding the current frame image when the update information indicates that no update has occurred;
and coding the current frame image by using the reference frame image to obtain a coded code stream.
Optionally, the method further comprises:
and setting the prediction residual of each macroblock included in the current frame image to 0 when the update information indicates that no update has occurred.
Optionally, the method further comprises:
judging, when the update information indicates that an update has occurred, whether the update information indicates that a partial update has occurred;
in the case that the update information indicates that a partial update has occurred, performing a macroblock alignment operation on the partially updated image area;
and judging whether each macroblock after the alignment operation is located in the partially updated image area, and encoding the macroblocks located inside the partially updated image area and the macroblocks located outside it with different encoding modes, respectively, to obtain an encoded code stream.
Optionally, the method further comprises:
and, in the case that the update information indicates that an update has occurred, marking the updated image area in the current frame image.
Optionally, the method further comprises:
and, in the case that the update information indicates that the update is not a partial update, encoding all macroblocks included in the current frame image to obtain an encoded code stream.
Optionally, the encoding all macro blocks included in the current frame image to obtain an encoded code stream includes:
performing motion prediction on all macro blocks included in the current frame image to obtain a prediction residual error and a motion vector corresponding to each macro block;
and obtaining a coded code stream according to each prediction residual and each motion vector.
Optionally, the encoding the macro block located outside the image area where the partial update occurs to obtain an encoded code stream includes:
setting the previous frame image as a reference frame image for coding a macro block positioned outside the image area where the partial update occurs;
and utilizing the reference frame image to encode the macro blocks positioned outside the image area subjected to the partial updating, and obtaining an encoding code stream corresponding to each macro block positioned outside the image area subjected to the partial updating.
Optionally, the encoding the macro block located outside the image area where the partial update occurs to obtain an encoded code stream, further includes:
and setting the prediction residual error of each macro block positioned outside the image area where the partial update occurs to be 0.
Optionally, the encoding the macro block located in the image area where the partial update occurs to obtain an encoded code stream includes:
performing motion prediction on the macro blocks in the image area where the partial update occurs to obtain a prediction residual error and a motion vector corresponding to each macro block;
and obtaining, according to each prediction residual and motion vector, the encoded code stream corresponding to each macroblock located in the partially updated image area.
Optionally, the method further comprises:
rendering any frame image of the video before encoding it, in the case that a rendering function is enabled;
wherein the current frame image and the previous frame image are both rendered images.
Optionally, the method further comprises: acquiring the device type of the terminal devices for which the present encoding mode is configured;
and sending the encoded code stream to terminal devices belonging to that device type for decoding and playing.
There is also provided, in accordance with a second aspect of the present specification, an embodiment of a video decoding method, the method comprising:
acquiring an encoded code stream obtained by encoding each frame image of a video, wherein encoding any one of the frame images comprises: in the case that the frame image is not updated relative to its previous frame image, encoding the frame image using the previous frame image as the reference frame image;
decoding the coded code stream;
and playing the decoded video.
According to a third aspect of the present description, there is also provided an embodiment of a video encoding apparatus, comprising:
the update information acquisition module is used for acquiring the update information of the current frame image of the video relative to the previous frame image;
a reference frame image obtaining module, configured to set the previous frame image as a reference frame image for encoding the current frame image when the update information indicates that no update has occurred;
and the coding module is used for coding the current frame image by using the reference frame image to obtain a coded code stream.
According to a fourth aspect of the present description, there is also provided an embodiment of a video decoding apparatus, comprising:
the encoded code stream acquiring module is used for acquiring an encoded code stream obtained by encoding each frame image of a video, wherein encoding any one of the frame images comprises: in the case that the frame image is not updated relative to its previous frame image, encoding the frame image using the previous frame image as the reference frame image;
the decoding module is used for decoding the coded code stream;
and the video playing module is used for playing the decoded video.
According to a fifth aspect of the present description, there is also provided an embodiment of an electronic device, comprising:
the video encoding device as described in the third aspect above; or,
the video decoding apparatus according to the fourth aspect above; or,
a processor and a memory for storing instructions for controlling the processor to perform a method according to the first or second aspect above.
There is also provided, in accordance with a sixth aspect of the present specification, an embodiment of a video playback system, the system comprising:
a server comprising a memory and a processor, the memory of the server being used for storing executable commands, and the processor of the server being configured to perform the video encoding method of the first aspect under the control of the executable commands; and
the terminal equipment comprises a memory and a processor, wherein the memory of the terminal equipment is used for storing executable commands; the processor of the terminal device is configured to execute the video decoding method according to the second aspect under the control of the executable command.
According to a seventh aspect of the present description, there is also provided an embodiment of a computer-readable storage medium storing executable instructions that, when executed by a processor, perform the method of the above first or second aspect.
In one embodiment, the problem to be solved is that in existing video coding, even if the current frame image is not updated relative to the previous frame image, motion prediction must still be performed on the current frame image to search out a reference frame image with which it can be encoded, which leads to low coding efficiency and high coding cost. By obtaining the update information of the current frame image of the video relative to the previous frame image, and, in the case that the update information indicates that the current frame image is not updated relative to the previous frame image, directly using the previous frame image as the reference frame image for encoding the current frame image, the motion prediction step of existing video coding is omitted entirely for unchanged frames, which improves coding efficiency and reduces coding cost.
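By way of illustration only (this sketch is not part of the claimed embodiments), the core decision of the first aspect can be expressed in a few lines of Python; the function name and the returned fields are hypothetical:

```python
def encode_frame(current_frame, previous_frame, updated):
    """Hypothetical sketch of the first-aspect decision."""
    if not updated:
        # No update: reuse the previous frame as the reference frame,
        # set every macroblock's prediction residual to 0, and skip
        # motion prediction entirely.
        return {"reference": previous_frame, "residuals": 0, "mode": "skip"}
    # Update occurred: fall back to the conventional encoding flow,
    # in which motion prediction searches for a reference frame.
    return {"reference": "found by motion prediction", "mode": "conventional"}

print(encode_frame("frame N", "frame N-1", updated=False)["mode"])  # -> skip
```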
Other features of the present invention and advantages thereof will become apparent from the following detailed description of exemplary embodiments thereof, which proceeds with reference to the accompanying drawings.
Drawings
The accompanying drawings, which are incorporated in and constitute a part of this specification, illustrate embodiments of the invention and together with the description, serve to explain the principles of the invention.
FIGS. 1a and 1b are schematic diagrams of application scenarios in which a video encoding method according to an embodiment may be implemented;
FIG. 2 is a schematic block diagram of a hardware configuration that may be used to implement a video encoding method according to an embodiment;
FIG. 3 is a flowchart of a video encoding method according to one embodiment;
FIG. 4 is a flowchart of a video encoding method according to another embodiment;
FIG. 5 is a flowchart of a video decoding method according to a third embodiment;
FIG. 6 is a flowchart of a video encoding and decoding method according to one example;
FIG. 7 is a flowchart of a video encoding method according to one example;
FIG. 8 is a functional block diagram of a video encoding apparatus according to one embodiment;
FIG. 9 is a functional block diagram of a video decoding apparatus according to one embodiment;
FIG. 10 is a functional block diagram of an electronic device according to one embodiment;
FIG. 11 is a hardware configuration diagram of a video playback system according to one embodiment.
Detailed Description
Various exemplary embodiments of the present invention will now be described in detail with reference to the accompanying drawings. It should be noted that: the relative arrangement of the components and steps, the numerical expressions and numerical values set forth in these embodiments do not limit the scope of the present invention unless specifically stated otherwise.
The following description of at least one exemplary embodiment is merely illustrative in nature and is in no way intended to limit the invention, its application, or uses.
Techniques, methods, and apparatus known to those of ordinary skill in the relevant art may not be discussed in detail but are intended to be part of the specification where appropriate.
In all examples shown and discussed herein, any particular value should be construed as merely illustrative, and not limiting. Thus, other examples of the exemplary embodiments may have different values.
It should be noted that: like reference numbers and letters refer to like items in the following figures, and thus, once an item is defined in one figure, further discussion thereof is not required in subsequent figures.
With the trend toward cloud-edge integration on the Internet, the hardware resources and cost required of client devices keep shrinking. Because of its complexity, the Android system cannot run on client devices with very limited hardware resources. Running the Android system and its applications in containers on a cloud server and pushing the result to the client device through the WebRTC protocol has therefore become a good choice, and it works particularly well for video applications.
At present, an Android video application runs in an Android cloud container on the cloud server. After image data and audio data for a video are obtained, a video encoder and an audio encoder must encode them respectively so that the video can be played on the client device after audio and video decoding. However, when an existing video encoder encodes the image data, even if the current frame image is not updated relative to the previous frame image, it still performs motion prediction to search out a reference frame image with which the current frame image can be encoded. The motion prediction step consumes a large amount of computation, which increases the encoding cost; in addition, the reference frame images found in this way suffer from low coding efficiency. Accordingly, the present embodiments provide a new encoding method for video.
As shown in fig. 1a and fig. 1b, take as an example a scenario in which the Android application (Android Applications) running in an Android Container under the cloud host operating system (Host OS in Cloud) on the server 1000 is an Android video application. According to the video encoding method of this embodiment, after a user clicks the icon of video application X on the terminal device 2000 (the Client in fig. 1b), the terminal device 2000 responds to the click operation and sends a video playing request to the Android video application X in the Android Container on the server 1000. SurfaceFlinger and AudioFlinger in the Android Framework are responsible for synthesizing the Video Source Data and the Audio Source Data respectively; SurfaceFlinger transmits the Video Source Data to a Video Encoder, and AudioFlinger transmits the Audio Source Data to an Audio Encoder.
After the Video Encoder and the Audio Encoder encode the Video Source Data and the Audio Source Data respectively, the Video Encoder transmits the encoded video data to the terminal device 2000 through a Video Channel based on the WebRTC protocol, and the Audio Encoder transmits the encoded audio data through an Audio Channel based on the same protocol; the terminal device 2000 decodes both streams to play the video. Compared with the prior art, when the video encoder of the present specification encodes any frame image, if that frame image is not updated relative to the previous frame image, the previous frame image is used directly as the reference frame image to encode it. The video encoding method of this embodiment therefore omits, for unchanged frames, the step in existing video encoding of performing motion prediction on the current frame image to search out a reference frame image, which improves encoding efficiency and reduces encoding cost.
< hardware Equipment >
Fig. 2 is a schematic diagram showing a configuration of a video playback system that can be used to implement the video encoding method according to an embodiment.
As shown in fig. 2, the video playback system 100 of the present embodiment includes a server 1000, a terminal device 2000, and a network 3000.
The server 1000 may be, for example, a blade server, a rack server, or the like, and the server 1000 may also be a server cluster deployed in the cloud, which is not limited herein.
As shown in fig. 2, the server 1000 may include a processor 1100, a memory 1200, an interface device 1300, a communication device 1400, a display device 1500, and an input device 1600. The processor 1100 may be, for example, a central processing unit (CPU). The memory 1200 includes, for example, a ROM (read-only memory), a RAM (random access memory), and a nonvolatile memory such as a hard disk. The interface device 1300 includes, for example, a USB interface and a serial interface. The communication device 1400 is capable of wired or wireless communication. The display device 1500 is, for example, a liquid crystal display panel. The input device 1600 may include, for example, a touch screen and a keyboard.
In this embodiment, the memory 1200 of the server 1000 is used to store instructions for controlling the processor 1100 to operate to perform the video encoding method. The skilled person can design the instructions according to the solution disclosed in the present specification. How the instructions control the operation of the processor is well known in the art and will not be described in detail herein.
Those skilled in the art will appreciate that although a number of devices of the server 1000 are shown in fig. 2, the server 1000 of the present embodiments may refer to only some of the devices therein, e.g., only the processor 1100 and the memory 1200.
As shown in fig. 2, the terminal apparatus 2000 may include a processor 2100, a memory 2200, an interface device 2300, a communication device 2400, a display device 2500, an input device 2600, a speaker 2700, a microphone 2800, and the like. Processor 2100 is configured to execute program instructions, which may be in the instruction set of architectures such as x86, Arm, RISC, MIPS, SSE, and the like. The memory 2200 includes, for example, a ROM (read only memory), a RAM (random access memory), a nonvolatile memory such as a hard disk, and the like. The interface device 2300 includes, for example, a USB interface, a headphone interface, and the like. Communication device 2400 is capable of wired or wireless communication, for example. The display device 2500 is, for example, a liquid crystal display panel, a touch panel, or the like. The input device 2600 may include, for example, a touch screen, a keyboard, and the like. The speaker 2700 is used to output voice information. The microphone 2800 is used to collect voice information.
The terminal device 2000 may be any device that can support video playback, such as a smart phone, a laptop computer, a desktop computer, and a tablet computer.
In this embodiment, the memory 2200 of the terminal device 2000 is configured to store instructions for controlling the processor 2100 to operate so as to support implementation of a video decoding method according to any embodiment of this specification. The skilled person can design the instructions according to the solution disclosed in the present specification. How the instructions control the operation of the processor is well known in the art and will not be described in detail herein.
It should be understood by those skilled in the art that although a plurality of devices of the terminal device 2000 are illustrated in fig. 2, the terminal device 2000 of the present embodiment may refer to only some of the devices, for example, only the processor 2100, the memory 2200, the display device 2500, the input device 2600, and the like.
The communication network 3000 may be a wireless network or a wired network, and may be a local area network or a wide area network. The terminal device 2000 can communicate with the server 1000 through the communication network 3000.
The video playback system 100 shown in fig. 2 is merely illustrative and is in no way intended to limit the present specification, its applications, or uses. For example, although fig. 2 shows only one server 1000 and one terminal device 2000, this does not limit their respective numbers; the video playback system 100 may include a plurality of servers 1000 and/or a plurality of terminal devices 2000.
< method embodiment I >
Fig. 3 is a flow diagram of a video encoding method according to an embodiment, which may be implemented by a server, such as the server 1000 shown in fig. 1a or fig. 2. As shown in fig. 3, the video encoding method of the present embodiment may include the following steps S3100 to S3300:
in step S3100, update information of a current frame image of the video with respect to a previous frame image is obtained.
The video may be a video played through any video-like application in the server 1000. For example, the video class application is X, and the video may be a video played through the video class application X.
In one example, the server 1000 may obtain update information of the current frame image of the video relative to the previous frame image. The update information at least indicates whether the current frame image has been updated relative to the previous frame image and, when an update has occurred, the location of the updated image area.
In this example, the server 1000 may further obtain pixel information of a current frame image of the video, so as to obtain the current frame image at least according to the pixel information.
In this embodiment, the server 1000 may obtain the pixel information of the current frame image and its update information relative to the previous frame image, so as to obtain the current frame image from the pixel information and, based on a judgment of the update information, select a suitable reference frame image for encoding the current frame image to obtain the encoded code stream. For example, when the update information indicates that no update has occurred, the previous frame image may directly be set as the reference frame image and used to encode the current frame image. When the update information indicates that an update has occurred, the update information may be examined further to determine whether it is a partial update, and a suitable reference frame image selected accordingly.
In step S3200, if the update information indicates that no update has occurred, the previous frame image is set as a reference frame image for encoding the current frame image.
In one example, when the update information indicates that no update has occurred, the previous frame image may be directly set as a reference frame image for encoding the current frame image, so that the current frame image may be encoded using the reference frame image.
In the present example, when the update information indicates that no update has occurred, the previous frame image is set as the reference frame image for encoding the current frame image, and the step of performing motion prediction on the current frame image to search out the reference frame image capable of encoding the current frame image in the prior art is omitted.
In this example, since the current frame image is not updated relative to the previous frame image, the previous frame image may be used directly as the reference frame image and the current frame image encoded with it, which improves the encoding efficiency.
In one example, when the update information indicates that no update has occurred, the prediction residual of each macroblock included in the current frame image may be set to 0. It can be understood that the macroblock is the unit of video coding: any frame image is composed of a plurality of macroblocks, video coding likewise proceeds in units of macroblocks, and the macroblocks are encoded one by one and organized into a continuous encoded code stream.
In this example, a prediction residual of 0 for each macroblock of the current frame image means that each macroblock is identical to the reference macroblock used to encode it.
In step S3200, when the update information indicates that no update has occurred, the server 1000 may directly set the previous frame image as the reference frame image for encoding every macroblock of the current frame image, set the prediction residual of each macroblock to 0, and then encode the macroblocks one by one using the reference frame image and organize them into a continuous encoded code stream.
Step S3300, encoding the current frame image using the reference frame image to obtain an encoded code stream.
In step S3300, the server 1000 may entropy-encode the current frame image macroblock by macroblock using the reference frame image and organize the result into a continuous encoded code stream. For example, the macroblocks may be encoded sequentially from left to right and top to bottom, or all macroblocks may be encoded at the same time; this embodiment does not limit how the macroblocks are specifically encoded.
In this embodiment, after the current frame image is encoded using the reference frame image in step S3300, the encoded current frame image is reconstructed through inverse encoding and filtering to obtain the current frame image again, so that it can serve as the reference frame image when the next frame image is encoded. For example, after each macroblock of the current frame image is entropy-encoded, each entropy-encoded macroblock may be inverse-encoded and filtered to reconstruct the current frame image.
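As an illustrative aside (not part of the embodiments), the left-to-right, top-to-bottom macroblock order mentioned above can be sketched in Python; the 16-pixel macroblock size is an assumption borrowed from H.264-style codecs:

```python
from itertools import product

MB = 16  # assumed macroblock size (H.264-style 16x16 luma macroblocks)

def macroblocks_raster_order(width, height):
    """Yield macroblock origins left to right, top to bottom (step S3300)."""
    for mb_y, mb_x in product(range(0, height, MB), range(0, width, MB)):
        yield mb_x, mb_y

# A 64x32 frame yields (0,0), (16,0), (32,0), (48,0), (0,16), (16,16), ...
print(list(macroblocks_raster_order(64, 32)))
```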
As is apparent from steps S3100 to S3300, by acquiring the update information of the current frame image relative to the previous frame image and using the previous frame image as the reference frame image when the update information indicates that no update has occurred, the step in conventional video encoding of performing motion prediction on the current frame image to search for a reference frame image is omitted entirely for unchanged frames, which improves encoding efficiency and reduces encoding cost.
In an embodiment, after performing step S3100, the video encoding method of this embodiment may further include:
in the case where the update information indicates that an update has occurred, an image area in which the update has occurred is marked in the current frame image.
Marking the updated image area in the current frame image may comprise: marking the updated image area according to a preset shape, which may be, for example, a regular shape such as a rectangle or a circle, or an irregular shape; this is not limited herein.
In this embodiment, when the update information indicates that an update has occurred, the updated image area is additionally marked in the current frame image, which makes the updated image area easier to identify: a programmer can see at a glance where the updated area sits on the current frame image.
In an embodiment, another video encoding method is provided. As shown in fig. 4, after step S3100 is performed, the video encoding method of this embodiment may further include the following steps S3400 to S3600:
in step S3400, when the update information indicates that the update has occurred, it is determined whether the update information indicates that the partial update has occurred.
This embodiment provides an encoding flow different from steps S3200 and S3300 above, in which the server 1000 directly uses the previous frame image as the reference frame image when no update has occurred. Here, when the update information indicates that an update has occurred, the server 1000 continues to judge whether the update information indicates a partial update, and encodes differently according to the judgment result.
In step S3500, when the update information indicates that the partial update has occurred, the macroblock alignment operation is performed on the image area where the partial update has occurred.
In step S3500, for example, when the update information indicates that a partial update has occurred, a macroblock alignment operation may be performed on the partially updated image area so that the area can be encoded in units of an integer number of macroblocks.
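For illustration only, the alignment operation can be sketched as rounding the updated rectangle outward to macroblock boundaries; the 16-pixel macroblock size and the function name are assumptions of this sketch:

```python
MB = 16  # assumed macroblock size

def align_region_to_macroblocks(x, y, w, h):
    """Expand an updated rectangle outward to macroblock boundaries so the
    partially updated area covers an integer number of macroblocks."""
    x0 = (x // MB) * MB              # round the left edge down
    y0 = (y // MB) * MB              # round the top edge down
    x1 = -(-(x + w) // MB) * MB      # round the right edge up (ceiling)
    y1 = -(-(y + h) // MB) * MB      # round the bottom edge up (ceiling)
    return x0, y0, x1 - x0, y1 - y0

# A 10x10 update at (20, 5) expands to (16, 0, 16, 16): one whole macroblock.
print(align_region_to_macroblocks(20, 5, 10, 10))
```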
Step S3600, it is further judged whether each macroblock after the alignment operation is located in the partially updated image area, and the macroblocks inside the partially updated image area and the macroblocks outside it are encoded with different encoding modes, respectively, to obtain an encoded code stream.
In this embodiment, in step S3600, encoding the macroblock located outside the image area where the partial update occurs, and obtaining the encoded code stream may further include the following steps S3611 to S3612:
in step S3611, the previous frame image is set as a reference frame image for encoding the macro block located outside the image area where the partial update has occurred.
In one example, the server 1000 determines that the macroblock after the alignment operation is located outside the image area where the partial update occurs, and here, the previous frame image may be directly set as a reference frame image for encoding the macroblock located outside the image area where the partial update occurs, so as to encode the macroblock located outside the image area where the partial update occurs by using the previous frame image.
In one example, the server 1000 determines that the macroblock subjected to the alignment operation is located outside the image area where the partial update occurs, and here, the prediction residuals of each macroblock located outside the image area where the partial update occurs may be set to be 0.
Step S3612, the macroblock located outside the image area where the partial update occurs is encoded using the reference frame image, and an encoded code stream corresponding to each macroblock located outside the image area where the partial update occurs is obtained.
In step S3612, the server 1000 may entropy-encode the macroblocks located outside the partially updated image area one by one using the reference frame image and organize them into a continuous encoded code stream. For example, these macroblocks may be encoded sequentially from left to right and top to bottom, or all of them may be encoded at the same time; this embodiment does not limit how the macroblocks are specifically encoded.
In this embodiment, the macroblocks outside the image region where the partial update occurs may be encoded according to the above steps S3611 to S3612, and after the encoded code stream is obtained, the encoded macroblocks may be subjected to inverse encoding and filtering processing to obtain the macroblocks outside the image region where the partial update occurs.
In this embodiment, in step S3600, encoding the macroblock located in the image region where the partial update occurs, and obtaining the encoded code stream may further include the following steps S3621 to S3622:
step S3621, motion prediction is performed on the macro blocks located in the image area where partial update occurs, and a prediction residual and a motion vector corresponding to each macro block are obtained.
Step S3622, according to each prediction residual and the motion vector, a code stream corresponding to each macroblock in the image region where the partial update occurs is obtained.
In steps S3621 and S3622, the server 1000 has determined that the macroblocks after the alignment operation are located in the partially updated image area, and these macroblocks may be encoded with the existing video encoding flow. For example, motion estimation may be performed on each such macroblock to search other frame images for content that can encode it (inter prediction) and to search other macroblocks of the current frame (intra prediction); the matches found may be represented by motion vectors. Then, on one hand, motion compensation is performed on the motion vectors to obtain inter-frame and/or intra-frame residuals, a prediction mode decision is made on these residuals to obtain the prediction residual, and the prediction residual is transformed and quantized; on the other hand, the motion vectors and the transformed and quantized prediction residuals are entropy-encoded and organized into a continuous encoded code stream.
In this embodiment, the macro blocks located in the image region where the partial update occurs may be encoded according to the above steps S3621 to S3622, and after the encoded code stream is obtained, the encoded macro blocks may be subjected to inverse encoding, inverse quantization, inverse transformation, and filtering processing to obtain the macro blocks located in the image region where the partial update occurs.
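A minimal sketch of the per-macroblock branching in step S3600, assuming rectangles as (x, y, w, h) tuples; the function name and the returned descriptions are illustrative:

```python
def encode_macroblock(mb_rect, updated_region):
    """Macroblocks inside the partially updated region follow the motion
    prediction path (S3621-S3622); macroblocks outside it reuse the previous
    frame with a zero prediction residual (S3611-S3612)."""
    mx, my, mw, mh = mb_rect
    ux, uy, uw, uh = updated_region
    inside = (mx >= ux and my >= uy and
              mx + mw <= ux + uw and my + mh <= uy + uh)
    if inside:
        return "motion-predict, transform, quantize, entropy-encode"
    return "reference = previous frame, residual = 0, entropy-encode"

# The macroblock at (0, 0) lies outside a 32x32 update at (64, 64).
print(encode_macroblock((0, 0, 16, 16), (64, 64, 32, 32)))
```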
As can be seen from steps S3400 to S3600, the update information of the current frame image relative to the previous frame image is obtained; when it indicates that an update has occurred, it is further judged whether the update is a partial update; and when a partial update has occurred, the motion prediction step of existing video coding is omitted entirely for the macroblocks outside the partially updated image area, which improves coding efficiency and reduces coding cost.
In an embodiment, another video encoding method is provided, and according to fig. 4, after performing step S3400, the video encoding method of this embodiment may further include the following step S3700:
step 3700, in case that the update area indicates that partial update has not occurred, encoding all macro blocks included in the current frame image to obtain an encoded code stream.
In this embodiment, the step S3700 of encoding all the macro blocks included in the current frame image to obtain an encoded code stream may further include the following steps S3710 to S3720:
step S3710 is performed to perform motion prediction on all macro blocks included in the current frame image, and obtain a prediction residual and a motion vector corresponding to each macro block.
Step S3720, obtain the encoded code stream according to each prediction residual and motion vector.
In steps S3710 and S3720, when the server 1000 determines that the update is not a partial update, the current frame image may be encoded with the existing video encoding flow. For example, motion estimation may be performed on all macroblocks of the current frame image to search other frame images for content that can encode each macroblock (inter prediction) and to search other macroblocks of the current frame image (intra prediction); the matches found may be represented by motion vectors. Then, on one hand, motion compensation is performed on the motion vectors to obtain inter-frame and/or intra-frame residuals, a prediction mode decision is made on these residuals to obtain the prediction residual, and the prediction residual is transformed and quantized; on the other hand, the motion vectors and the transformed and quantized prediction residuals are entropy-encoded and organized into a continuous encoded code stream.
In this embodiment, after all the macroblocks of the current frame image are encoded in step S3700, the encoded current frame image is reconstructed through inverse encoding and filtering so that it can serve as the reference frame image when the next frame image is encoded.
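A deliberately toy sketch of the conventional pipeline just described (motion estimation, motion compensation, transform/quantization, entropy coding), operating on single numbers instead of pixel blocks; every stage here is a stand-in, not real codec logic:

```python
def conventional_encode(macroblock, reference_values):
    """Toy stand-in for steps S3710-S3720 on scalar 'macroblocks'."""
    best_ref = min(reference_values, key=lambda r: abs(r - macroblock))  # motion estimation
    residual = macroblock - best_ref                                     # motion compensation
    quantized = residual // 4                                            # transform + quantization
    return best_ref, quantized                                           # entropy-coded together

# For input 11 the closest reference value 9 is chosen; residual 2 quantizes to 0.
print(conventional_encode(11, [3, 9, 20]))
```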
As can be seen from step S3700, the update information of the current frame image relative to the previous frame image is obtained; when it indicates that the current frame image has been updated relative to the previous frame image but the update is not a partial update, the current frame image is encoded with the existing video encoding flow, which reduces the encoding cost.
In one embodiment, the frame image may be rendered prior to encoding to increase encoding speed. In this embodiment, the video encoding method may further include:
in the case where the rendering function is turned on, an arbitrary frame image of the video is rendered before being encoded.
In this embodiment, the terminal device 2000 used by operation and maintenance personnel on the platform maintenance side may provide a rendering switch that controls whether frame images of the video are rendered before they are encoded.
In this embodiment, the current frame image and the previous frame image are both rendered images.
For example, when the switch is on, the rendering function is enabled: any frame image is rendered first, and the rendered frame image is then encoded, which accelerates encoding. How a rendered frame image is encoded may refer to any of the above embodiments and is not repeated here.
For another example, when the switch is off, the rendering function is disabled and any frame image is encoded directly; how it is encoded may likewise refer to any of the above embodiments and is not repeated here.
In one embodiment, a mapping relation between the coding mode and the device type is preset, so that the coded code stream is sent to the terminal device of the corresponding device type for decoding and playing, and customized service is realized. In this embodiment, the video encoding method may further include steps S3800 to S3900:
step S3800 acquires the device type of the terminal device 2000 that selects the coding scheme corresponding to the code.
The device types are used to distinguish different types of terminal devices 2000. The device type may be represented by a device attribute parameter, which may be, for example, a model number of the device, a serial number of the device, a Globally Unique Identifier (GUID) of the device, and the like.
In this embodiment, the encoding method is applicable to all hardware configurations, but the device type of the terminal device using the encoding method may be configured in the server 1000.
Step S3900, sending the encoded code stream to the terminal device 2000 belonging to the device type for decoding and playing.
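For illustration, the preset mapping between encoding modes and device types in steps S3800 to S3900 could look like the following; the device-type strings and mode names are invented for this sketch:

```python
# Hypothetical mapping of device types to encoding modes (steps S3800-S3900).
ENCODING_MODE_BY_DEVICE_TYPE = {
    "tablet-model-a": "update-aware",   # encoder that uses update information
    "legacy-stb": "conventional",       # encoder that always motion-predicts
}

def stream_for_device(device_type, streams):
    """Pick the code stream produced by the mode configured for this type."""
    mode = ENCODING_MODE_BY_DEVICE_TYPE.get(device_type, "conventional")
    return streams[mode]

print(stream_for_device("tablet-model-a",
                        {"update-aware": b"\x00", "conventional": b"\x01"}))
```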
< method example two >
Fig. 5 is a flowchart illustrating a video decoding method according to an embodiment, which may be implemented by a terminal device, such as terminal device 2000 shown in fig. 1a or fig. 2. As shown in fig. 5, the video decoding method of the present embodiment may include the following steps S5100 to S5300:
step S5100 obtains a code stream obtained by coding each frame image of the video.
The encoding of any one of the frame images comprises: in the case that the frame image is not updated relative to its previous frame image, encoding the frame image using the previous frame image as the reference frame image.
In step S5200, the encoded code stream is decoded.
Step S5300 plays the decoded video.
As is apparent from steps S5100 to S5300, when any frame image is not updated relative to the previous frame image, the previous frame image is used as the reference frame image to encode it. The step in conventional video encoding of performing motion prediction on such a frame image to search for a reference frame image is therefore omitted entirely, which improves encoding efficiency and reduces encoding cost.
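Note that the decoder side needs no special handling: a no-update frame arrives as an ordinary inter-coded frame whose macroblocks reference the previous frame with zero residuals. A minimal sketch, with the decode function supplied by the caller:

```python
def play(encoded_frames, decode):
    """Sketch of steps S5100-S5300: receive, decode, and hand off frames."""
    previous = None
    for stream in encoded_frames:          # S5100: received code stream
        frame = decode(stream, previous)   # S5200: standard decoding
        previous = frame                   # becomes the next reference
        yield frame                        # S5300: hand off for display

# Toy usage with a decoder that just decodes bytes to text.
print(list(play([b"frame1", b"frame2"], decode=lambda s, prev: s.decode())))
```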
< example >
Fig. 6 shows an example of a video coding and decoding method, as shown in fig. 1b and fig. 6, in this example, the video coding and decoding method may include the following steps:
step S6110, the terminal device sends a video playing request to the cloud server through the WebRTC protocol between the terminal device and the cloud server.
In step S6110, the terminal device may be, for example, the terminal device 2000 shown in fig. 1a or fig. 2, or the Client in fig. 1b. The cloud server may be, for example, the server 1000 shown in fig. 1a or fig. 2, or a cloud server running the cloud host operating system (Host OS in Cloud) in fig. 1b. The terminal device and the cloud server may exchange audio/video stream data through the WebRTC protocol.
Step S6210, in response to the video playing request, an Android application (Android Applications) in an Android Container (Android Container) in the cloud server acquires each frame of image and audio for the video.
The android application can be a video application for video playing.
In step S6210, for example, the video application X performing video playing may respond to the video playing request and acquire each frame of image and audio of the video.
Step S6220, in the cloud server, AudioFlinger and SurfaceFlinger in the Android Framework transmit the audio and each frame image through inter-process communication (IPC) to an Audio Encoder and a Video Encoder in the cloud server, respectively, for encoding, so as to obtain an audio encoded code stream and an image encoded code stream.
Referring to fig. 7, the specific process of encoding for any frame image in the frame images in step S6220 may include S6221 to S6229:
step 6221, update information of the current frame image of the video relative to the previous frame image is obtained.
In step S6221, update information output may be added to the SurfaceFlinger output layer, so that the SurfaceFlinger output layer outputs not only the pixel information of the current frame image but also the update information of the current frame image relative to the previous frame image.
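A hypothetical shape for that augmented output, sketched as a Python dataclass; the class and field names are invented for illustration:

```python
from dataclasses import dataclass
from typing import Optional, Tuple

@dataclass
class OutputLayerFrame:
    """What the augmented SurfaceFlinger output of step S6221 might carry."""
    pixels: bytes                                            # pixel information
    updated: bool                                            # did anything change?
    dirty_rect: Optional[Tuple[int, int, int, int]] = None   # (x, y, w, h) if the update is partial

frame = OutputLayerFrame(pixels=b"...", updated=True, dirty_rect=(20, 5, 10, 10))
```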
Step S6222, it is judged whether the update information indicates that an update has occurred; if so, step S6224 is executed; otherwise, step S6223 is executed.
In step S6223, in the case that the update information indicates that no update has occurred, the prediction residuals of all macroblocks included in the current frame are set to 0, and the previous frame image is set as the reference frame image for encoding all macroblocks of the current frame image, after which step S6229 is performed.
In step S6223, when the update information indicates that no update has occurred, the current frame image is predicted by inter-frame prediction.
Step S6224, in the case that the update information indicates that an update has occurred, it is judged whether the update information indicates a partial update; if so, step S6225 is executed; otherwise, step S6227 is executed.
In step S6225, when the update information indicates that the partial update has occurred, the macroblock alignment operation is performed on the image area where the partial update has occurred.
In step S6226, it is determined whether the macroblock after the macroblock alignment operation is updated, and if so, step S6227 is performed, otherwise, step S6228 is performed.
Step S6227, the corresponding macroblocks in the current frame image are processed with the existing motion prediction step to obtain motion vectors and prediction residuals; the prediction residuals are transformed and quantized and then entropy-encoded together with the motion vectors to obtain the encoded code stream, and the flow ends.
In step S6228, the corresponding macroblock prediction residual is set to 0, and the previous frame image is set as the reference frame image for encoding the corresponding macroblock.
Step S6229, the macroblocks of the current frame image are entropy-encoded using the reference frame image to obtain the encoded code stream, and the flow ends.
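Pulling the branches of steps S6222 to S6229 together, the per-frame decision might be sketched as follows; the macroblock size, helper names, and decision labels are all assumptions of this sketch:

```python
MB = 16  # assumed macroblock size

def rects_overlap(a, b):
    ax, ay, aw, ah = a
    bx, by, bw, bh = b
    return ax < bx + bw and bx < ax + aw and ay < by + bh and by < ay + ah

def plan_frame(updated, dirty_rect, mb_rects):
    """Per-macroblock decisions mirroring S6222-S6229: "skip" means the
    previous frame is the reference and the residual is 0 (S6223/S6228);
    "predict" means the existing motion prediction path (S6227)."""
    if not updated:                                   # S6222 -> S6223
        return [("skip", r) for r in mb_rects]
    if dirty_rect is None:                            # S6224 -> S6227 (full update)
        return [("predict", r) for r in mb_rects]
    # S6225: align the dirty rectangle outward to macroblock boundaries.
    x, y, w, h = dirty_rect
    aligned = ((x // MB) * MB, (y // MB) * MB,
               -(-(x + w) // MB) * MB - (x // MB) * MB,
               -(-(y + h) // MB) * MB - (y // MB) * MB)
    return [("predict" if rects_overlap(r, aligned) else "skip", r)  # S6226
            for r in mb_rects]

mbs = [(0, 0, MB, MB), (16, 0, MB, MB)]
print(plan_frame(True, (20, 5, 4, 4), mbs))  # first block skipped, second predicted
```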
Step S6230, the cloud server sends the audio encoded code stream and the image encoded code stream to the terminal device through an Audio Channel and a Video Channel based on the WebRTC protocol.
Step S6120, the terminal device decodes the audio code stream and the video code stream, and plays the decoded video.
According to this example, update information output is added to the SurfaceFlinger output layer of the Android Framework in the cloud server, so that the output layer provides not only the pixel information of the current frame image but also its update information relative to the previous frame image. When the update information indicates that the current frame image is not updated relative to the previous frame image, the previous frame image can be used directly as the reference frame image for encoding the current frame image. The step in existing video coding of performing motion prediction on the current frame image to search out a reference frame image is therefore omitted entirely for unchanged frames, which improves coding efficiency and reduces coding cost.
< first embodiment of the apparatus >
In the present embodiment, there is also provided a video encoding apparatus, as shown in fig. 8, including an update information acquisition module 8100, a reference frame image acquisition module 8200, and an encoding module 8300.
The update information obtaining module 8100 is configured to obtain update information of a current frame image of the video relative to a previous frame image.
The reference frame image obtaining module 8200 is configured to, when the update information indicates that no update has occurred, set the previous frame image as a reference frame image for encoding the current frame image.
The encoding module 8300 is configured to encode the current frame image by using the reference frame image to obtain an encoded code stream.
In one embodiment, the encoding module 8300 is further configured to set the prediction residual of each macroblock included in the current frame image to 0 when the update information indicates that no update has occurred.
In one embodiment, the encoding module 8300 is further configured to: determine, when the update information indicates that an update has occurred, whether the update information indicates a partial update; perform, when the update information indicates a partial update, a macroblock alignment operation on the image area where the partial update occurred; and then determine whether each macroblock after the alignment operation lies within the image area where the partial update occurred, encoding the macroblocks inside and outside that area in different encoding modes to obtain the encoded code stream.
In one embodiment, the encoding module 8300 is further configured to mark the image area where the update occurs in the current frame image if the update information indicates that the update occurs.
In one embodiment, the encoding module 8300 is further configured to, when the update information indicates that a partial update has not occurred, encode all the macroblocks included in the current frame image to obtain the encoded code stream.
In one embodiment, the encoding module 8300 is further configured to perform motion prediction on all macro blocks included in the current frame image, and obtain a prediction residual and a motion vector corresponding to each macro block; and obtaining a coded code stream according to each prediction residual and each motion vector.
In one embodiment, the encoding module 8300 is further configured to set the previous frame image as the reference frame image for encoding the macroblocks located outside the image area where the partial update occurred, and to encode those macroblocks by using the reference frame image, obtaining the encoded code stream corresponding to each macroblock located outside that area.
In one embodiment, the encoding module 8300 is further configured to set the prediction residual of each macroblock located outside the image area where the partial update occurred to 0.
In one embodiment, the encoding module 8300 is further configured to perform motion prediction on the macroblocks located in the image area where the partial update occurred, obtaining a prediction residual and a motion vector for each such macroblock, and to obtain, from each prediction residual and motion vector, the encoded code stream corresponding to each macroblock located in that area.
In one embodiment, the apparatus 8000 further includes a rendering module (not shown) for rendering any frame image of the video before encoding the frame image if a rendering function is turned on; and the current frame image and the previous frame image are rendered images.
In one embodiment, the apparatus 8000 further includes a selection module (not shown in the figure), configured to acquire the device type of the terminal device to which the selected encoding mode corresponds, and to send the encoded code stream to the terminal device belonging to that device type for decoding and playing.
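Tying the sketches above together, a hypothetical driver illustrates the saving the apparatus aims at: for a 1280x720 frame in which only a 100x40 region changed, only the aligned 7x4 block of macroblocks needs motion prediction, and every remaining macroblock is skipped. The frame size, the dirty rectangle, and the main function are illustrative only.

#include <cstdio>

int main() {
    const int width = 1280, height = 720;
    FrameOutput frame;
    frame.update = UpdateKind::Partial;
    frame.dirtyRects = { alignToMacroblocks({600, 300, 700, 340}) };

    int skipped = 0, predicted = 0;
    for (int y = 0; y < height / kMbSize; ++y)
        for (int x = 0; x < width / kMbSize; ++x)
            (decideMode(frame.update, x, y, frame.dirtyRects) == MbMode::Skip
                 ? skipped : predicted)++;

    // Expected: predicted = 28 (7x4 macroblocks), skipped = 3572.
    std::printf("skipped=%d predicted=%d\n", skipped, predicted);
    return 0;
}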
< example II of the apparatus >
In this embodiment, a video decoding apparatus is further provided, as shown in fig. 9, the video decoding apparatus includes an encoded code stream obtaining module 9100, a decoding module 9200, and a video playing module 9300.
The encoding code stream acquiring module 9100 is configured to acquire an encoded code stream obtained by encoding each frame image of a video, where encoding any one of the frame images includes: when the frame image is not updated relative to its previous frame image, encoding the frame image by using the previous frame image as a reference frame image.
The decoding module 9200 is configured to decode the encoded code stream.
The video playing module 9300 is used for playing the decoded video.
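Because a skipped macroblock is signaled as an ordinary inter-coded block (zero residual, previous frame as reference), the terminal side needs no special handling; a plain receive-decode-play loop suffices. The three functions below are placeholders for a transport receiver, a standard video decoder, and a renderer; no particular library API is implied.

#include <cstdint>
#include <vector>

using Packet  = std::vector<uint8_t>;  // one encoded access unit
using Picture = std::vector<uint8_t>;  // one decoded frame

// Placeholders: wire these to a real transport, decoder, and renderer.
bool    fetchEncodedPacket(Packet& out);   // module 9100: acquire the code stream
Picture decodePacket(const Packet& pkt);   // module 9200: standard decoding
void    present(const Picture& pic);       // module 9300: play the frame

void playLoop() {
    Packet pkt;
    while (fetchEncodedPacket(pkt)) {
        // Skipped macroblocks decode like any inter block: copy from the
        // previous frame with zero residual, so the decoder is unmodified.
        present(decodePacket(pkt));
    }
}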
< apparatus embodiment >
In this embodiment, an electronic device is also provided, which includes the video encoding apparatus 8000 described in the apparatus embodiments of this specification; alternatively, the video decoding apparatus 9000 described in the apparatus embodiments of this specification; alternatively, the electronic device is the electronic device 10000 shown in fig. 10, which includes:
a memory 10100 for storing executable commands.
A processor 10200 for executing the method described in any method embodiment of this specification under control of executable commands stored in the memory 10100.
The implementation subject of the method embodiments executed in the electronic device may be a server or a terminal device.
< System embodiment >
In this embodiment, a video playing system 11000 is further provided, as shown in fig. 11, including:
a server 11100, which may be, for example, the server 1000 shown in fig. 1a or fig. 2.
The server 11100 comprises a memory and a processor, wherein the memory of the server 11100 is used for storing executable commands; the processor of the server 11100 is configured to execute the video encoding method according to any embodiment of the present specification under the control of executable commands.
In this embodiment, the video playing system 11000 further includes a terminal device 11200, and the terminal device 11200 may be the terminal device 2000 shown in fig. 1a or fig. 2.
The terminal device 11200 comprises a memory and a processor, the memory of the terminal device 11200 for storing executable commands; the processor of the terminal device 11200 is configured to perform the video decoding method according to any of the embodiments of the present specification under the control of the executable command.
< computer-readable storage Medium embodiment >
The present embodiments provide a computer-readable storage medium having stored therein an executable command that, when executed by a processor, performs a method described in any of the method embodiments of the present specification.
The present invention may be a system, method and/or computer program product. The computer program product may include a computer-readable storage medium having computer-readable program instructions embodied therewith for causing a processor to implement various aspects of the present invention.
The computer readable storage medium may be a tangible device that can hold and store the instructions for use by the instruction execution device. The computer readable storage medium may be, for example, but not limited to, an electronic memory device, a magnetic memory device, an optical memory device, an electromagnetic memory device, a semiconductor memory device, or any suitable combination of the foregoing. More specific examples (a non-exhaustive list) of the computer readable storage medium would include the following: a portable computer diskette, a hard disk, a Random Access Memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), a Static Random Access Memory (SRAM), a portable compact disc read-only memory (CD-ROM), a Digital Versatile Disc (DVD), a memory stick, a floppy disk, a mechanical coding device, such as punch cards or in-groove projection structures having instructions stored thereon, and any suitable combination of the foregoing. Computer-readable storage media as used herein is not to be construed as transitory signals per se, such as radio waves or other freely propagating electromagnetic waves, electromagnetic waves propagating through a waveguide or other transmission medium (e.g., optical pulses through a fiber optic cable), or electrical signals transmitted through electrical wires.
The computer-readable program instructions described herein may be downloaded from a computer-readable storage medium to a respective computing/processing device, or to an external computer or external storage device via a network, such as the internet, a local area network, a wide area network, and/or a wireless network. The network may include copper transmission cables, fiber optic transmission, wireless transmission, routers, firewalls, switches, gateway computers and/or edge servers. The network adapter card or network interface in each computing/processing device receives computer-readable program instructions from the network and forwards the computer-readable program instructions for storage in a computer-readable storage medium in the respective computing/processing device.
The computer program instructions for carrying out operations of the present invention may be assembler instructions, Instruction Set Architecture (ISA) instructions, machine-related instructions, microcode, firmware instructions, state setting data, or source or object code written in any combination of one or more programming languages, including an object oriented programming language such as Smalltalk, C++ or the like and conventional procedural programming languages, such as the "C" programming language or similar programming languages. The computer-readable program instructions may execute entirely on the user's computer, partly on the user's computer, as a stand-alone software package, partly on the user's computer and partly on a remote computer or entirely on the remote computer or server. In the case of a remote computer, the remote computer may be connected to the user's computer through any type of network, including a Local Area Network (LAN) or a Wide Area Network (WAN), or the connection may be made to an external computer (for example, through the Internet using an Internet service provider). In some embodiments, aspects of the present invention are implemented by personalizing an electronic circuit, such as a programmable logic circuit, a Field Programmable Gate Array (FPGA), or a Programmable Logic Array (PLA), with state information of computer-readable program instructions, which can execute the computer-readable program instructions.
Aspects of the present invention are described herein with reference to flowchart illustrations and/or block diagrams of methods, apparatus (systems) and computer program products according to embodiments of the invention. It will be understood that each block of the flowchart illustrations and/or block diagrams, and combinations of blocks in the flowchart illustrations and/or block diagrams, can be implemented by computer-readable program instructions.
These computer-readable program instructions may be provided to a processor of a general purpose computer, special purpose computer, or other programmable data processing apparatus to produce a machine, such that the instructions, which execute via the processor of the computer or other programmable data processing apparatus, create means for implementing the functions/acts specified in the flowchart and/or block diagram block or blocks. These computer-readable program instructions may also be stored in a computer-readable storage medium that can direct a computer, programmable data processing apparatus, and/or other devices to function in a particular manner, such that the computer-readable medium storing the instructions comprises an article of manufacture including instructions which implement the function/act specified in the flowchart and/or block diagram block or blocks.
The computer readable program instructions may also be loaded onto a computer, other programmable data processing apparatus, or other devices to cause a series of operational steps to be performed on the computer, other programmable apparatus or other devices to produce a computer implemented process such that the instructions which execute on the computer, other programmable apparatus or other devices implement the functions/acts specified in the flowchart and/or block diagram block or blocks.
The flowchart and block diagrams in the figures illustrate the architecture, functionality, and operation of possible implementations of systems, methods and computer program products according to various embodiments of the present invention. In this regard, each block in the flowchart or block diagrams may represent a module, segment, or portion of instructions, which comprises one or more executable instructions for implementing the specified logical function(s). In some alternative implementations, the functions noted in the block may occur out of the order noted in the figures. For example, two blocks shown in succession may, in fact, be executed substantially concurrently, or the blocks may sometimes be executed in the reverse order, depending upon the functionality involved. It will also be noted that each block of the block diagrams and/or flowchart illustration, and combinations of blocks in the block diagrams and/or flowchart illustration, can be implemented by special purpose hardware-based systems which perform the specified functions or acts, or combinations of special purpose hardware and computer instructions. It is well known to those skilled in the art that implementation by hardware, by software, and by a combination of software and hardware are equivalent.
Having described embodiments of the present invention, the foregoing description is intended to be exemplary, not exhaustive, and not limited to the embodiments disclosed. Many modifications and variations will be apparent to those of ordinary skill in the art without departing from the scope and spirit of the described embodiments. The terminology used herein is chosen in order to best explain the principles of the embodiments, the practical application, or improvements made to the technology in the marketplace, or to enable others of ordinary skill in the art to understand the embodiments disclosed herein. The scope of the invention is defined by the appended claims.

Claims (17)

1. A video encoding method, comprising:
acquiring the update information of the current frame image of the video relative to the previous frame image;
setting the previous frame image as a reference frame image for encoding the current frame image when the update information indicates that no update has occurred;
and coding the current frame image by using the reference frame image to obtain a coded code stream.
2. The method of claim 1, wherein the method further comprises:
and setting the prediction residual of each macro block included in the current frame image to be 0 when the update information indicates that no update occurs.
3. The method of claim 1, wherein the method further comprises:
judging whether the update information indicates that partial update occurs or not when the update information indicates that update occurs;
in the case that the update information indicates that partial update occurs, performing a macro block alignment operation on the image area in which the partial update occurs;
and continuously judging whether the macro block subjected to the alignment operation is positioned in the image area subjected to the partial update, and respectively adopting different coding modes to code the macro block positioned in the image area subjected to the partial update and the macro block positioned outside the image area subjected to the partial update to obtain a coded code stream.
4. The method of claim 3, wherein the method further comprises:
and in the case that the updating information indicates that updating occurs, marking the image area in which updating occurs in the current frame image.
5. The method of claim 3, wherein the method further comprises:
and under the condition that the update information indicates that partial updating does not occur, all macro blocks included in the current frame image are coded to obtain a coded code stream.
6. The method according to claim 5, wherein said encoding all macroblocks included in the current frame image to obtain an encoded code stream comprises:
performing motion prediction on all macro blocks included in the current frame image to obtain a prediction residual error and a motion vector corresponding to each macro block;
and obtaining a coded code stream according to each prediction residual and each motion vector.
7. The method of claim 3, wherein the encoding the macro blocks located outside the image area where the partial update occurs to obtain the encoded code stream comprises:
setting the previous frame image as a reference frame image for coding a macro block positioned outside the image area where the partial update occurs;
and utilizing the reference frame image to encode the macro blocks positioned outside the image area subjected to the partial updating, and obtaining an encoding code stream corresponding to each macro block positioned outside the image area subjected to the partial updating.
8. The method of claim 3, wherein the encoding the macro blocks located outside the image area where the partial update occurs to obtain the encoded code stream further comprises:
and setting the prediction residual error of each macro block positioned outside the image area where the partial update occurs to be 0.
9. The method of claim 3, wherein the encoding the macro block located in the image area where the partial update occurs to obtain an encoded code stream comprises:
performing motion prediction on the macro blocks in the image area where the partial update occurs to obtain a prediction residual error and a motion vector corresponding to each macro block;
and obtaining, according to each prediction residual and motion vector, the coded code stream corresponding to each macro block positioned in the image area where the partial update occurs.
10. The method of any of claims 1 to 9, wherein the method further comprises:
rendering any frame image of the video before encoding the frame image under the condition that a rendering function is started;
and the current frame image and the previous frame image are rendered images.
11. The method of any of claims 1 to 9, wherein the method further comprises:
acquiring the device type of the terminal device to which the selected coding mode corresponds;
and sending the coded code stream to the terminal equipment belonging to the equipment type for decoding and playing.
12. A video decoding method, comprising:
acquiring a coding code stream obtained by coding each frame of image of a video, wherein coding any frame of image in each frame of image comprises: under the condition that the arbitrary frame image is not updated relative to the previous frame image, the previous frame image is used as a reference frame image to encode the arbitrary frame image;
decoding the coded code stream;
and playing the decoded video.
13. A video encoding device, comprising:
the update information acquisition module is used for acquiring the update information of the current frame image of the video relative to the previous frame image;
a reference frame image obtaining module, configured to set the previous frame image as a reference frame image for encoding the current frame image when the update information indicates that no update has occurred;
and the coding module is used for coding the current frame image by using the reference frame image to obtain a coded code stream.
14. A video decoding apparatus, comprising:
the encoding code stream acquiring module is used for acquiring an encoding code stream obtained by encoding each frame image of a video, wherein encoding any frame image of the frame images comprises: under the condition that the arbitrary frame image is not updated relative to the previous frame image, the previous frame image is used as a reference frame image to encode the arbitrary frame image;
the decoding module is used for decoding the coded code stream;
and the video playing module is used for playing the decoded video.
15. An electronic device, comprising:
the video encoding device of claim 13; or,
the video decoding apparatus of claim 14; or,
a processor and a memory for storing instructions for controlling the processor to perform the method of any of claims 1 to 12.
16. A video playback system, comprising:
a server comprising a memory and a processor, the memory of the server for storing executable commands; the processor of the server is configured to perform the video encoding method of any of claims 1 to 11 under the control of the executable commands; and
the terminal equipment comprises a memory and a processor, wherein the memory of the terminal equipment is used for storing executable commands; the processor of the terminal device is configured to perform the video decoding method of claim 12 under the control of the executable command.
17. A computer readable storage medium storing executable instructions that, when executed by a processor, perform the method of any one of claims 1 to 12.
CN202010355957.3A 2020-04-29 2020-04-29 Video encoding and decoding method, device and system and electronic equipment Pending CN113573069A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010355957.3A CN113573069A (en) 2020-04-29 2020-04-29 Video encoding and decoding method, device and system and electronic equipment

Publications (1)

Publication Number Publication Date
CN113573069A true CN113573069A (en) 2021-10-29

Family

ID=78158484

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010355957.3A Pending CN113573069A (en) 2020-04-29 2020-04-29 Video encoding and decoding method, device and system and electronic equipment

Country Status (1)

Country Link
CN (1) CN113573069A (en)

Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101202913A (en) * 2006-11-28 2008-06-18 三星电子株式会社 Method and apparatus for encoding and decoding video images
CN101494718A (en) * 2009-01-23 2009-07-29 逐点半导体(上海)有限公司 Method and apparatus for encoding image
CN102484715A (en) * 2009-08-21 2012-05-30 日本电气株式会社 Moving picture encoding device
CN103765888A (en) * 2011-09-06 2014-04-30 英特尔公司 Analytics assisted encoding
CN106576152A (en) * 2014-03-13 2017-04-19 华为技术有限公司 Improved method for screen content coding

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
ZHAO Yongli; CHEN Jincheng; MA Jian; ZHU Baozhong; ZHANG Jie: "Motion estimation algorithm based on block characteristics and adaptive search window", Journal of Data Acquisition and Processing, no. 03, 15 May 2008 (2008-05-15) *


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination