CN111225208B - Video coding method and device - Google Patents

Video coding method and device Download PDF

Info

Publication number
CN111225208B
CN111225208B (application CN201811425053.2A)
Authority
CN
China
Prior art keywords
image frame
terminal
rotation angular velocity
motion
motion vector
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201811425053.2A
Other languages
Chinese (zh)
Other versions
CN111225208A (en)
Inventor
孙恒
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Beijing Xiaomi Mobile Software Co Ltd
Original Assignee
Beijing Xiaomi Mobile Software Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Beijing Xiaomi Mobile Software Co Ltd filed Critical Beijing Xiaomi Mobile Software Co Ltd
Priority to CN201811425053.2A priority Critical patent/CN111225208B/en
Publication of CN111225208A publication Critical patent/CN111225208A/en
Application granted granted Critical
Publication of CN111225208B publication Critical patent/CN111225208B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Links

Images

Classifications

    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N19/00Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
    • H04N19/10Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding
    • H04N19/134Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the element, parameter or criterion affecting or controlling the adaptive coding
    • H04N19/136Incoming video signal characteristics or properties
    • H04N19/137Motion inside a coding unit, e.g. average field, frame or block difference
    • H04N19/139Analysis of motion vectors, e.g. their magnitude, direction, variance or reliability
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N19/00Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
    • H04N19/10Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding
    • H04N19/169Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the coding unit, i.e. the structural portion or semantic portion of the video signal being the object or the subject of the adaptive coding
    • H04N19/17Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the coding unit, i.e. the structural portion or semantic portion of the video signal being the object or the subject of the adaptive coding the unit being an image region, e.g. an object
    • H04N19/172Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the coding unit, i.e. the structural portion or semantic portion of the video signal being the object or the subject of the adaptive coding the unit being an image region, e.g. an object the region being a picture, frame or field
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N19/00Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
    • H04N19/50Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using predictive coding
    • H04N19/503Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using predictive coding involving temporal prediction
    • H04N19/51Motion estimation or motion compensation
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N19/00Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
    • H04N19/50Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using predictive coding
    • H04N19/503Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using predictive coding involving temporal prediction
    • H04N19/51Motion estimation or motion compensation
    • H04N19/513Processing of motion vectors
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N19/00Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
    • H04N19/50Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using predictive coding
    • H04N19/503Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using predictive coding involving temporal prediction
    • H04N19/51Motion estimation or motion compensation
    • H04N19/513Processing of motion vectors
    • H04N19/517Processing of motion vectors by encoding

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Compression Or Coding Systems Of Tv Signals (AREA)

Abstract

The present disclosure relates to a video encoding method and apparatus. The method includes the following steps: acquiring the rotational angular velocity of a terminal and the true values of a first image frame and a second image frame that are adjacent frames to be encoded in a video sequence captured by a camera of the terminal; determining a motion vector of the first image frame according to the rotational angular velocity; performing motion compensation on the second image frame according to the motion vector, the true value of the first image frame, and the true value of the second image frame to obtain a motion residual; and encoding the motion vector and the motion residual. The method and apparatus can significantly reduce the computational complexity, power consumption, and time cost of motion compensation in video encoding, markedly improve motion-compensation performance in dim lighting and under severe terminal motion, and improve video compression efficiency.

Description

Video coding method and device
Technical Field
The present disclosure relates to the field of image processing technologies, and in particular, to a video encoding method and apparatus.
Background
In general, video image data is highly correlated, i.e., it contains a large amount of temporal/spatial redundancy. Video coding removes this redundant information and compresses the original, bulky video to a size convenient for storage and transmission. Motion compensation is one of the key techniques in video coding: it removes temporal/spatial redundancy from the video sequence.
In the related art, the motion compensation used in video coding is usually content-based: motion prediction is performed on the image content in order to remove temporal/spatial redundancy. However, because such motion compensation must analyze and process the image content, its effectiveness is strongly affected by image quality; to keep the computational complexity under control, the search range used during motion prediction must be limited, which degrades performance when the image undergoes large motion; and content-based motion compensation is time-consuming and power-hungry, which is a serious challenge for mobile devices such as mobile phones.
Disclosure of Invention
To overcome the problems in the related art, embodiments of the present disclosure provide a video encoding method and apparatus. The technical solutions are as follows:
according to a first aspect of embodiments of the present disclosure, there is provided a video encoding method, the method including:
acquiring the rotational angular velocity of a terminal and the true values of a first image frame and a second image frame that are adjacent frames to be encoded in a video sequence captured by a camera of the terminal;
determining a motion vector of the first image frame according to the rotational angular velocity;
performing motion compensation on the second image frame according to the motion vector, the true value of the first image frame, and the true value of the second image frame to obtain a motion residual;
encoding the motion vector and the motion residual.
The technical solutions provided by the embodiments of the present disclosure may have the following beneficial effects: the motion vector of the first image frame is determined by analyzing the rotational angular velocity of the terminal, and the motion vector is then used for motion compensation. Motion estimation and motion compensation can therefore be performed based on the rotational angular velocity of the terminal, without specifically analyzing and processing the image content; since the motion of the image frame is known from the rotational angular velocity of the terminal, the motion residual only needs to be calculated once. This significantly reduces the computational complexity, power consumption, and time cost of motion compensation in video encoding, markedly improves motion-compensation performance in dim lighting and under severe terminal motion, and improves video compression efficiency.
In one embodiment, performing motion compensation on the second image frame according to the motion vector, the true value of the first image frame, and the true value of the second image frame to obtain a motion residual includes:
determining a predicted value of the second image frame according to the true value of the first image frame and the motion vector;
calculating the difference between the true value of the second image frame and the predicted value of the second image frame, and determining the calculated difference as the motion residual.
In one embodiment, the obtaining of the rotational angular velocity of the terminal includes:
calling an angular velocity sensor of the terminal to measure the rotational angular velocity of the terminal.
In one embodiment, the obtaining of the rotational angular velocity of the terminal includes:
sending a request message to a server, wherein the request message is used for requesting the server to return the rotational angular velocity of the terminal;
and receiving a response message, sent by the server, carrying the rotational angular velocity of the terminal.
In one embodiment, the rotational angular velocity includes any one or a combination of the following parameters: roll angle, pitch angle, or yaw angle.
According to a second aspect of the embodiments of the present disclosure, there is provided a video encoding apparatus comprising:
an acquisition module configured to acquire the rotational angular velocity of a terminal and the true values of a first image frame and a second image frame that are adjacent frames to be encoded in a video sequence captured by a camera of the terminal;
a determination module configured to determine a motion vector of the first image frame according to the rotational angular velocity;
a motion compensation module configured to perform motion compensation on the second image frame according to the motion vector, the true value of the first image frame, and the true value of the second image frame, to obtain a motion residual;
an encoding module configured to encode the motion vector and the motion residual.
In one embodiment, the motion compensation module comprises:
a determination submodule configured to determine a predicted value of the second image frame according to the true value of the first image frame and the motion vector;
and a calculation submodule configured to calculate the difference between the true value of the second image frame and the predicted value of the second image frame and determine the calculated difference as the motion residual.
In one embodiment, the obtaining module calls an angular velocity sensor of the terminal to measure the rotational angular velocity of the terminal.
In one embodiment, the obtaining module includes:
a sending submodule configured to send a request message to a server, where the request message is used to request the server to return the rotational angular velocity of the terminal;
and a receiving submodule configured to receive a response message, sent by the server, carrying the rotational angular velocity of the terminal.
According to a third aspect of the embodiments of the present disclosure, there is provided a video encoding apparatus comprising:
a processor;
a memory for storing processor-executable instructions;
wherein the processor is configured to:
acquiring the rotational angular velocity of a terminal and the true values of a first image frame and a second image frame that are adjacent frames to be encoded in a video sequence captured by a camera of the terminal;
determining a motion vector of the first image frame according to the rotational angular velocity;
performing motion compensation on the second image frame according to the motion vector, the true value of the first image frame, and the true value of the second image frame to obtain a motion residual;
encoding the motion vector and the motion residual.
According to a fourth aspect of embodiments of the present disclosure, there is provided a computer-readable storage medium having stored thereon computer instructions which, when executed by a processor, implement the steps of the method embodiments of any one of the above-mentioned first aspects.
It is to be understood that both the foregoing general description and the following detailed description are exemplary and explanatory only and are not restrictive of the disclosure.
Drawings
The accompanying drawings, which are incorporated in and constitute a part of this specification, illustrate embodiments consistent with the present disclosure and together with the description, serve to explain the principles of the disclosure.
Fig. 1 is a flow chart illustrating a video encoding method according to an example embodiment.
Fig. 2 is a flow chart illustrating a video encoding method according to an example embodiment.
Fig. 3 is a block diagram illustrating a video encoding apparatus according to an example embodiment.
Fig. 4 is a block diagram illustrating a video encoding apparatus according to an example embodiment.
Fig. 5 is a block diagram illustrating a video encoding apparatus according to an example embodiment.
Fig. 6 is a block diagram illustrating a video encoding apparatus according to an example embodiment.
Fig. 7 is a block diagram illustrating an apparatus according to an example embodiment.
Fig. 8 is a block diagram illustrating an apparatus according to an example embodiment.
Detailed Description
Reference will now be made in detail to the exemplary embodiments, examples of which are illustrated in the accompanying drawings. When the following description refers to the accompanying drawings, like numbers in different drawings represent the same or similar elements unless otherwise indicated. The implementations described in the exemplary embodiments below are not intended to represent all implementations consistent with the present disclosure. Rather, they are merely examples of apparatus and methods consistent with certain aspects of the present disclosure, as detailed in the appended claims.
In the related art, the motion compensation used in video coding is usually content-based: motion prediction is performed on the image content in order to remove temporal/spatial redundancy. However, because such motion compensation must analyze and process the image content, its effectiveness is strongly affected by image quality; to keep the computational complexity under control, the search range used during motion prediction must be limited, which degrades performance when the image undergoes large motion; and content-based motion compensation is time-consuming and power-hungry, which is a serious challenge for mobile devices such as mobile phones.
In order to solve the above problems, an embodiment of the present disclosure provides a video encoding method. The method includes: acquiring the rotational angular velocity of a terminal and the true values of a first image frame and a second image frame that are adjacent frames to be encoded in a video sequence captured by a camera of the terminal; determining a motion vector of the first image frame according to the rotational angular velocity; performing motion compensation on the second image frame according to the motion vector, the true value of the first image frame, and the true value of the second image frame to obtain a motion residual; and encoding the motion vector and the motion residual. According to the embodiments of the present disclosure, the motion vector of the first image frame is determined by analyzing the rotational angular velocity of the terminal, and the motion vector is then used for motion compensation. Motion estimation and motion compensation can therefore be performed based on the rotational angular velocity of the terminal, without specifically analyzing and processing the image content; since the motion of the image frame is known from the rotational angular velocity of the terminal, the motion residual only needs to be calculated once. This significantly reduces the computational complexity, power consumption, and time cost of motion compensation in video encoding, markedly improves motion-compensation performance in dim lighting and under severe terminal motion, and improves video compression efficiency.
It should be noted that the terminal in the present disclosure may include, for example, an electronic device such as a smart phone, a tablet computer, a notebook computer, or a wearable device (such as a bracelet, smart glasses, etc.).
Based on the above analysis, the following specific examples are proposed.
FIG. 1 is a flow chart illustrating a video encoding method according to an exemplary embodiment. The method may be executed by a terminal. As shown in fig. 1, the method includes the following steps 101-104:
In step 101, the rotational angular velocity of the terminal and the true values of a first image frame and a second image frame that are adjacent frames to be encoded in a video sequence captured by a camera of the terminal are acquired.
Illustratively, the rotational angular velocity includes any one or a combination of the following: roll angle, pitch angle, or yaw angle. The rotational angular velocity of the terminal is the angular velocity at which the terminal deflects or tilts. For example, a varying roll angle is obtained when the terminal rocks side to side, a varying pitch angle is obtained when the terminal swings back and forth, and a varying yaw angle is obtained when the terminal's screen is rotated.
For example, the rotational angular velocity of the terminal may be acquired in any one or a combination of the following ways:
Implementation 1: the terminal calls its angular velocity sensor to measure its rotational angular velocity. Because the rotational angular velocity is obtained from a hardware component already present on the terminal, and motion estimation and motion compensation are then performed based on it, the scheme is simple and easy to implement.
Implementation 2: the terminal sends a request message to a server, where the request message requests the server to return the rotational angular velocity of the terminal. The server may monitor the motion of the terminal through monitoring equipment to obtain its rotational angular velocity; after receiving the request message, the server returns to the terminal a response message carrying the rotational angular velocity. The terminal receives the response message and thereby obtains its rotational angular velocity.
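Illustratively, and purely as a non-limiting sketch, the request/response exchange of implementation 2 could be implemented as follows in Python. The disclosure does not specify a message format, so the JSON-over-HTTP exchange, the endpoint name /angular_velocity, and the field names below are assumptions made only for illustration.

    # Minimal sketch of implementation 2, assuming a JSON-over-HTTP exchange.
    # The endpoint, field names, and payload layout are illustrative assumptions;
    # the disclosure only describes a request/response pattern.
    import requests

    def fetch_rotational_angular_velocity(server_url: str, terminal_id: str) -> dict:
        """Ask the server to return the terminal's rotational angular velocity."""
        response = requests.post(
            f"{server_url}/angular_velocity",     # hypothetical endpoint
            json={"terminal_id": terminal_id},    # request message
            timeout=1.0,
        )
        response.raise_for_status()
        # Response message, assumed to carry roll/pitch/yaw rates in rad/s.
        return response.json()  # e.g. {"roll": 0.02, "pitch": -0.10, "yaw": 0.35}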
In step 102, a motion vector of the first image frame is determined from the rotational angular velocity.
For example, after the rotational angular velocity of the terminal is obtained, the motion of the entire first image frame is estimated from it, yielding the motion vector of the first image frame, i.e., the translation or rotation of the true value of the first image frame that corresponds to the terminal's yaw and tilt.
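The disclosure does not spell out how the angular velocity is converted into a pixel-level motion vector. The sketch below assumes a pinhole camera model with focal length f (in pixels) and frame interval dt (in seconds), under which yaw and pitch rotations translate the image by roughly f·ω·dt pixels and the roll component appears as an in-plane rotation of ω_roll·dt radians; this small-angle mapping is an assumption, not a formula given in the disclosure.

    # Minimal sketch of step 102 under a pinhole-camera, small-angle assumption.
    from dataclasses import dataclass

    @dataclass
    class MotionVector:
        tx: float      # horizontal translation in pixels
        ty: float      # vertical translation in pixels
        angle: float   # in-plane rotation in radians (from the roll component)

    def motion_vector_from_angular_velocity(roll_rate: float, pitch_rate: float,
                                            yaw_rate: float, f: float,
                                            dt: float) -> MotionVector:
        """Estimate the global motion of the first image frame from the
        terminal's rotational angular velocity."""
        return MotionVector(tx=f * yaw_rate * dt,
                            ty=f * pitch_rate * dt,
                            angle=roll_rate * dt)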
In step 103, the second image frame is motion compensated according to the motion vector, the true value of the first image frame and the true value of the second image frame, so as to obtain a motion residual.
For example, a predicted value of the second image frame is determined from the true value of the first image frame and the motion vector; the difference between the true value of the second image frame and its predicted value is then calculated, and the calculated difference is determined as the motion residual.
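A minimal sketch of step 103 follows, assuming the motion vector reduces to a pure integer-pixel translation; sub-pixel shifts, the roll-induced rotation, and border handling are omitted for brevity and would be needed in a full implementation.

    # Predict the second frame by displacing the first frame by the motion
    # vector, then take the residual. Integer-pixel translation only.
    import numpy as np

    def motion_compensate(frame1: np.ndarray, frame2: np.ndarray,
                          tx: int, ty: int) -> tuple[np.ndarray, np.ndarray]:
        """Return (predicted second frame, motion residual)."""
        # Predicted value of the second frame: the first frame shifted by (tx, ty).
        predicted = np.roll(frame1, shift=(ty, tx), axis=(0, 1))
        # Motion residual: true value of the second frame minus its predicted value.
        residual = frame2.astype(np.int16) - predicted.astype(np.int16)
        return predicted, residual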
In step 104, the motion vector and the motion residual are encoded.
Illustratively, the second image frame is predicted and compensated from the first image frame, so the true value of the second image frame is represented by the true value of the first image frame together with the motion vector and the motion residual. During encoding, only the motion vector and the motion residual are encoded, which removes the redundant information in the second image frame and improves the compression ratio.
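To make this concrete, the sketch below shows how a decoder could rebuild the second frame from nothing more than the already-decoded first frame, the motion vector, and the residual. How the two quantities are entropy-coded is left open, since the disclosure does not fix a particular codec, and the integer-translation model is the same assumption as in the previous sketch.

    # Decoder-side sketch: reconstruct the second frame from (motion vector, residual).
    import numpy as np

    def reconstruct_second_frame(frame1: np.ndarray, tx: int, ty: int,
                                 residual: np.ndarray) -> np.ndarray:
        predicted = np.roll(frame1, shift=(ty, tx), axis=(0, 1))
        # Add the residual back and clip to the valid 8-bit pixel range.
        return np.clip(predicted.astype(np.int16) + residual, 0, 255).astype(np.uint8)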
According to the technical solution provided by this embodiment of the present disclosure, the motion vector of the first image frame is determined by analyzing the rotational angular velocity of the terminal, and the motion vector is then used for motion compensation. Motion estimation and motion compensation can therefore be performed based on the rotational angular velocity of the terminal, without specifically analyzing and processing the image content; since the motion of the image frame is known from the rotational angular velocity of the terminal, the motion residual only needs to be calculated once. This significantly reduces the computational complexity, power consumption, and time cost of motion compensation in video encoding, markedly improves motion-compensation performance in dim lighting and under severe terminal motion, and improves video compression efficiency.
Fig. 2 is a flow chart illustrating a video encoding method according to an example embodiment. As shown in fig. 2, based on the embodiment shown in fig. 1, the video encoding method according to the present disclosure may include the following steps 201-206:
in step 201, an angular velocity sensor of the terminal is called to measure and obtain a rotation angular velocity of the terminal.
In step 202, the true values of a first image frame and a second image frame that are adjacent frames to be encoded in a video sequence captured by a camera of the terminal are acquired.
In step 203, a motion vector of the first image frame is determined from the rotational angular velocity.
In step 204, a prediction value of the second image frame is determined based on the true value and the motion vector of the first image frame.
In step 205, a difference between the true value of the second image frame and the predicted value of the second image frame is calculated, and the calculated difference is determined as a motion residual.
In step 206, the motion vector and the motion residual are encoded.
According to the technical solution provided by this embodiment of the present disclosure, the rotational angular velocity of the terminal is obtained from a hardware component already present on the terminal, and motion estimation and motion compensation are then performed based on it. The scheme is simple and easy to implement, significantly reduces the computational complexity, power consumption, and time cost of motion compensation in video encoding, improves motion-compensation performance in dim lighting and under severe terminal motion, and improves video compression efficiency.
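Putting steps 201-206 together, a per-frame encoding loop might look like the sketch below. The gyroscope read-out is stubbed, the pinhole mapping from angular velocity to pixels is the same assumption as in the earlier sketch, and the final entropy-coding stage is not shown, since none of these details are fixed by the disclosure.

    # Self-contained sketch of one pass through steps 201-206.
    import numpy as np

    def read_gyroscope() -> tuple[float, float, float]:
        """Stub for step 201: a real implementation would query the terminal's
        angular velocity sensor; fixed values are used here for illustration."""
        return 0.0, -0.05, 0.30   # roll, pitch, yaw rates in rad/s (made up)

    def encode_frame_pair(frame1: np.ndarray, frame2: np.ndarray,
                          f: float, dt: float) -> dict:
        roll, pitch, yaw = read_gyroscope()                              # step 201
        tx, ty = int(round(f * yaw * dt)), int(round(f * pitch * dt))    # step 203
        predicted = np.roll(frame1, shift=(ty, tx), axis=(0, 1))         # step 204
        residual = frame2.astype(np.int16) - predicted.astype(np.int16)  # step 205
        # Step 206: the motion vector and residual would then be handed to the
        # entropy coder of whatever codec is in use (not specified here).
        return {"motion_vector": (tx, ty), "residual": residual}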
The following are embodiments of the disclosed apparatus that may be used to perform embodiments of the disclosed methods.
FIG. 3 is a block diagram illustrating a video encoding apparatus according to an example embodiment. The apparatus may be implemented in various ways, for example with all of its components implemented in a terminal, or with its components coupled to the terminal side. The apparatus may implement the methods of the present disclosure through software, hardware, or a combination of the two. As shown in fig. 3, the video encoding apparatus includes: an obtaining module 301, a determining module 302, a motion compensation module 303, and an encoding module 304, wherein:
the obtaining module 301 is configured to obtain the rotational angular velocity of the terminal and the true values of a first image frame and a second image frame that are adjacent frames to be encoded in a video sequence captured by a camera of the terminal;
the determination module 302 is configured to determine a motion vector of the first image frame from the rotational angular velocity;
the motion compensation module 303 is configured to perform motion compensation on the second image frame according to the motion vector, the true value of the first image frame, and the true value of the second image frame, so as to obtain a motion residual;
the encoding module 304 is configured to encode the motion vectors and the motion residuals.
The apparatus provided by this embodiment of the disclosure may be used to execute the technical solution of the embodiment shown in fig. 1; its manner of execution and beneficial effects are similar and are not repeated here.
In one possible implementation, as shown in fig. 4, in the video encoding apparatus shown in fig. 3, the motion compensation module 303 may include: a determination submodule 401 and a calculation submodule 402, wherein:
the determination submodule 401 is configured to determine a prediction value of the second image frame from the true value and the motion vector of the first image frame;
the calculation submodule 402 is configured to calculate a difference between a true value of the second image frame and a predicted value of the second image frame, and determine the calculated difference value as a motion residual.
In a possible implementation manner, the obtaining module 301 obtains the rotation angular velocity of the terminal by calling an angular velocity sensor of the terminal to measure.
In one possible implementation, as shown in fig. 5, in the video encoding apparatus shown in fig. 3, the obtaining module 301 may include: a sending submodule 501 and a receiving submodule 502, wherein:
the sending submodule 501 is configured to send a request message to the server, where the request message is used to request the server to return the rotational angular velocity of the terminal;
the receiving submodule 502 is configured to receive a response message, sent by the server, carrying the rotational angular velocity of the terminal.
Fig. 6 is a block diagram illustrating a video encoding apparatus 600 according to an exemplary embodiment, the video encoding apparatus 600 being applied to a terminal, the video encoding apparatus 600 including:
a processor 601;
a memory 602 for storing processor-executable instructions;
wherein the processor 601 is configured to:
acquiring the rotational angular velocity of a terminal and the true values of a first image frame and a second image frame that are adjacent frames to be encoded in a video sequence captured by a camera of the terminal;
determining a motion vector of the first image frame according to the rotational angular velocity;
performing motion compensation on the second image frame according to the motion vector, the true value of the first image frame, and the true value of the second image frame to obtain a motion residual;
encoding the motion vector and the motion residual.
In one embodiment, the processor 601 may be further configured to:
determining a predicted value of the second image frame according to the true value and the motion vector of the first image frame;
and calculating the difference between the true value of the second image frame and the predicted value of the second image frame, and determining the calculated difference as a motion residual.
In one embodiment, the processor 601 may be further configured to:
and calling an angular velocity sensor of the terminal to measure and obtain the rotation angular velocity of the terminal.
In one embodiment, the processor 601 may be further configured to:
sending a request message to a server, wherein the request message is used for requesting the server to return the rotational angular velocity of the terminal;
and receiving a response message, sent by the server, carrying the rotational angular velocity of the terminal.
In one embodiment, the rotational angular velocity includes any one or a combination of the following: roll angle, pitch angle, or yaw angle.
With regard to the apparatus in the above-described embodiment, the specific manner in which each module performs the operation has been described in detail in the embodiment related to the method, and will not be elaborated here.
FIG. 7 is a block diagram illustrating an apparatus in accordance with an exemplary embodiment; the apparatus 700 is applicable to a terminal; the apparatus 700 may include one or more of the following components: processing component 702, memory 704, power component 706, multimedia component 708, audio component 710, input/output (I/O) interface 712, sensor component 714, and communications component 716.
The processing component 702 generally controls overall operation of the device 700, such as operations associated with display, telephone calls, data communications, camera operations, and recording operations. The processing components 702 may include one or more processors 720 to execute instructions to perform all or a portion of the steps of the methods described above. Further, the processing component 702 may include one or more modules that facilitate interaction between the processing component 702 and other components. For example, the processing component 702 may include a multimedia module to facilitate interaction between the multimedia component 708 and the processing component 702.
The memory 704 is configured to store various types of data to support operations at the apparatus 700. Examples of such data include instructions for any application or method operating on device 700, contact data, phonebook data, messages, pictures, videos, and so forth. The memory 704 may be implemented by any type or combination of volatile or non-volatile memory devices such as Static Random Access Memory (SRAM), electrically erasable programmable read-only memory (EEPROM), erasable programmable read-only memory (EPROM), programmable read-only memory (PROM), read-only memory (ROM), magnetic memory, flash memory, magnetic or optical disks.
The power supply component 706 provides power to the various components of the device 700. The power components 706 may include a power management system, one or more power supplies, and other components associated with generating, managing, and distributing power for the apparatus 700.
The multimedia component 708 includes a screen that provides an output interface between the device 700 and the user. In some embodiments, the screen may include a Liquid Crystal Display (LCD) and a Touch Panel (TP). If the screen includes a touch panel, the screen may be implemented as a touch screen to receive an input signal from a user. The touch panel includes one or more touch sensors to sense touch, slide, and gestures on the touch panel. The touch sensor may not only sense the boundary of a touch or slide action, but also detect the duration and pressure associated with the touch or slide operation. In some embodiments, the multimedia component 708 includes a front facing camera and/or a rear facing camera. The front camera and/or the rear camera may receive external multimedia data when the device 700 is in an operation mode, such as a photographing mode or a video mode. Each front camera and rear camera may be a fixed optical lens system or have a focal length and optical zoom capability.
The audio component 710 is configured to output and/or input audio signals. For example, audio component 710 includes a Microphone (MIC) configured to receive external audio signals when apparatus 700 is in an operational mode, such as a call mode, a recording mode, and a voice recognition mode. The received audio signal may further be stored in the memory 704 or transmitted via the communication component 716. In some embodiments, audio component 710 also includes a speaker for outputting audio signals.
The I/O interface 712 provides an interface between the processing component 702 and peripheral interface modules, which may be keyboards, click wheels, buttons, etc. These buttons may include, but are not limited to: a home button, a volume button, a start button, and a lock button.
The sensor assembly 714 includes one or more sensors for providing various aspects of status assessment for the device 700. For example, sensor assembly 714 may detect an open/closed state of device 700, the relative positioning of components, such as a display and keypad of device 700, the change in position of device 700 or a component of device 700, the presence or absence of user contact with device 700, the orientation or acceleration/deceleration of device 700, and the change in temperature of device 700. The sensor assembly 714 may include a proximity sensor configured to detect the presence of a nearby object without any physical contact. The sensor assembly 714 may also include a light sensor, such as a CMOS or CCD image sensor, for use in imaging applications. In some embodiments, the sensor assembly 714 may also include an acceleration sensor, a gyroscope sensor, a magnetic sensor, a pressure sensor, or a temperature sensor.
The communication component 716 is configured to facilitate communication between the apparatus 700 and other devices in a wired or wireless manner. The apparatus 700 may access a wireless network based on a communication standard, such as WiFi, 2G, 3G, 4G, or 5G, or a combination thereof. In an exemplary embodiment, the communication component 716 receives a broadcast signal or broadcast related information from an external broadcast management system via a broadcast channel. In an exemplary embodiment, the communication component 716 further includes a Near Field Communication (NFC) module to facilitate short-range communications. For example, the NFC module may be implemented based on Radio Frequency Identification (RFID) technology, infrared data association (IrDA) technology, Ultra Wideband (UWB) technology, Bluetooth (BT) technology, and other technologies.
In an exemplary embodiment, the apparatus 700 may be implemented by one or more Application Specific Integrated Circuits (ASICs), Digital Signal Processors (DSPs), Digital Signal Processing Devices (DSPDs), Programmable Logic Devices (PLDs), Field Programmable Gate Arrays (FPGAs), controllers, micro-controllers, microprocessors or other electronic components for performing the above-described methods.
In an exemplary embodiment, a non-transitory computer-readable storage medium comprising instructions, such as the memory 704 comprising instructions, executable by the processor 720 of the device 700 to perform the above-described method is provided. For example, the non-transitory computer readable storage medium may be a ROM, a Random Access Memory (RAM), a CD-ROM, a magnetic tape, a floppy disk, an optical data storage device, and the like.
FIG. 8 is a block diagram illustrating an apparatus in accordance with an example embodiment. For example, the apparatus 800 may be provided as a server. The apparatus 800 comprises a processing component 802 that further comprises one or more processors, and memory resources, represented by memory 803, for storing instructions, e.g., applications, that are executable by the processing component 802. The application programs stored in the memory 803 may include one or more modules that each correspond to a set of instructions. Further, the processing component 802 is configured to execute instructions to perform the above-described methods.
The device 800 may also include a power component 806 configured to perform power management of the device 800, a wired or wireless network interface 805 configured to connect the device 800 to a network, and an input/output (I/O) interface 808. The apparatus 800 may operate based on an operating system stored in the memory 803, such as Windows Server, Mac OS X™, Unix™, Linux™, FreeBSD™, or the like.
A non-transitory computer readable storage medium, wherein instructions in the storage medium, when executed by a processor of the apparatus 700 or 800, enable the apparatus 700 or 800 to perform a method comprising:
acquiring the rotational angular velocity of a terminal and the true values of a first image frame and a second image frame that are adjacent frames to be encoded in a video sequence captured by a camera of the terminal;
determining a motion vector of the first image frame according to the rotation angular velocity;
performing motion compensation on the second image frame according to the motion vector, the true value of the first image frame, and the true value of the second image frame to obtain a motion residual;
the motion vector and the motion residual are encoded.
In one embodiment, performing motion compensation on the second image frame according to the motion vector, the true value of the first image frame, and the true value of the second image frame to obtain a motion residual includes:
determining a predicted value of the second image frame according to the true value and the motion vector of the first image frame;
and calculating the difference between the true value of the second image frame and the predicted value of the second image frame, and determining the calculated difference as a motion residual.
In one embodiment, acquiring the rotational angular velocity of the terminal includes:
and calling an angular velocity sensor of the terminal to measure and obtain the rotation angular velocity of the terminal.
In one embodiment, acquiring the rotational angular velocity of the terminal includes:
sending a request message to a server, wherein the request message is used for requesting the server to return the rotational angular velocity of the terminal;
and receiving a response message, sent by the server, carrying the rotational angular velocity of the terminal.
In one embodiment, the rotational angular velocity includes any one or a combination of the following parameters: roll angle, pitch angle, or yaw angle.
Other embodiments of the disclosure will be apparent to those skilled in the art from consideration of the specification and practice of the disclosure disclosed herein. This application is intended to cover any variations, uses, or adaptations of the disclosure following, in general, the principles of the disclosure and including such departures from the present disclosure as come within known or customary practice within the art to which the disclosure pertains. It is intended that the specification and examples be considered as exemplary only, with a true scope and spirit of the disclosure being indicated by the following claims.
It will be understood that the present disclosure is not limited to the precise arrangements described above and shown in the drawings and that various modifications and changes may be made without departing from the scope thereof. The scope of the present disclosure is limited only by the appended claims.

Claims (5)

1. A video encoding method, comprising:
acquiring the rotational angular velocity of a terminal and the true values of a first image frame and a second image frame that are adjacent frames to be encoded in a video sequence captured by a camera of the terminal;
determining a motion vector of the first image frame according to the rotational angular velocity of the terminal, and determining a predicted value of the second image frame according to the true value of the first image frame and the motion vector; calculating the difference between the true value of the second image frame and the predicted value of the second image frame, and determining the calculated difference as a motion residual; wherein determining the motion vector of the first image frame according to the rotational angular velocity of the terminal comprises: after the rotational angular velocity of the terminal is acquired, estimating the motion of the entire first image frame based on the rotational angular velocity to obtain the motion vector of the first image frame;
encoding the motion vector and the motion residual;
the method for acquiring the rotation angular speed of the terminal comprises any one of the following modes or combinations:
calling an angular velocity sensor of the terminal to measure and obtain the rotation angular velocity of the terminal;
sending a request message to a server, wherein the request message is used for requesting the server to return the rotation angular speed of the terminal; and receiving a response message which is sent by the server and carries the rotation angular speed of the terminal.
2. The method of claim 1, wherein the rotational angular velocity comprises any one or a combination of the following parameters: roll angle, pitch angle, or yaw angle.
3. A video encoding apparatus, comprising:
an acquisition module configured to acquire the rotational angular velocity of a terminal and the true values of a first image frame and a second image frame that are adjacent frames to be encoded in a video sequence captured by a camera of the terminal;
a determining module configured to determine a motion vector of the first image frame according to the rotational angular velocity of the terminal, including: after the rotational angular velocity of the terminal is acquired, estimating the motion of the entire first image frame based on the rotational angular velocity to obtain the motion vector of the first image frame;
a motion compensation module configured to perform motion compensation on the second image frame according to the motion vector, the true value of the first image frame, and the true value of the second image frame, to obtain a motion residual;
an encoding module configured to encode the motion vector and the motion residual;
the motion compensation module comprises:
the determining submodule is used for determining a predicted value of the second image frame according to the real value of the first image frame and the motion vector;
the calculation submodule is used for calculating the difference between the real value of the second image frame and the predicted value of the second image frame and determining the calculated difference value as a motion residual error;
the acquiring module acquires the rotation angular speed of the terminal, and the acquiring module comprises any one of the following modes or combinations:
calling an angular velocity sensor of the terminal to measure and obtain the rotation angular velocity of the terminal;
sending a request message to a server, wherein the request message is used for requesting the server to return the rotation angular speed of the terminal; and receiving a response message which is sent by the server and carries the rotation angular speed of the terminal.
4. A video encoding apparatus, comprising:
a processor;
a memory for storing processor-executable instructions;
wherein the processor is configured to:
acquiring the rotational angular velocity of a terminal and the true values of a first image frame and a second image frame that are adjacent frames to be encoded in a video sequence captured by a camera of the terminal;
determining a motion vector of the first image frame according to the rotational angular velocity of the terminal, and determining a predicted value of the second image frame according to the true value of the first image frame and the motion vector; calculating the difference between the true value of the second image frame and the predicted value of the second image frame, and determining the calculated difference as a motion residual; wherein determining the motion vector of the first image frame according to the rotational angular velocity of the terminal comprises: after the rotational angular velocity of the terminal is acquired, estimating the motion of the entire first image frame based on the rotational angular velocity to obtain the motion vector of the first image frame;
encoding the motion vector and the motion residual;
the method for acquiring the rotation angular speed of the terminal comprises any one of the following modes or combinations:
calling an angular velocity sensor of the terminal to measure and obtain the rotation angular velocity of the terminal;
sending a request message to a server, wherein the request message is used for requesting the server to return the rotation angular speed of the terminal; and receiving a response message which is sent by the server and carries the rotation angular speed of the terminal.
5. A computer-readable storage medium having stored thereon computer instructions, which, when executed by a processor, carry out the steps of the method of any one of claims 1-2.
CN201811425053.2A 2018-11-27 2018-11-27 Video coding method and device Active CN111225208B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201811425053.2A CN111225208B (en) 2018-11-27 2018-11-27 Video coding method and device

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201811425053.2A CN111225208B (en) 2018-11-27 2018-11-27 Video coding method and device

Publications (2)

Publication Number Publication Date
CN111225208A CN111225208A (en) 2020-06-02
CN111225208B (en) 2022-09-02

Family

ID=70830323

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201811425053.2A Active CN111225208B (en) 2018-11-27 2018-11-27 Video coding method and device

Country Status (1)

Country Link
CN (1) CN111225208B (en)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112911294A (en) * 2021-03-22 2021-06-04 杭州灵伴科技有限公司 Video encoding method, video decoding method using IMU data, XR device and computer storage medium

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN1906948A (en) * 2004-03-15 2007-01-31 三星电子株式会社 Image coding apparatus and method for predicting motion using rotation matching
CN105100585A (en) * 2014-05-20 2015-11-25 株式会社东芝 Camera module and image sensor
CN105284101A (en) * 2013-04-10 2016-01-27 微软技术许可有限责任公司 Motion blur-free capture of low light high dynamic range images
CN107750451A (en) * 2015-07-27 2018-03-02 三星电子株式会社 For stablizing the method and electronic installation of video
EP3301928A1 (en) * 2016-09-30 2018-04-04 Thomson Licensing Methods, devices and stream to encode global rotation motion compensated images

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US9451163B2 (en) * 2012-05-11 2016-09-20 Qualcomm Incorporated Motion sensor assisted rate control for video encoding

Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN1906948A (en) * 2004-03-15 2007-01-31 三星电子株式会社 Image coding apparatus and method for predicting motion using rotation matching
CN105284101A (en) * 2013-04-10 2016-01-27 微软技术许可有限责任公司 Motion blur-free capture of low light high dynamic range images
CN105100585A (en) * 2014-05-20 2015-11-25 株式会社东芝 Camera module and image sensor
CN107750451A (en) * 2015-07-27 2018-03-02 三星电子株式会社 For stablizing the method and electronic installation of video
EP3301928A1 (en) * 2016-09-30 2018-04-04 Thomson Licensing Methods, devices and stream to encode global rotation motion compensated images

Also Published As

Publication number Publication date
CN111225208A (en) 2020-06-02

Similar Documents

Publication Publication Date Title
CN110708559B (en) Image processing method, device and storage medium
US11202072B2 (en) Video encoding method, apparatus, and device, and storage medium
US11388403B2 (en) Video encoding method and apparatus, storage medium, and device
CN110827253A (en) Training method and device of target detection model and electronic equipment
CN110536168B (en) Video uploading method and device, electronic equipment and storage medium
CN108881952B (en) Video generation method and device, electronic equipment and storage medium
CN105049219B (en) Flow booking method and system, mobile terminal and server
CN111953980B (en) Video processing method and device
CN110611820A (en) Video coding method and device, electronic equipment and storage medium
CN109120929B (en) Video encoding method, video decoding method, video encoding device, video decoding device, electronic equipment and video encoding system
CN115052150A (en) Video encoding method, video encoding device, electronic equipment and storage medium
CN111225208B (en) Video coding method and device
CN110515623B (en) Method and device for realizing graphic operation, electronic equipment and storage medium
CN110311961B (en) Information sharing method and system, client and server
CN110177275B (en) Video encoding method and apparatus, and storage medium
CN115297333B (en) Inter-frame prediction method and device of video data, electronic equipment and storage medium
CN109255839B (en) Scene adjustment method and device
CN111859097A (en) Data processing method and device, electronic equipment and storage medium
CN112954293B (en) Depth map acquisition method, reference frame generation method, encoding and decoding method and device
CN110460856B (en) Video encoding method, video encoding device, video encoding apparatus, and computer-readable storage medium
CN110213531B (en) Monitoring video processing method and device
CN110166797B (en) Video transcoding method and device, electronic equipment and storage medium
CN112884813A (en) Image processing method, device and storage medium
CN114885192A (en) Video processing method, video processing apparatus, and storage medium
CN111698262A (en) Bandwidth determination method, device, terminal and storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant