CN113271465A - Sub-pixel motion estimation method and apparatus, computer device, and medium - Google Patents
- Publication number
- CN113271465A (application CN202110542053.6A)
- Authority
- CN
- China
- Legal status
- Granted
Classifications
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N19/00—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
- H04N19/42—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals characterised by implementation details or hardware specially adapted for video compression or decompression, e.g. dedicated software implementation
- H04N19/43—Hardware specially adapted for motion estimation or compensation
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N19/00—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
- H04N19/10—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding
- H04N19/169—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the coding unit, i.e. the structural portion or semantic portion of the video signal being the object or the subject of the adaptive coding
- H04N19/182—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the coding unit, i.e. the structural portion or semantic portion of the video signal being the object or the subject of the adaptive coding the unit being a pixel
Landscapes
- Engineering & Computer Science (AREA)
- Multimedia (AREA)
- Signal Processing (AREA)
- Compression Or Coding Systems Of Tv Signals (AREA)
Abstract
The disclosure provides a sub-pixel motion estimation method and apparatus, a computer device, and a medium, relating to the technical field of cloud computing and further to the fields of media cloud and video encoding/decoding. The scheme is as follows: determining a starting point of a sub-pixel search in a reference image and the integer pixel points distributed in the reference image; determining a sub-pixel search pattern corresponding to the relative positional relationship between the starting point and the integer pixel points; and searching, based on the determined sub-pixel search pattern, for a target sub-pixel point in the reference image for motion estimation.
Description
Technical Field
The present disclosure relates to the field of cloud computing technologies, and further relates to the field of media cloud and video encoding and decoding, and in particular, to a method and an apparatus for sub-pixel motion estimation, a computing device, a computer-readable storage medium, and a computer program product.
Background
A video encoder compresses the original data so as to obtain as little reconstruction distortion as possible, or as low a bit rate as possible. Many new techniques are employed for this purpose, such as more complex inter-frame prediction algorithms, variable-block-size motion compensation, multi-mode block partitioning, variable-size block transforms, rate-distortion optimization, sub-pixel interpolation, sub-pixel motion estimation, sub-pixel motion compensation, and adaptive interpolation filtering. The improvement in compression performance comes at the cost of a large increase in computation, which is a serious obstacle to real-time video coding and communication.
The approaches described in this section are not necessarily approaches that have been previously conceived or pursued. Unless otherwise indicated, it should not be assumed that any of the approaches described in this section qualify as prior art merely by virtue of their inclusion in this section. Similarly, unless otherwise indicated, the problems mentioned in this section should not be considered as having been acknowledged in any prior art.
Disclosure of Invention
The present disclosure provides a method, an apparatus, a computing device, a computer-readable storage medium, and a computer program product for sub-pixel motion estimation.
According to an aspect of the present disclosure, there is provided a sub-pixel motion estimation method, including: determining a starting point of a sub-pixel search in a reference image and integer pixel points distributed in the reference image; determining a sub-pixel search pattern corresponding to the relative positional relationship between the starting point and the integer pixel points; and searching, based on the determined sub-pixel search pattern, for a target sub-pixel point in the reference image for motion estimation.
According to another aspect of the present disclosure, there is provided a sub-pixel motion estimation apparatus including: a first determining unit configured to determine a starting point of a sub-pixel search in a reference image and integer pixels distributed in the reference image; a second determination unit configured to determine a sub-pixel search pattern corresponding to a relative positional relationship between the start point and the integer pixel based on the relative positional relationship; and a searching unit configured to search for a target subpixel point for motion estimation in the reference image based on the determined subpixel search pattern.
According to another aspect of the present disclosure, there is provided a computer device including: a memory, a processor and a computer program stored on the memory, wherein the processor is configured to execute the computer program to implement the steps of the above-described method.
According to another aspect of the present disclosure, a non-transitory computer readable storage medium is provided, having a computer program stored thereon, wherein the computer program, when executed by a processor, implements the steps of the method described above.
According to another aspect of the present disclosure, a computer program product is provided, comprising a computer program, wherein the computer program realizes the steps of the above-described method when executed by a processor.
According to one or more embodiments of the present disclosure, the sub-pixel search pattern can be flexibly adjusted according to the relative positional relationship between the starting point and the integer pixel points distributed in the reference image, so that sub-pixel motion estimation is further optimized while coding quality is preserved, improving the efficiency of motion estimation in video coding.
It should be understood that the statements in this section do not necessarily identify key or critical features of the embodiments of the present disclosure, nor do they limit the scope of the present disclosure. Other features of the present disclosure will become apparent from the following description.
Drawings
The accompanying drawings, which are incorporated in and constitute a part of this specification, illustrate exemplary embodiments of the embodiments and, together with the description, serve to explain the exemplary implementations of the embodiments. The illustrated embodiments are for purposes of illustration only and do not limit the scope of the claims. Throughout the drawings, identical reference numbers designate similar, but not necessarily identical, elements.
FIG. 1 illustrates a full search based sub-pixel motion estimation method according to an embodiment of the present disclosure;
FIG. 2 illustrates a schematic diagram of an exemplary system in which various methods described herein may be implemented, according to an embodiment of the present disclosure;
FIG. 3 shows a flow diagram of a sub-pixel motion estimation method according to an embodiment of the present disclosure;
FIG. 4 shows a flow diagram of another sub-pixel motion estimation method according to an embodiment of the present disclosure;
FIG. 5 shows a schematic diagram of sub-pixel motion estimation according to an embodiment of the present disclosure;
FIG. 6A shows a schematic diagram of another sub-pixel motion estimation according to an embodiment of the present disclosure;
FIG. 6B shows a schematic diagram of another sub-pixel motion estimation according to an embodiment of the present disclosure;
FIG. 7 shows a schematic diagram of another sub-pixel motion estimation according to an embodiment of the present disclosure;
FIG. 8 is a block diagram of a sub-pixel motion estimation apparatus according to an embodiment of the present disclosure;
FIG. 9 illustrates a block diagram of an exemplary electronic device that can be used to implement embodiments of the present disclosure.
Detailed Description
Exemplary embodiments of the present disclosure are described below with reference to the accompanying drawings, in which various details of the embodiments of the disclosure are included to assist understanding, and which are to be considered as merely exemplary. Accordingly, those of ordinary skill in the art will recognize that various changes and modifications of the embodiments described herein can be made without departing from the scope of the present disclosure. Also, descriptions of well-known functions and constructions are omitted in the following description for clarity and conciseness.
In the present disclosure, unless otherwise specified, the use of the terms "first", "second", etc. to describe various elements is not intended to limit the positional relationship, the timing relationship, or the importance relationship of the elements, and such terms are used only to distinguish one element from another. In some examples, a first element and a second element may refer to the same instance of the element, and in some cases, based on the context, they may also refer to different instances.
The terminology used in the description of the various examples in this disclosure is for the purpose of describing particular examples only and is not intended to be limiting. Unless the context clearly indicates otherwise, if the number of elements is not specifically limited, the elements may be one or more. Furthermore, the term "and/or" as used in this disclosure is intended to encompass any and all possible combinations of the listed items.
Each image frame in a video to be encoded can be regarded as formed from discrete integer pixel points, where each integer pixel point represents the brightness and color in its vicinity. Macroscopically, integer pixels appear to be joined together, but microscopically there are gaps of a few to a dozen microns between them. To further subdivide the positions between integer pixel points, the concept of sub-pixel points is introduced between them. Sub-pixels can be further divided into 1/2 pixels, 1/4 pixels, 1/8 pixels, and so on. Sub-pixel subdivision can effectively compensate for the limitations of hardware and improve image resolution, and it is widely used in image processing; for example, in the motion estimation of video coding, more accurate motion estimation is performed by searching for a target sub-pixel point. In video coding, integer-pixel and sub-pixel motion estimation occupies 40-50% of the total encoding time, and this proportion is even higher for video with more intense motion.
In the related art, integer-pixel and sub-pixel motion estimation generally determines the target sub-pixel point by a full search over the sub-pixel search area. FIG. 1 illustrates an exemplary full-search-based sub-pixel motion estimation method. As shown in FIG. 1, 4 1/2-pixel points and 4 1/4-pixel points are taken as an example. The full-search-based sub-pixel motion estimation includes: determining the target integer pixel point 110 corresponding to the starting point using a preset template; with the determined target integer pixel point 110 as the center, computing the loss values of the target integer pixel point 110 and the 4 surrounding 1/2-pixel points (the black triangles in FIG. 1) to determine a target 1/2-pixel point, e.g. 1/2-pixel point 120; and with the target 1/2-pixel point 120 as the center, computing the loss values of the target 1/2-pixel point 120 and the 4 surrounding 1/4-pixel points (the black circles in FIG. 1) to determine a target 1/4-pixel point, e.g. 1/4-pixel point 130. In the related art, the same sub-pixel search pattern is used for any starting point, that is, every next-level sub-pixel point around the determined target integer pixel point or target sub-pixel point is searched indiscriminately. This search method involves a huge amount of computation, and because the sub-pixel search pattern cannot be adapted to each starting point, the efficiency of integer-pixel and sub-pixel motion estimation cannot be effectively improved.
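For illustration, the baseline full-search refinement described above can be sketched as follows. This is a simplified Python sketch rather than an actual encoder implementation; the cost function rd_cost is an assumed helper that returns the Rdcost of a candidate position, and positions are expressed in quarter-pixel units (an illustrative convention, with 4 units per integer pixel).

```python
# Simplified sketch of the full-search sub-pixel refinement described above.
# rd_cost(pos) is an assumed helper returning the Rdcost of a candidate position;
# positions are (x, y) tuples in quarter-pixel units (4 units = one integer pixel).

def full_search_subpel(best_int_pos, rd_cost):
    """Refine an integer-pixel result to quarter-pixel precision by exhaustively
    testing the 4 half-pixel and then the 4 quarter-pixel neighbours."""
    # Half-pixel step: compare the centre with its 4 half-pixel neighbours.
    half_offsets = [(0, 0), (-2, 0), (2, 0), (0, -2), (0, 2)]
    best_half = min(
        ((best_int_pos[0] + dx, best_int_pos[1] + dy) for dx, dy in half_offsets),
        key=rd_cost,
    )
    # Quarter-pixel step: compare the best half-pixel point with its 4 quarter-pixel neighbours.
    quarter_offsets = [(0, 0), (-1, 0), (1, 0), (0, -1), (0, 1)]
    return min(
        ((best_half[0] + dx, best_half[1] + dy) for dx, dy in quarter_offsets),
        key=rd_cost,
    )
```

Every candidate at each level is evaluated regardless of where the starting point lies, which is precisely the indiscriminate behaviour that the present disclosure avoids.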
In the present disclosure, a sub-pixel search pattern corresponding to the relative positional relationship between the starting point of the sub-pixel search and the integer pixel points in the reference image is determined, and the target sub-pixel point for motion estimation in the reference image is searched for based on the determined sub-pixel search pattern. In this way, the sub-pixel search pattern can be flexibly adjusted according to the relative positional relationship between the starting point and the integer pixel points in the reference image, further optimizing sub-pixel motion estimation while preserving coding quality and improving the efficiency of motion estimation in video coding.
Embodiments of the present disclosure will be described in detail below with reference to the accompanying drawings.
Fig. 2 illustrates a schematic diagram of an exemplary system 200 in which various methods and apparatus described herein may be implemented, according to an embodiment of the present disclosure. Referring to fig. 2, the system 200 includes one or more client devices 201, 202, 203, 204, 205, and 206, a server 220, and one or more communication networks 210 coupling the one or more client devices to the server 220. The client devices 201, 202, 203, 204, 205, and 206 may be configured to execute one or more applications.
In embodiments of the present disclosure, server 220 may run one or more services or software applications that enable the sub-pixel motion estimation method to be performed.
In some embodiments, server 220 may also provide other services or software applications that may include non-virtual environments and virtual environments. In certain embodiments, these services may be provided as web-based services or cloud services, such as provided to users of client devices 201, 202, 203, 204, 205, and/or 206 under a software as a service (SaaS) model.
In the configuration shown in fig. 2, server 220 may include one or more components that implement the functions performed by server 220. These components may include software components, hardware components, or a combination thereof, which may be executed by one or more processors. A user operating a client device 201, 202, 203, 204, 205, and/or 206 may, in turn, utilize one or more client applications to interact with server 220 to take advantage of the services provided by these components. It should be understood that a variety of different system configurations are possible, which may differ from system 200. Accordingly, fig. 2 is one example of a system for implementing the various methods described herein and is not intended to be limiting.
A user may use a client device 201, 202, 203, 204, 205, and/or 206 to obtain a video stream to be encoded. The client device may provide an interface that enables a user of the client device to interact with the client device. The client device may also output information to the user via the interface. Although fig. 2 depicts only six client devices, those skilled in the art will appreciate that any number of client devices may be supported by the present disclosure.
The computing units in server 220 may run one or more operating systems including any of the operating systems described above, as well as any commercially available server operating systems. Server 220 may also run any of a variety of additional server applications and/or middle tier applications, including HTTP servers, FTP servers, CGI servers, JAVA servers, database servers, and the like.
In some implementations, the server 220 may include one or more applications to analyze and consolidate data feeds and/or event updates received from users of the client devices 201, 202, 203, 204, 205, and 206. Server 220 may also include one or more applications to display data feeds and/or real-time events via one or more display devices of client devices 201, 202, 203, 204, 205, and 206.
In some embodiments, server 220 may be a server of a distributed system, or a server combined with a blockchain. Server 220 may also be a cloud server, or an intelligent cloud computing server or intelligent cloud host with artificial intelligence technology. A cloud server is a host product in the cloud computing service system that addresses the drawbacks of difficult management and poor service scalability found in traditional physical hosts and Virtual Private Server (VPS) services.
The system 200 may also include one or more databases 230. In some embodiments, these databases may be used to store data and other information. For example, one or more of the databases 230 may be used to store information such as audio files and video files. Data store 230 may reside in various locations. For example, the data store used by server 220 may be local to server 220, or may be remote from server 220 and communicate with server 220 via a network-based or dedicated connection. Data store 230 may be of different types. In certain embodiments, the data store used by server 220 may be a database, such as a relational database. One or more of these databases may store, update, and retrieve data in response to commands.
In some embodiments, one or more of databases 230 may also be used by applications to store application data. The databases used by the applications may be of different types, such as key-value stores, object stores, or regular stores supported by a file system.
The system 200 of fig. 2 may be configured and operated in various ways to enable application of the various methods and apparatus described in accordance with the present disclosure.
FIG. 3 illustrates a sub-pixel motion estimation method according to an exemplary embodiment of the present disclosure. As shown in FIG. 3, the method may include: step S301, determining a starting point of a sub-pixel search in a reference image and the integer pixel points distributed in the reference image; step S302, determining a sub-pixel search pattern corresponding to the relative positional relationship between the starting point and the integer pixel points; and step S303, searching for a target sub-pixel point in the reference image for motion estimation based on the determined sub-pixel search pattern. In this way, the sub-pixel search pattern can be flexibly adjusted according to the relative positional relationship between the starting point and the integer pixel points distributed in the reference image, so that sub-pixel motion estimation is further optimized while coding quality is preserved, improving the efficiency of motion estimation in video coding.
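As an illustration only, steps S301 to S303 can be tied together in a small driver routine. The helper names pick_start_point, select_search_pattern, and run_pattern_search are hypothetical names introduced here for clarity (sketches of the first two are given later in this description), not names used by the disclosure.

```python
# Illustrative driver for steps S301-S303. The helpers pick_start_point,
# select_search_pattern and run_pattern_search are assumed; motion vectors and
# positions are in quarter-pixel units.

def subpel_motion_estimation(int_mv, candidate_mvs, rd_cost):
    start = pick_start_point(int_mv, candidate_mvs)      # S301: starting point of the sub-pixel search
    pattern = select_search_pattern(start)               # S302: pattern from the position relative to the integer grid
    return run_pattern_search(pattern, start, rd_cost)   # S303: search the target sub-pixel point in the chosen sector
```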
For step S301, according to some embodiments, the starting point of the sub-pixel search may be determined from the motion vector (MV) obtained by integer-pixel motion estimation of the block to be encoded in the current image, together with the spatial-domain and temporal-domain motion information of neighboring blocks.
In one embodiment, a motion vector candidate list (Motion Vector Predictor Candidate List) may be constructed from the motion vectors of the spatial neighboring blocks of the block to be encoded in the current image and the motion vectors of the temporally collocated blocks of the block to be encoded in the collocated image. The MV obtained by integer-pixel motion estimation of the block to be encoded is compared with each candidate motion vector in the list, and the starting point of the sub-pixel search is determined from the position in the reference image pointed to by the closest candidate motion vector.
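A minimal sketch of this starting-point selection, assuming the candidate motion vectors of the spatial and temporal neighbours have already been collected into a list and that all motion vectors are expressed in quarter-pixel units, might look as follows; the function name is hypothetical.

```python
# Minimal sketch: choose the sub-pixel search starting point as the candidate MV
# closest to the integer-pixel motion estimation result. All MVs are (x, y) tuples
# in quarter-pixel units; this data layout is an assumption for illustration.

def pick_start_point(int_mv, candidate_mvs):
    def dist2(mv):
        return (mv[0] - int_mv[0]) ** 2 + (mv[1] - int_mv[1]) ** 2
    closest = min(candidate_mvs, key=dist2)
    return closest  # the position this candidate points to in the reference image
```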
After the starting point of the sub-pixel search and the one or more integer pixel points distributed in the reference image are determined in step S301, steps S302 and S303 may be performed in sequence: a sub-pixel search pattern corresponding to the relative positional relationship between the starting point and at least one of the one or more integer pixel points is determined, and the target sub-pixel point for motion estimation in the reference image is searched for based on the determined sub-pixel search pattern.
According to some embodiments, in the process of searching for the target sub-pixel point, the target points may be determined based on the error value Rdcost: among a plurality of integer pixel points, the one with the smallest Rdcost is taken as the target integer pixel point, and among a plurality of sub-pixel points, the one with the smallest Rdcost is taken as the target sub-pixel point.
Specifically, Rdcost is computed as:
RDcost = SATD/SAD + λ × bits
where λ is the Lagrange multiplier, bits is the number of bits required to encode the current motion vector, and SATD/SAD indicates that either SATD or SAD is used as the distortion term. In practice, SAD or SATD can be chosen according to the required precision: SAD is the sum of absolute values of the residual coefficients, and SATD is the sum of absolute values of the Hadamard-transformed residual coefficients. SATD involves more computation than SAD.
In one implementation, during the target sub-pixel search, Rdcost may be computed with SAD when determining the target integer pixel point and with SATD when determining the target sub-pixel point, so that the computation mode is adapted to the type of pixel point; this improves computational efficiency while preserving encoding quality.
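The following sketch illustrates the Rdcost computation with the two distortion measures; the 4×4 block size used for the Hadamard transform and the way the bit cost is passed in are illustrative assumptions.

```python
import numpy as np

# Sketch of Rdcost = SATD/SAD + lambda * bits. SAD is used for integer-pixel
# candidates and SATD (sum of absolute Hadamard-transformed residuals) for
# sub-pixel candidates, as suggested above. The 4x4 Hadamard size is illustrative.

H4 = np.array([[1,  1,  1,  1],
               [1, -1,  1, -1],
               [1,  1, -1, -1],
               [1, -1, -1,  1]])

def sad(cur, ref):
    """Sum of absolute differences of the residual block."""
    return int(np.abs(cur.astype(int) - ref.astype(int)).sum())

def satd_4x4(cur, ref):
    """Sum of absolute values of the Hadamard-transformed 4x4 residual."""
    resid = cur.astype(int) - ref.astype(int)
    return int(np.abs(H4 @ resid @ H4.T).sum())

def rd_cost(cur, ref, mv_bits, lam, use_satd):
    """Rdcost of a candidate: distortion plus lambda times the MV bit cost."""
    distortion = satd_4x4(cur, ref) if use_satd else sad(cur, ref)
    return distortion + lam * mv_bits
```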
According to some embodiments, each sub-pixel search pattern specifies a sector in the reference image, and searching for a target sub-pixel point in the reference image for motion estimation based on the determined sub-pixel search pattern may include: a search for a target subpixel point is performed within the sector specified by the determined subpixel search pattern.
After the relative positional relationship between the starting point of the sub-pixel search and at least one of the one or more integer pixel points is determined, sub-pixel points that are likely to have large error values under that positional relationship can be skipped directly during the target sub-pixel search. That is, instead of indiscriminately searching every next-level sub-pixel point around the determined target integer pixel point or target sub-pixel point, the search for the target sub-pixel point is confined to the designated sector corresponding to the positional relationship. In this way, the amount of computation in the sub-pixel search can be effectively reduced while coding quality is preserved, improving the efficiency of sub-pixel motion estimation.
FIG. 4 is a flowchart of sub-pixel motion estimation according to an exemplary embodiment of the present disclosure. As shown in FIG. 4, in an exemplary embodiment, the sub-pixel motion estimation process may proceed as follows:
S401: determine whether the starting point coincides with an integer pixel point; if the starting point coincides with any integer pixel point in the reference image, go to S402; otherwise, go to S403;
S402: execute the first search pattern to determine the corresponding search sector;
S403: determine whether the starting point lies on the horizontal or vertical line connecting two adjacent integer pixel points, where the two adjacent integer pixel points are the two integer pixel points closest to the starting point in the reference image; if so, go to S404; otherwise, go to S405;
S404: execute the second search pattern to determine the corresponding search sector;
S405: execute the third search pattern to determine the corresponding search sector;
S406: search for the target sub-pixel point within the determined search sector;
S407: determine the target sub-pixel point for motion estimation.
The "relative positional relationship between the starting point and the integer pixel points in the reference image" in step S302 may include at least one of the judgments in S401 and S403 above, and the "sub-pixel search pattern corresponding to the relative positional relationship" in step S302 may include the corresponding first, second, or third search pattern described above.
According to some embodiments, determining the sub-pixel search pattern corresponding to the relative positional relationship between the starting point and the integer pixel points may include: in response to the starting point coinciding with one of the integer pixel points in the reference image, determining the integer pixel point coinciding with the starting point as the first target integer pixel point; and performing the search within a first sector centered on the first target integer pixel point.
When the starting point coincides with one of the integer pixel points in the reference image, that integer pixel point can be directly determined as the first target integer pixel point of the sub-pixel search, so that the center of the first sector in which the search is performed can be determined quickly and reliably. Because target sub-pixel points tend to be concentrated near the target integer pixel point, determining the first target integer pixel point effectively localizes the search area of the target sub-pixel point during the sub-pixel search.
According to some embodiments, performing the search within the first sector centered on the first target integer pixel point may include: fitting an error surface model from the coordinates and error values of the first target integer pixel point and of each of its neighboring integer pixel points; determining the coordinates of the minimum-error point of the error surface model; and performing the search within the first sector covering the minimum-error point.
The applicant has found that the error values of integer pixel points exhibit a unimodal surface characteristic. Therefore, the error surface model fitted from the first target integer pixel point and its neighboring integer pixel points can quickly and accurately characterize how the error value varies in the region around the first target integer pixel point. Locating the first sector for the target sub-pixel search from the position of the minimum-error point of the error surface model reduces the amount of computation while still allowing an accurate target sub-pixel point to be found, thereby ensuring coding quality.
Compared with computing the error values of sub-pixel points, computing the error values of integer pixel points usually requires less computation and is more efficient. Therefore, locating the first sector by fitting an error surface model from the first target integer pixel point and its neighboring integer pixel points allows the region containing the target sub-pixel point to be determined quickly and reliably. On this basis, searching for the target sub-pixel within the determined first sector further pinpoints the target sub-pixel point and avoids the errors that could arise from relying on integer pixel points alone.
In one embodiment, the error surface model may be fitted using a 5-term model, a 6-term model, or a 9-term model. Specifically, these models may be expressed as:
f5(x, y) = Ax² + By² + Cx + Dy + E
f6(x, y) = Ax² + Bxy + Cy² + Dx + Ey + F
f9(x, y) = Ax²y² + Bx²y + Cxy² + Dx² + Exy + Fy² + Gx + Hy + I
where A, B, …, I are coefficients. The coefficient values of the 5-term, 6-term, and 9-term models can be determined by computing the error values of the first target integer pixel point and a specific number of neighboring integer pixel points around it.
Taking the 5-term model as an example, the coordinates and corresponding error values of the first target integer pixel point and of its 4 neighboring integer pixel points are substituted into f5(x, y), from which the coefficients A, B, C, D, E of the 5-term model can be computed; the minimum-error point of f5(x, y), i.e. the coordinates corresponding to the minimum of f5(x, y), can then be determined.
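A sketch of the 5-term fit is shown below, with the first target integer pixel point taken as the local origin and its four horizontal/vertical neighbours at unit distance; these coordinate conventions are assumptions for illustration. The minimum of the fitted paraboloid is obtained in closed form from the coefficients.

```python
import numpy as np

# Sketch of fitting the 5-term error surface f5(x, y) = A*x**2 + B*y**2 + C*x + D*y + E
# to the first target integer pixel point (taken as the origin) and its 4 neighbours,
# then locating the minimum analytically. Coordinates are in integer-pixel units.

def fit_5_term_minimum(costs):
    """costs: dict mapping the 5 positions (0,0), (-1,0), (1,0), (0,-1), (0,1)
    to their error (Rdcost) values. Returns the (x, y) of the fitted minimum."""
    pts = [(0, 0), (-1, 0), (1, 0), (0, -1), (0, 1)]
    M = np.array([[x * x, y * y, x, y, 1.0] for x, y in pts])
    b = np.array([costs[p] for p in pts], dtype=float)
    A, B, C, D, E = np.linalg.solve(M, b)
    # Minimum of the paraboloid, assuming A > 0 and B > 0 (unimodal error surface).
    return (-C / (2 * A), -D / (2 * B))
```

The direction from the first target integer pixel point to this fitted minimum then determines which sector the sub-pixel search is confined to.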
According to some embodiments, the error value may be an Rdcost value.
According to some embodiments, the first sector has a central angle of 90 degrees.
According to some embodiments, a search over 1/2-pixel points may be performed within the determined first sector to determine one of them as the target 1/2-pixel point; a first sub-sector is then determined based on the target 1/2-pixel point, where the first sub-sector lies within the first sector and covers the determined target 1/2-pixel point; and the search for the target sub-pixel point is performed within the first sub-sector. This further reduces the amount of computation and improves search efficiency.
For example, FIG. 5 illustrates the sub-pixel motion estimation method of an exemplary embodiment for the case where the starting point coincides with one of the integer pixel points in the reference image. As shown in FIG. 5, the integer pixel point coinciding with the starting point is determined as the first target integer pixel point 510. The error surface model is fitted from the first target integer pixel point 510 and its four neighboring integer pixel points 511, and the coordinates of the minimum-error point 520 of the error surface model are determined; the search is then performed within a first sector covering the minimum-error point 520, where the central angle of the first sector is 90 degrees.
Within the first sector, a search over the 1/2-pixel points (the black triangles in FIG. 5) centered on the first target integer pixel point 510 is performed to determine one of them, e.g. 1/2-pixel point 530, as the target 1/2-pixel point. Based on the determined target 1/2-pixel point 530, a first sub-sector is determined, and within the first sub-sector a search is performed over the 1/4-pixel points centered on target 1/2-pixel point 530.
Preferably, only part of the 1/4-pixel points centered on the target 1/2-pixel point 530 may be searched in the first sub-sector, e.g. the three 1/4-pixel points (the black circles in FIG. 5) that are close to the target integer pixel point 510.
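The sector-restricted search of FIG. 5 might be sketched as follows. The angular test, the reduced quarter-pixel candidate set, and the quarter-pixel coordinate units are assumptions made for illustration; rd_cost is again an assumed cost helper.

```python
import math

# Sketch of the first search pattern: only half-pixel candidates inside the
# 90-degree sector around the fitted minimum are tested, then only the three
# quarter-pixel neighbours that lie back toward the integer pixel. Positions are
# in quarter-pixel units; these conventions are assumptions for illustration.

def in_sector(center, point, sector_dir_deg, half_angle_deg=45.0):
    """True if `point` lies within +/- half_angle_deg of direction sector_dir_deg
    (degrees) as seen from `center`."""
    ang = math.degrees(math.atan2(point[1] - center[1], point[0] - center[0]))
    diff = (ang - sector_dir_deg + 180.0) % 360.0 - 180.0
    return abs(diff) <= half_angle_deg

def first_pattern_search(center, sector_dir_deg, rd_cost):
    """center: first target integer pixel point (quarter-pixel units);
    sector_dir_deg: direction of the fitted error-surface minimum."""
    # Half-pixel step, restricted to the 90-degree sector.
    half_cands = [(center[0] + dx, center[1] + dy)
                  for dx, dy in ((-2, 0), (2, 0), (0, -2), (0, 2))
                  if in_sector(center, (center[0] + dx, center[1] + dy), sector_dir_deg)]
    best_half = min([center] + half_cands, key=rd_cost)
    if best_half == center:
        return center
    # Quarter-pixel step: keep only the neighbours that move back toward the centre.
    def d2(p):
        return (p[0] - center[0]) ** 2 + (p[1] - center[1]) ** 2
    quarter_cands = [(best_half[0] + dx, best_half[1] + dy)
                     for dx in (-1, 0, 1) for dy in (-1, 0, 1)
                     if (dx, dy) != (0, 0)
                     and d2((best_half[0] + dx, best_half[1] + dy)) < d2(best_half)]
    return min([best_half] + quarter_cands, key=rd_cost)
```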
According to some embodiments, determining the sub-pixel search pattern corresponding to the relative positional relationship between the starting point and the integer pixel points includes: in response to the starting point not coinciding with any integer pixel point in the reference image and the starting point lying on the same horizontal or vertical line as two adjacent integer pixel points, determining one of the adjacent integer pixel points as the second target integer pixel point based on the error value of each of the two adjacent integer pixel points, where the two adjacent integer pixel points are the two integer pixel points closest to the starting point in the reference image; and performing the search within a second sector centered on the second target integer pixel point.
When the starting point does not coincide with any integer pixel point in the reference image and lies on the same horizontal or vertical line as two adjacent integer pixel points, the second target integer pixel point can be determined by examining only those two adjacent integer pixel points, so the center of the second sector in which the search is performed can be determined quickly and reliably with little computation. Because target sub-pixel points tend to be concentrated near the target integer pixel point, determining the second target integer pixel point effectively localizes the search area of the target sub-pixel point during the sub-pixel search.
According to some embodiments, the second sector covers the starting point. This makes the second sector for the sub-pixel search easy to locate, reduces the amount of computation, and still allows an accurate target sub-pixel point to be found, thereby ensuring coding quality.
According to some embodiments, the central angle of the second sector is less than 130 degrees.
According to some embodiments, performing the search within the second sector centered on the second target integer pixel point includes performing the search over the 1/2-pixel points located on the perpendicular bisector of the line connecting the two adjacent integer pixel points.
For example, FIG. 6A illustrates the sub-pixel motion estimation method of an exemplary embodiment for the case where the starting point does not coincide with any integer pixel point in the reference image and lies on the same horizontal or vertical line as two adjacent integer pixel points. As shown in FIG. 6A, the starting point 610A does not coincide with any integer pixel point in the reference image and lies on the same horizontal line as two adjacent integer pixel points (the white circles in FIG. 6A); one of the adjacent integer pixel points, e.g. integer pixel point 620A, is determined as the second target integer pixel point based on the error value of each of the two. The search for the target sub-pixel point is then performed within a second sector that is centered on the second target integer pixel point 620A and covers the starting point 610A, where the central angle of the second sector is less than 130 degrees.
Within the second sector, a search is performed over the three 1/2-pixel points (the black triangles in FIG. 6A) on the perpendicular bisector of the line connecting the two adjacent integer pixel points (the white circles in FIG. 6A) to determine one of them, e.g. 1/2-pixel point 630A, as the target 1/2-pixel point. Based on the determined target 1/2-pixel point 630A, a further search is performed over the 1/4-pixel points (the black circles in FIG. 6A) centered on target 1/2-pixel point 630A.
Preferably, only part of the 1/4-pixel points centered on the target 1/2-pixel point 630A may be searched, e.g. only the three 1/4-pixel points near the second target integer pixel point 620A.
As another example, FIG. 6B illustrates the sub-pixel motion estimation method of another exemplary embodiment for the case where the starting point does not coincide with any integer pixel point in the reference image and lies on the same horizontal or vertical line as two adjacent integer pixel points. As shown in FIG. 6B, the starting point is 610B and the second target integer pixel point is 620B. The search is performed within a second sector that is centered on the second target integer pixel point 620B and covers the starting point 610B, where the central angle of the second sector is less than 130 degrees.
Within the second sector, a search is performed over the three 1/2-pixel points (the black triangles in FIG. 6B) on the perpendicular bisector of the line connecting the two adjacent integer pixel points (the white circles in FIG. 6B) to determine one of them, e.g. 1/2-pixel point 630B, as the target 1/2-pixel point. Based on the determined target 1/2-pixel point 630B, a further search is performed over the three 1/4-pixel points (the black circles in FIG. 6B) on the perpendicular bisector of the segment connecting the target 1/2-pixel point 630B and the second target integer pixel point 620B.
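For illustration, the second pattern of FIGS. 6A and 6B might be sketched as below for the horizontal case (the vertical case is symmetric). The placement of the three quarter-pixel candidates follows the FIG. 6A variant, and the coordinate units and function name are assumptions.

```python
# Sketch of the second search pattern (starting point on the horizontal line
# between two adjacent integer pixels). Quarter-pixel units are assumed, so the
# two integer pixels differ by 4 in x; the quarter-pixel candidate placement
# follows the FIG. 6A variant.

def second_pattern_search(int_a, int_b, rd_cost):
    """int_a, int_b: the two horizontally adjacent integer pixel points nearest
    the starting point, as (x, y) tuples in quarter-pixel units."""
    target_int = min((int_a, int_b), key=rd_cost)               # second target integer pixel point
    mid_x = (int_a[0] + int_b[0]) // 2                          # perpendicular bisector (half-pixel column)
    half_cands = [(mid_x, int_a[1] + dy) for dy in (-2, 0, 2)]  # the three half-pixel candidates
    best_half = min(half_cands, key=rd_cost)
    # Three quarter-pixel candidates pulled back toward the second target integer pixel.
    step_x = 1 if target_int[0] > mid_x else -1
    quarter_cands = [(best_half[0] + step_x, best_half[1] + dy) for dy in (-1, 0, 1)]
    return min([best_half] + quarter_cands, key=rd_cost)
```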
According to some embodiments, determining the sub-pixel search pattern corresponding to the relative positional relationship between the starting point and the integer pixel points includes: in response to the starting point not coinciding with any integer pixel point in the reference image and not lying on the same horizontal or vertical line as two adjacent integer pixel points, determining one integer pixel point as the third target integer pixel point based on the error value of each of the four integer pixel points closest to the starting point in the reference image, where the two adjacent integer pixel points are the two integer pixel points closest to the starting point in the reference image; and performing the search within a third sector centered on the third target integer pixel point.
When the starting point does not coincide with any integer pixel point in the reference image and does not lie on the same horizontal or vertical line as two adjacent integer pixel points, the third target integer pixel point can be determined by examining only the four integer pixel points closest to the starting point, so the center of the third sector in which the search is performed can be determined quickly and reliably with little computation. Because target sub-pixel points tend to be concentrated near the target integer pixel point, determining the third target integer pixel point effectively localizes the search area of the target sub-pixel point.
According to some embodiments, the third sector covers the starting point. This makes the third sector for the sub-pixel search easy to locate, reduces the amount of computation, and still allows an accurate target sub-pixel point to be found within the search range, thereby ensuring coding quality.
According to some embodiments, the third sector has a central angle of 90 degrees.
According to some embodiments, a search over 1/2-pixel points may be performed within the determined third sector to determine one of them as the target 1/2-pixel point; a third sub-sector is determined based on the target 1/2-pixel point, where the third sub-sector lies within the third sector and covers the determined target 1/2-pixel point; and the search for the target sub-pixel point is performed within the third sub-sector. This further reduces the amount of computation and improves search efficiency.
For example, FIG. 7 illustrates the sub-pixel motion estimation method of an exemplary embodiment for the case where the starting point does not coincide with any integer pixel point in the reference image and does not lie on the same horizontal or vertical line as two adjacent integer pixel points. As shown in FIG. 7, the starting point 710 does not coincide with any integer pixel point in the reference image (the white circles in FIG. 7), and the starting point 710 and the two adjacent integer pixel points are not on the same horizontal or vertical line. One of the integer pixel points, 720, is determined as the third target integer pixel point based on the error value of each of the four integer pixel points closest to the starting point in the reference image, and the search is then performed within a third sector that is centered on the third target integer pixel point 720 and covers the starting point 710, where the central angle of the third sector is 90 degrees.
Within the third sector, a search over the 1/2-pixel points (the black triangles in FIG. 7) centered on the third target integer pixel point 720 is performed to determine one of them, e.g. 1/2-pixel point 730, as the target 1/2-pixel point. Based on the determined target 1/2-pixel point 730, a third sub-sector can be determined, and within the third sub-sector a search is performed over the 1/4-pixel points centered on target 1/2-pixel point 730.
Preferably, only part of the 1/4-pixel points centered on the target 1/2-pixel point 730 may be searched in the third sub-sector, e.g. the three 1/4-pixel points near the third target integer pixel point 720.
It can be understood that the above description based on 1/2-pixel and 1/4-pixel points is only an exemplary embodiment; those skilled in the art can implement the search at the corresponding precision in a similar manner, based on the sub-pixel precision required in practice, for example 1/8-pixel or 1/16-pixel precision, which is not limited herein.
According to another aspect of the present disclosure, as shown in fig. 8, there is also provided a sub-pixel motion estimation apparatus 800, including: a first determining unit 801 configured to determine a starting point of a sub-pixel search in a reference image and integer pixel points distributed in the reference image; a second determining unit 802 configured to determine a sub-pixel search pattern corresponding to a relative positional relationship between the start point and the integer pixel based on the relative positional relationship; and a searching unit 803 configured to search for target subpixel points in the reference image for motion estimation based on the determined subpixel search pattern.
According to some embodiments, each sub-pixel search pattern specifies a sector in the reference image, and the searching unit is further configured to: perform a search for the target sub-pixel point within the sector specified by the determined sub-pixel search pattern.
According to some embodiments, the second determining unit further comprises: a first determining subunit configured to, in response to the starting point coinciding with one of the integer pixel points in the reference image, determine the integer pixel point coinciding with the starting point as a first target integer pixel point; and a first search subunit configured to perform a search within a first sector centered on the first target integer pixel point.
According to some embodiments, the first search subunit comprises: a module for fitting an error surface model according to the coordinates and error values of each whole pixel point in the first target whole pixel point and the adjacent whole pixel point of the first target whole pixel point; a module for determining the coordinate of the minimum error value point of the error surface model; and a module for performing a search within a first sector covering the minimum error value point.
According to some embodiments, the first sector has a central angle of 90 degrees. According to some embodiments, the second determining unit further comprises: a second determining subunit configured to, in response to the starting point not coinciding with any integer pixel point in the reference image and the starting point lying on the same horizontal or vertical line as two adjacent integer pixel points, determine one of the adjacent integer pixel points as a second target integer pixel point based on an error value of each of the two adjacent integer pixel points, where the two adjacent integer pixel points are the two integer pixel points closest to the starting point in the reference image; and a second search subunit configured to perform a search within a second sector centered on the second target integer pixel point.
According to some embodiments, the second sector covers the starting point.
According to some embodiments, the central angle of the second sector is less than 130 degrees. According to some embodiments, the second determining unit further comprises: a third determining subunit configured to, in response to the starting point not coinciding with any integer pixel point in the reference image and the starting point not lying on the same horizontal or vertical line as two adjacent integer pixel points, determine one integer pixel point as a third target integer pixel point based on an error value of each of the four integer pixel points closest to the starting point in the reference image, where the two adjacent integer pixel points are the two integer pixel points closest to the starting point in the reference image; and a third search subunit configured to perform a search within a third sector centered on the third target integer pixel point.
According to some embodiments, the third sector covers the starting point.
According to some embodiments, the third sector has a central angle of 90 degrees.
According to another aspect of the present disclosure, there is also provided a computer device comprising: a memory, a processor and a computer program stored on the memory, wherein the processor is configured to execute the computer program to implement the steps of the above method.
According to another aspect of the present disclosure, there is also provided a non-transitory computer readable storage medium having a computer program stored thereon, wherein the computer program, when executed by a processor, implements the steps of the above-described method.
According to another aspect of the present disclosure, there is also provided a computer program product comprising a computer program, wherein the computer program realizes the steps of the above-mentioned method when executed by a processor.
Referring to FIG. 9, a block diagram of an electronic device 900, which may be a server or a client of the present disclosure and which is an example of a hardware device that may be applied to aspects of the present disclosure, will now be described. Electronic devices are intended to represent various forms of digital computers, such as laptops, desktops, workstations, personal digital assistants, servers, blade servers, mainframes, and other suitable computers. Electronic devices may also represent various forms of mobile devices, such as personal digital assistants, cellular phones, smartphones, wearable devices, and other similar computing devices. The components shown herein, their connections and relationships, and their functions are meant as examples only and are not meant to limit the implementations of the disclosure described and/or claimed herein.
As shown in fig. 9, the apparatus 900 includes a computing unit 901, which can perform various appropriate actions and processes in accordance with a computer program stored in a Read Only Memory (ROM)902 or a computer program loaded from a storage unit 908 into a Random Access Memory (RAM) 903. In the RAM 903, various programs and data required for the operation of the device 900 can also be stored. The calculation unit 901, ROM 902, and RAM 903 are connected to each other via a bus 904. An input/output (I/O) interface 905 is also connected to bus 904.
A number of components in the device 900 are connected to the I/O interface 905, including: an input unit 906, an output unit 907, a storage unit 908, and a communication unit 909. The input unit 906 may be any type of device capable of inputting information to the device 900; it may receive input numeric or character information and generate key signal inputs related to user settings and/or function control of the electronic device, and may include, but is not limited to, a mouse, a keyboard, a touch screen, a track pad, a track ball, a joystick, a microphone, and/or a remote control. The output unit 907 may be any type of device capable of presenting information and may include, but is not limited to, a display, speakers, a video/audio output terminal, a vibrator, and/or a printer. The storage unit 908 may include, but is not limited to, a magnetic disk or an optical disk. The communication unit 909 allows the device 900 to exchange information/data with other devices via a computer network, such as the Internet, and/or various telecommunication networks, and may include, but is not limited to, modems, network cards, infrared communication devices, wireless communication transceivers, and/or chipsets, such as Bluetooth™ devices, 802.11 devices, WiFi devices, WiMax devices, cellular communication devices, and/or the like.
The computing unit 901 may be a variety of general and/or special purpose processing components having processing and computing capabilities. Some examples of the computing unit 901 include, but are not limited to, a Central Processing Unit (CPU), a Graphics Processing Unit (GPU), various dedicated Artificial Intelligence (AI) computing chips, various computing units running machine learning model algorithms, a Digital Signal Processor (DSP), and any suitable processor, controller, microcontroller, and so forth. The calculation unit 901 performs the respective methods and processes described above, such as the sub-pixel motion estimation method. For example, in some embodiments, the sub-pixel motion estimation method may be implemented as a computer software program tangibly embodied in a machine-readable medium, such as storage unit 908. In some embodiments, part or all of the computer program may be loaded and/or installed onto device 900 via ROM 902 and/or communications unit 909. When the computer program is loaded into RAM 903 and executed by computing unit 901, one or more steps of the sub-pixel motion estimation method described above may be performed. Alternatively, in other embodiments, the calculation unit 901 may be configured to perform the sub-pixel motion estimation method in any other suitable way (e.g. by means of firmware).
Various implementations of the systems and techniques described above may be realized in digital electronic circuitry, integrated circuits, field programmable gate arrays (FPGAs), application specific integrated circuits (ASICs), application specific standard products (ASSPs), systems on chips (SOCs), complex programmable logic devices (CPLDs), computer hardware, firmware, software, and/or combinations thereof. These various embodiments may include: implementation in one or more computer programs that are executable and/or interpretable on a programmable system including at least one programmable processor, which may be special or general purpose, receiving data and instructions from, and transmitting data and instructions to, a storage system, at least one input device, and at least one output device.
Program code for implementing the methods of the present disclosure may be written in any combination of one or more programming languages. These program codes may be provided to a processor or controller of a general purpose computer, special purpose computer, or other programmable data processing apparatus, such that the program codes, when executed by the processor or controller, cause the functions/operations specified in the flowchart and/or block diagram to be performed. The program code may execute entirely on the machine, partly on the machine, as a stand-alone software package partly on the machine and partly on a remote machine or entirely on the remote machine or server.
In the context of this disclosure, a machine-readable medium may be a tangible medium that can contain, or store a program for use by or in connection with an instruction execution system, apparatus, or device. The machine-readable medium may be a machine-readable signal medium or a machine-readable storage medium. A machine-readable medium may include, but is not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any suitable combination of the foregoing. More specific examples of a machine-readable storage medium would include an electrical connection based on one or more wires, a portable computer diskette, a hard disk, a Random Access Memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing.
To provide for interaction with a user, the systems and techniques described here can be implemented on a computer having: a display device (e.g., a CRT (cathode ray tube) or LCD (liquid crystal display) monitor) for displaying information to a user; and a keyboard and a pointing device (e.g., a mouse or a trackball) by which a user can provide input to the computer. Other kinds of devices may also be used to provide for interaction with a user; for example, feedback provided to the user can be any form of sensory feedback (e.g., visual feedback, auditory feedback, or tactile feedback); and input from the user may be received in any form, including acoustic, speech, or tactile input.
The systems and techniques described here can be implemented in a computing system that includes a back-end component (e.g., as a data server), or that includes a middleware component (e.g., an application server), or that includes a front-end component (e.g., a user computer having a graphical user interface or a web browser through which a user can interact with an implementation of the systems and techniques described here), or any combination of such back-end, middleware, or front-end components. The components of the system can be interconnected by any form or medium of digital data communication (e.g., a communication network). Examples of communication networks include: local Area Networks (LANs), Wide Area Networks (WANs), and the Internet.
The computer system may include clients and servers. A client and server are generally remote from each other and typically interact through a communication network. The relationship of client and server arises by virtue of computer programs running on the respective computers and having a client-server relationship to each other.
It should be understood that various forms of the flows shown above may be used, with steps reordered, added, or deleted. For example, the steps described in the present disclosure may be performed in parallel, sequentially or in different orders, and are not limited herein as long as the desired results of the technical solutions disclosed in the present disclosure can be achieved.
Although embodiments or examples of the present disclosure have been described with reference to the accompanying drawings, it is to be understood that the above-described methods, systems, and apparatus are merely exemplary embodiments or examples, and that the scope of the present invention is not limited by these embodiments or examples but only by the claims as issued and their equivalents. Various elements in the embodiments or examples may be omitted or replaced by their equivalents. Further, the steps may be performed in an order different from that described in the present disclosure, and various elements in the embodiments or examples may be combined in various ways. It should be noted that, as technology evolves, many of the elements described herein may be replaced by equivalent elements that appear after the present disclosure.
Claims (25)
1. A method of sub-pixel motion estimation, comprising:
determining a starting point for a sub-pixel search in a reference image and integer pixel points distributed in the reference image;
determining, based on a relative positional relationship between the starting point and the integer pixel points, a sub-pixel search pattern corresponding to the relative positional relationship; and
searching, based on the determined sub-pixel search pattern, for a target sub-pixel point in the reference image for motion estimation.
2. The method of claim 1, wherein each sub-pixel search pattern specifies a sector in the reference image, and wherein searching for a target sub-pixel point in the reference image for motion estimation based on the determined sub-pixel search pattern comprises:
performing a search for the target sub-pixel point within the sector specified by the determined sub-pixel search pattern.
3. The method of claim 2, wherein determining, based on the relative positional relationship between the starting point and the integer pixel points, the sub-pixel search pattern corresponding to the relative positional relationship comprises:
in response to the starting point coinciding with one of the integer pixel points in the reference image, determining the integer pixel point coinciding with the starting point as a first target integer pixel point; and
performing a search within a first sector centered on the first target integer pixel point.
4. The method of claim 3, wherein performing the search within the first sector centered on the first target integer pixel point comprises:
fitting an error surface model according to coordinates and an error value of each of the first target integer pixel point and integer pixel points adjacent to the first target integer pixel point;
determining coordinates of a minimum error value point of the error surface model; and
performing the search within the first sector, the first sector covering the minimum error value point.
5. The method of claim 3 or 4, wherein the first sector has a central angle of 90 degrees.
6. The method of claim 2, wherein determining, based on the relative positional relationship between the starting point and the integer pixel points, the sub-pixel search pattern corresponding to the relative positional relationship comprises:
in response to the starting point not coinciding with any integer pixel point in the reference image and the starting point and two adjacent integer pixel points being located on a same horizontal or vertical line, determining one of the two adjacent integer pixel points as a second target integer pixel point based on an error value of each of the two adjacent integer pixel points, wherein the two adjacent integer pixel points are the two integer pixel points closest to the starting point in the reference image; and
performing a search within a second sector centered on the second target integer pixel point.
7. The method of claim 6, wherein the second sector covers the starting point.
8. The method of claim 6 or 7, wherein the central angle of the second sector is less than 130 degrees.
9. The method of claim 2, wherein determining, based on the relative positional relationship between the starting point and the integer pixel points, the sub-pixel search pattern corresponding to the relative positional relationship comprises:
in response to the starting point not coinciding with any integer pixel point in the reference image and the starting point and two adjacent integer pixel points not being located on a same horizontal or vertical line, determining, based on an error value of each of four integer pixel points closest to the starting point in the reference image, one of the four integer pixel points as a third target integer pixel point, wherein the two adjacent integer pixel points are the two integer pixel points closest to the starting point in the reference image; and
performing a search within a third sector centered on the third target integer pixel point.
10. The method of claim 9, wherein the third sector covers the starting point.
11. The method of claim 9 or 10, wherein the third sector has a central angle of 90 degrees.
12. A sub-pixel motion estimation apparatus, comprising:
a first determining unit configured to determine a starting point for a sub-pixel search in a reference image and integer pixel points distributed in the reference image;
a second determining unit configured to determine, based on a relative positional relationship between the starting point and the integer pixel points, a sub-pixel search pattern corresponding to the relative positional relationship; and
a searching unit configured to search, based on the determined sub-pixel search pattern, for a target sub-pixel point in the reference image for motion estimation.
13. The apparatus of claim 12, wherein each sub-pixel search pattern specifies a sector in the reference image, and the searching unit is further configured to:
perform a search for the target sub-pixel point within the sector specified by the determined sub-pixel search pattern.
14. The apparatus of claim 13, wherein the second determining unit further comprises:
a first determining subunit configured to, in response to the starting point coinciding with one of the integer pixel points in the reference image, determine the integer pixel point coinciding with the starting point as a first target integer pixel point; and
a first searching subunit configured to perform a search within a first sector centered on the first target integer pixel point.
15. The apparatus of claim 14, wherein the first searching subunit comprises:
a module for fitting an error surface model according to coordinates and an error value of each of the first target integer pixel point and integer pixel points adjacent to the first target integer pixel point;
a module for determining coordinates of a minimum error value point of the error surface model; and
a module for performing the search within the first sector, the first sector covering the minimum error value point.
16. The apparatus of claim 14 or 15, wherein the first sector has a central angle of 90 degrees.
17. The apparatus of claim 13, wherein the second determining unit further comprises:
a second determining subunit configured to, in response to the starting point not coinciding with any integer pixel point in the reference image and the starting point and two adjacent integer pixel points being located on a same horizontal or vertical line, determine one of the two adjacent integer pixel points as a second target integer pixel point based on an error value of each of the two adjacent integer pixel points, wherein the two adjacent integer pixel points are the two integer pixel points closest to the starting point in the reference image; and
a second searching subunit configured to perform a search within a second sector centered on the second target integer pixel point.
18. The apparatus of claim 17, wherein the second sector covers the starting point.
19. The apparatus of claim 17 or 18, wherein the central angle of the second sector is less than 130 degrees.
20. The apparatus of claim 13, wherein the second determining unit further comprises:
a third determining subunit configured to, in response to the starting point not coinciding with any integer pixel point in the reference image and the starting point and two adjacent integer pixel points not being located on a same horizontal or vertical line, determine, based on an error value of each of four integer pixel points closest to the starting point in the reference image, one of the four integer pixel points as a third target integer pixel point, wherein the two adjacent integer pixel points are the two integer pixel points closest to the starting point in the reference image; and
a third searching subunit configured to perform a search within a third sector centered on the third target integer pixel point.
21. The apparatus of claim 20, wherein the third sector covers the starting point.
22. The apparatus of claim 20 or 21, wherein the third sector has a central angle of 90 degrees.
23. A computer device, comprising:
a memory, a processor, and a computer program stored on the memory,
wherein the processor is configured to execute the computer program to implement the steps of the method of any one of claims 1-11.
24. A non-transitory computer-readable storage medium having a computer program stored thereon, wherein the computer program, when executed by a processor, implements the steps of the method of any one of claims 1-11.
25. A computer program product comprising a computer program, wherein the computer program, when executed by a processor, implements the steps of the method of any one of claims 1-11.
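Taken together, claims 3, 6, and 9 describe a three-way case analysis on where the starting point falls relative to the integer-pixel grid. The following Python sketch is only an illustration of that case analysis, not the patented implementation: it assumes quarter-pixel coordinates (integer pixels at multiples of 4), a hypothetical `cost` callable returning the matching error (e.g. SAD) of a candidate position, and helper names chosen here for readability.

```python
from typing import Callable, List, Tuple

Point = Tuple[int, int]

def nearest_integer_pixels(start: Point) -> List[Point]:
    """Return the integer-pixel positions (in quarter-pel units) surrounding `start`."""
    x, y = start
    xs = {x - x % 4, x - x % 4 + (4 if x % 4 else 0)}
    ys = {y - y % 4, y - y % 4 + (4 if y % 4 else 0)}
    return [(ix, iy) for ix in sorted(xs) for iy in sorted(ys)]

def choose_sector_center(start: Point, cost: Callable[[Point], float]) -> Tuple[Point, str]:
    """Pick the integer pixel that anchors the sector to be searched (claims 3, 6, 9)."""
    x, y = start
    on_x_grid, on_y_grid = x % 4 == 0, y % 4 == 0

    if on_x_grid and on_y_grid:
        # Claim 3: the starting point coincides with an integer pixel (first target).
        return (x, y), "90-degree sector around the coinciding integer pixel"

    if on_x_grid or on_y_grid:
        # Claim 6: the starting point lies between two adjacent integer pixels on the
        # same horizontal or vertical line; the cheaper neighbour becomes the second target.
        if on_y_grid:   # between the left and right neighbours
            candidates = [(x - x % 4, y), (x - x % 4 + 4, y)]
        else:           # between the upper and lower neighbours
            candidates = [(x, y - y % 4), (x, y - y % 4 + 4)]
        return min(candidates, key=cost), "sector (< 130 degrees) covering the starting point"

    # Claim 9: general position; the cheapest of the four surrounding integer pixels
    # becomes the third target.
    return min(nearest_integer_pixels(start), key=cost), "90-degree sector covering the starting point"
```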
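Claim 4 does not fix a particular error surface model. One common choice in motion estimation, assumed in the sketch below, is a separable parabolic fit through the first target integer pixel and its four neighbours; the location of the fitted minimum then indicates which 90-degree sector to cover. The coordinate convention (y grows downward) and the function names are illustrative assumptions.

```python
def error_surface_minimum(e_c, e_l, e_r, e_u, e_d):
    """Return the (dx, dy) offset of the fitted minimum, each clamped to [-0.5, 0.5].

    e_c      -- error at the first target integer pixel (sector centre)
    e_l, e_r -- errors at its left and right neighbours
    e_u, e_d -- errors at its upper and lower neighbours
    """
    def parabola_offset(e_minus, e_0, e_plus):
        denom = e_minus + e_plus - 2 * e_0
        if denom <= 0:          # flat or non-convex slice: stay at the centre
            return 0.0
        return max(-0.5, min(0.5, (e_minus - e_plus) / (2.0 * denom)))

    return parabola_offset(e_l, e_c, e_r), parabola_offset(e_u, e_c, e_d)

def sector_quadrant(dx, dy):
    """Map the fitted minimum to the quadrant (90-degree sector) to be searched."""
    return ("right" if dx >= 0 else "left", "down" if dy >= 0 else "up")
```

For example, with horizontal errors (left, centre, right) = (10, 5, 6) and vertical errors (up, centre, down) = (7, 5, 9), the fitted minimum lies at roughly (+0.33, -0.17), so a sector covering the upper-right quadrant of the first target integer pixel would be searched.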
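The "search within the sector" of claims 2 and 13 can be pictured as evaluating only those sub-pixel candidates around the sector centre whose direction lies inside the sector's angular range. The brute-force sketch below rests on assumed conventions (quarter-pixel offsets, angles in degrees measured from the +x axis with y pointing downward, a hypothetical `cost` callable); a practical encoder would normally restrict the candidates to the standard half- and quarter-pixel positions.

```python
import math
from typing import Callable, Tuple

Point = Tuple[int, int]

def search_sector(center: Point, start_deg: float, span_deg: float,
                  cost: Callable[[Point], float], radius_qpel: int = 3) -> Point:
    """Return the cheapest quarter-pel candidate whose direction from `center`
    lies within the sector [start_deg, start_deg + span_deg]."""
    cx, cy = center
    best, best_cost = center, cost(center)
    for dy in range(-radius_qpel, radius_qpel + 1):
        for dx in range(-radius_qpel, radius_qpel + 1):
            if dx == 0 and dy == 0:
                continue
            if math.hypot(dx, dy) > radius_qpel:
                continue                      # outside the sector radius
            ang = math.degrees(math.atan2(dy, dx)) % 360.0
            if (ang - start_deg) % 360.0 > span_deg:
                continue                      # direction falls outside the sector
            candidate = (cx + dx, cy + dy)
            c = cost(candidate)
            if c < best_cost:
                best, best_cost = candidate, c
    return best
```

Under the y-downward convention, `search_sector(center, 270.0, 90.0, cost)` sweeps the quadrant above and to the right of the centre pixel, which matches the 90-degree sectors of claims 5 and 11.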
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202110542053.6A | 2021-05-18 | 2021-05-18 | Sub-pixel motion estimation method and apparatus, computer device, and medium
Publications (2)
Publication Number | Publication Date |
---|---|
CN113271465A | 2021-08-17
CN113271465B | 2022-10-25
Family
ID=77231487
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202110542053.6A | Sub-pixel motion estimation method and apparatus, computer device, and medium | 2021-05-18 | 2021-05-18
Country Status (1)
Country | Link |
---|---|
CN (1) | CN113271465B (en) |
Citations (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
KR20090082672A (en) * | 2008-01-28 | 2009-07-31 | 울산대학교 산학협력단 | Fast search method for sub-pixel motion estimation in H.264 |
CN109348234A (en) * | 2018-11-12 | 2019-02-15 | 北京佳讯飞鸿电气股份有限公司 | A kind of efficient sub-picture element movement estimating method and system |
CN110392265A (en) * | 2019-08-27 | 2019-10-29 | 广州虎牙科技有限公司 | Inter frame motion estimation method, apparatus, electronic equipment and readable storage medium storing program for executing |
Non-Patent Citations (3)
Title |
---|
DAI WEI et al.: "A Novel Fast Two Step Sub-Pixel Motion Estimation Algorithm in HEVC", 2012 IEEE International Conference on Acoustics, Speech *
WANG Wei et al.: "Fast Sub-Pixel Motion Estimation Algorithm Based on HEVC", Systems Engineering and Electronics *
HUANG Minqi et al.: "Fast Search Algorithm for AVS Coding Based on Sub-Pixel Motion Vectors", Digital Video *
Also Published As
Publication number | Publication date |
---|---|
CN113271465B (en) | 2022-10-25 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN115147558B (en) | | Training method of three-dimensional reconstruction model, three-dimensional reconstruction method and device |
CN115578433B (en) | | Image processing method, device, electronic equipment and storage medium |
CN114792355B (en) | | Virtual image generation method and device, electronic equipment and storage medium |
CN115239888B (en) | | Method, device, electronic equipment and medium for reconstructing three-dimensional face image |
CN116228867B (en) | | Pose determination method, pose determination device, electronic equipment and medium |
CN115578515A (en) | | Training method of three-dimensional reconstruction model, and three-dimensional scene rendering method and device |
CN114723949A (en) | | Three-dimensional scene segmentation method and method for training segmentation model |
CN115601555A (en) | | Image processing method and apparatus, device and medium |
CN116030185A (en) | | Three-dimensional hairline generating method and model training method |
US10887586B2 (en) | 2021-01-05 | Picture encoding method and terminal |
CN113810765B (en) | | Video processing method, device, equipment and medium |
CN113271465B (en) | | Sub-pixel motion estimation method and apparatus, computer device, and medium |
CN116246026B (en) | | Training method of three-dimensional reconstruction model, three-dimensional scene rendering method and device |
CN116580212B (en) | | Image generation method, training method, device and equipment of image generation model |
CN116245998B (en) | | Rendering map generation method and device, and model training method and device |
CN114327718B (en) | | Interface display method, device, equipment and medium |
CN114092556A (en) | | Method, apparatus, electronic device, medium for determining human body posture |
CN115222598A (en) | | Image processing method, apparatus, device and medium |
CN112565752B (en) | | Method, apparatus, device and medium for encoding video data |
CN115359309A (en) | | Training method, device, equipment and medium of target detection model |
CN114049472A (en) | | Three-dimensional model adjustment method, device, electronic apparatus, and medium |
CN110399892B (en) | | Environmental feature extraction method and device |
CN111507944A (en) | | Skin smoothness determination method and device and electronic equipment |
CN115359194B (en) | | Image processing method, image processing device, electronic equipment and storage medium |
CN113271462B (en) | | Method and device for evaluating video coding algorithm, computer equipment and medium |
Legal Events
Code | Title | Description
---|---|---
PB01 | Publication |
SE01 | Entry into force of request for substantive examination |
GR01 | Patent grant |
EE01 | Entry into force of recordation of patent licensing contract | Application publication date: 20210817; Assignee: Beijing Intellectual Property Management Co.,Ltd.; Assignor: BEIJING BAIDU NETCOM SCIENCE AND TECHNOLOGY Co.,Ltd.; Contract record no.: X2023110000093; Denomination of invention: Subpixel motion estimation method and device, computer equipment and medium; Granted publication date: 20221025; License type: Common License; Record date: 20230818