WO2021073066A1 - Image processing method and device - Google Patents
Image processing method and device
- Publication number
- WO2021073066A1 (PCT application PCT/CN2020/086269; CN2020086269W)
- Authority
- WO
- WIPO (PCT)
- Prior art keywords
- motion vector
- image frame
- target macroblock
- feature
- macroblock
- Prior art date
Classifications
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N19/00—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
- H04N19/10—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding
- H04N19/169—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the coding unit, i.e. the structural portion or semantic portion of the video signal being the object or the subject of the adaptive coding
- H04N19/17—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the coding unit, i.e. the structural portion or semantic portion of the video signal being the object or the subject of the adaptive coding the unit being an image region, e.g. an object
- H04N19/176—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the coding unit, i.e. the structural portion or semantic portion of the video signal being the object or the subject of the adaptive coding the unit being an image region, e.g. an object the region being a block, e.g. a macroblock
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N19/00—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
- H04N19/50—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using predictive coding
- H04N19/503—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using predictive coding involving temporal prediction
- H04N19/51—Motion estimation or motion compensation
- H04N19/527—Global motion vector estimation
Definitions
- the present disclosure relates to the field of computer image technology, and in particular to image processing methods and devices.
- A moving image can be divided into several blocks or macroblocks, and each block or macroblock is searched for in an adjacent frame image to obtain the relative offset of its spatial position between the two frames. This relative offset is usually referred to as the motion vector, and the process of obtaining the motion vector is called motion estimation.
- the inventor discovered a motion vector recognition method while studying the background technology.
- This method divides the current frame into multiple strips, detects the feature points of each strip row by row, and calculates the feature value corresponding to each feature point.
- The above scheme has problems. Each image frame undergoes strip-wise feature point detection and feature point matching in top-down order, the motion vector of each matched feature point is calculated, and a preset threshold determines whether the most frequently occurring motion vector is a global motion vector. This approach cannot guarantee that the global motion vector is accurately determined early in each frame. For example, if the global motion vector is only determined at a middle or later strip, only the current and subsequent strips can be coded according to it; the earlier strips have already been processed and cannot be encoded using the later-determined global motion vector. In this way, the image compression effect cannot be guaranteed.
- The present disclosure proposes an improved technical solution on the basis of the above-mentioned background art, in order to at least partially alleviate the above-mentioned problems.
- The embodiments of the present disclosure provide an image processing method and device, which can address the slow calculation of the global motion vector in existing image processing.
- the technical solution is as follows:
- an image processing method including:
- the motion vector of the target macroblock line is calculated according to a preset rule, and the motion vector of the target macroblock line is taken as the global motion vector.
- determining the target macroblock row according to the characteristic points of each macroblock row includes:
- calculating the motion vector of the target macroblock line according to a preset rule according to the characteristic points of the target macroblock line includes:
- the global motion vector is determined according to the motion vector of the matching feature point.
- determining a global motion vector according to the motion vector of the matching feature point includes:
- the motion vector of the matching feature point with the most occurrences is determined as the global motion vector.
- the above method further includes: dividing the current image frame into a plurality of strips; and performing macroblock type recognition on the current image frame according to the global motion vector.
- an image processing device which includes:
- the first determining module is configured to determine a target macroblock line according to the characteristic points of each macroblock line; wherein the target macroblock line includes N consecutive initial macroblock lines;
- the second determining module is configured to calculate the motion vector of the target macroblock line according to a preset rule according to the characteristic points of the target macroblock line, and use the motion vector of the target macroblock line as a global motion vector.
- the first determining module is specifically configured to:
- the second determining module includes:
- the first calculation sub-module is used to calculate the feature values of multiple feature points in the reference image frame
- the second calculation sub-module is used to calculate the characteristic value of each characteristic point in the target macroblock row of the current image frame
- the comparison sub-module is used to compare the feature value of each feature point in the target macroblock row of the current image frame with the feature value of multiple feature points in the reference image frame;
- the third calculation submodule is used to calculate the motion vector of each matching feature point in the target macroblock row relative to the matching feature point corresponding to the reference image frame;
- the determining sub-module is used to determine the global motion vector according to the motion vector of the matching feature point according to a preset rule.
- the determining submodule is specifically used for:
- the image processing method provided by the present disclosure is simple and fast, has high accuracy, and has a high image compression rate.
- Fig. 1 is a flowchart of an image processing method provided by an embodiment of the present disclosure
- Fig. 2 is a flowchart of determining a global motion vector provided by an embodiment of the present disclosure
- FIG. 3 is a flowchart of an image processing method provided by an embodiment of the present disclosure.
- FIG. 4 is a schematic diagram of images before and after translation provided by an embodiment of the present disclosure.
- FIG. 5 is a schematic diagram of a stripe division provided by an embodiment of the present disclosure.
- FIG. 6 is an example diagram of an application environment provided by an embodiment of the present disclosure.
- FIG. 7 is a structural diagram of an image processing device provided by an embodiment of the present disclosure.
- FIG. 8 is a structural diagram of an image processing device provided by an embodiment of the present disclosure.
- Fig. 9 is a structural diagram of an image processing device provided by an embodiment of the present disclosure.
- Fig. 1 is a flowchart of an image processing method provided by an embodiment of the present disclosure. As shown in Fig. 1, the image processing method includes the following steps:
- Step 101 Acquire multiple initial macroblock lines of a current image frame, and identify the characteristic points of each initial macroblock line; wherein, the current image frame includes multiple initial macroblock lines;
- The current frame image is first acquired by an image acquisition device and divided into multiple macroblocks according to a preset macroblock division method. The size of each macroblock is M × N, where M may be equal to N; specifically, the size of each macroblock may be 16 × 16, 8 × 8, and so on.
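As a minimal Python sketch of the macroblock division step above (not the patent's implementation; the function name and list-of-rows frame representation are illustrative assumptions), assuming the frame dimensions are multiples of the macroblock size, as is typical after encoder padding:

```python
def split_into_macroblocks(frame, size=16):
    """Split a frame (a list of pixel rows) into size x size macroblocks.

    Assumes the frame height and width are multiples of `size`.
    Returns the macroblocks in raster order (left to right, top to bottom).
    """
    h, w = len(frame), len(frame[0])
    return [
        [row[c:c + size] for row in frame[r:r + size]]
        for r in range(0, h, size)
        for c in range(0, w, size)
    ]

frame = [[0] * 64 for _ in range(64)]            # a 64x64 all-zero test frame
blocks = split_into_macroblocks(frame, size=16)  # 4 x 4 = 16 macroblocks
```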
- determining the target macroblock row includes:
- A feature point is a point where the gray value of the image changes drastically, or a point of large curvature on an image edge (i.e., the intersection of two edges).
- The N consecutive initial macroblock rows containing the most feature points are taken as the target macroblock row.
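Selecting the N consecutive initial macroblock rows with the most feature points can be sketched as a sliding-window sum over per-row feature counts. This is a hypothetical illustration; the function name and `feature_counts` input are assumptions, not taken from the patent:

```python
def select_target_rows(feature_counts, n):
    """Return (start_index, total) of the window of n consecutive
    macroblock rows whose summed feature-point count is largest.

    feature_counts[i] is the number of feature points in row i.
    Uses an O(len) sliding-window sum instead of re-summing each window.
    """
    best_start = 0
    window = best_sum = sum(feature_counts[:n])
    for start in range(1, len(feature_counts) - n + 1):
        # slide the window: add the entering row, drop the leaving row
        window += feature_counts[start + n - 1] - feature_counts[start - 1]
        if window > best_sum:
            best_start, best_sum = start, window
    return best_start, best_sum

# rows 2..3 together hold the most feature points (9 + 8 = 17)
start, total = select_target_rows([3, 1, 9, 8, 2, 0], n=2)
```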
- Step 102 Determine a target macroblock line according to the characteristic points of each initial macroblock line; wherein, the target macroblock line includes N consecutive initial macroblock lines;
- Step 103 Calculate the motion vector of the target macroblock line according to the characteristic points of the target macroblock line, and use the motion vector of the target macroblock line as a global motion vector.
- calculating the motion vector of the target macroblock line according to the characteristic points of the target macroblock line, and using the motion vector of the target macroblock line as the global motion vector includes:
- Since the reference frame is an image frame that has already been coded, all of its feature points and their feature values can be obtained.
- The reference frame is not limited to the frame immediately preceding the current image frame; it can actually be any image frame before the current one.
- The feature value of each feature point can be calculated with a feature extraction algorithm such as Scale-Invariant Feature Transform (SIFT), FAST, MSER, or STAR; alternatively, a hash of the feature point's pixel values can be calculated and used as the feature value.
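One hedged sketch of the hash-based option mentioned above (the function name, patch size, and hash choice are illustrative assumptions): hash the patch of pixel values around the feature point, so identical patches in two frames yield identical feature values and can be matched exactly. Unlike descriptors such as SIFT, this is not robust to any pixel change, which is acceptable for computer (desktop) images that move without blending:

```python
import hashlib

def feature_value(frame, x, y, patch=4):
    """Return a hash of the patch x patch pixel region whose top-left
    corner is (x, y), used as the feature value of that feature point."""
    region = bytes(frame[yy][xx]
                   for yy in range(y, y + patch)
                   for xx in range(x, x + patch))
    return hashlib.md5(region).hexdigest()

frame_a = [[(r * 31 + c) % 256 for c in range(16)] for r in range(16)]
frame_b = [row[:] for row in frame_a]   # a second frame with identical content
# the same patch hashes to the same feature value in both frames
```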
- Step 1032 Calculate the feature value of each feature point in the target macroblock row of the current image frame
- This step is the same as the method of calculating the feature value of the reference frame in step 1031, and will not be repeated here.
- Step 1033 Compare the feature value of each feature point in the target macroblock row of the current image frame with the feature value of multiple feature points in the reference image frame;
- Step 1034 Mark the feature points with the same feature values of the feature points in the target macroblock row of the current image frame and the feature points in the reference frame as matching feature points;
- Step 1035 Calculate the motion vector of each matching feature point in the target macroblock row relative to the matching feature point corresponding to the reference image frame;
- The motion vector of each target feature point relative to its matching feature point is represented as (mv_x, mv_y), where mv_x is the offset on the x-axis (horizontal) and mv_y is the offset on the y-axis (vertical).
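Steps 1033 to 1035 (compare feature values, mark matches, compute per-point offsets) can be sketched together in Python. The dictionary representation and all names here are illustrative assumptions, not the patent's data structures:

```python
def match_and_vectors(ref_points, cur_points):
    """ref_points / cur_points map a feature value to the (x, y) position
    of the feature point in the reference / current frame.

    Points sharing a feature value are matching feature points; each
    contributes the motion vector (mv_x, mv_y) = current minus reference
    position. Unmatched points contribute nothing.
    """
    vectors = []
    for value, (cx, cy) in cur_points.items():
        if value in ref_points:
            rx, ry = ref_points[value]
            vectors.append((cx - rx, cy - ry))
    return vectors

ref = {"a": (10, 20), "b": (40, 20)}                 # reference frame points
cur = {"a": (15, 22), "b": (45, 22), "c": (0, 0)}    # current frame points
# "a" and "b" both moved by (5, 2); "c" has no match in the reference frame
```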
- Step 1036 Determine a global motion vector according to the motion vector of the matching feature point according to a preset rule.
- determining a global motion vector according to the motion vector of the matching feature point includes:
- the motion vector of the matching feature point with the most occurrences is determined as the global motion vector.
- For example, the motion vectors of the feature points in the target macroblock row include (mv_x1, mv_y1), (mv_x2, mv_y2) and (mv_x3, mv_y3), where (mv_x1, mv_y1) appears 5 times, (mv_x2, mv_y2) appears 102 times, and (mv_x3, mv_y3) appears 11 times. The most frequent motion vector, (mv_x2, mv_y2), can therefore be determined as the global motion vector.
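Picking the most frequent motion vector, as in the numeric example above, is a simple histogram vote; a minimal Python sketch (the function name is an assumption):

```python
from collections import Counter

def global_motion_vector(vectors):
    """Return the motion vector that occurs most often among the matched
    feature points (a majority vote over the vector histogram)."""
    vector, _count = Counter(vectors).most_common(1)[0]
    return vector

# mirrors the example: one vector 5 times, one 102 times, one 11 times
mvs = [(-3, 0)] * 5 + [(12, 7)] * 102 + [(1, 1)] * 11
# (12, 7) occurs 102 times, more than any other vector
```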
- the method further includes: performing macroblock type recognition on the current image frame according to the global motion vector.
- the method further includes:
- Step 104 Divide the current image frame into multiple strips, and perform macroblock type recognition strip by strip according to the determined global motion vector.
- the current frame image can be divided into multiple strips in a preset manner, and the height of each strip can be equal to the height of one macroblock row or multiple continuous macroblock rows.
- the strip division method can be set and adjusted according to actual needs.
- a frame of image can be equally divided into N strips, and the number of N can be set as required.
- Figure 4 is a schematic diagram of images before and after translation.
- Figure 5 is a schematic diagram of the stripe division of the right image in Figure 4. Referring to Figure 5, the original image is equally divided into six strips (a)-(f); the height of each strip equals the height of two macroblock rows, and its length equals the length of seventeen macroblock columns.
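The strip layout of Figure 5 (six strips, each two macroblock rows high) can be produced by a simple division helper; a sketch under the assumption that strip boundaries align with macroblock rows (the function name is hypothetical):

```python
def divide_into_strips(total_mb_rows, rows_per_strip=2):
    """Return half-open (start_row, end_row) macroblock-row ranges, one
    per strip; e.g. 12 macroblock rows -> 6 strips of 2 rows each.
    The final strip is clamped if the row count is not an exact multiple."""
    return [(r, min(r + rows_per_strip, total_mb_rows))
            for r in range(0, total_mb_rows, rows_per_strip)]

strips = divide_into_strips(12, rows_per_strip=2)
# [(0, 2), (2, 4), (4, 6), (6, 8), (8, 10), (10, 12)]
```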
- Macroblock type identification is performed strip by strip from top to bottom.
- the specific identification steps include:
- The macroblocks contained in the current strip are compared with the macroblocks in the reference frame one by one, and the macroblock type is identified according to the comparison result.
- the recognition of the macroblock type according to the comparison result specifically includes:
- If the currently compared macroblock is identical to the macroblock at the same position in the reference frame, it is determined to be a zero-motion macroblock. Otherwise, the currently compared macroblock is moved in reverse according to the global motion vector (assuming the currently compared macroblock is a macroblock that has moved according to the global motion vector; the so-called reverse motion restores it to its initial position before the motion), and its position after the reverse motion (i.e., the initial position) is determined. If the macroblock at that initial position in the reference frame is identical to the currently compared macroblock, the currently compared macroblock is determined to be a global-motion macroblock.
- All other macroblocks are considered to contain changed picture content, and other encoding rules can be used for them (for example, H.264 video encoding).
- The comparison process compares the pixel values of the two macroblocks pixel by pixel: if all pixels are identical, the two macroblocks are determined to be the same; otherwise, they are determined to be different.
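The zero-motion / global-motion / other classification described above can be sketched as follows. This is a hedged illustration, not the patent's implementation: the names are hypothetical, frames are lists of pixel rows, and exact pixel equality stands in for the comparison:

```python
def get_block(frame, x, y, size):
    """Extract the size x size macroblock whose top-left corner is (x, y)."""
    return [row[x:x + size] for row in frame[y:y + size]]

def classify_macroblock(cur, ref, x, y, gmv, size=16):
    """Identical at the same position -> zero-motion; identical after
    undoing the global motion vector gmv -> global-motion; otherwise the
    block is left to the normal coding path (e.g. H.264)."""
    h, w = len(ref), len(ref[0])
    block = get_block(cur, x, y, size)
    if block == get_block(ref, x, y, size):
        return "zero_motion"
    mv_x, mv_y = gmv
    ox, oy = x - mv_x, y - mv_y            # initial position before the motion
    if 0 <= ox <= w - size and 0 <= oy <= h - size:
        if block == get_block(ref, ox, oy, size):
            return "global_motion"
    return "other"

ref = [[0] * 32 for _ in range(32)]
for yy in range(16):
    for xx in range(16):
        ref[yy][xx] = 7                    # a textured block at (0, 0)
cur = [[0] * 32 for _ in range(32)]
for yy in range(16):
    for xx in range(16):
        cur[16 + yy][xx] = 7               # same content moved down by 16
cur[0][0] = 99                             # new content appears here
# (0, 16) with gmv (0, 16) -> "global_motion"; (16, 0) -> "zero_motion"
```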
- the above scheme also includes:
- If more than one motion vector is tied for the most occurrences, the position of the target macroblock row is expanded, for example by extending it one row upward or downward; motion vector calculation is then performed on the resulting N+1 consecutive macroblock rows, and the most frequent of the calculated motion vectors is determined as the global motion vector. If a single most frequent motion vector still cannot be determined, the current N+1 consecutive macroblock rows are again extended upward or downward by one row, and so on, until the most frequent motion vector is found. The expansion may alternate between upward and downward.
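The alternating expansion described above can be sketched as a small helper (hypothetical, not from the patent; row ranges are half-open [start, end) over macroblock rows, clamped to the frame):

```python
def expand_rows(start, end, total_rows, step):
    """Grow the target row range by one row per call: even steps try to
    extend upward first, odd steps downward, each falling back to the
    other direction when already at a frame edge."""
    if step % 2 == 0 and start > 0:
        return start - 1, end              # extend one row upward
    if end < total_rows:
        return start, end + 1              # extend one row downward
    if start > 0:
        return start - 1, end              # bottom edge reached: go up
    return start, end                      # whole frame already covered

# (3, 5) grows upward on an even step, then downward on an odd step
```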
- the method may further include:
- Step 105 For each strip, perform inter-frame coding based on the result of macroblock recognition.
- pipeline processing is performed in the order of the strips, that is, when a strip is encoded, the strip is transmitted immediately, thereby reducing the delay at the encoding and decoding end.
- The current frame can be divided into multiple strips, and each strip can be matched and its offset vector calculated at the same time. This improves processing efficiency and speeds up the calculation of the global motion vector, meeting the high compression efficiency required for real-time transmission of high-definition video.
- the present disclosure is mainly aimed at desktop virtualization and cloud desktop scenarios, and is mainly used for motion vector recognition and encoding and decoding of computer images.
- the so-called computer image is simply the desktop image produced by the user operating the computer.
- the continuously changing natural image forms a natural image video
- the continuously changing computer image forms a computer image video.
- Computer image video has more distinctive characteristics; for example, its motion vectors show a certain regularity compared with natural video. This is determined by the way the images are generated: since computer images are produced by user operations, a user operation may or may not generate a motion vector between two frames. When a motion vector is generated, it mostly comes from a mouse drag operation, and in that case the number of motion vectors is usually one.
- Such a motion vector can be called a global motion vector. In contrast, the motion vectors in natural image video are irregular, because in natural video multiple objects may shift in different directions between two frames, producing multiple motion vectors.
- the present disclosure mainly studies computer images with relatively simple conditions.
- Fig. 6 is an example diagram of an application environment of encoding and decoding in the image processing process of the present disclosure.
- the video signal is encoded in the encoding end, and then transmitted to the decoding end through the network transmission channel.
- the encoding end is located on the server end; the decoding end is located on the receiving device.
- The receiving device can be a personal computer, a mobile phone, etc.; in the desktop virtualization scenario, the receiving device can be a zero client.
- The number of receiving devices may be one or more, which is not limited in the present disclosure.
- FIG. 7 is a structural diagram of a first image processing device provided by an embodiment of the present disclosure.
- The image processing device 70 shown in FIG. 7 includes an acquisition module 701, a first determining module 702, and a second determining module 703. The acquisition module 701 is used to acquire multiple initial macroblock rows of the current image frame and identify the feature points of each initial macroblock row, the current image frame comprising multiple initial macroblock rows; the first determining module 702 is used to determine a target macroblock row according to the feature points of each initial macroblock row, the target macroblock row comprising N consecutive initial macroblock rows; and the second determining module 703 is used to calculate the motion vector of the target macroblock row according to a preset rule based on the feature points of the target macroblock row, and to use that motion vector as the global motion vector.
- The first determining module 702 is specifically configured to determine the N consecutive initial macroblock rows containing the most feature points as the target macroblock row.
- FIG. 8 is a structural diagram of a first image processing device provided by an embodiment of the present disclosure.
- the image processing device 80 shown in FIG. 8 includes an acquisition module 801, a first determination module 802, and a second determination module 803, wherein the second determination module 803 includes:
- the first calculation sub-module 8031 is used to calculate the feature values of multiple feature points in the reference image frame
- the second calculation sub-module 8032 is configured to calculate the characteristic value of each characteristic point in the target macroblock row of the current image frame
- the comparison sub-module 8033 is configured to compare the feature value of each feature point in the target macroblock row of the current image frame with the feature value of multiple feature points in the reference image frame;
- the identification sub-module 8034 is configured to identify the feature points whose feature values of each feature point in the target macroblock row of the current image frame are the same as the feature values of the feature points in the reference frame as matching feature points;
- the third calculation sub-module 8035 is configured to calculate the motion vector of each matching feature point in the target macroblock row relative to the matching feature point corresponding to the reference image frame;
- the determining sub-module 8036 is configured to determine a global motion vector according to the motion vector of the matching feature point according to a preset rule.
- the determining submodule 8036 is specifically configured to:
- the motion vector of the matching feature point with the most occurrences is determined as the global motion vector.
- FIG. 9 is a structural diagram of the first image processing device provided by an embodiment of the present disclosure.
- the image processing device 90 shown in FIG. 9 includes an acquisition module 901, a first determination module 902, a second determination module 903, and an identification module 904, in which,
- the recognition module 904 is configured to divide the current image frame into multiple strips, and perform macroblock type recognition on the current image frame according to the global motion vector.
- the embodiment of the present disclosure also provides a computer-readable storage medium.
- the non-transitory computer-readable storage medium may be a read-only memory (English: Read Only Memory, ROM), random access memory (English: Random Access Memory, RAM), CD-ROM, magnetic tape, floppy disk and optical data storage device, etc.
- the storage medium stores computer instructions for executing the image processing method described in the embodiment corresponding to FIG. 1, which will not be repeated here.
Abstract
Description
Claims (10)
- 1. An image processing method, characterized in that the method comprises: acquiring multiple initial macroblock rows of a current image frame and identifying feature points of each initial macroblock row, wherein the current image frame comprises multiple initial macroblock rows; determining a target macroblock row according to the feature points of each initial macroblock row, wherein the target macroblock row comprises N consecutive initial macroblock rows; and calculating a motion vector of the target macroblock row according to the feature points of the target macroblock row, and using the motion vector of the target macroblock row as a global motion vector.
- 2. The image processing method according to claim 1, characterized in that determining the target macroblock row according to the feature points of each initial macroblock row comprises: determining the N consecutive initial macroblock rows containing the most feature points as the target macroblock row.
- 3. The image processing method according to claim 1, characterized in that calculating the motion vector of the target macroblock row according to a preset rule based on the feature points of the target macroblock row, and using the motion vector of the target macroblock row as the global motion vector, comprises: calculating feature values of multiple feature points in a reference image frame; calculating the feature value of each feature point in the target macroblock row of the current image frame; comparing the feature value of each feature point in the target macroblock row of the current image frame with the feature values of the multiple feature points in the reference image frame; marking feature points in the target macroblock row of the current image frame whose feature values are the same as those of feature points in the reference frame as matching feature points; calculating the motion vector of each matching feature point in the target macroblock row relative to its corresponding matching feature point in the reference image frame; and determining the global motion vector according to the motion vectors of the matching feature points following a preset rule.
- 4. The image processing method according to claim 3, characterized in that determining the global motion vector according to the motion vectors of the matching feature points following a preset rule comprises: determining the motion vector of the matching feature points with the most occurrences as the global motion vector.
- 5. The image processing method according to claim 4, characterized in that the method further comprises: dividing the current image frame into multiple strips; and performing macroblock type recognition on the current image frame strip by strip according to the global motion vector.
- 6. An image processing device, characterized in that the device comprises: an acquisition module configured to acquire multiple initial macroblock rows of a current image frame and identify feature points of each initial macroblock row, wherein the current image frame comprises multiple initial macroblock rows; a first determining module configured to determine a target macroblock row according to the feature points of each initial macroblock row, wherein the target macroblock row comprises N consecutive initial macroblock rows; and a second determining module configured to calculate a motion vector of the target macroblock row according to a preset rule based on the feature points of the target macroblock row, and to use the motion vector of the target macroblock row as a global motion vector.
- 7. The image processing device according to claim 6, characterized in that the first determining module is specifically configured to determine the N consecutive initial macroblock rows containing the most feature points as the target macroblock row.
- 8. The image processing device according to claim 6, characterized in that the second determining module comprises: a first calculation submodule configured to calculate feature values of multiple feature points in a reference image frame; a second calculation submodule configured to calculate the feature value of each feature point in the target macroblock row of the current image frame; a comparison submodule configured to compare the feature value of each feature point in the target macroblock row of the current image frame with the feature values of the multiple feature points in the reference image frame; an identification submodule configured to mark feature points in the target macroblock row of the current image frame whose feature values are the same as those of feature points in the reference frame as matching feature points; a third calculation submodule configured to calculate the motion vector of each matching feature point in the target macroblock row relative to its corresponding matching feature point in the reference image frame; and a determining submodule configured to determine the global motion vector according to the motion vectors of the matching feature points following a preset rule.
- 9. The image processing device according to claim 8, characterized in that the determining submodule is specifically configured to determine the motion vector of the matching feature points with the most occurrences as the global motion vector.
- 10. The image processing device according to claim 9, characterized in that the device further comprises: an identification module configured to divide the current image frame into multiple strips and perform macroblock type recognition on the current image frame according to the global motion vector.
Applications Claiming Priority (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201910986217.7 | 2019-10-17 | ||
CN201910986217.7A CN110933428B (zh) | 2019-10-17 | 2019-10-17 | Image processing method and device |
Publications (1)
Publication Number | Publication Date |
---|---|
WO2021073066A1 true WO2021073066A1 (zh) | 2021-04-22 |
Family
ID=69849222
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
PCT/CN2020/086269 WO2021073066A1 (zh) | 2020-04-23 | Image processing method and device |
Country Status (2)
Country | Link |
---|---|
CN (1) | CN110933428B (zh) |
WO (1) | WO2021073066A1 (zh) |
Families Citing this family (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN110933428B (zh) * | 2019-10-17 | 2023-03-17 | Xi'an Wanxiang Electronic Technology Co., Ltd. | Image processing method and device |
CN111310744B (zh) * | 2020-05-11 | 2020-08-11 | Tencent Technology (Shenzhen) Co., Ltd. | Image recognition method, video playing method, related device and medium |
CN111770334B (zh) * | 2020-07-23 | 2023-09-22 | Xi'an Wanxiang Electronic Technology Co., Ltd. | Data encoding method and device, data decoding method and device |
CN112087626A (zh) * | 2020-08-21 | 2020-12-15 | Xi'an Wanxiang Electronic Technology Co., Ltd. | Image processing method, device and storage medium |
Citations (8)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN102763136A (zh) * | 2010-02-11 | 2012-10-31 | Nokia Corporation | Method and apparatus for providing multithreaded video decoding |
JP2015026922A (ja) * | 2013-07-25 | 2015-02-05 | Mitsubishi Electric Corporation | Moving image encoding device and moving image encoding method |
CN105379279A (zh) * | 2013-06-12 | 2016-03-02 | Microsoft Technology Licensing, LLC | Screen map and standards-based progressive codec for screen content coding |
CN106375771A (zh) * | 2016-08-31 | 2017-02-01 | Su Rui | Image feature matching method and device |
CN106470342A (zh) * | 2015-08-14 | 2017-03-01 | Spreadtrum Communications (Shanghai) Co., Ltd. | Global motion estimation method and device |
CN107197278A (zh) * | 2017-05-24 | 2017-09-22 | Xi'an Wanxiang Electronic Technology Co., Ltd. | Method and device for processing the global motion vector of a screen image |
US20190116376A1 (en) * | 2017-10-12 | 2019-04-18 | Qualcomm Incorporated | Motion vector predictors using affine motion model in video coding |
CN110933428A (zh) * | 2019-10-17 | 2020-03-27 | Xi'an Wanxiang Electronic Technology Co., Ltd. | Image processing method and device |
Family Cites Families (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
WO2008133455A1 (en) * | 2007-04-25 | 2008-11-06 | Lg Electronics Inc. | A method and an apparatus for decoding/encoding a video signal |
US8130277B2 (en) * | 2008-02-20 | 2012-03-06 | Aricent Group | Method and system for intelligent and efficient camera motion estimation for video stabilization |
TR201104918A2 (tr) * | 2011-05-20 | 2012-12-21 | Vestel Elektroni̇k Sanayi̇ Ve Ti̇caret A.Ş. | Method and device for generating a depth map and 3D video |
CN103517078A (zh) * | 2013-09-29 | 2014-01-15 | Graduate School at Shenzhen, Tsinghua University | Side information generation method in distributed video coding |
CN105263026B (zh) * | 2015-10-12 | 2018-04-17 | Xidian University | Global vector acquisition method based on probability statistics and image gradient information |
- 2019-10-17: CN application CN201910986217.7A, granted as patent CN110933428B (zh), status Active
- 2020-04-23: WO application PCT/CN2020/086269, published as WO2021073066A1 (zh), status Application Filing
Non-Patent Citations (1)
Title |
---|
LAN TIAN, LI YUANYUAN, MURUGI JONAH KIMANI, DING YI, QIN ZHIGUANG: "RUN: Residual U-Net for Computer-Aided Detection of Pulmonary Nodules without Candidate Selection", 30 May 2018 (2018-05-30), XP055802437, Retrieved from the Internet <URL:https://arxiv.org/abs/1805.11856v1> [retrieved on 20210507] * |
Also Published As
Publication number | Publication date |
---|---|
CN110933428A (zh) | 2020-03-27 |
CN110933428B (zh) | 2023-03-17 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
WO2021073066A1 (zh) | Image processing method and device | |
US9852511B2 | Systems and methods for tracking and detecting a target object | |
US6380986B1 | Motion vector search method and apparatus | |
JP2000050281A (ja) | Motion vector detection method and device, and recording medium | |
CN109688407B (zh) | Reference block selection method and device for a coding unit, electronic device and storage medium | |
CN101120594B (zh) | Global motion estimation | |
WO2019072248A1 (zh) | Motion estimation method and device, electronic device and computer-readable storage medium | |
KR102080694B1 (ko) | Method and apparatus for motion estimation in depth image coding via surface modeling, and non-transitory computer-readable recording medium | |
JP2005354528A (ja) | Motion vector detection device and method | |
WO2018230294A1 (ja) | Moving image processing device, display device, moving image processing method, and control program | |
US8509303B2 | Video descriptor generation device | |
US20240080439A1 | Intra-frame predictive coding method and system for 360-degree video and medium | |
JP2011041275A (ja) | Image processing method, data processing method, computer-readable medium, and data processing device | |
CN110839157B (zh) | Image processing method and device | |
JP2014110020A (ja) | Image processing device, image processing method and image processing program | |
CN110493599B (zh) | Image recognition method and device | |
KR101541077B1 (ko) | Frame interpolation device and method using texture-based block partitioning | |
CN114040209A (zh) | Motion estimation method and device, electronic device and storage medium | |
CN110780780B (zh) | Image processing method and device | |
US20110228851A1 | Adaptive search area in motion estimation processes | |
CN113810692A (zh) | Method for framing changes and movement, image processing device and program product | |
JP5173946B2 (ja) | Encoding preprocessing device, encoding device, decoding device and program | |
KR101925785B1 (ko) | Video encoding method for hash-based intra prediction | |
KR100451184B1 (ko) | Motion vector search method | |
US10063880B2 | Motion detecting apparatus, motion detecting method and program |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
121 | Ep: the epo has been informed by wipo that ep was designated in this application | Ref document number: 20877008; Country of ref document: EP; Kind code of ref document: A1 |
NENP | Non-entry into the national phase | Ref country code: DE |
122 | Ep: pct application non-entry in european phase | Ref document number: 20877008; Country of ref document: EP; Kind code of ref document: A1 |
32PN | Ep: public notification in the ep bulletin as address of the addressee cannot be established | Free format text: NOTING OF LOSS OF RIGHTS PURSUANT TO RULE 112(1) EPC (EPO FORM 1205A DATED 14-10-2022) |