CN113542752A - Video data processing method and device - Google Patents
- Publication number
- CN113542752A (application number CN202110693874.XA)
- Authority
- CN
- China
- Prior art keywords
- data
- target
- mouse
- image data
- coordinate
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Classifications
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N19/00—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
- H04N19/50—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using predictive coding
- H04N19/503—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using predictive coding involving temporal prediction
- H04N19/51—Motion estimation or motion compensation
- H04N19/513—Processing of motion vectors
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N19/00—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
- H04N19/10—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding
- H04N19/169—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the coding unit, i.e. the structural portion or semantic portion of the video signal being the object or the subject of the adaptive coding
- H04N19/17—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the coding unit, i.e. the structural portion or semantic portion of the video signal being the object or the subject of the adaptive coding the unit being an image region, e.g. an object
- H04N19/176—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the coding unit, i.e. the structural portion or semantic portion of the video signal being the object or the subject of the adaptive coding the unit being an image region, e.g. an object the region being a block, e.g. a macroblock
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N19/00—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
- H04N19/42—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals characterised by implementation details or hardware specially adapted for video compression or decompression, e.g. dedicated software implementation
Landscapes
- Engineering & Computer Science (AREA)
- Multimedia (AREA)
- Signal Processing (AREA)
- User Interface Of Digital Computer (AREA)
- Image Analysis (AREA)
- Compression Or Coding Systems Of Tv Signals (AREA)
Abstract
The invention provides a video data processing method and device, relates to the field of communications, and addresses the resource waste and low data processing efficiency caused by block-by-block motion-vector calculation in video compression. The specific technical scheme is as follows: acquire target image data and corresponding target mouse data; generate comparison data according to the target mouse data and preset mouse data; and finally generate a target motion vector according to the target mouse data and the comparison data. By deriving the motion vector between two frames of image data from the mouse data of the current image data, the method avoids the heavy computational cost of calculating motion vectors block by block during video compression and improves data processing efficiency. The invention is used for processing video data.
Description
Technical Field
The present disclosure relates to the field of image processing, and in particular, to a method and an apparatus for processing video data.
Background
Video image data is highly correlated and therefore contains a large amount of redundant information. To eliminate this redundancy, the prior art applies video compression: in video coding, motion vectors reduce inter-frame information redundancy and thereby shrink the coded bitstream. A common way to compute a motion vector is to run a motion search algorithm that performs many pixel-level comparisons between a given block in the current frame and candidate blocks in a reference frame. Because the prior art must compare the current image data with the reference image data macroblock by macroblock to compute each macroblock's motion vector before the compression of the video image can be completed, it wastes data processing resources and reduces the efficiency of data processing.
Disclosure of Invention
The embodiments of the present disclosure provide a video data processing method and device that address the resource waste and low data processing efficiency caused by motion-vector calculation in video compression. The technical scheme is as follows:
according to a first aspect of embodiments of the present disclosure, there is provided a video data processing method, including:
acquiring target image data and corresponding target mouse data, wherein the mouse data comprises state data and position data of the mouse, the state data comprising mouse wheel information and mouse button information;
generating comparison data according to the target mouse data and the preset mouse data, wherein the comparison data comprises coordinate difference data between the target mouse and the preset mouse;
and generating a target motion vector according to the target mouse data and the comparison data.
In one embodiment, before the target image data and the corresponding target mouse data are acquired, the method comprises:
acquiring previous frame image data corresponding to the target image data and corresponding mouse data in the previous frame image data;
and when the previous frame of image data is inconsistent with the target image data, determining that the previous frame of image data is preset image data and the mouse data is preset mouse data.
In one embodiment, the coordinate difference data between the target mouse and the preset mouse is used as follows:
when the coordinate difference data is not zero, the target motion vector is generated according to the target mouse data and the comparison data.
In one embodiment, the method for generating the target motion vector according to the target mouse data and the comparison data comprises the following steps:
if the state data in the target mouse data indicate a dragging state, acquiring coordinate data of the target mouse, wherein the coordinate data comprise an x-axis coordinate data value x and a y-axis coordinate data value y;
from the coordinate data, (x, y) coordinate point data, (0, y) coordinate point data, and (x, 0) coordinate point data are acquired, and a target motion vector is generated.
In one embodiment, the method for generating the target motion vector according to the target mouse data and the comparison data comprises the following steps:
if the state data in the target mouse data indicate a non-dragging state and the state data in the target mouse data indicate an up-down rolling state, acquiring coordinate data of the target mouse, wherein the coordinate data comprise an x-axis coordinate data value x and a y-axis coordinate data value y;
and when the target image data is inconsistent with the preset image data, acquiring (y, 0) coordinate point data according to the coordinate data and generating a target motion vector.
In one embodiment, after generating the target motion vector according to the target mouse data and the comparison data, the method further includes:
acquiring at least one target macro block image data and corresponding target position data in the target image data;
acquiring preset macro block image data of a corresponding position in preset image data according to the target position data;
and storing the target motion vector if the target macro block image data is consistent with the preset macro block image data after the target motion vector is moved.
In the data processing method provided by the embodiments of the present disclosure, target image data and corresponding target mouse data are acquired; comparison data are generated according to the target mouse data and preset mouse data; and finally a target motion vector is generated according to the target mouse data and the comparison data. By deriving the motion vector between two frames of image data from the mouse data of the current image data, the method avoids the heavy computational cost of calculating motion vectors block by block during video compression and improves calculation efficiency.
According to a second aspect of the embodiments of the present disclosure, there is provided a video data processing apparatus comprising: the device comprises an acquisition module, a first generation module and a second generation module;
the acquisition module is respectively connected with the first generation module and the second generation module;
the acquisition module is used for acquiring target image data and corresponding target mouse data, wherein the mouse data comprises state data and position data of the mouse, the state data comprising mouse wheel information and mouse button information;
the first generation module is used for generating comparison data according to the target mouse data and the preset mouse data, wherein the comparison data comprises coordinate difference data between the target mouse and the preset mouse;
and the second generation module is used for generating a target motion vector according to the target mouse data and the comparison data.
In one embodiment, the obtaining module of the apparatus further comprises: a first acquisition unit and a first determination unit,
the first acquisition unit is used for acquiring the previous frame of image data corresponding to the target image data and the corresponding mouse data in the previous frame of image data;
and the first determining unit is used for determining the previous frame of image data as the preset image data and the mouse data as the preset mouse data when the previous frame of image data is inconsistent with the target image data.
In one embodiment, the second generating module of the apparatus further comprises: a first acquisition unit and a first generation unit;
the first obtaining unit is used for obtaining coordinate data of the target mouse when the state data in the target mouse data indicate a dragging state, wherein the coordinate data comprise an x-axis coordinate data value x and a y-axis coordinate data value y;
a first generation unit configured to acquire (x, y) coordinate point data, (0, y) coordinate point data, and (x, 0) coordinate point data from the coordinate data, and generate a target motion vector.
In one embodiment, the second generating module of the apparatus further comprises: a second acquisition unit and a second generation unit;
the second acquisition unit is used for acquiring coordinate data of the target mouse when the state data in the target mouse data indicate a non-dragging state and the state data in the target mouse data indicate an up-down rolling state, wherein the coordinate data comprise an x-axis coordinate data value x and a y-axis coordinate data value y;
and a second generation unit configured to acquire (y, 0) coordinate point data according to the coordinate data and generate a target motion vector when the target image data is inconsistent with the preset image data.
It is to be understood that both the foregoing general description and the following detailed description are exemplary and explanatory only and are not restrictive of the disclosure.
Drawings
The accompanying drawings, which are incorporated in and constitute a part of this specification, illustrate embodiments consistent with the present disclosure and together with the description, serve to explain the principles of the disclosure.
Fig. 1 is a flowchart of a video data processing method provided by an embodiment of the present disclosure;
fig. 2 is a block diagram of a video data processing apparatus according to an embodiment of the present disclosure;
fig. 3 is a block diagram of an acquisition module in a video data processing apparatus according to an embodiment of the disclosure;
fig. 4 is a block diagram of a second generation module in a video data processing apparatus according to an embodiment of the disclosure;
fig. 5 is a block diagram 1 of a second generation module in a video data processing apparatus according to an embodiment of the disclosure.
Detailed Description
Reference will now be made in detail to the exemplary embodiments, examples of which are illustrated in the accompanying drawings. When the following description refers to the accompanying drawings, like numbers in different drawings represent the same or similar elements unless otherwise indicated. The implementations described in the exemplary embodiments below are not intended to represent all implementations consistent with the present disclosure. Rather, they are merely examples of apparatus and methods consistent with certain aspects of the present disclosure, as detailed in the appended claims.
Example one
An embodiment of the present disclosure provides a data processing method, as shown in fig. 1, the video data processing method includes the following steps:
101. and acquiring target image data and corresponding target mouse data.
The mouse data includes: state data of the mouse and position data of the mouse; the state data of the mouse comprises mouse wheel information and mouse button information.
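The mouse data described above can be sketched as a simple structure. This is a minimal illustration, not the patent's actual data format; all field names are assumptions.

```python
from dataclasses import dataclass

@dataclass
class MouseState:
    """State data of the mouse (hypothetical field names)."""
    wheel_delta: int       # mouse wheel information: signed direction and magnitude
    button_pressed: bool   # mouse button information

@dataclass
class MouseData:
    """Mouse data = state data + position data, per the description above."""
    state: MouseState
    x: int                 # position data: x-axis coordinate
    y: int                 # position data: y-axis coordinate
```

In this reading, the target mouse data and the preset mouse data would both be instances of `MouseData` captured alongside their respective frames.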
Before acquiring the target image data and the corresponding target mouse data, the method comprises the following steps:
acquiring previous frame image data corresponding to the target image data and corresponding mouse data in the previous frame image data;
and when the previous frame of image data is inconsistent with the target image data, determining that the previous frame of image data is preset image data and the mouse data is preset mouse data.
When the previous frame of image data is consistent with the target image data, the previous frame of image data is not used as the preset image data, and the video data processing process ends.
In an alternative embodiment, when the preset image data and the target image data are not consistent, the method includes:
acquiring at least one target macro block image data in the target image data and corresponding position information;
acquiring macro block image data of a corresponding position in the previous frame of image data according to the position information;
and when the target macro block image data is consistent with the macro block image data at the corresponding position in the previous frame of image data, confirming that the target image data is consistent with the previous frame of image data.
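The macroblock-by-macroblock frame-consistency check described above can be sketched as follows. This is one plausible reading under stated assumptions: frames are lists of pixel rows, and the macroblock size (not fixed by the patent) is taken as 16.

```python
MB = 16  # assumed macroblock size; the patent does not specify a value

def blocks(frame, mb=MB):
    """Yield (x, y, block) tuples covering a frame given as a list of rows."""
    h, w = len(frame), len(frame[0])
    for y in range(0, h, mb):
        for x in range(0, w, mb):
            yield x, y, [row[x:x + mb] for row in frame[y:y + mb]]

def frames_consistent(target, previous, mb=MB):
    """Frames are consistent only if every target macroblock equals the
    co-located macroblock of the previous frame (illustrative interpretation
    of the check described above)."""
    if len(target) != len(previous) or len(target[0]) != len(previous[0]):
        return False
    prev_blocks = {(x, y): blk for x, y, blk in blocks(previous, mb)}
    return all(blk == prev_blocks[(x, y)] for x, y, blk in blocks(target, mb))
```

When `frames_consistent` returns true, no new motion vector is needed and processing ends, matching the early-exit branch above.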
102. And generating comparison data according to the target mouse data and the preset mouse data.
The comparison data comprise the coordinate difference data between the target mouse and the preset mouse, and indicate whether the position of the target mouse is consistent with that of the preset mouse, that is, whether the two mouse positions coincide.
In an optional embodiment, the coordinate difference data between the target mouse and the preset mouse are used as follows:
when the coordinate difference data is not zero, the target motion vector is generated according to the target mouse data and the comparison data. The coordinate difference data indicate whether the target mouse coordinate has changed relative to the preset mouse coordinate, that is, whether the two coordinates coincide.
When the difference is not zero but the state data in the target mouse data show that the mouse wheel state has not changed, the motion vector is instead calculated with a conventional motion search algorithm.
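Step 102 and the branch above can be sketched as two small helpers. This is an illustrative interpretation; the function names and the tuple representation of positions are assumptions, not from the patent.

```python
def comparison_data(target_pos, preset_pos):
    """Coordinate difference data between the target mouse and the preset
    mouse, each given as an (x, y) tuple."""
    return (target_pos[0] - preset_pos[0], target_pos[1] - preset_pos[1])

def needs_motion_search(diff, wheel_changed):
    """Per the description: if the coordinate difference is non-zero but the
    mouse wheel state has not changed, fall back to a conventional motion
    search algorithm instead of deriving the vector from the mouse data."""
    return diff != (0, 0) and not wheel_changed
```

A zero difference means the two mouse positions coincide and no mouse-derived vector is produced; a non-zero difference with a changed wheel state proceeds to step 103.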
103. Generating a target motion vector according to the target mouse data and the comparison data.
In an alternative embodiment, generating the target motion vector based on the target mouse data and the comparison data comprises:
if the state data in the target mouse data indicate a dragging state, acquiring coordinate data of the target mouse, wherein the coordinate data comprise an x-axis coordinate data value x and a y-axis coordinate data value y; the dragging state comprises a state that the target mouse is pressed and moved;
From the coordinate data, (x, y) coordinate point data, (0, y) coordinate point data, and (x, 0) coordinate point data are acquired, and the target motion vector is generated. For example, if the coordinate data of the target mouse is (4, 5), that is, the x-axis coordinate data value is 4 and the y-axis coordinate data value is 5, then the (x, y) coordinate point data is (4, 5), the (0, y) coordinate point data is (0, 5), and the (x, 0) coordinate point data is (4, 0), from which the target motion vector is generated.
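The drag-state step can be sketched as follows. The patent does not detail how the three coordinate points are combined into a single target motion vector, so this hypothetical helper simply returns all three candidates.

```python
def drag_motion_vectors(x, y):
    """For a dragging state with target mouse coordinate data (x, y), derive
    the three candidate coordinate points named above: (x, y), (0, y) and
    (x, 0). Selecting one as the final target motion vector is left
    unspecified in the description."""
    return [(x, y), (0, y), (x, 0)]
```

For the example coordinate (4, 5) this yields (4, 5), (0, 5) and (4, 0), matching the worked example above.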
In an alternative embodiment, generating the target motion vector based on the target mouse data and the comparison data comprises:
if the state data in the target mouse data indicate a non-dragging state and the state data in the target mouse data indicate an up-down rolling state, acquiring coordinate data of the target mouse, wherein the coordinate data comprise an x-axis coordinate data value x and a y-axis coordinate data value y;
When the target image data is inconsistent with the preset image data, (y, 0) coordinate point data is acquired according to the coordinate data of the target mouse, and the target motion vector is generated. When the state in the target mouse data is up-down scrolling, the corresponding target image data changes in the vertical direction, so the target motion vector only requires the y-axis coordinate data value y from the coordinate data of the target mouse.
In a specific embodiment, when the target image data is inconsistent with the preset image data, the application type of the window to which the target image data belongs is acquired, and a motion vector (0, y) is generated according to the application type and the target mouse data; that is, the target motion vector is obtained from y = ε·S, where ε is a preset parameter whose value is determined by the window application type of the target image data (multiple empirical values may be provided per application type, for example for a Word application type or a PDF application type), and S is the value returned by the mouse wheel event, representing the direction and magnitude of the wheel movement. The method thus determines the target motion vector from the button state of the target mouse and the vertical wheel offset.
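The y = ε·S formula above can be illustrated as follows. The ε values below are placeholders invented for the example — the patent only says they are empirical, per-application-type parameters.

```python
# Hypothetical per-application-type values of the preset parameter ε;
# the patent does not disclose concrete numbers.
EPSILON = {"word": 45, "pdf": 30}

def scroll_motion_vector(app_type, wheel_s):
    """Scroll-state target motion vector: y = ε·S, where S is the signed
    wheel movement returned by the wheel event and ε is chosen by the
    application type of the window; the vector is vertical, (0, y)."""
    y = EPSILON[app_type] * wheel_s
    return (0, y)
```

For example, two downward wheel notches (S = -2) in the hypothetical "word" application type would give a vertical displacement of ε·S = -90.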
In an alternative embodiment, generating the target motion vector based on the target mouse data and the comparison data comprises:
if the state data indicate a non-dragging state and indicate that, after the mouse wheel was clicked, neither the wheel nor any other button has been clicked again, coordinate data of the target mouse are acquired, wherein the coordinate data comprise an x-axis coordinate data value x and a y-axis coordinate data value y;
and when the target image data is inconsistent with the preset image data, acquiring (y, 0) coordinate point data according to the coordinate data of the target mouse, and generating a target motion vector.
In an alternative embodiment, after generating the target motion vector according to the target mouse data and the comparison data, the method further includes:
acquiring at least one target macro block image data and corresponding target position data in the target image data;
acquiring, according to the target position data, preset macro block image data at the corresponding position in the preset image data;
and storing the target motion vector if the target macro block image data is consistent with the preset macro block image data after the target motion vector is moved.
In an alternative embodiment, after the target motion vector is generated, all macro blocks in the changed area of the target image data are enumerated, and each target macro block, after being translated by the motion vector, is compared with the macro block at the corresponding position in the preset image data. If the two are identical, the target macro block can be obtained from the preset image data through the motion vector, which reduces the amount of motion-vector computation and improves data processing efficiency.
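The verification step above can be sketched as follows, under stated assumptions: frames are lists of rows, a 16-pixel macroblock, and the sign convention that the block at (bx, by) in the target frame came from (bx − mvx, by − mvy) in the preset frame (the patent does not fix a convention).

```python
def vector_verified(target_frame, preset_frame, mvx, mvy, bx, by, mb=16):
    """Check whether the target macroblock at (bx, by) equals the preset
    frame's macroblock displaced by the motion vector (mvx, mvy); only if
    they match is the vector stored, per the verification step above."""
    sx, sy = bx - mvx, by - mvy  # assumed sign convention
    if sx < 0 or sy < 0:
        return False  # displaced source block falls outside the frame
    src = [row[sx:sx + mb] for row in preset_frame[sy:sy + mb]]
    dst = [row[bx:bx + mb] for row in target_frame[by:by + mb]]
    return src == dst
```

A vector that fails this check for some macroblock would not be stored for that block, and a conventional search would presumably be needed there instead.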
In an optional embodiment, after the target motion vector is stored, the target position data and the target motion vector corresponding to the target image data are acquired to generate target data. The target data indicate that, when the previous frame of image data and the current frame of image data are processed by video compression, the previous frame of image data can be translated according to the target data to obtain the current image data: the position data in the target data identify the preset macro block image data of the previous frame at the target position, and the target motion vector in the target data gives the translation each preset macro block requires to yield the current image data.
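The frame reconstruction described above can be illustrated as follows. The `target_data` format — a list of ((bx, by), (mvx, mvy)) pairs — and the sign convention are assumptions for the sketch, not the patent's actual encoding.

```python
def reconstruct(preset_frame, target_data, mb=16):
    """Rebuild the current frame from the previous (preset) frame by
    translating each recorded macroblock by its stored motion vector.
    target_data is assumed to be [((bx, by), (mvx, mvy)), ...]."""
    current = [row[:] for row in preset_frame]  # start from the previous frame
    for (bx, by), (mvx, mvy) in target_data:
        sx, sy = bx - mvx, by - mvy  # assumed sign convention, as above
        for r in range(mb):
            current[by + r][bx:bx + mb] = preset_frame[sy + r][sx:sx + mb]
    return current
```

Macroblocks without an entry in the target data are simply carried over unchanged from the previous frame, which matches the redundancy-elimination goal of the scheme.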
According to the data processing method provided by the embodiment of the disclosure, target image data and corresponding target mouse data are obtained; generating comparison data according to the target mouse data and preset mouse data; and finally, generating a target motion vector according to the target mouse data and the comparison data. According to the method and the device, the motion vector between the two frames of image data is calculated by acquiring the mouse data in the current image data, so that a large amount of calculation loss of calculating the motion vector block by block in the video compression process is avoided, and the data processing efficiency is improved.
Example two
Based on the video data processing method described in the embodiment corresponding to fig. 1, the following is an embodiment of the apparatus of the present disclosure, which can be used to execute an embodiment of the method of the present disclosure.
The embodiment of the present disclosure provides a video data processing apparatus, as shown in fig. 2, the video data processing apparatus 20 includes: an acquisition module 201, a first generation module 202 and a second generation module 203;
the acquisition module 201 is connected with the first generation module 202 and the second generation module 203 respectively, and the first generation module 202 is connected with the second generation module 203;
an obtaining module 201, configured to obtain target image data and corresponding target mouse data, where the mouse data includes: mouse state data and mouse position data, and mouse state data including mouse wheel information and mouse button information.
As shown in fig. 3, in an alternative embodiment, the obtaining module 201 further includes: a first acquiring unit 2011 and a first determining unit 2012,
a first obtaining unit 2011, configured to obtain previous frame image data corresponding to the target image data and corresponding mouse data in the previous frame image data;
the first determining unit 2012 is configured to determine that the image data of the previous frame is the preset image data and the mouse data is the preset mouse data when the image data of the previous frame is inconsistent with the target image data.
The first generating module 202 is configured to generate comparison data according to the target mouse data and the preset mouse data, where the comparison data includes coordinate difference data between the target mouse and the preset mouse.
And the second generating module 203 is configured to generate a target motion vector according to the target mouse data and the comparison data.
As shown in fig. 4, in an alternative embodiment, the second generating module 203 further includes: a first acquisition unit 2031 and a first generation unit 2032;
a first obtaining unit 2031, configured to obtain coordinate data of the target mouse when the state data indicates a dragging state, where the coordinate data includes an x-axis coordinate data value x and a y-axis coordinate data value y;
a first generating unit 2032 for acquiring (x, y) coordinate point data, (0, y) coordinate point data, and (x, 0) coordinate point data from the coordinate data of the target mouse, and generating a target motion vector.
As shown in fig. 5, in an alternative embodiment, the second generating module 203 further includes: a second acquiring unit 2033 and a second generating unit 2034;
a second obtaining unit 2033, configured to obtain coordinate data of the target mouse when the state data indicates a non-dragging state and the state data indicates a vertical scrolling state, where the coordinate data includes an x-axis coordinate data value x and a y-axis coordinate data value y;
a second generating unit 2034 for, when the target image data does not coincide with the preset image data, acquiring (y, 0) coordinate point data according to the coordinate data of the target mouse, and generating a target motion vector.
The video data processing apparatus provided by the embodiments of the present disclosure acquires target image data and corresponding target mouse data; generates comparison data according to the target mouse data and preset mouse data; and finally generates a target motion vector according to the target mouse data and the comparison data. By deriving the motion vector between two frames of image data from the mouse data of the current image data, the apparatus avoids the heavy computational cost of calculating motion vectors block by block during video compression and improves data processing efficiency.
Based on the video data processing method described in the embodiment corresponding to fig. 1, an embodiment of the present disclosure further provides a computer-readable storage medium, for example, the non-transitory computer-readable storage medium may be a Read Only Memory (ROM), a Random Access Memory (RAM), a CD-ROM, a magnetic tape, a floppy disk, an optical data storage device, and the like. The storage medium stores computer instructions for executing the video data processing method described in the embodiment corresponding to fig. 1, which is not described herein again.
Other embodiments of the disclosure will be apparent to those skilled in the art from consideration of the specification and practice of the disclosure disclosed herein. This application is intended to cover any variations, uses, or adaptations of the disclosure following, in general, the principles of the disclosure and including such departures from the present disclosure as come within known or customary practice within the art to which the disclosure pertains. It is intended that the specification and examples be considered as exemplary only, with a true scope and spirit of the disclosure being indicated by the following claims.
It will be understood that the present disclosure is not limited to the precise arrangements described above and shown in the drawings and that various modifications and changes may be made without departing from the scope thereof. The scope of the present disclosure is limited only by the appended claims.
Claims (9)
1. A method of video data processing, the method comprising:
acquiring target image data and corresponding target mouse data, wherein the mouse data comprises state data and position data of the mouse, the state data comprising mouse wheel information and mouse button information;
generating comparison data according to the target mouse data and preset mouse data, wherein the comparison data comprises coordinate difference data between the target mouse and the preset mouse;
generating a target motion vector according to the target mouse data and the comparison data;
after the generating of the target motion vector, the method at least comprises:
acquiring at least one target macro block image data and corresponding target position data in the target image data;
acquiring preset macro block image data of a corresponding position in preset image data according to the target position data;
and after moving the target macro block image data according to the target motion vector, if the target macro block image data is consistent with the preset macro block image data, storing the target motion vector.
2. The method of claim 1, prior to said acquiring target image data and corresponding target mouse data, comprising:
acquiring previous frame image data corresponding to the target image data and corresponding mouse data in the previous frame image data;
and when the previous frame of image data is inconsistent with the target image data, determining that the previous frame of image data is the preset image data, and the mouse data is the preset mouse data.
3. The method of claim 1, wherein the coordinate difference data between the target mouse and the preset mouse is used as follows:
and when the coordinate difference data is not zero, generating the target motion vector according to the target mouse data and the comparison data.
4. The method of claim 3, wherein generating a target motion vector based on the target mouse data and comparison data comprises:
if the state data in the target mouse data indicate a dragging state, acquiring coordinate data of the target mouse, wherein the coordinate data comprise an x-axis coordinate data value x and a y-axis coordinate data value y;
acquiring (x, y) coordinate point data, (0, y) coordinate point data and (x, 0) coordinate point data according to the coordinate data, and generating a target motion vector.
5. The method of claim 4, wherein generating a target motion vector based on the target mouse data and comparison data comprises:
if the state data in the target mouse data indicate a non-dragging state and the state data in the target mouse data indicate an up-down rolling state, acquiring coordinate data of the target mouse, wherein the coordinate data comprise an x-axis coordinate data value x and a y-axis coordinate data value y;
and when the target image data is inconsistent with the preset image data, acquiring (y, 0) coordinate point data according to the coordinate data and generating a target motion vector.
6. A video data processing apparatus, comprising: the device comprises an acquisition module, a first generation module and a second generation module;
the acquisition module is respectively connected with the first generation module and the second generation module;
the acquisition module is used for acquiring target image data and corresponding target mouse data, wherein the target mouse data comprises state data of the mouse, and the state data comprises mouse wheel information and mouse button information;
the first generation module is used for generating comparison data according to the target mouse data and preset mouse data, wherein the comparison data comprises coordinate difference data between the target mouse data and the preset mouse data;
the second generation module is used for generating a target motion vector according to the target mouse data and the comparison data;
wherein the second generating module is further configured to:
acquiring at least one target macro block image data and corresponding target position data in the target image data;
acquiring preset macro block image data of a corresponding position in preset image data according to the target position data;
and after moving the target macro block image data according to the target motion vector, storing the target motion vector if the moved target macro block image data is consistent with the preset macro block image data.
7. The apparatus of claim 6, wherein the acquisition module further comprises a first acquisition unit and a first determination unit;
the first acquisition unit is used for acquiring previous frame image data corresponding to the target image data, and mouse data corresponding to the previous frame image data;
the first determination unit is configured to determine the previous frame image data as the preset image data and the mouse data as the preset mouse data when the previous frame image data is inconsistent with the target image data.
8. The apparatus of claim 6, wherein the second generation module further comprises a first acquisition unit and a first generation unit;
the first acquisition unit is configured to acquire coordinate data of the target mouse when the state data in the target mouse data indicates a dragging state, wherein the coordinate data comprises an x-axis coordinate value x and a y-axis coordinate value y;
the first generation unit is configured to acquire (x, y) coordinate point data, (0, y) coordinate point data and (x, 0) coordinate point data according to the coordinate data, and to generate the target motion vector.
9. The apparatus of claim 6, wherein the second generation module further comprises a second acquisition unit and a second generation unit;
the second acquisition unit is configured to acquire coordinate data of the target mouse when the state data in the target mouse data indicates a non-dragging state and indicates a vertical scrolling state, wherein the coordinate data comprises an x-axis coordinate value x and a y-axis coordinate value y;
and the second generation unit is configured to acquire (y, 0) coordinate point data according to the coordinate data and to generate the target motion vector when the target image data is inconsistent with the preset image data.
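The apparatus of claims 6 through 9 can be pictured as three cooperating modules. A minimal object-oriented sketch under stated assumptions (the class and method names are illustrative; the claims do not prescribe an API):

```python
class VideoDataProcessor:
    """Claim-6 apparatus: an acquisition module connected to a first
    generation module (comparison data) and a second generation module
    (target motion vector)."""

    def __init__(self, preset_mouse, preset_frame):
        self.preset_mouse = preset_mouse  # (x, y) position of the preset mouse
        self.preset_frame = preset_frame  # previous (reference) frame

    def acquire(self, frame, mouse):
        """Acquisition module: target image data and target mouse data."""
        self.frame = frame
        self.mouse = mouse  # (x, y) position of the target mouse

    def first_generation(self):
        """First generation module: coordinate difference data between the
        target mouse and the preset mouse."""
        (tx, ty), (px, py) = self.mouse, self.preset_mouse
        return (tx - px, ty - py)

    def second_generation(self):
        """Second generation module: a target motion vector derived from the
        comparison data (here, simply the nonzero coordinate difference)."""
        diff = self.first_generation()
        return diff if diff != (0, 0) else None
```

The per-macro-block verification of claim 6 (moving each block by the vector and comparing it with the preset frame) would sit inside `second_generation` in a full implementation.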
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202110693874.XA CN113542752A (en) | 2019-02-19 | 2019-02-19 | Video data processing method and device |
Applications Claiming Priority (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202110693874.XA CN113542752A (en) | 2019-02-19 | 2019-02-19 | Video data processing method and device |
CN201910122767.4A CN110012293B (en) | 2019-02-19 | 2019-02-19 | Video data processing method and device |
Related Parent Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201910122767.4A Division CN110012293B (en) | 2019-02-19 | 2019-02-19 | Video data processing method and device |
Publications (1)
Publication Number | Publication Date |
---|---|
CN113542752A true CN113542752A (en) | 2021-10-22 |
Family
ID=67165844
Family Applications (3)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202110693874.XA Pending CN113542752A (en) | 2019-02-19 | 2019-02-19 | Video data processing method and device |
CN202110690368.5A Pending CN113542751A (en) | 2019-02-19 | 2019-02-19 | Video data processing method and device |
CN201910122767.4A Active CN110012293B (en) | 2019-02-19 | 2019-02-19 | Video data processing method and device |
Country Status (1)
Country | Link |
---|---|
CN (3) | CN113542752A (en) |
Families Citing this family (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN110568983B (en) * | 2019-07-16 | 2022-08-12 | 西安万像电子科技有限公司 | Image processing method and device |
Citations (9)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20040057520A1 (en) * | 2002-03-08 | 2004-03-25 | Shijun Sun | System and method for predictive motion estimation using a global motion predictor |
US20040151390A1 (en) * | 2003-01-31 | 2004-08-05 | Ryuichi Iwamura | Graphic codec for network transmission |
US20060159177A1 (en) * | 2004-12-14 | 2006-07-20 | Stmicroelectronics Sa | Motion estimation method, device, and system for image processing |
CN1984327A (en) * | 2006-05-16 | 2007-06-20 | 华为技术有限公司 | Video-frequency compression method |
CN101819475A (en) * | 2010-04-06 | 2010-09-01 | 郭小卫 | Method for acquiring indication information by indication equipment |
CN102318344A (en) * | 2008-12-30 | 2012-01-11 | 萨基姆通讯宽带公司 | Video encoding system and method |
CN103037210A (en) * | 2011-09-30 | 2013-04-10 | 宏碁股份有限公司 | Method using touch screen for assisting video compression and monitoring system |
CN103631474A (en) * | 2012-08-28 | 2014-03-12 | 鸿富锦精密工业(深圳)有限公司 | System and method for controlling graph moving |
CN107483940A (en) * | 2017-09-19 | 2017-12-15 | 武汉大学 | A kind of screen video coding method based on screen change detection |
Family Cites Families (6)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20020140672A1 (en) * | 2001-04-02 | 2002-10-03 | A-Man Hung | Computer cursor movement controlling device |
CN101169693B (en) * | 2007-11-30 | 2010-06-09 | 埃派克森微电子有限公司 | Optical movement sensing method |
CN101593045B (en) * | 2009-07-07 | 2012-05-23 | 埃派克森微电子(上海)股份有限公司 | Motion vector stability predicting method of optical indicating device |
JP5400604B2 (en) * | 2009-12-28 | 2014-01-29 | 株式会社メガチップス | Image compression apparatus and image compression method |
JP2017169001A (en) * | 2016-03-15 | 2017-09-21 | 富士通株式会社 | Transmission device, transmission method and transmission program for display screen data |
CN108769688B (en) * | 2018-05-24 | 2021-09-03 | 西华师范大学 | Video coding and decoding method |
2019
- 2019-02-19 CN CN202110693874.XA patent/CN113542752A/en active Pending
- 2019-02-19 CN CN202110690368.5A patent/CN113542751A/en active Pending
- 2019-02-19 CN CN201910122767.4A patent/CN110012293B/en active Active
Also Published As
Publication number | Publication date |
---|---|
CN110012293A (en) | 2019-07-12 |
CN113542751A (en) | 2021-10-22 |
CN110012293B (en) | 2021-06-04 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
JP3576987B2 (en) | Image template matching method and image processing apparatus | |
EP3068137B1 (en) | Method and device for image processing | |
US20140196082A1 (en) | Comment information generating apparatus and comment information generating method | |
CN112528786B (en) | Vehicle tracking method and device and electronic equipment | |
KR20130025944A (en) | Method, apparatus and computer program product for providing object tracking using template switching and feature adaptation | |
CN110363748B (en) | Method, device, medium and electronic equipment for processing dithering of key points | |
CN104284057A (en) | Video processing method and device | |
CN110012293B (en) | Video data processing method and device | |
CN115063750A (en) | Region position updating method, security system and computer readable storage medium | |
Wu et al. | Temporal complementarity-guided reinforcement learning for image-to-video person re-identification | |
US10536713B2 (en) | Method and apparatus for determining motion vector in video | |
JP7176590B2 (en) | Image processing device, image processing method, and program | |
CN107645663B (en) | Method and device for determining motion estimation search range | |
CN110177278B (en) | Inter-frame prediction method, video coding method and device | |
CN111787410A (en) | Keyboard input method and keyboard input device | |
CN107993247B (en) | Tracking and positioning method, system, medium and computing device | |
JP2008085491A (en) | Image processor, and image processing method thereof | |
CN110780780B (en) | Image processing method and device | |
JP4622265B2 (en) | Motion vector detection device, motion vector detection method, and program | |
JP4228705B2 (en) | Motion vector search method and apparatus | |
CN110580274B (en) | GIS data rendering method | |
JP2005252360A (en) | Motion vector detecting apparatus, motion vector detection method and computer program | |
WO2020145138A1 (en) | Video editing device, method of same, and program | |
CN112764570A (en) | Touch screen event processing method, device and system | |
JP2702307B2 (en) | Fingerprint correction system |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
RJ01 | Rejection of invention patent application after publication | Application publication date: 20211022 ||