CN106406651B - Method and device for dynamically amplifying and displaying video - Google Patents
Method and device for dynamically amplifying and displaying video
- Publication number
- CN106406651B (application CN201510481774.5A)
- Authority
- CN
- China
- Prior art keywords
- appointed
- amplification
- video playing
- area
- touch
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Active
Landscapes
- User Interface Of Digital Computer (AREA)
- Controls And Circuits For Display Device (AREA)
Abstract
The invention discloses a method for dynamically magnifying and displaying a video, which comprises the following steps: acquiring a zoom-in gesture touch instruction in real time while a video is playing; calculating a designated magnification area from the position at which the zoom-in gesture touch instruction occurs; sequentially extracting single-frame images of the designated magnification area; creating a magnified video playback window in the video playback area outside the designated magnification area; and magnifying the single-frame images of the designated magnification area to the size of the magnified video playback window and displaying them in sequence in that window. The invention also provides a device for dynamically magnifying and displaying a video, which comprises: a magnification instruction acquisition module, a designated magnification area acquisition module, an image extraction module, a window creation module, an image magnification module, and a display module. The method and device can magnify a local region of a video and are an improvement on the current video playback mode.
Description
Technical Field
The present invention relates to the field of video processing technologies, and in particular, to a method and an apparatus for dynamically magnifying and displaying a video.
Background
Video playback on conventional smart devices (PCs, tablet computers, and smartphones) offers no function for magnifying part of the video in real time. For example, when playing a video of a chemistry teacher performing an experiment, the test tubes and other apparatus are small, so the chemical reactions inside them are hard to see clearly. If a key region could be selected by a gesture or a mouse box selection and then dynamically magnified and superimposed on the current video window, viewers could conveniently and quickly focus on an important part of the video. This would give viewers a better experience and make classroom experiments more effective.
Disclosure of Invention
In view of this, the present invention provides a method and a device for dynamically magnifying and displaying a video that can magnify part of a video. They are an improvement on the current video playback mode and aim to optimize the operation procedure so as to provide a more flexible and smoother operating experience.
The method for dynamically magnifying and displaying a video provided by the invention comprises the following steps:
acquiring a zoom-in gesture touch instruction in real time while a video is playing;
calculating a designated magnification area from the position at which the zoom-in gesture touch instruction occurs;
sequentially extracting single-frame images of the designated magnification area;
creating a magnified video playback window in the video playback area outside the designated magnification area;
and magnifying the single-frame images of the designated magnification area to the size of the magnified video playback window and displaying them in sequence in that window.
In some embodiments, the zoom-in gesture touch instruction refers to a touch gesture that begins with a two-point touch, in which the motion trajectory of each touch point is a substantially straight line and the motion directions of the two touch points are substantially opposite.
In some embodiments, calculating the designated magnification area from the position at which the zoom-in gesture touch instruction occurs comprises:
obtaining the coordinates of the final positions of the two touch points;
calculating the x-axis distance and the y-axis distance between the final positions of the two touch points and the midpoint of their initial positions;
taking the midpoint of the initial positions of the two touch points as the center of the designated magnification area, the x-axis distance as its length in the x-axis direction, and the y-axis distance as its length in the y-axis direction;
thereby obtaining the designated magnification area.
In some embodiments, the step of creating a magnified video playback window in the video playback area outside the designated magnification area comprises:
dividing the screen into four blocks of the same shape and size: upper left, lower left, upper right, and lower right;
determining in which block of the screen the center of the designated magnification area lies;
if the center of the designated magnification area lies in the upper-left block of the screen, creating the magnified video playback window in the lower-right block;
if the center lies in the lower-left block, creating the window in the upper-right block;
if the center lies in the upper-right block, creating the window in the lower-left block;
and if the center lies in the lower-right block or at the center of the screen, creating the window in the upper-left block.
In some embodiments, the method further comprises: monitoring and responding in real time to operation instructions on the magnified video playback window.
In some embodiments, the step of monitoring and responding in real time to operation instructions on the magnified video playback window comprises:
monitoring for a new zoom-in gesture touch instruction;
determining whether the zoom-in gesture touch instruction occurs at the position of the magnified video playback window;
if so, performing a secondary magnification of the single-frame images of the designated magnification area, cropping the secondarily magnified images to the size of the magnified video playback window about their centers, and displaying them in sequence in that window;
if not, performing no processing.
Another aspect of the present invention provides a device for dynamically magnifying and displaying a video, comprising:
a magnification instruction acquisition module, configured to acquire a zoom-in gesture touch instruction in real time while a video is playing;
a designated magnification area acquisition module, configured to calculate a designated magnification area from the position at which the zoom-in gesture touch instruction occurs;
an image extraction module, configured to sequentially extract single-frame images of the designated magnification area;
a window creation module, configured to create a magnified video playback window in the video playback area outside the designated magnification area;
an image magnification module, configured to magnify the single-frame images of the designated magnification area to the size of the magnified video playback window;
and a display module, configured to display the magnified single-frame images of the designated magnification area in sequence in the magnified video playback window.
In some embodiments, the zoom-in gesture touch instruction refers to a touch gesture that begins with a two-point touch, in which the motion trajectory of each touch point is a substantially straight line and the motion directions of the two touch points are substantially opposite;
the designated magnification area acquisition module is further configured to obtain the coordinates of the final positions of the two touch points;
calculate the x-axis distance and the y-axis distance between the final positions of the two touch points and the midpoint of their initial positions; take the midpoint of the initial positions of the two touch points as the center of the designated magnification area, the x-axis distance as its length in the x-axis direction, and the y-axis distance as its length in the y-axis direction; and thereby obtain the designated magnification area.
In some embodiments, the device further includes a monitoring module, configured to monitor and respond in real time to operation instructions on the magnified video playback window.
In some embodiments, the monitoring module is further configured to monitor for a new zoom-in gesture touch instruction; determine whether the zoom-in gesture touch instruction occurs at the position of the magnified video playback window; if so, perform a secondary magnification of the single-frame images of the designated magnification area, crop the secondarily magnified images to the size of the magnified video playback window about their centers, and display them in sequence in that window; if not, perform no processing.
As can be seen from the above, with the method and device for dynamically magnifying and displaying a video provided by the invention, after a magnification instruction is received through a zoom-in gesture and a designated magnification area is obtained from it, a magnified video playback window is created in the video playback area outside the designated magnification area and the magnified images are played in sequence in that window. Magnified playback of the designated region of the video is thus provided for the part the user is interested in, making the video easier to watch.
Drawings
FIG. 1 is a schematic flowchart of an embodiment of the method for dynamically magnifying and displaying a video provided by the present invention;
FIG. 2 is a schematic structural diagram of an embodiment of the device for dynamically magnifying and displaying a video provided by the present invention;
FIG. 3 is a simplified schematic diagram of a video playback area at the moment a zoom-in gesture touch instruction is acquired, in an embodiment of the method or device for dynamically magnifying and displaying a video provided by the present invention;
FIG. 4 is a simplified schematic diagram of a video playback area in which local magnification is performed, in an embodiment of the method or device for dynamically magnifying and displaying a video provided by the present invention.
Detailed Description
In order to make the objects, technical solutions and advantages of the present invention more apparent, the present invention is described in further detail below with reference to specific embodiments and the accompanying drawings.
It should be noted that the expressions "first" and "second" in the embodiments of the present invention are used to distinguish two entities or parameters that share the same name but are not the same. "First" and "second" are merely for convenience of description, should not be construed as limiting the embodiments of the present invention, and will not be explained again in the following embodiments.
Referring to FIG. 1, a flowchart of an embodiment of the method for dynamically magnifying and displaying a video provided by the present invention is shown.
The method for dynamically magnifying and displaying a video comprises the following steps:
step 101: acquiring an amplifying gesture touch instruction in real time in a video playing state; referring to arrows on two sides of the cuvette in fig. 3, the touch command of the zoom-in gesture is selectable, that is, the starting point is a two-point touch, the motion tracks of the two touch points are substantially straight lines, and the motion directions of the two touch points are substantially opposite touch gestures.
Step 102: calculating a designated magnification area from the position at which the zoom-in gesture touch instruction occurs.
Preferably, step 102 may further include the following steps:
obtaining the coordinates of the final positions of the two touch points;
calculating the x-axis distance and the y-axis distance between the final positions of the two touch points and the midpoint of their initial positions;
taking the midpoint of the initial positions of the two touch points as the center of the designated magnification area, the x-axis distance as its length in the x-axis direction, and the y-axis distance as its length in the y-axis direction;
thereby obtaining the designated magnification area (see the small box in FIG. 4, which is the designated magnification area of this embodiment). In this way, the designated magnification area to be magnified is derived from the position at which the zoom-in gesture touch instruction occurs, and no additional selection is needed: a single instruction both triggers the zoom-in command and selects the region to magnify.
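One plausible reading of the area computation in step 102 can be sketched as follows. The function name is illustrative, and the interpretation (center at the midpoint of the initial positions, width and height taken as the x- and y-axis spans between the two final positions) is an assumption about what the text describes.

```python
def designated_magnification_area(p1_start, p2_start, p1_end, p2_end):
    """Compute the designated magnification area from a zoom-in gesture.

    Each argument is an (x, y) coordinate pair. Returns
    (center_x, center_y, width, height).
    """
    # Center: midpoint of the two initial touch positions.
    cx = (p1_start[0] + p2_start[0]) / 2
    cy = (p1_start[1] + p2_start[1]) / 2
    # Width/height: x- and y-axis distances between the final positions.
    width = abs(p1_end[0] - p2_end[0])
    height = abs(p1_end[1] - p2_end[1])
    return cx, cy, width, height
```

For instance, touches starting at (0, 0) and (4, 4) and ending at (-2, -2) and (6, 6) yield an area centered at (2, 2) with width 8 and height 8.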
Step 103: after the video data is decoded, the single-frame images used to display the video are available; the single-frame images of the designated magnification area are sequentially extracted from the original video frames according to the coordinates of the designated magnification area.
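The frame-extraction step can be sketched as a simple crop of a decoded frame. The nested-list representation of a frame is an illustrative stand-in for a real decoder's pixel buffer, which a real player would crop instead.

```python
def crop_frame(frame, area):
    """Extract the designated magnification area from a decoded frame.

    `frame` is a list of rows of pixel values; `area` is
    (center_x, center_y, width, height) in pixel coordinates.
    The crop is clamped to the frame bounds.
    """
    cx, cy, w, h = area
    x0 = max(0, int(cx - w / 2))
    y0 = max(0, int(cy - h / 2))
    x1 = min(len(frame[0]), x0 + int(w))
    y1 = min(len(frame), y0 + int(h))
    return [row[x0:x1] for row in frame[y0:y1]]
```

Applying this per decoded frame yields the sequence of single-frame images of the designated magnification area.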
Step 104: creating a magnified video playback window in the video playback area outside the designated magnification area (see the large box in FIG. 4, which is the magnified video playback window created in this embodiment). Optionally, the magnified video playback window is smaller than 1/4 of the video playback area and larger than 2 times the designated magnification area.
Optionally, step 104 may further include the following processing steps:
dividing the screen into four blocks of the same shape and size: upper left, lower left, upper right, and lower right;
determining in which block of the screen the center of the designated magnification area lies;
if the center of the designated magnification area lies in the upper-left block of the screen, creating the magnified video playback window in the lower-right block;
if the center lies in the lower-left block, creating the window in the upper-right block;
if the center lies in the upper-right block, creating the window in the lower-left block;
and if the center lies in the lower-right block or at the center of the screen, creating the window in the upper-left block. This design prevents the designated magnification area and the magnified video playback window from overlapping; of course, if overlap is unavoidable, the position or size of the magnified video playback window may be adjusted accordingly.
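The quadrant rule above can be sketched as follows. Names are illustrative, and screen coordinates are assumed to grow rightward and downward, as is conventional for touchscreens; the window is always placed diagonally opposite the quadrant containing the area's center.

```python
def magnified_window_quadrant(center, screen_w, screen_h):
    """Pick the screen quadrant for the magnified video playback window,
    given the center of the designated magnification area."""
    x, y = center
    left = x < screen_w / 2
    top = y < screen_h / 2   # y grows downward in screen coordinates
    if left and top:
        return "lower right"
    if left and not top:
        return "upper right"
    if not left and top:
        return "lower left"
    # Lower-right quadrant, or a center exactly at the middle of the screen.
    return "upper left"
```

A center exactly at the screen's midpoint satisfies neither `left` nor `top`, so it falls through to the upper-left placement, matching the rule for the screen-center case.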
Step 105: magnifying the single-frame images of the designated magnification area to the size of the magnified video playback window and displaying them in sequence in that window. In this way, real-time superimposed playback of the magnified images is achieved.
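The magnification in step 105 can be sketched with nearest-neighbour sampling on the nested-list frame representation used earlier. This is only an illustrative stand-in: a real implementation would use the platform's image scaler or GPU.

```python
def scale_to_window(frame, win_w, win_h):
    """Magnify a cropped single-frame image to the window size
    using nearest-neighbour sampling."""
    src_h, src_w = len(frame), len(frame[0])
    return [
        # Map each destination pixel back to its nearest source pixel.
        [frame[y * src_h // win_h][x * src_w // win_w] for x in range(win_w)]
        for y in range(win_h)
    ]
```

Scaling each extracted frame and blitting it into the magnified video playback window, frame by frame, produces the superimposed magnified playback described above.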
As can be seen from the foregoing embodiments, in the method for dynamically magnifying and displaying a video provided by the present invention, after a magnification instruction is received through a zoom-in gesture and a designated magnification area is obtained from it, a magnified video playback window is created in the video playback area outside the designated magnification area and the magnified images are played in sequence in that window. Magnified playback of the designated region of the video is thus provided for the local area the user is interested in, making the video easier to watch.
Preferably, the method for dynamically magnifying and displaying a video further includes step 106: monitoring and responding in real time to operation instructions on the magnified video playback window.
Here, a window style may be added to the magnified video playback window as it is created (as shown in FIG. 4): a close-button icon can appear in the upper-right corner of the window, and when a click on the close-button icon is detected, the magnified display is closed in real time. In addition, when a drag of the magnified video playback window is detected, the window's position is adjusted in real time, and when a drag of the window frame is detected, the window's size is adjusted in real time.
Further, the step of monitoring and responding in real time to operation instructions on the magnified video playback window may include the following processing steps:
monitoring for a new zoom-in gesture touch instruction;
determining whether the zoom-in gesture touch instruction occurs at the position of the magnified video playback window;
if so, performing a secondary magnification of the single-frame images of the designated magnification area, cropping the secondarily magnified images to the size of the magnified video playback window about their centers, and displaying them in sequence in that window;
if not, performing no processing.
Thus, when a zoom-in gesture touch instruction aimed at the magnified video playback window is detected, it is determined that the image displayed in the window needs a secondary magnification; the secondary magnification instruction is responded to and the image is magnified a second time, meeting the user's viewing needs.
Here, the magnification of the secondary magnification instruction is 2 to 4 times the originally magnified image.
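The secondary magnification can be sketched as an integer upscale followed by a center crop back to the window size. This is an illustrative sketch on the nested-list frame representation; the factor of 2 used in the example below is within the 2-4x range mentioned above.

```python
def secondary_magnify(frame, factor, win_w, win_h):
    """Secondary magnification: scale the single-frame image by an integer
    `factor`, then crop the result to the window size about its center."""
    src_h, src_w = len(frame), len(frame[0])
    big_w, big_h = src_w * factor, src_h * factor
    # Nearest-neighbour upscale by an integer factor.
    big = [
        [frame[y // factor][x // factor] for x in range(big_w)]
        for y in range(big_h)
    ]
    # Center crop back to the window size (clamped to the scaled image).
    x0 = max(0, (big_w - win_w) // 2)
    y0 = max(0, (big_h - win_h) // 2)
    return [row[x0:x0 + win_w] for row in big[y0:y0 + win_h]]
```

Because the crop is taken about the image center, the secondarily magnified view stays centered on the same content as before, only closer in.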
It should be particularly noted that the steps in the above method embodiments can be interleaved, replaced, added, or deleted with respect to one another; such reasonable permutations and combinations therefore also fall within the scope of the present invention, and the scope of the present invention should not be limited to the above embodiments.
FIG. 2 is a schematic structural diagram of an embodiment of the device for dynamically magnifying and displaying a video provided by the present invention.
The device 200 for dynamically magnifying and displaying a video comprises:
a magnification instruction acquisition module 201, configured to acquire a zoom-in gesture touch instruction in real time while a video is playing;
a designated magnification area acquisition module 202, configured to calculate a designated magnification area from the position at which the zoom-in gesture touch instruction occurs;
an image extraction module 203, configured to sequentially extract single-frame images of the designated magnification area;
a window creation module 204, configured to create a magnified video playback window in the video playback area outside the designated magnification area;
an image magnification module 205, configured to magnify the single-frame images of the designated magnification area to the size of the magnified video playback window;
and a display module 206, configured to display the magnified single-frame images of the designated magnification area in sequence in the magnified video playback window.
The device for dynamically magnifying and displaying a video may be a smartphone, a tablet computer, a touchscreen with processing capability, a PC with a touchscreen, a notebook computer, or the like.
Preferably, the zoom-in gesture touch instruction is optionally a touch gesture that begins with a two-point touch, in which the motion trajectory of each touch point is a substantially straight line and the motion directions of the two touch points are substantially opposite;
the designated magnification area acquisition module 202 is further configured to obtain the coordinates of the final positions of the two touch points; calculate the x-axis distance and the y-axis distance between the final positions of the two touch points and the midpoint of their initial positions; take the midpoint of the initial positions of the two touch points as the center of the designated magnification area, the x-axis distance as its length in the x-axis direction, and the y-axis distance as its length in the y-axis direction; and thereby obtain the designated magnification area.
Optionally, the device 200 for dynamically magnifying and displaying a video further includes a monitoring module 207, configured to monitor and respond in real time to operation instructions on the magnified video playback window.
Further, the monitoring module 207 is further configured to monitor for a new zoom-in gesture touch instruction; determine whether the zoom-in gesture touch instruction occurs at the position of the magnified video playback window; if so, perform a secondary magnification of the single-frame images of the designated magnification area, crop the secondarily magnified images to the size of the magnified video playback window about their centers, and display them in sequence in that window; if not, perform no processing.
As can be seen from the foregoing embodiments, in the device for dynamically magnifying and displaying a video provided by the present invention, after a magnification instruction is received through a zoom-in gesture and a designated magnification area is obtained from it, a magnified video playback window is created in the video playback area outside the designated magnification area and the magnified images are played in sequence in that window. Magnified playback of the designated region of the video is thus provided for the local area the user is interested in, making the video easier to watch.
Referring again to FIG. 1, the method for dynamically magnifying and displaying a video is now described in terms of the device for dynamically magnifying and displaying a video provided by the present invention.
The method for dynamically magnifying and displaying a video comprises the following steps:
Step 101: the magnification instruction acquisition module 201 acquires a zoom-in gesture touch instruction in real time while a video is playing. Referring to the arrows on both sides of the test tube in FIG. 3, the zoom-in gesture touch instruction is optionally a touch gesture that begins with a two-point touch, in which the motion trajectory of each touch point is a substantially straight line and the motion directions of the two touch points are substantially opposite.
Step 102: the designated magnification area acquisition module 202 calculates a designated magnification area from the position at which the zoom-in gesture touch instruction occurs;
Preferably, step 102 may further include the following steps:
the designated magnification area acquisition module 202 obtains the coordinates of the final positions of the two touch points;
the designated magnification area acquisition module 202 calculates the x-axis distance and the y-axis distance between the final positions of the two touch points and the midpoint of their initial positions;
the designated magnification area acquisition module 202 takes the midpoint of the initial positions of the two touch points as the center of the designated magnification area, the x-axis distance as its length in the x-axis direction, and the y-axis distance as its length in the y-axis direction;
the designated magnification area acquisition module 202 thereby obtains the designated magnification area (see the small box in FIG. 4, which is the designated magnification area of this embodiment). In this way, the designated magnification area to be magnified is derived from the position at which the zoom-in gesture touch instruction occurs, and no additional selection is needed: a single instruction both triggers the zoom-in command and selects the region to magnify.
Step 103: after the video data is decoded, the image extraction module 203 obtains the single-frame images used to display the video and sequentially extracts the single-frame images of the designated magnification area from the original video frames according to the coordinates of the designated magnification area.
Step 104: the window creation module 204 creates a magnified video playback window in the video playback area outside the designated magnification area (see the large box in FIG. 4, which is the magnified video playback window created in this embodiment). Optionally, the magnified video playback window is smaller than 1/4 of the video playback area and larger than 2 times the designated magnification area.
Optionally, step 104 may further include the following processing steps:
the window creation module 204 divides the screen into four blocks of the same shape and size: upper left, lower left, upper right, and lower right;
the window creation module 204 determines in which block of the screen the center of the designated magnification area lies;
if the center of the designated magnification area lies in the upper-left block of the screen, the window creation module 204 creates the magnified video playback window in the lower-right block;
if the center lies in the lower-left block, the window creation module 204 creates the window in the upper-right block;
if the center lies in the upper-right block, the window creation module 204 creates the window in the lower-left block;
if the center lies in the lower-right block or at the center of the screen, the window creation module 204 creates the window in the upper-left block.
Step 105: the image magnification module 205 magnifies the single-frame images of the designated magnification area to the size of the magnified video playback window, and the display module 206 displays the magnified single-frame images in sequence in that window. In this way, real-time superimposed playback of the magnified images is achieved.
It should be particularly pointed out that the above device embodiments use the method embodiments only to describe the working process of each module in detail; those skilled in the art can readily apply these modules to other method embodiments. Of course, since the steps in the method embodiments can be interleaved, replaced, added, or deleted with respect to one another, such reasonable permutations and combinations also fall within the scope of the present invention and should not limit its scope to these embodiments.
Those of ordinary skill in the art will understand that the discussion of any embodiment above is merely exemplary and is not intended to imply that the scope of the disclosure, including the claims, is limited to these examples. Within the spirit of the invention, technical features of the above embodiments or of different embodiments may be combined, and many other variations of the different aspects of the invention exist that are not described in detail for the sake of brevity. Therefore, any omissions, modifications, substitutions, improvements, and the like made without departing from the spirit and principles of the invention are intended to be included within the scope of the invention.
Claims (5)
1. A method for dynamically magnifying and displaying a video, comprising:
acquiring a zoom-in gesture touch instruction in real time while a video is playing, the zoom-in gesture touch instruction referring to a touch gesture that begins with a two-point touch, in which the motion trajectory of each touch point is a substantially straight line and the motion directions of the two touch points are substantially opposite;
calculating a designated magnification area from the position at which the zoom-in gesture touch instruction occurs;
sequentially extracting single-frame images of the designated magnification area;
creating a magnified video playback window in the video playback area outside the designated magnification area;
magnifying the single-frame images of the designated magnification area to the size of the magnified video playback window and displaying them in sequence in that window;
and monitoring and responding in real time to operation instructions on the magnified video playback window, which specifically comprises:
monitoring for a new zoom-in gesture touch instruction;
determining whether the zoom-in gesture touch instruction occurs at the position of the magnified video playback window;
if so, performing a secondary magnification of the single-frame images of the designated magnification area, cropping the secondarily magnified images to the size of the magnified video playback window about their centers, and displaying them in sequence in that window;
if not, performing no processing.
2. The method according to claim 1, wherein the step of calculating the appointed amplification area according to the position at which the amplifying gesture touch instruction occurs comprises:
obtaining the coordinates of the final positions of the two touch points;
calculating the x-axis distance and the y-axis distance between the final positions of the two touch points and the midpoint of their initial positions;
taking the midpoint of the initial positions of the two touch points as the center of the appointed amplification area, the x-axis distance as the length of the appointed amplification area in the x-axis direction, and the y-axis distance as its length in the y-axis direction;
thereby obtaining the appointed amplification area.
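The arithmetic of claim 2 can be sketched directly. The claim does not say which of the two final positions supplies the distance when they differ, so this sketch takes the larger of the two — that choice is an assumption.

```python
def appointed_amplification_area(start1, start2, end1, end2):
    """Compute the appointed amplification area per claim 2 from the
    initial and final positions of the two touch points."""
    # Center: midpoint of the two initial touch positions.
    cx = (start1[0] + start2[0]) / 2.0
    cy = (start1[1] + start2[1]) / 2.0
    # x/y distances from the final positions to that midpoint;
    # taking the max of the two is an assumption (claim is silent).
    dx = max(abs(end1[0] - cx), abs(end2[0] - cx))
    dy = max(abs(end1[1] - cy), abs(end2[1] - cy))
    # The distances become the area's side lengths in each direction.
    return {"center": (cx, cy), "x_length": dx, "y_length": dy}
```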
3. The method of claim 1, wherein the step of creating the amplified video playing window in the video playing area outside the appointed amplification area comprises:
dividing the screen into 4 blocks of the same shape and size, namely an upper-left block, a lower-left block, an upper-right block and a lower-right block;
judging in which block of the screen the center of the appointed amplification area is located;
if the center of the appointed amplification area is located in the upper-left block of the screen, creating the amplified video playing window in the lower-right block of the screen;
if the center of the appointed amplification area is located in the lower-left block of the screen, creating the amplified video playing window in the upper-right block of the screen;
if the center of the appointed amplification area is located in the upper-right block of the screen, creating the amplified video playing window in the lower-left block of the screen;
and if the center of the appointed amplification area is located in the lower-right block of the screen or at the center of the screen, creating the amplified video playing window in the upper-left block of the screen.
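The placement rule of claim 3 is a diagonal-quadrant lookup. A minimal sketch, assuming touch-style coordinates where y grows downward (the claim does not state a coordinate convention):

```python
def window_block(center, screen_w, screen_h):
    """Pick the screen block for the amplified video playing window per
    claim 3: diagonally opposite the block holding the area's center;
    an area center exactly at the screen center maps to upper-left."""
    cx, cy = center
    left = cx < screen_w / 2.0
    top = cy < screen_h / 2.0
    if left and top:
        return "lower-right"
    if left and not top:
        return "upper-right"
    if not left and top:
        return "lower-left"
    # Lower-right block, or exactly the screen center, per claim 3.
    return "upper-left"
```

With strict `<` comparisons, the exact screen center falls through to the final branch, matching the claim's "lower right block or the center of the screen" case.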
4. An apparatus for dynamically amplifying and displaying a video, comprising:
an amplification instruction acquisition module, configured to acquire an amplifying gesture touch instruction in real time in a video playing state; the amplifying gesture touch instruction refers to a touch gesture that starts as a two-point touch, in which the motion track of each touch point is basically a straight line and the motion directions of the two touch points are basically opposite;
an appointed amplification area acquisition module, configured to calculate the appointed amplification area according to the position at which the amplifying gesture touch instruction occurs;
an image extraction module, configured to sequentially extract single-frame images of the appointed amplification area;
a window creation module, configured to create an amplified video playing window in the video playing area outside the appointed amplification area;
an image amplification module, configured to amplify the single-frame images of the appointed amplification area to the size of the amplified video playing window;
a display module, configured to sequentially display the amplified single-frame images of the appointed amplification area in the amplified video playing window; and
a monitoring module, configured to monitor and respond in real time to operation instructions directed at the amplified video playing window;
wherein the monitoring module is further configured to monitor for a new amplifying gesture touch instruction; judge whether the amplifying gesture touch instruction occurs at the position of the amplified video playing window; if so, perform secondary amplification on the single-frame images of the appointed amplification area, crop the secondarily amplified single-frame images to the size of the amplified video playing window with the image center as the reference, and sequentially display them in the amplified video playing window; and if not, perform no processing.
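The secondary amplification described in claims 1 and 4 (scale the window-sized frame, then crop back to window size about the image center) reduces to a small rectangle computation. The 2x factor below is an illustrative assumption; the claims do not fix the secondary zoom ratio.

```python
def secondary_crop_rect(img_w, img_h, win_w, win_h, zoom=2.0):
    """Crop rectangle for secondary amplification: after scaling the
    window-sized frame by `zoom`, take a window-sized crop centered in
    the scaled image. Returns (x0, y0, x1, y1) in scaled-image pixels."""
    scaled_w, scaled_h = img_w * zoom, img_h * zoom
    # Top-left corner of the centered, window-sized crop.
    x0 = (scaled_w - win_w) / 2.0
    y0 = (scaled_h - win_h) / 2.0
    return (x0, y0, x0 + win_w, y0 + win_h)
```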
5. The apparatus of claim 4, wherein the appointed amplification area acquisition module is further configured to:
obtain the coordinates of the final positions of the two touch points;
calculate the x-axis distance and the y-axis distance between the final positions of the two touch points and the midpoint of their initial positions; take the midpoint of the initial positions of the two touch points as the center of the appointed amplification area, the x-axis distance as the length of the appointed amplification area in the x-axis direction, and the y-axis distance as its length in the y-axis direction; thereby obtaining the appointed amplification area.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201510481774.5A CN106406651B (en) | 2015-08-03 | 2015-08-03 | Method and device for dynamically amplifying and displaying video |
Publications (2)
Publication Number | Publication Date |
---|---|
CN106406651A CN106406651A (en) | 2017-02-15 |
CN106406651B true CN106406651B (en) | 2020-05-22 |
Family
ID=58008183
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201510481774.5A Active CN106406651B (en) | 2015-08-03 | 2015-08-03 | Method and device for dynamically amplifying and displaying video |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN106406651B (en) |
Families Citing this family (6)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN109121000A (en) * | 2018-08-27 | 2019-01-01 | 北京优酷科技有限公司 | A kind of method for processing video frequency and client |
CN110941378B (en) * | 2019-11-12 | 2022-03-01 | 北京达佳互联信息技术有限公司 | Video content display method and electronic equipment |
CN111935532B (en) * | 2020-08-14 | 2024-03-01 | 腾讯科技(深圳)有限公司 | Video interaction method and device, electronic equipment and storage medium |
CN112261428A (en) * | 2020-10-20 | 2021-01-22 | 北京字节跳动网络技术有限公司 | Picture display method and device, electronic equipment and computer readable medium |
CN112770130B (en) * | 2020-12-30 | 2022-10-14 | 咪咕互动娱乐有限公司 | Live broadcast control method, electronic equipment and storage equipment |
CN113946261B (en) * | 2021-08-30 | 2023-12-05 | 福建中红信创科技有限公司 | Man-machine interaction display method and system |
Citations (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
KR20090126686A (en) * | 2008-06-05 | 2009-12-09 | 주식회사 케이티테크 | Method of zooming in/out of video processing apparatus with touch input device and video processing apparatus performing the same |
CN101616281A (en) * | 2009-06-26 | 2009-12-30 | 中兴通讯股份有限公司南京分公司 | A kind of with local method and the portable terminal that amplifies of mobile TV playing picture |
CN101951493A (en) * | 2010-09-25 | 2011-01-19 | 中兴通讯股份有限公司 | Mobile terminal and method for partially amplifying far-end images in video call thereof |
CN102208171A (en) * | 2010-03-31 | 2011-10-05 | 安凯(广州)微电子技术有限公司 | Local detail playing method on portable high-definition video player |
CN103546716A (en) * | 2012-07-17 | 2014-01-29 | 三星电子株式会社 | System and method for providing image |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN106406651B (en) | Method and device for dynamically amplifying and displaying video | |
US9628744B2 (en) | Display apparatus and control method thereof | |
WO2017101441A1 (en) | Method and system for remote synchronization of annotation moving and scaling | |
US20160307604A1 (en) | Interface apparatus for designating link destination, interface apparatus for viewer, and computer program | |
WO2014120312A1 (en) | Systems and methods of creating an animated content item | |
CN112099707A (en) | Display method and device and electronic equipment | |
WO2017096854A1 (en) | Picture preview method and device for smart terminal | |
CN112965780B (en) | Image display method, device, equipment and medium | |
CN107870795B (en) | Method and device for displaying electronic map | |
CN112860163A (en) | Image editing method and device | |
CN112783398A (en) | Display control and interaction control method, device, system and storage medium | |
WO2015078257A1 (en) | Search information display device and method | |
CN112911147A (en) | Display control method, display control device and electronic equipment | |
CN114339363B (en) | Picture switching processing method and device, computer equipment and storage medium | |
CN112887794B (en) | Video editing method and device | |
CN104715700A (en) | Electronic sand table system | |
CN112181252B (en) | Screen capturing method and device and electronic equipment | |
CN112099714B (en) | Screenshot method and device, electronic equipment and readable storage medium | |
CN113918070A (en) | Synchronous display method and device, readable storage medium and electronic equipment | |
JP2023510443A (en) | Labeling method and device, electronic device and storage medium | |
CN109766530B (en) | Method and device for generating chart frame, storage medium and electronic equipment | |
CN112202958B (en) | Screenshot method and device and electronic equipment | |
CN115291778A (en) | Display control method and device, electronic equipment and readable storage medium | |
CN114090896A (en) | Information display method and device and electronic equipment | |
KR101399234B1 (en) | Enhanced user interface based on gesture input for motion picture authoring tool on a mobile device |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
C06 | Publication | |
PB01 | Publication | |
CB02 | Change of applicant information | | Address after: Rooms 405 and 407, 4th Floor West, No. 11 Information Road, Haidian District, Beijing 100086; Applicant after: Beijing Hitevision Intelligent System Co., Ltd.; Address before: Rooms 405 and 407, 4th Floor West, No. 11 Information Road, Haidian District, Beijing 100086; Applicant before: BEIJING HONGHE INTELLIGENT SYSTEMS CO., LTD.
SE01 | Entry into force of request for substantive examination | |
GR01 | Patent grant | |