CN113507614A - Video playing progress adjusting method and display device


Info

Publication number
CN113507614A
Authority
CN
China
Prior art keywords
video
acquisition
positions
display device
display
Prior art date
Legal status
Pending
Application number
CN202110699639.3A
Other languages
Chinese (zh)
Inventor
张博
宋亮
闫超
Current Assignee
Hisense Mobile Communications Technology Co Ltd
Original Assignee
Hisense Mobile Communications Technology Co Ltd
Priority date
Filing date
Publication date
Application filed by Hisense Mobile Communications Technology Co Ltd
Priority: CN202110699639.3A
Publication: CN113507614A


Classifications

    • H04N21/47217 End-user interface for interacting with content, for controlling playback functions for recorded or on-demand content, e.g. using progress bars, mode or play-point indicators or bookmarks
    • H04N21/21805 Source of audio or video content, e.g. local disk arrays, enabling multiple viewpoints, e.g. using a plurality of cameras
    • H04N21/472 End-user interface for requesting content, additional data or services; End-user interface for interacting with content, e.g. for content reservation or setting reminders, for requesting event notification, for manipulating displayed content
    • H04N21/4858 End-user interface for client configuration for modifying screen layout parameters, e.g. fonts, size of the windows


Abstract

The application discloses a video playing progress adjusting method and a display device, relating to the technical field of multimedia playing. While playing a video, the display device can display the plurality of acquisition positions at which the camera was located while the video was captured. The user can therefore conveniently perform a selection operation on these acquisition positions, and the display device can adjust the playing progress of the video based on the selection operation, which effectively improves the flexibility of progress adjustment. In addition, because the acquisition positions correspond to the video frames of the video, adjusting the playing progress based on an acquisition position also effectively improves the precision and efficiency of the adjustment.

Description

Video playing progress adjusting method and display device
Technical Field
The present application relates to the field of multimedia playing technologies, and in particular, to a method for adjusting a playing progress of a video and a display device.
Background
The display device can play videos captured by monitoring devices such as a dashboard camera (vehicle event data recorder). While playing a video, the display device may adjust the playing progress of the video in response to the user's touch operation on the video's progress bar.
However, this way of adjusting the playing progress of a video offers little flexibility.
Disclosure of Invention
The application provides a video playing progress adjusting method and a display device, which can solve the problem in the related art that the way of adjusting the playing progress of a video lacks flexibility. The technical solution is as follows:
in one aspect, a display device is provided, including a processor and a display screen; the processor is configured to:
in response to a play instruction for a video, play the video in a first display area of the display screen, and display, in a map interface in a second display area of the display screen, a plurality of acquisition positions at which a camera was located while capturing the video;
determine a target acquisition position from the plurality of acquisition positions in response to a selection operation on the plurality of acquisition positions; and
play the video in the first display area starting from the target video frame corresponding to the target acquisition position.
In another aspect, a method for adjusting the playing progress of a video is provided, applied to a display device including a display screen; the method includes:
in response to a play instruction for a video, playing the video in a first display area of the display screen, and displaying, in a map interface in a second display area of the display screen, a plurality of acquisition positions at which a camera was located while capturing the video;
determining a target acquisition position from the plurality of acquisition positions in response to a selection operation on the plurality of acquisition positions; and
playing the video in the first display area starting from the target video frame corresponding to the target acquisition position.
Optionally, the selection operation is a touch operation on a target acquisition position among the plurality of acquisition positions.
Optionally, a position marker point is also displayed in the map interface, and the selection operation on the acquisition positions is a drag operation on the position marker point;
the determining a target acquisition position from the plurality of acquisition positions in response to the selection operation includes:
after the drag operation ends, determining the acquisition position, among the plurality of acquisition positions, that is closest to the final position of the position marker point as the target acquisition position.
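The nearest-position rule above can be sketched in a few lines. The following Python sketch is illustrative only: the function name and coordinate values are invented, and a planar approximation of distance is assumed to be adequate for ranking positions along one recorded route.

```python
def nearest_acquisition_position(marker, positions):
    """Return the acquisition position closest to the marker point's final
    (longitude, latitude) after the drag operation ends.
    Hypothetical helper; squared planar distance is enough for ranking."""
    def squared_distance(p, q):
        return (p[0] - q[0]) ** 2 + (p[1] - q[1]) ** 2
    return min(positions, key=lambda pos: squared_distance(marker, pos))

# The marker is released near the second recorded position.
positions = [(116.30, 39.98), (116.31, 39.99), (116.33, 40.00)]
print(nearest_acquisition_position((116.315, 39.992), positions))  # (116.31, 39.99)
```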
Optionally, the selection operation is a voice operation; the determining a target acquisition position from the plurality of acquisition positions in response to the selection operation includes:
performing voice recognition on the voice operation to obtain a reference position; and
determining an acquisition position, among the plurality of acquisition positions, whose distance from the reference position is smaller than a distance threshold as the target acquisition position.
Optionally, the determining, as the target acquisition position, an acquisition position among the plurality of acquisition positions whose distance from the reference position is smaller than the distance threshold includes:
if the plurality of acquisition positions contain multiple candidate acquisition positions whose distance from the reference position is smaller than the distance threshold, determining the position acquisition time of each candidate acquisition position; and
determining the candidate acquisition position with the earliest position acquisition time among them as the target acquisition position.
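The two steps above, filtering by the distance threshold and then breaking ties by the earliest position acquisition time, can be sketched as follows; the names, data layout, and coordinate values are illustrative assumptions, not from the patent.

```python
def select_target_position(reference, positions, threshold):
    """positions: list of ((lon, lat), acquisition_time_seconds) pairs.
    Keeps candidates whose distance to `reference` is below `threshold`
    and, if several remain, returns the one sampled earliest."""
    def dist(p, q):
        return ((p[0] - q[0]) ** 2 + (p[1] - q[1]) ** 2) ** 0.5
    candidates = [(c, t) for c, t in positions if dist(c, reference) < threshold]
    if not candidates:
        return None  # no acquisition position is near the recognized reference
    return min(candidates, key=lambda ct: ct[1])[0]

positions = [((116.30, 39.98), 10.8), ((116.301, 39.981), 18.8), ((116.33, 40.00), 12.8)]
# Two candidates fall within the threshold; the earlier one (10.8 s) wins.
print(select_target_position((116.3005, 39.9805), positions, threshold=0.01))
```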
Optionally, the method further includes:
displaying a position marker point in the map interface based on the playing progress of the video, the position marker point being located at the acquisition position corresponding to the currently played video frame.
Optionally, before the video is played in the first display area from the target video frame corresponding to the target capture position, the method further includes:
determining the position acquisition time of the target acquisition position;
and determining a video frame which is closest to the position acquisition time in a plurality of video frames included in the video as a target video frame.
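The lookup described by these two steps can be sketched as follows, using frame capture times patterned after Table 1 (expressed in seconds; the function name is invented):

```python
def target_video_frame(position_time, frame_times):
    """frame_times: mapping from frame identifier (sequence number) to
    frame capture time in seconds. Returns the identifier of the frame
    whose capture time is closest to the position acquisition time."""
    return min(frame_times, key=lambda fid: abs(frame_times[fid] - position_time))

frame_times = {1: 10.2, 2: 11.0, 3: 11.8, 4: 12.6}  # 800 ms apart, as in Table 1
print(target_video_frame(11.7, frame_times))  # frame 3 is only 0.1 s away
```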
Optionally, the playing the video in the first display area of the display screen in response to the playing instruction for the video includes:
in response to a play instruction for the video, playing the video in the first display area of the display screen starting from a reference video frame of the video, where the reference video frame is the video frame captured at the position acquisition time of a first acquisition position;
the first acquisition position is the acquisition position with the earliest position acquisition time among the plurality of acquisition positions.
Optionally, the display device further includes: a camera; before the playing the video in the first display area of the display screen in response to the play instruction for the video, the method further comprises:
in response to a video capture instruction, starting the camera to capture the video; and
while capturing the video, acquiring the acquisition positions at which the camera is located, to obtain the plurality of acquisition positions.
In still another aspect, there is provided a display apparatus including: the device comprises a memory, a processor and a computer program which is stored on the memory and can run on the processor, wherein the processor executes the computer program to realize the video playing progress adjusting method according to the aspect.
In still another aspect, there is provided a computer-readable storage medium having a computer program stored therein, the computer program being loaded and executed by a processor to implement the method for adjusting a playing progress of a video according to the above aspect.
In still another aspect, a computer program product containing instructions is provided, which, when run on a computer, causes the computer to execute the method for adjusting the playing progress of a video according to the above aspect.
The beneficial effects of the technical solution provided by the application include at least the following:
the application provides a video playing progress adjusting method and display equipment, wherein the display equipment can display a plurality of collecting positions of a camera when the camera collects a video while playing the video. Therefore, the user can conveniently execute selection operation on the plurality of acquisition positions, the display equipment can adjust the playing progress of the video based on the selection operation, and the flexibility of adjusting the playing progress is effectively improved.
In addition, the plurality of collecting positions correspond to each video frame in the video, so that the playing progress of the video is adjusted based on the collecting positions, and the adjusting precision and the adjusting efficiency of the playing progress can be effectively improved.
Drawings
In order to more clearly illustrate the technical solutions in the embodiments of the present application, the drawings needed in the description of the embodiments are briefly introduced below. The drawings in the following description show only some embodiments of the present application, and those of ordinary skill in the art may derive other drawings from them without creative effort.
Fig. 1 is a flowchart of a method for adjusting a playing progress of a video according to an embodiment of the present application;
fig. 2 is a flowchart of another video playing progress adjusting method provided in an embodiment of the present application;
fig. 3 is a schematic interface diagram for adjusting a playing progress of a video according to an embodiment of the present application;
FIG. 4 is a schematic diagram of another interface for adjusting the playing progress of a video according to an embodiment of the present application;
FIG. 5 is a schematic diagram of the distance between a candidate acquisition position and a reference position provided by an embodiment of the present application;
fig. 6 is a schematic structural diagram of a display device according to an embodiment of the present application;
fig. 7 is a block diagram of a software structure of a display device according to an embodiment of the present application.
Detailed Description
To make the objects, technical solutions and advantages of the present application more clear, embodiments of the present application will be described in further detail below with reference to the accompanying drawings.
The embodiment of the application provides a video playing progress adjusting method which can be applied to display equipment. The display device may include: a display screen. Optionally, the display device may be a mobile phone, a tablet, a laptop, a desktop computer, or a television, for example, the display device may be a desktop computer. Referring to fig. 1, the method may include:
step 101, responding to a playing instruction for the video, playing the video in a first display area of a display screen, and displaying a plurality of acquisition positions where a camera is located in the process of acquiring the video in a map interface of a second display area of the display screen.
Wherein the second display area is different from the first display area. For example, the first display area may be a partial display area on the left side of the display screen, and the second display area may be a partial display area on the right side of the display screen. Alternatively, the first display region may be a partial display region on an upper side of the display screen, and the second display region may be a partial display region on a lower side of the display screen.
Optionally, the display device may receive a video to be played sent by another device, together with the plurality of acquisition positions at which that device's camera was located while capturing the video. Alternatively, the display device may include a camera, in which case it can start the camera to capture the video and obtain the plurality of acquisition positions at which the camera was located during capture.
Step 102, in response to a selection operation for a plurality of acquisition positions, determining a target acquisition position from the plurality of acquisition positions.
In an embodiment of the application, the user may select a target acquisition position from the plurality of acquisition positions displayed in the second display area. Accordingly, the display device may determine a target acquisition position from the plurality of acquisition positions in response to a user selection operation for the plurality of acquisition positions. Optionally, the selection operation of the target acquisition position may be a touch operation of the user for the target acquisition position in the plurality of acquisition positions.
And 103, starting to play the video from the target video frame corresponding to the target acquisition position in the first display area.
The display device may determine a target video frame corresponding to the target capture location and then play the video starting from the target video frame.
To sum up, the embodiment of the present application provides a method for adjusting the playing progress of a video, in which a display device displays, while playing a video, the plurality of acquisition positions at which the camera was located while capturing the video. The user can therefore conveniently perform a selection operation on these acquisition positions, and the display device can adjust the playing progress of the video based on the selection operation, which effectively improves the flexibility of progress adjustment.
In addition, because the acquisition positions correspond to the video frames of the video, adjusting the playing progress based on an acquisition position also effectively improves the precision and efficiency of the adjustment.
In the embodiment of the application, the display device may receive a video to be played, which is sent by the acquisition device, and a plurality of acquisition positions where a camera of the acquisition device is located in the process of acquiring the video. Or, the display device may include a camera, and the display device may directly acquire the video to be played through the camera, and may acquire a plurality of acquisition positions where the camera is located in the process of acquiring the video.
In the following embodiments, the case in which the display device itself captures both the video to be played and the plurality of acquisition positions at which the camera is located is taken as an example to describe the video playing progress adjusting method provided by the embodiments of the present application. The method can be applied to a display device that includes a camera and a display screen. Referring to fig. 2, the method may include:
step 201, responding to a video acquisition instruction, and starting a camera to acquire a video.
The user may touch a video recording control of the display device. The display device can receive a video acquisition instruction triggered by the user touching the video recording control, and can respond to the video acquisition instruction to start the camera. The camera can periodically collect video frames according to a first sampling period after being started. Accordingly, the display device can acquire a video including a plurality of video frames acquired by the camera.
Each video frame corresponds to one frame capture time, and the frame capture times of any two video frames are different. The display device captures one video frame per first sampling period, and the moment at which a video frame is captured is that frame's capture time; that is, the frame capture times of two adjacent video frames are one first sampling period apart. The first sampling period may be pre-stored in the display device.
In an embodiment of the application, the display device may record the frame capture time of each of the video frames included in the video. For example, the display device may record a first correspondence between each video frame and its frame capture time. The first correspondence may record an identifier of each video frame, which may be the sequence number of the video frame among the plurality of video frames. This ensures that the display device can quickly determine the target video frame corresponding to the target acquisition position.
For example, when capturing each video frame, the display device may read the current time and record it as the frame capture time of that video frame.
By way of example, table 1 shows a first correspondence between video frames and the frame capture times of the video frames. Assuming that the identifier of a video frame is its sequence number, it can be seen from table 1 that the frame capture time of the first video frame (i.e. the video frame with sequence number 1) captured after the camera is started is 6:00:10.200, i.e. 6 hours (h), 0 minutes (min), 10 seconds (s), and 200 milliseconds (ms). The frame capture time of the video frame with sequence number 10 is 6:00:17.400, i.e. 6 h 0 min 17 s 400 ms.
TABLE 1

    Identification of video frame    Frame acquisition time
    1                                6:00:10.200
    2                                6:00:11.000
    3                                6:00:11.800
    4                                6:00:12.600
    …                                …
    9                                6:00:16.600
    10                               6:00:17.400
    11                               6:00:18.200
    12                               6:00:19.000
Also, as can be seen from table 1, the frame capture time interval of any two adjacent video frames is 800 ms. I.e. the first sampling period is 800 ms.
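The arithmetic behind Table 1 can be checked directly: frame n is captured (n - 1) first-sampling-periods after frame 1. A minimal sketch, with times converted to milliseconds since midnight and an invented helper name:

```python
def frame_capture_time_ms(first_frame_ms, period_ms, n):
    """Capture time of frame n (1-indexed), given the capture time of
    frame 1 and a fixed sampling period between adjacent frames."""
    return first_frame_ms + (n - 1) * period_ms

# Table 1: frame 1 at 6:00:10.200, first sampling period 800 ms.
frame1 = (6 * 3600 + 10) * 1000 + 200
# Frame 10 should land at 6:00:17.400, matching the table.
print(frame_capture_time_ms(frame1, 800, 10) == (6 * 3600 + 17) * 1000 + 400)
```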
Step 202, in the process of collecting the video, collecting the collecting positions where the cameras are located to obtain a plurality of collecting positions.
A position acquisition component is built into the display device. While the camera captures the video, the display device can control the position acquisition component to periodically sample the position of the camera according to a second sampling period, thereby obtaining the plurality of acquisition positions, i.e., the acquisition positions at which the camera was located throughout the capture of the video.
Each acquisition position corresponds to one position acquisition time, and the position acquisition times of any two acquisition positions are different. The display device samples one acquisition position per second sampling period, and the moment at which an acquisition position is sampled is that position's acquisition time; that is, the position acquisition times of two adjacent acquisition positions are one second sampling period apart. The second sampling period may be pre-stored in the display device.
The second sampling period may be greater than or equal to the first sampling period. That is, the interval between the position acquisition times of two adjacent acquisition positions may be greater than or equal to the interval between the frame capture times of two adjacent video frames. Accordingly, the number of acquisition positions sampled by the display device per unit time may be less than or equal to the number of video frames captured in that unit time. For example, the second sampling period may be greater than the first sampling period: the first sampling period may be on the order of milliseconds while the second sampling period is on the order of seconds.
Optionally, the position acquisition component may include at least one of a Global Positioning System (GPS) component, a base station positioning component, and a wireless positioning component (e.g., a Bluetooth positioning component). Each acquisition position of the camera may refer to the geographic coordinates at which the camera is located, which may include a longitude and a latitude. Moreover, the display device may move while capturing the video; accordingly, at least two of the plurality of acquisition positions may differ.
In the embodiment of the application, while the camera captures the video, the display device may record each acquisition position sampled by the position acquisition component in a target position file. For example, the display device may record the acquisition positions in the target position file sequentially, in the order in which they were sampled. After the video and the acquisition positions have been captured, the display device may also record a second correspondence between the video and its target position file, so that after subsequently receiving a play instruction for the video, it can determine the target position file based on the second correspondence and read from it the plurality of acquisition positions at which the camera captured the video. The second correspondence records the file name of the video.
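The patent does not specify an on-disk format for the target position file or for the second correspondence; the JSON layout below is purely a hypothetical illustration (the coordinate values are invented, and the file name is the one shown later in fig. 3):

```python
import json

# Hypothetical layout: the video's file name realizes the second
# correspondence, and positions are stored in sampling order.
target_position_file = {
    "video_file": "VID20210618_123",
    "positions": [
        {"coords": [116.30, 39.98], "time": "6:00:10.800"},
        {"coords": [116.31, 39.99], "time": "6:00:11.800"},
    ],
}
print(json.dumps(target_position_file, indent=2))
```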
In this embodiment of the application, the display device may further record the position acquisition time of each acquisition position in the aforementioned target position file. Thus, the display device can be ensured to quickly determine the position acquisition time of the target acquisition position.
For example, assume that the position acquisition time of each of the plurality of acquisition positions recorded by the display device is as shown in table 2. It can be seen from table 2 that the position acquisition time of the acquisition position (x1, y1) is 6:00:10.800, and the position acquisition time of the acquisition position (x6, y6) is 6:00:15.800.
TABLE 2

    Acquisition position    Position acquisition time
    (x1, y1)                6:00:10.800
    (x2, y2)                6:00:11.800
    (x3, y3)                6:00:12.800
    …                       …
    (x7, y7)                6:00:16.800
    (x8, y8)                6:00:17.800
    (x9, y9)                6:00:18.800
As can be seen from table 2, the position acquisition times of any two adjacent acquisition positions are spaced by 1 s. That is, the second sampling period is 1 s.
In the embodiment of the present application, while the camera captures the video, after the position acquisition component samples the first acquisition position (i.e., the acquisition position with the earliest position acquisition time), the display device may record and store a third correspondence between the first acquisition position and the video frame captured at the position acquisition time of the first acquisition position (hereinafter referred to as the reference video frame). That is, the frame capture time of the reference video frame is the same as the position acquisition time of the first acquisition position. The third correspondence may record the identifier of the reference video frame.
Alternatively, the display device may record the third correspondence in the aforementioned target location file.
Step 203: in response to a play instruction for the video, play the video in the first display area of the display screen, and display, in the map interface in the second display area of the display screen, the plurality of acquisition positions at which the camera was located while capturing the video.
A play control for the video may be displayed on the display device. The user can touch the play control, and the display device receives the play instruction triggered by that touch. The display device may then, in response to the play instruction, play the video in the first display area of its display screen and display, in a map interface in the second display area, the plurality of acquisition positions at which the camera was located while capturing the video.
Wherein the second display area is different from the first display area. For example, the first display area may be a partial display area on the left side of the display screen, and the second display area may be a partial display area on the right side of the display screen. Alternatively, the first display region may be a partial display region on an upper side of the display screen, and the second display region may be a partial display region on a lower side of the display screen.
In an embodiment of the present application, the process of displaying the plurality of acquisition positions may include: the display device, in response to the play instruction for the video, determines the target position file corresponding to the video based on the second correspondence between the video and the target position file. The display device can then read the plurality of acquisition positions from the target position file and display them in the map interface in the second display area of the display screen.
Optionally, while playing the video in the first display area, the display device may further display a position marker point in the map interface in the second display area based on the playing progress of the video, the marker point being located at the acquisition position corresponding to the currently played video frame. This lets the user quickly and intuitively see where the camera was when it captured the current frame, improving the user experience.
The acquisition position corresponding to the currently played video frame may refer to the acquisition position whose position acquisition time is closest to the frame capture time of the currently played video frame. For example, it may be the acquisition position whose position acquisition time equals the frame capture time of that video frame.
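This marker lookup is the inverse of the target-frame lookup: given the frame capture time of the currently played frame, find the acquisition position with the closest position acquisition time. A minimal sketch, with invented names and times in seconds:

```python
def marker_position(frame_time, positions):
    """positions: list of ((lon, lat), position_acquisition_time) pairs.
    Returns the coordinates at which the position marker point should be
    drawn for the currently played video frame."""
    coords, _ = min(positions, key=lambda ct: abs(ct[1] - frame_time))
    return coords

positions = [((116.30, 39.98), 10.8), ((116.31, 39.99), 11.8), ((116.32, 40.00), 12.8)]
print(marker_position(11.0, positions))  # the closest position sample is at 10.8 s
```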
As an alternative implementation, the play instruction may instruct the display device to play the video from the beginning. Accordingly, in response to the play instruction, the display device may play the video in the first display area starting from the first video frame of the video, display the plurality of acquisition positions in the map interface in the second display area, and display the position marker point at the first acquisition position (i.e., the acquisition position with the earliest position acquisition time among the plurality of acquisition positions).
Illustratively, referring to fig. 3, the display screen includes a first display area a and a second display area B. As can be seen from fig. 3, the first display area a is located on the left side of the display screen and the second display area B is located on the right side of the display screen.
Assuming the plurality of acquisition positions acquired by the display device are those shown in table 2, the acquisition positions displayed in the map interface of the second display area B may be as shown in fig. 3. Positions a to i in fig. 3 correspond in sequence to the acquisition positions (x1, y1) to (x9, y9) in table 2.
As can be seen from fig. 3, the position marker point Z is located at position a, i.e., at the acquisition position (x1, y1). In addition, as shown in fig. 3, a progress bar 01 is displayed in the first display area A, and the current playing progress of the video is "0:00:01/0:00:09". The file name "VID20210618_123" of the video and its total playing duration "0:00:09" may also be displayed in the second display area B.
Alternatively, if the display device records a third correspondence between the first capture position and a reference video frame, the display device may, in response to the play instruction for the video, play the video from the reference video frame in the first display area of the display screen, display the plurality of capture positions in the second display area of the display screen, and display the position mark point at the first capture position.
Since the second sampling period is greater than or equal to the first sampling period, the display device may not yet have acquired an acquisition position of the camera by the time the first video frame of the video is captured. In this case, if the video were played from the first video frame, the position of the position marker point would not be synchronized with the first video frame; that is, the position capture time of the acquisition position at which the marker is displayed would not be the closest to the frame capture time of the first displayed video frame. In the embodiment of the present application, after receiving the play instruction for the video, the display device may instead start playing the video from the reference video frame in the first display area, which ensures synchronization between the position of the position marker and the frame capture time of the displayed video frame.
For example, assume that the sequence number of the video frame corresponding to the first capture position recorded by the display device is 3, that is, the reference video frame is the third video frame captured by the camera. After receiving a play instruction instructing it to play the video from the beginning, the display device may start playing the video from the third video frame.
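Under the stated assumption that positions are sampled no faster than frames, the reference video frame can be chosen as the frame whose capture time is nearest to the first position's capture time. A sketch with hypothetical timestamps (the sampling values in the comment are illustrative only, not taken from table 1 or table 2):

```python
def reference_frame_seq(frame_times, first_position_time):
    # frame_times: frame capture times in capture order (seconds)
    # first_position_time: position capture time of the first acquisition position
    # returns a 1-based sequence number, matching the numbering used above
    best = min(range(len(frame_times)),
               key=lambda i: abs(frame_times[i] - first_position_time))
    return best + 1
```

For instance, with frames captured at 0.0, 0.5, 1.0, and 1.5 s and the first acquisition position recorded at 1.05 s, playback would start at the third frame, consistent with the sequence-number-3 example above.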
As another alternative implementation, the play instruction may be used to instruct the display device to play the video starting from the middle video frame of the video. Accordingly, the display device may play the video from the intermediate video frame in response to a play instruction for the video, display a plurality of capture positions in the second display area of the display screen, and display the position mark point at a capture position whose position capture time is closest to the frame capture time of the intermediate video frame.
According to the above description, when the display device starts playing a video in the first display area of the display screen from any video frame in the video, all the capturing positions of the camera in the process of capturing the video can be displayed in the second display area of the display screen.
Step 204, in response to the selection operation for the plurality of acquisition positions, determining a target acquisition position from the plurality of acquisition positions.
In an embodiment of the application, the user may select a target acquisition position from the plurality of acquisition positions displayed in the second display area. Accordingly, the display device may determine a target acquisition position from the plurality of acquisition positions in response to a user selection operation for the plurality of acquisition positions.
In an alternative implementation, the selection operation may be a touch operation for a target acquisition position in the plurality of acquisition positions. That is, the user may touch a target acquisition location of the plurality of acquisition locations. The display device may determine the target acquisition position in response to a touch operation for the target acquisition position.
For example, continuing with fig. 3, the user has touched position g, i.e., the acquisition position (x7, y7). Accordingly, the display device may take the capturing position (x7, y7) as the target capturing position in response to the user's touch operation on the capturing position (x7, y7).
In another alternative implementation, position mark points are displayed in the map interface. The selection operation may be a drag operation for the position marker. That is, the user may drag the position mark point. Correspondingly, the display device can respond to the dragging operation aiming at the position mark point, update the position of the position mark point, and determine the acquisition position closest to the position of the position mark point in the plurality of acquisition positions as the target acquisition position after detecting that the dragging operation is finished. For example, the display device may determine, as the target capturing position, a capturing position that is the same as a position where the position marker point is located, from among the plurality of capturing positions, after the dragging operation is ended.
Furthermore, a trajectory line formed by connecting the plurality of acquisition positions according to the sequence of the position acquisition time is displayed in the map interface. The display device may update the position of the position mark point on the trajectory line in response to the drag operation for the position mark point.
For example, referring to fig. 4, the user drags the position marker Z along the trajectory line S to position g, i.e., the acquisition position (x7, y7). Accordingly, on detecting the end of the drag operation, the display device may determine that the position marker point Z is located at the acquisition position (x7, y7), and then determine the acquisition position (x7, y7) as the target acquisition position.
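When the drag ends, snapping the marker to the closest acquisition position amounts to a nearest-neighbour search over the trajectory points. A minimal sketch, assuming Euclidean distance in map coordinates (names and coordinate convention are illustrative):

```python
import math

def snap_to_nearest(capture_positions, marker_pos):
    # capture_positions: list of (x, y) map coordinates of acquisition positions
    # marker_pos: (x, y) where the drag operation ended
    return min(capture_positions, key=lambda p: math.dist(p, marker_pos))
```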
In yet another alternative implementation, the selection operation may be a voice operation. That is, the user may speak a voice instruction that includes a reference position, and the display device may capture the speech and perform voice recognition on it to obtain the reference position. Thereafter, the display device may determine the distance between each of the plurality of acquisition positions and the reference position, and determine, as the target acquisition position, an acquisition position whose distance from the reference position is less than a distance threshold. That is, the display device may determine a reference distance corresponding to each of the plurality of acquisition positions (i.e., the distance between that acquisition position and the reference position), and determine, as the target acquisition position, the acquisition position whose reference distance is smaller than the distance threshold.
Wherein the distance threshold may be greater than a smallest reference distance of the plurality of reference distances and less than a next smallest reference distance of the plurality of reference distances.
Because the selection operation is a voice operation, the user only needs to speak to adjust the playing progress of the video, which effectively reduces the operation complexity for the user and improves the user experience.
Optionally, if there are multiple candidate acquisition positions in the multiple acquisition positions whose distances from the reference position are smaller than the distance threshold, the display device may determine the position acquisition time of each candidate acquisition position. Then, the display device may determine, as the target acquisition position, an alternative acquisition position with the earliest position acquisition time among the plurality of alternative acquisition positions.
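Combining the threshold filter with the earliest-time tie-break, the voice-driven selection can be sketched as below. The reference position is assumed to be already recognized from speech; the function name and the epsilon tolerance are implementation choices, and keeping every (near-)minimal-distance position plays the role of a threshold lying between the smallest and the next-smallest distinct reference distance.

```python
import math

def select_by_voice(capture_positions, position_times, reference_pos, eps=1e-9):
    # capture_positions: (x, y) map coordinates of the acquisition positions
    # position_times: position acquisition times, in the same order
    dists = [math.dist(p, reference_pos) for p in capture_positions]
    d_min = min(dists)
    # candidate positions: reference distance below the threshold
    candidates = [i for i, d in enumerate(dists) if d <= d_min + eps]
    # several equidistant candidates: the earliest acquisition time wins
    return min(candidates, key=lambda i: position_times[i])
```

With two positions equidistant from the reference position (as with positions f and g in fig. 5), the one with the earlier position acquisition time is returned.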
It should be noted that the voice operation expressed by the user may include a name of an area, and in this case, the display device may determine the position of the center point of the orthographic projection of the area on the map interface as the reference position recognized from the voice operation.
For example, referring to fig. 5, the reference position determined by the display device is position m, and the distance w1 between the acquisition position (x6, y6) (i.e., position f) and the position m among the plurality of acquisition positions is equal to the distance w2 between the acquisition position (x7, y7) (i.e., position g) and the position m. And both w1 and w2 are less than the distance threshold.
Assuming that the position acquisition times of the acquisition positions (x6, y6) and (x7, y7) are as shown in table 2, since the position acquisition time 6:00:15.800 of the acquisition position (x6, y6) is earlier than the position acquisition time 6:00:16.800 of the acquisition position (x7, y7), the display device may determine the acquisition position (x6, y6) as the target acquisition position.
Step 205, playing the video, in the first display area, from the target video frame corresponding to the target acquisition position.
After the target acquisition position is determined, the display device may determine a target video frame corresponding to the target acquisition position, and start playing a video from the target video frame in the first display area. That is, the display device may retrieve a target video frame from a plurality of video frames included in the video based on the target capture position and play the video from the target video frame, thereby achieving an effect of adjusting the play progress of the video based on the capture position.
In this embodiment of the application, the display device may first determine the position capture time of the target capture position, and then determine, as the target video frame, the video frame, among the plurality of video frames included in the video, whose frame capture time is closest to that position capture time. For example, the frame capture time of the target video frame may be the same as the position capture time of the target capture position.
For example, assuming that the display device determines the position capture time of the target capture position to be 00:14.800, the frame capture time of a plurality of video frames included in the video is shown in table 1. Since, among the plurality of frame capture times shown in table 1, the frame capture time 00:15.000 is closest to the position capture time 00:14.800, and the sequence number of the video frame corresponding to the frame capture time 00:15.000 is 7, the display device may determine the video frame with the sequence number of 7 as the target video frame. Also, as shown in fig. 3 and 4, the display device may play the video from the video frame to adjust the play progress of the video.
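Step 205's lookup can be sketched end-to-end: given the target position's capture time, pick the frame with the nearest capture time. The frame timestamps below are hypothetical stand-ins for table 1; only the pairing of capture time 15.0 s with sequence number 7 follows the example above.

```python
def target_frame_seq(frames, position_time):
    # frames: list of (sequence_number, frame_capture_time) pairs
    # position_time: position capture time of the target acquisition position
    seq, _ = min(frames, key=lambda f: abs(f[1] - position_time))
    return seq
```

For a position capture time of 14.8 s, the frame captured at 15.0 s (sequence number 7) is the closest, so playback resumes there.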
In the related art, a video includes a plurality of video frames, and a watermark is drawn in each of the video frames. The watermark includes a location of a camera of the monitoring device at the time the video frame was captured. If a user needs the display device to start playing the video from a video frame acquired by the camera at the target position, the user can watch the video first to know the track of the video acquired by the camera from the multiple watermarks of the video. Then, the user can estimate the playing time corresponding to the target position based on the track and the total playing time of the video, and touch the progress bar of the video based on the playing time. Correspondingly, the display device can respond to the touch operation and start playing the video from the target video frame so as to adjust the playing progress of the video. However, in this manner, the efficiency of the display device adjusting the playing progress of the video is low, and due to the influence of manual errors, the accuracy of adjusting the playing progress of the video is easily low.
In the method provided by the embodiment of the application, when the display device plays a video frame in the first display area of the display screen, it displays the plurality of acquisition positions in the second display area. Because the display device directly displays the acquisition positions at which the camera captured the video, the user does not need to watch the video first to learn them and can quickly select a target acquisition position, which ensures the efficiency with which the display device adjusts the playing progress of the video. In addition, after the target acquisition position is determined, the display device automatically plays the video from the target video frame corresponding to that position, without the user manually estimating the corresponding playing time; this avoids the influence of manual error and ensures the accuracy of the adjustment of the playing progress.
It should be noted that, the order of the steps of the method for adjusting the playing progress of the video provided by the embodiment of the present application may be appropriately adjusted, and the steps may also be increased or decreased according to the situation. For example, step 201 and step 202 may also be deleted as the case may be, i.e., the display device may not need to perform step 201 and step 202. Any method that can be easily conceived by a person skilled in the art within the technical scope disclosed in the present application is covered by the protection scope of the present application, and thus the detailed description thereof is omitted.
To sum up, the embodiment of the present application provides a method for adjusting a video playing progress, in which a display device can display, while playing a video, a plurality of acquisition positions where the camera was located when capturing the video. Therefore, the user can conveniently perform a selection operation on the plurality of acquisition positions, the display device can adjust the playing progress of the video based on the selection operation, and the flexibility of adjusting the playing progress is effectively improved.
In addition, since the plurality of acquisition positions correspond to the video frames in the video, adjusting the playing progress of the video based on the acquisition positions can effectively improve the accuracy and efficiency of the adjustment.
The embodiment of the application provides display equipment, and the display equipment can execute the video playing progress adjusting method provided by the method embodiment. Referring to fig. 6, the display device 110 includes: a processor 1101 and a display screen 131. The processor 1101 is configured to:
responding to a playing instruction for the video, playing the video in a first display area of the display screen, and displaying, in a map interface in a second display area of the display screen, a plurality of acquisition positions where a camera is located in the process of acquiring the video;
determining a target acquisition position from the plurality of acquisition positions in response to a selection operation for the plurality of acquisition positions;
and playing the video from the target video frame corresponding to the target acquisition position in the first display area.
Optionally, the selecting operation is a touch operation for a target acquisition position in the plurality of acquisition positions.
Optionally, a position mark point is also displayed in the map interface, and the selection operation is a dragging operation for the position mark point. The processor 1101 may be configured to:
and after the dragging operation is finished, determining the acquisition position closest to the position of the position marking point in the plurality of acquisition positions as the target acquisition position.
Optionally, the selecting operation is a voice operation; the processor 1101 may be configured to:
performing voice recognition on the voice operation to obtain a reference position;
and determining the acquisition position with the distance from the reference position smaller than the distance threshold value as the target acquisition position.
Optionally, the processor 1101 may be configured to:
if a plurality of candidate acquisition positions with the distance from the reference position smaller than the distance threshold exist in the plurality of acquisition positions, determining the position acquisition time of each candidate acquisition position;
and determining the candidate acquisition position with the earliest position acquisition time in the plurality of candidate acquisition positions as the target acquisition position.
Optionally, the processor 1101 may further be configured to:
and displaying a position marking point in a map interface based on the playing progress of the video, wherein the position marking point is positioned at the acquisition position corresponding to the currently played video frame.
Optionally, the processor 1101 may further be configured to:
determining the position acquisition time of the target acquisition position;
and determining a video frame which is closest to the frame acquisition time and the position acquisition time in a plurality of video frames included in the video as a target video frame.
Optionally, the processor 1101 may further be configured to:
responding to a playing instruction for the video, and playing the video from a reference video frame of the video in a first display area of the display screen, wherein the reference video frame is a video frame acquired at the position acquisition time of a first acquisition position;
the first acquisition position is the acquisition position with the earliest position acquisition time in the plurality of acquisition positions.
Optionally, as shown in fig. 6, the display device 110 further includes: a camera 121. The processor 1101 may also be configured to:
responding to a video acquisition instruction, and starting a camera to acquire a video;
in the process of collecting the video, collecting the acquisition positions where the camera is located to obtain the plurality of acquisition positions.
To sum up, the embodiment of the application provides a display device that can display, while playing a video, a plurality of acquisition positions where the camera was located when capturing the video. Therefore, the user can conveniently perform a selection operation on the plurality of acquisition positions, the display device can adjust the playing progress of the video based on the selection operation, and the flexibility of adjusting the playing progress is effectively improved.
In addition, since the plurality of acquisition positions correspond to the video frames in the video, adjusting the playing progress of the video based on the acquisition positions can effectively improve the accuracy and efficiency of the adjustment.
As shown in fig. 6, the display device 110 provided in the embodiment of the present application may further include: a display unit 130, a Radio Frequency (RF) circuit 150, an audio circuit 160, a wireless fidelity (Wi-Fi) module 170, a bluetooth module 180, and a power supply 190.
The camera 121 may be used to capture still pictures or video. An object forms an optical image through the lens, which is projected onto the photosensitive element. The photosensitive element may be a charge-coupled device (CCD) or a complementary metal-oxide-semiconductor (CMOS) phototransistor. The photosensitive element converts the optical signal into an electrical signal, which is then passed to the processor 1101 for conversion into a digital picture signal.
The processor 1101 is a control center of the display device 110, connects various parts of the entire terminal using various interfaces and lines, and performs various functions of the display device 110 and processes data by running or executing software programs stored in the memory 140 and calling data stored in the memory 140. In some embodiments, processor 1101 may include one or more processing units; the processor 1101 may also integrate an application processor, which mainly handles operating systems, user interfaces, applications, etc., and a baseband processor, which mainly handles wireless communications. It will be appreciated that the baseband processor described above may not be integrated into the processor 1101. In the present application, the processor 1101 may run an operating system and an application program, may control a user interface to display, and may implement the method for adjusting the playing progress of a video provided in the embodiment of the present application. Additionally, processor 1101 is coupled to input unit and display unit 130.
The display unit 130 may be used to receive input numeric or character information and generate signal inputs related to user settings and function control of the display device 110. Optionally, the display unit 130 may also be used to display information input by the user or provided to the user, as well as a graphical user interface (GUI) for the various menus of the display device 110. The display unit 130 may include a display screen 131 disposed on the front surface of the display device 110. The display screen 131 may be configured in the form of a liquid crystal display, light-emitting diodes, or the like. The display unit 130 may be used to display the various graphical user interfaces described herein.
The display unit 130 includes: a display screen 131 and a touch screen 132 disposed on the front of the display device 110. The display screen 131 may be used to display preview pictures. Touch screen 132 may collect touch operations on or near by the user, such as clicking a button, dragging a scroll box, and the like. The touch screen 132 may be covered on the display screen 131, or the touch screen 132 and the display screen 131 may be integrated to implement the input and output functions of the display device 110, and after the integration, the touch screen may be referred to as a touch display screen for short.
Memory 140 may be used to store software programs and data. The processor 1101 executes various functions of the display device 110 and data processing by executing software programs or data stored in the memory 140. The memory 140 may include high speed random access memory and may also include non-volatile memory, such as at least one magnetic disk storage device, flash memory device, or other volatile solid state storage device. The memory 140 stores an operating system that enables the display device 110 to operate. The memory 140 may store an operating system and various application programs, and may also store codes for executing the video playing progress adjusting method provided in the embodiment of the present application.
The RF circuit 150 may be used for receiving and transmitting signals during information transmission and reception or during a call, and may receive downlink data of a base station and then deliver the received downlink data to the processor 1101 for processing; the uplink data may be transmitted to the base station. Typically, the RF circuitry includes, but is not limited to, an antenna, at least one amplifier, a transceiver, a coupler, a low noise amplifier, a duplexer, and the like.
The audio circuitry 160, speaker 161, and microphone 162 may provide an audio interface between a user and the display device 110. The audio circuit 160 may transmit the electrical signal converted from received audio data to the speaker 161, which converts the electrical signal into a sound signal for output. The display device 110 may also be provided with a volume button for adjusting the volume of the sound signal. In the other direction, the microphone 162 converts a collected sound signal into an electrical signal, which the audio circuit 160 receives and converts into audio data; the audio data is then output to the RF circuit 150 for transmission to, for example, another terminal, or output to the memory 140 for further processing. In this application, the microphone 162 may capture the voice of the user.
Wi-Fi is a short-range wireless transmission technology. Through the Wi-Fi module 170, the display device 110 may help a user send and receive e-mails, browse webpages, access streaming media, and so on; the module provides the user with wireless broadband internet access.
And the Bluetooth module 180 is used for performing information interaction with other Bluetooth devices with Bluetooth modules through a Bluetooth protocol. For example, the display device 110 may establish a bluetooth connection with a wearable electronic device (e.g., a smart watch) that is also equipped with a bluetooth module through the bluetooth module 180, so as to perform data interaction.
The display device 110 also includes a power supply 190 (such as a battery) to power the various components. The power supply may be logically coupled to the processor 1101 through a power management system to manage charging, discharging, and power consumption functions through the power management system. The display device 110 may also be configured with a power button for powering on and off the terminal, and locking the screen.
The display device 110 may include at least one sensor 1110, such as a motion sensor 11101, a distance sensor 11102, a fingerprint sensor 11103, and a temperature sensor 11104. The display device 110 may also be configured with other sensors such as gyroscopes, barometers, hygrometers, thermometers, and infrared sensors.
It can be clearly understood by those skilled in the art that, for convenience and brevity of description, the specific working processes of the display device and each device described above may refer to the corresponding processes in the foregoing method embodiments, and are not described herein again.
Fig. 7 is a block diagram of a software structure of a display device according to an embodiment of the present application. The layered architecture divides the software into several layers, each layer having a clear role and division of labor. The layers communicate with each other through a software interface. In some embodiments, the android system is divided into four layers, an application layer, an application framework layer, an Android Runtime (ART) and system library, and a kernel layer from top to bottom.
The application layer may include a series of application packages. As shown in fig. 7, the application package may include applications such as camera, gallery, calendar, phone call, map, navigation, WLAN, bluetooth, music, video, short message, etc. The application framework layer provides an Application Programming Interface (API) and a programming framework for the application program of the application layer. The application framework layer includes a number of predefined functions.
As shown in FIG. 7, the application framework layers may include a window manager, content provider, view system, phone manager, resource manager, notification manager, and the like.
The window manager is used for managing window programs. The window manager can obtain the size of the display screen, judge whether a status bar exists, lock the screen, intercept the screen and the like.
The content provider is used to store and retrieve data and make it accessible to applications. The data may include video, pictures, audio, calls made and received, browsing history and bookmarks, phone books, etc.
The view system includes visual controls such as controls to display text, controls to display pictures, and the like. The view system may be used to build applications. The display interface may be composed of one or more views. For example, the display interface including the short message notification icon may include a view for displaying text and a view for displaying pictures.
The phone manager is used to provide the communication function of the display device 110. Such as management of call status (including on, off, etc.).
The resource manager provides various resources for the application, such as localized strings, icons, pictures, layout files, video files, and the like.
The notification manager enables an application to display notification information in the status bar. It can be used to convey notification-type messages that disappear automatically after a short stay without user interaction, for example notifications of download completion or message alerts. The notification manager may also present notifications that appear in the top status bar of the system in the form of a chart or scrolling text, such as notifications of applications running in the background, or notifications that appear on the screen in the form of a dialog window. For example, text information is prompted in the status bar, a prompt tone sounds, the terminal vibrates, or an indicator light flashes.
The android runtime comprises a core library and a virtual machine. The android runtime is responsible for scheduling and management of the android system.
The core library comprises two parts: one part is the function libraries that the Java language needs to call, and the other part is the core libraries of Android.
The application layer and the application framework layer run in the virtual machine. The virtual machine executes the Java files of the application layer and the application framework layer as binary files. The virtual machine is used to perform functions such as object life-cycle management, stack management, thread management, security and exception management, and garbage collection.
The system library may include a plurality of functional modules. For example: surface managers (surface managers), media libraries (media libraries), three-dimensional graphics processing libraries (e.g., openGL ES), 2D graphics engines (e.g., SGL), and the like.
The surface manager is used to manage the display subsystem and provide fusion of 2D and 3D layers for multiple applications.
The media library supports playback and recording of a variety of commonly used audio and video formats, as well as still picture files. The media library may support a variety of audio and video encoding formats, such as MPEG4, H.264, MP3, AAC, AMR, JPG, and PNG.
The three-dimensional graphic processing library is used for realizing three-dimensional graphic drawing, picture rendering, synthesis, layer processing and the like.
The 2D graphics engine is a drawing engine for 2D drawing.
The kernel layer is a layer between hardware and software. The kernel layer comprises at least a display driver, a camera driver, an audio driver, and a sensor driver.
The embodiment of the present application provides a computer-readable storage medium, in which a computer program is stored, and the computer program is loaded by a processor and executes a video playing progress adjusting method provided in the above embodiment, for example, the method shown in fig. 1 or fig. 2.
The embodiment of the present application further provides a computer program product including instructions, which, when the computer program product runs on a computer, causes the computer to execute the method for adjusting the playing progress of a video provided by the foregoing method embodiment, for example, the method shown in fig. 1 or fig. 2.
It will be understood by those skilled in the art that all or part of the steps for implementing the above embodiments may be implemented by hardware, or may be implemented by a program instructing relevant hardware, where the program may be stored in a computer-readable storage medium, and the above-mentioned storage medium may be a read-only memory, a magnetic disk or an optical disk, etc.
It should be understood that "and/or" herein describes three possible relationships; for example, "A and/or B" may mean: A alone, both A and B, or B alone. The character "/" generally indicates an "or" relationship between the associated objects. In addition, "at least one" in the present application means one or more, and "a plurality" means two or more.
The terms "first," "second," and the like in this application are used for distinguishing between similar items and items that have substantially the same function or similar functionality, and it should be understood that "first," "second," and "nth" do not have any logical or temporal dependency or limitation on the number or order of execution. For example, the first display region may be referred to as the second display region, and similarly, the second display region may be referred to as the first display region, without departing from the scope of the various described examples.
The above description covers only exemplary embodiments of the present application and is not intended to limit it; any modifications, equivalent substitutions, improvements, and the like made within the spirit and principles of the present application shall fall within its protection scope.
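As an illustrative sketch of the frame-seeking step described in the embodiments above (all function and variable names here are hypothetical and not taken from the patent; the patent itself does not specify an algorithm), the video frame whose timestamp is closest to the acquisition time of a selected position can be found with a binary search over sorted frame timestamps:

```python
from bisect import bisect_left

def find_target_frame(frame_timestamps, position_time):
    """Return the index of the frame whose timestamp is closest to the
    acquisition time of the selected position.

    frame_timestamps must be sorted in ascending order.
    """
    i = bisect_left(frame_timestamps, position_time)
    if i == 0:
        return 0
    if i == len(frame_timestamps):
        return len(frame_timestamps) - 1
    # Choose the nearer of the two neighboring frames.
    before, after = frame_timestamps[i - 1], frame_timestamps[i]
    return i if after - position_time < position_time - before else i - 1
```

A player implementing the claimed behavior would then seek playback to the returned frame index; the binary search keeps the lookup at O(log n) even for long recordings.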

Claims (10)

1. A display device, characterized in that the display device comprises: a processor and a display screen; the processor is configured to:
in response to a playing instruction for a video, play the video in a first display area of the display screen, and display, in a map interface of a second display area of the display screen, a plurality of acquisition positions where a camera was located during acquisition of the video;
in response to a selection operation on the plurality of acquisition positions, determine a target acquisition position from the plurality of acquisition positions; and
play the video, in the first display area, starting from a target video frame corresponding to the target acquisition position.
2. The display device according to claim 1, wherein a position marker point is further displayed in the map interface, and the selection operation is a dragging operation on the position marker point; the processor is configured to:
after the dragging operation ends, determine, as the target acquisition position, the one of the plurality of acquisition positions that is closest to the position of the position marker point.
3. The display device according to claim 1, wherein the selection operation is a voice operation; the processor is configured to:
perform voice recognition on the voice operation to obtain a reference position; and
determine, as the target acquisition position, an acquisition position among the plurality of acquisition positions whose distance from the reference position is smaller than a distance threshold.
4. The display device of claim 3, wherein the processor is configured to:
if the plurality of acquisition positions include a plurality of candidate acquisition positions whose distances from the reference position are smaller than the distance threshold, determine the position acquisition time of each candidate acquisition position; and
determine, as the target acquisition position, the candidate acquisition position with the earliest position acquisition time.
5. The display device of claim 1, wherein the processor is further configured to:
display a position marker point in the map interface based on the playing progress of the video, the position marker point being located at the acquisition position corresponding to the currently played video frame.
6. The display device of any of claims 1 to 5, wherein the processor is further configured to:
determine the position acquisition time of the target acquisition position; and
determine, as the target video frame, the video frame, among the plurality of video frames included in the video, whose acquisition time is closest to the position acquisition time.
7. The display device of any of claims 1 to 5, wherein the processor is configured to:
in response to the playing instruction for the video, play the video in the first display area of the display screen starting from a reference video frame of the video, the reference video frame being the video frame acquired at the position acquisition time of a first acquisition position;
wherein the first acquisition position is the acquisition position with the earliest position acquisition time among the plurality of acquisition positions.
8. The display device according to any one of claims 1 to 5, characterized in that the display device further comprises: a camera; the processor is further configured to:
in response to a video acquisition instruction, start the camera to acquire the video; and
during acquisition of the video, collect the acquisition positions where the camera is located, to obtain the plurality of acquisition positions.
9. A video playing progress adjusting method, characterized in that the method is applied to a display device comprising a display screen; the method comprises:
in response to a playing instruction for a video, playing the video in a first display area of the display screen, and displaying, in a map interface of a second display area of the display screen, a plurality of acquisition positions where a camera was located during acquisition of the video;
in response to a selection operation on the plurality of acquisition positions, determining a target acquisition position from the plurality of acquisition positions; and
playing the video, in the first display area, starting from a target video frame corresponding to the target acquisition position.
10. The method according to claim 9, wherein a position marker point is further displayed in the map interface, and the selection operation is a dragging operation on the position marker point; the determining a target acquisition position from the plurality of acquisition positions in response to the selection operation comprises:
after the dragging operation ends, determining, as the target acquisition position, the one of the plurality of acquisition positions that is closest to the position of the position marker point.
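The selection logic of claims 2 through 4 can be sketched as follows. This is a minimal illustration under stated assumptions: the function and field names are hypothetical, and the Euclidean distance metric is an assumption, since the claims do not specify how distance is computed. The position nearest the dragged marker is chosen (claim 2); for a voice-derived reference position, candidates within a distance threshold are filtered and the one with the earliest acquisition time wins (claims 3 and 4).

```python
import math

def nearest_position(positions, marker):
    """Claim 2 sketch: pick the acquisition position closest to the
    dropped marker point. Each position is a dict with an "xy" tuple
    and a "time" acquisition timestamp (hypothetical schema)."""
    return min(positions, key=lambda p: math.dist(p["xy"], marker))

def voice_target(positions, reference, threshold):
    """Claims 3-4 sketch: among positions within `threshold` of the
    voice-recognized reference position, pick the one with the
    earliest position acquisition time; None if no candidate."""
    candidates = [p for p in positions
                  if math.dist(p["xy"], reference) < threshold]
    if not candidates:
        return None
    return min(candidates, key=lambda p: p["time"])
```

The earliest-time tie-break in `voice_target` mirrors claim 4: when several recorded positions lie near the spoken location, playback jumps to the first pass the camera made through that area.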
CN202110699639.3A 2021-06-23 2021-06-23 Video playing progress adjusting method and display equipment Pending CN113507614A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110699639.3A CN113507614A (en) 2021-06-23 2021-06-23 Video playing progress adjusting method and display equipment


Publications (1)

Publication Number Publication Date
CN113507614A true CN113507614A (en) 2021-10-15

Family

ID=78010910

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110699639.3A Pending CN113507614A (en) 2021-06-23 2021-06-23 Video playing progress adjusting method and display equipment

Country Status (1)

Country Link
CN (1) CN113507614A (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2023246166A1 * 2022-06-24 2023-12-28 NetEase (Hangzhou) Network Co., Ltd. Method and apparatus for adjusting video progress, and computer device and storage medium

Citations (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN105654575A * 2016-01-11 2016-06-08 Shenzhen Careland Technology Co., Ltd. Method, device and equipment for managing driving video information
CN106534734A * 2015-09-11 2017-03-22 Tencent Technology (Shenzhen) Co., Ltd. Method and device for playing video and displaying map, and data processing method and system
CN106550202A * 2015-09-16 2017-03-29 Shenzhen Careland Technology Co., Ltd. Driving recording image display method and device
US20170168705A1 * 2015-12-14 2017-06-15 Le Holdings (Beijing) Co., Ltd. Method and electronic device for adjusting video progress
CN107306345A * 2016-04-20 2017-10-31 Banma Network Technology Co., Ltd. Traveling record processing method, device, equipment, operating system and vehicle
CN107305561A * 2016-04-21 2017-10-31 Banma Network Technology Co., Ltd. Image processing method, device, equipment and user interface system
CN107872628A * 2017-09-29 2018-04-03 Vtron Group Co., Ltd. Signal display method and system based on a GIS map
CN111857493A * 2020-06-16 2020-10-30 Foshan Huaquan Electric Lighting Co., Ltd. Video management method and system based on a smart city management system
CN112929748A * 2021-01-22 2021-06-08 Vivo Mobile Communication (Hangzhou) Co., Ltd. Video processing method and device, electronic equipment and medium



Similar Documents

Publication Publication Date Title
CN111367456A (en) Communication terminal and display method in multi-window mode
CN113709026B (en) Method, device, storage medium and program product for processing instant communication message
CN111176766A (en) Communication terminal and component display method
CN113055585B (en) Thumbnail display method of shooting interface and mobile terminal
CN113179432B (en) Display method and display device for video acquisition position
CN113542545A (en) Electronic equipment and video recording method
CN113507614A (en) Video playing progress adjusting method and display equipment
CN112099892A (en) Communication terminal and method for rapidly scanning two-dimensional code
CN113014614A (en) Equipment control method, control equipment and controlled equipment
CN113079332B (en) Mobile terminal and screen recording method thereof
CN113254132B (en) Application display method and related device
CN113157092B (en) Visualization method, terminal device and storage medium
CN111324255B (en) Application processing method based on double-screen terminal and communication terminal
CN111163220B (en) Display method, communication terminal and computer storage medium
CN114339419A (en) Method, device and storage medium for pull processing of video stream
CN114356559A (en) Multithreading control method and terminal equipment
CN113760164A (en) Display device and response method of control operation thereof
CN113031903B (en) Electronic equipment and audio stream synthesis method thereof
CN111225113A (en) Communication terminal and starting method thereof
CN113253905B (en) Touch method based on multi-finger operation and intelligent terminal
CN112929858B (en) Method and terminal for simulating access control card
CN113255644B (en) Display device and image recognition method thereof
CN111381801B (en) Audio playing method based on double-screen terminal and communication terminal
CN115328347B (en) Interface display method, device, terminal equipment and storage medium
CN112000411B (en) Mobile terminal and display method of recording channel occupation information thereof

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
RJ01 Rejection of invention patent application after publication

Application publication date: 20211015