CN110166815B - Method, apparatus, device, and medium for displaying video content

Method, apparatus, device, and medium for displaying video content

Info

Publication number
CN110166815B
Authority
CN
China
Prior art keywords
video
information
framing
view
place
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201910451804.6A
Other languages
Chinese (zh)
Other versions
CN110166815A (en)
Inventor
杨阳 (Yang Yang)
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Tencent Technology Shenzhen Co Ltd
Original Assignee
Tencent Technology Shenzhen Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Tencent Technology Shenzhen Co Ltd
Priority to CN201910451804.6A
Publication of CN110166815A
Application granted
Publication of CN110166815B
Legal status: Active

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F16/00 Information retrieval; Database structures therefor; File system structures therefor
    • G06F16/70 Information retrieval of video data
    • G06F16/71 Indexing; Data structures therefor; Storage structures
    • G06F16/73 Querying
    • G06F16/738 Presentation of query results
    • G06F16/78 Retrieval characterised by using metadata, e.g. metadata not derived from the content or metadata generated manually
    • G06F16/783 Retrieval using metadata automatically derived from the content
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00 Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/40 Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N21/43 Processing of content or additional data, e.g. demultiplexing additional data from a digital video stream; Elementary client operations; Client middleware
    • H04N21/431 Generation of visual interfaces for content selection or interaction; Content or additional data rendering
    • H04N21/4312 Visual interfaces involving specific graphical features, e.g. screen layout, special fonts or colors, blinking icons, highlights or animations

Abstract

The application discloses a method for displaying video content, comprising: acquiring a video frame image to be displayed; acquiring the filming location information corresponding to the video frame image; and displaying the video frame image while displaying the corresponding filming location information within it. This enriches the information carried by the video: when a user is interested in a scene, the filming location can be read directly from the displayed content without a separate search, saving the user's time and meeting the demand for personalized, convenient video resource services. The application also discloses a corresponding apparatus, device, and medium.

Description

Method, apparatus, device, and medium for displaying video content
Technical Field
The present application relates to the field of video image processing technologies, and in particular, to a method, an apparatus, a device, and a computer storage medium for displaying video content.
Background
With the continuous development of Internet technology, video resources attract a large number of users through convenient access, diverse sources, and timely updates, and have become an indispensable part of users' online lives.
However, as terminal technology and video-site design advance, users' expectations of video resources keep rising, and the traditional video resource service model can no longer satisfy the growing demand for personalization and convenience while watching videos.
Disclosure of Invention
The application provides a method for displaying video content that acquires the filming location information corresponding to a video frame image and displays it within the image, so that the user obtains the related information without a separate search, saving the user's time and meeting the demand for personalization and convenience. A corresponding apparatus, device, medium, and computer program product are also provided.
A first aspect of the present application provides a method for displaying video content, the method including:
acquiring a video frame image to be displayed;
acquiring filming location information corresponding to the video frame image;
and displaying the video frame image, and displaying the filming location information corresponding to the video frame image within the video frame image.
A second aspect of the present application provides an apparatus for displaying video content, the apparatus comprising:
a first acquisition module, configured to acquire a video frame image to be displayed;
a second acquisition module, configured to acquire the filming location information corresponding to the video frame image;
and a display module, configured to display the video frame image and to display the corresponding filming location information within it.
Optionally, the apparatus further comprises:
a matching module, configured to match, for each video resource in a video resource database, each frame image of the resource against the location images in a video scene information database, the database storing video scene information that comprises a video name, the filming location information corresponding to the video, and a location image;
and an association module, configured to associate and record the playback time point of a video frame image with the filming location information when the frame image matches a location image.
Optionally, the apparatus further comprises:
a search module, configured to search the network, through a search engine, for web pages carrying the filming-location keyword;
a crawling module, configured to crawl video scene information from those web pages, the information comprising a video name, the filming location information corresponding to the video, and a location image;
and a storage module, configured to store the crawled video scene information in the video scene information database.
Optionally, the display module is further configured to:
display a favorite control on the video frame image, the control recording, when touched, the filming location information corresponding to the frame into a filming location favorites list.
Optionally, the display module is further configured to:
display the filming location favorites list in response to a viewing operation triggered by the user through a favorites-list viewing control carried on the video playing interface, the list recording the filming location information collected by the user while watching videos.
Optionally, the display module is further configured to:
display an electronic-card creation control associated with the filming location information, the control creating, when touched, an electronic card from the associated information.
Optionally, the display module is further configured to:
display a photo creation control associated with the filming location information, the control creating, when touched, a photo from the location image corresponding to the associated information and an image specified by the user.
Optionally, the display module is further configured to:
display a travel-guide viewing control associated with the filming location information, the control displaying, when touched, the travel guide corresponding to that information.
Optionally, the display module is further configured to:
display a route generation control that, when touched, generates and displays a navigation route according to the filming location information.
A third aspect of the present application provides a terminal device, comprising a processor and a memory:
the memory is used for storing a computer program;
the processor is configured to execute the computer program to perform the steps of the method for displaying video content according to the first aspect.
A fourth aspect of the present application provides a computer-readable storage medium for storing a computer program for executing the method for displaying video content according to the first aspect.
A fifth aspect of the present application provides a computer program product comprising instructions which, when run on a computer, cause the computer to perform the method of displaying video content of the first aspect described above.
According to the technical scheme, the embodiment of the application has the following advantages:
the embodiment of the application provides a method for displaying video content, which comprises the steps of obtaining a video frame image to be displayed, obtaining view-finding information corresponding to the video frame image, and displaying the view-finding information corresponding to the video frame image in the video frame image when the video frame image is displayed, so that the information content in the video is enriched.
Drawings
FIG. 1 is a system architecture diagram of the video content display method in an embodiment of the present application;
FIG. 2A is a flowchart of a video content display method according to an embodiment of the present application;
FIG. 2B is a diagram illustrating the effect of a video content display method according to an embodiment of the present application;
FIG. 3 is a schematic diagram of generating video scene information according to an embodiment of the present application;
FIG. 4A is a schematic diagram of the filming location favorites-list interface in an embodiment of the present application;
FIG. 4B is a schematic diagram of the interface on which the terminal generates a navigation route in an embodiment of the present application;
FIG. 5A is a flowchart of a video content display method according to an embodiment of the present application;
FIG. 5B is a scene diagram of a video content display method in an embodiment of the present application;
FIG. 6 is a schematic structural diagram of a video content display apparatus according to an embodiment of the present application;
FIG. 7 is a schematic structural diagram of a video content display apparatus according to an embodiment of the present application;
FIG. 8 is a schematic structural diagram of a video content display apparatus according to an embodiment of the present application;
FIG. 9 is a schematic structural diagram of a terminal in an embodiment of the present application.
Detailed Description
To make the technical solutions of the present application better understood, they are described below clearly and completely with reference to the drawings in the embodiments. The described embodiments are evidently only a part of the embodiments of the application, not all of them; all other embodiments obtained by a person of ordinary skill in the art from these embodiments without creative effort fall within the protection scope of the application.
The terms "first", "second", "third", "fourth", and the like in the description, the claims, and the drawings, if any, are used to distinguish similar elements and not necessarily to describe a particular sequence or chronological order. It is to be understood that data so labelled may be interchanged where appropriate, so that the embodiments described here can also be implemented in orders other than those illustrated. Moreover, the terms "comprise", "include", and "have", and their variations, are intended to cover a non-exclusive inclusion: a process, method, system, article, or apparatus that comprises a list of steps or elements is not necessarily limited to those expressly listed, but may include other steps or elements not expressly listed or inherent to it.
To address the problem that the traditional video resource service model cannot meet users' growing demands for personalization and convenience while watching videos, embodiments of the application provide a scheme that displays the relevant filming location information during video playback, based on the user's need to obtain that information from the video. The scheme enriches the information carried by the video: when the user is interested in a scene, the filming location is read directly from the displayed content, without pausing the currently playing video and searching for the information through a search engine. This simplifies the user's operations, saves time, and meets the demand for personalized, convenient video services.
It can be understood that the video content display method provided by the application can be applied to a terminal, meaning any data-processing device with a display function: a home terminal such as a television or a desktop computer, or a portable mobile device such as a tablet or a smartphone. The terminal may implement the method independently, for example when playing an offline video resource, or implement it by interacting with a server.
The method for displaying the video content provided by the application can be stored in the terminal in the form of a computer program, and the terminal realizes the method for displaying the video content by executing the computer program. The computer program may be a stand-alone computer program, or may be a functional module, a plug-in, an applet, or the like running on another program.
In practical applications, the method for displaying video content provided by the present application can be applied to, but is not limited to, the application environment shown in fig. 1.
As shown in FIG. 1, a terminal 102 is connected to a server 106 through a network 104, and a client, which may be a video client or a browser, runs on the terminal 102. The terminal 102 requests a video resource from the server 106 through the client and obtains the video frame images to be displayed by parsing the resource; it also obtains the filming location information corresponding to each video frame image and displays that information within the image when the image is displayed, thereby providing richer content for the user.
Next, each step of the video content display method provided by the present application will be described in detail from the viewpoint of the terminal.
Referring to the flowchart of the video content display method shown in FIG. 2A and the effect diagram shown in FIG. 2B, the method includes:
s201: and acquiring a video frame image to be displayed.
Specifically, the terminal acquires a video resource to be played, and analyzes the video resource to obtain a plurality of frames of video frame images. The terminal can obtain the video resource provided by the video platform from the server side through the network, and can also directly obtain the cached video resource from the local storage space.
The video resource is generally generated by encapsulating compressed and encoded image data and audio data according to a certain format, and the encapsulation format may be various, for example, MP4, MKV, RMVB, and the like, for this reason, the terminal may decapsulate the video resource to obtain audio stream compressed and encoded data and video stream compressed and encoded data, and for the video stream compressed and encoded data, the terminal may decode the data through a decoder to obtain video original data, where the video original data is specifically a multi-frame video frame image, and then play the data through a display card.
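For illustration, this decapsulate-and-decode step can be sketched with the PyAV bindings for FFmpeg; the file name and the idea of handing each frame with its timestamp to the renderer are assumptions made for the sketch, not requirements of the embodiment:

```python
import av

container = av.open("movie.mp4")            # demultiplex the container (MP4/MKV/RMVB/...)
video_stream = container.streams.video[0]   # the compressed video elementary stream

for frame in container.decode(video_stream):   # the decoder yields raw frames
    image = frame.to_image()                   # one video frame image (PIL.Image)
    timestamp = frame.time                     # position on the time axis, in seconds
    # ... hand (image, timestamp) to the display step and the location-info lookup
```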
S202: and acquiring the framing place information corresponding to the video frame image.
In this embodiment, the framing specifically refers to a location of a scene in the video frame image, where the framing information is information for identifying a framing place. As an example, the information of the viewing place may be a place name of the viewing place, and in some possible implementations, the information of the viewing place may also be other information capable of identifying the viewing place, for example, an icon of a landmark building associated with the viewing place to a high degree, and the like.
In practical application, when the video frame image to be displayed has corresponding viewfinding information, the terminal acquires the viewfinding information corresponding to the video frame image. The terminal may acquire the view information corresponding to the video frame image from the server in the online playback mode, download the view information corresponding to the video frame image from the server in advance and store the view information in the local area in the offline playback mode, and acquire the view information corresponding to the video frame image from the local area when playing the offline video.
In consideration of the framing information acquisition efficiency, the video resource and the framing information can be associated in advance, so that the framing information corresponding to the video frame image can be quickly acquired directly based on the association relationship during subsequent playing. The following describes a specific implementation of associating the video frame image with the framing information, taking a video platform as an example.
On the video platform side, a video resource database and a video scene information database are maintained: the former stores the video resources, and the latter stores video scene information comprising a video name, the filming location information corresponding to the video, and a location image. As an example (see FIG. 3), the video name may be "Passing Through Your World", the corresponding filming location information may be the Eling Erchang cultural-creative park in Chongqing, and the location image is shown as 30 in FIG. 3.
For each video resource in the video resource database, the terminal may match each frame image of the resource against the location images in the video scene information database; when a frame matches a location image, its playback time point, either the frame number or the position on the time axis of the whole video, is recorded in association with the filming location information. For efficiency and cost reasons, this association process may be carried out by automated scripts.
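One way to sketch this offline matching is ORB feature matching with a ratio test, shown below with OpenCV; the matching strategy and the thresholds are illustrative assumptions, since the text does not prescribe a particular image-matching algorithm:

```python
import cv2

orb = cv2.ORB_create()
matcher = cv2.BFMatcher(cv2.NORM_HAMMING)

def frame_matches_location(frame_gray, location_gray, min_good=30):
    """True when the frame and the location image share enough local features."""
    _, des_frame = orb.detectAndCompute(frame_gray, None)
    _, des_loc = orb.detectAndCompute(location_gray, None)
    if des_frame is None or des_loc is None:
        return False
    pairs = matcher.knnMatch(des_frame, des_loc, k=2)
    good = [p for p in pairs if len(p) == 2 and p[0].distance < 0.75 * p[1].distance]
    return len(good) >= min_good

# association record built during preprocessing:
# playback time point (seconds) -> filming location information
associations = {}
```

With such a table in place, the playback stage needs only a lookup keyed by the playback time point rather than any image processing.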
It should be noted that the video scene information may be crawled from the network. Specifically, web pages carrying the keyword "filming location" are searched for through a search engine, and the video name, the corresponding filming location information, and a location image are crawled from the pages found and assembled into video scene information.
In a specific implementation, a crawler tool can extract the <img> tags of a web page to obtain location images, extract the text enclosed in book-title marks to obtain video names such as drama and movie titles (in some cases the titles appear inside quotation marks instead), and extract the place names in the page to obtain the filming location information for the video. When several place names appear in a page, the correct one can be determined through semantic analysis.
On this basis, the crawled video scene information is stored in the video scene information database for later use when the video frame images of a resource are associated.
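A hedged sketch of this crawling step with requests and BeautifulSoup follows; the URL handling is a placeholder, and the place-name step would in practice need a gazetteer or named-entity model, which the text leaves open:

```python
import re
import requests
from bs4 import BeautifulSoup

def crawl_scene_info(page_url):
    """Extract candidate video names and location images from one search-result page."""
    html = requests.get(page_url, timeout=10).text
    soup = BeautifulSoup(html, "html.parser")
    image_urls = [img["src"] for img in soup.find_all("img", src=True)]
    # titles written inside Chinese book-title marks, e.g. 《...》
    video_names = re.findall(r"《([^》]+)》", soup.get_text())
    # place-name extraction (gazetteer / NER) is assumed, not specified by the text
    return {"video_names": video_names, "image_urls": image_urls}
```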
Besides crawling web pages, the video scene information may also be produced through manual tagging: operators of the video platform manually tag the video name and filming location information for the resources in the video resource database and upload the corresponding location images, thereby generating the video scene information.
S203: and displaying the video frame image, and displaying the framing information corresponding to the video frame image in the video frame image.
Specifically, the terminal displays a video frame image through the display card, and displays framing information corresponding to the video frame image in the video frame image. To avoid obscuring the video frame image, the viewfinder information may be displayed in a transparent manner. When the finder information is displayed, the finder information may fade in and out with the corresponding video frame image, or gradually become larger in the image and then disappear.
Of course, in some cases, for example, when there are corresponding frame location information in consecutive frames in a video frame, and the frame location information is the same information, the terminal may also display the frame location information only within a preset time length, where the preset time length may be set according to actual needs, and as an example, may be set to 1s. By the method, the user can obtain interested information, and influence on the viewing experience of the user due to long-time display of the information of the viewfinder can be avoided.
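As a sketch of this display rule, assume the preprocessing stage produced a list of (time point, location) records sorted by time; the overlay is then shown only during the preset window after the location first appears (the data layout and sample record are assumptions for illustration):

```python
import bisect

PRESET_SECONDS = 1.0  # preset display duration, 1 s in the example above

# (start time in seconds, filming location information), sorted by time; sample data
associations = [(752.0, "Eling Erchang cultural-creative park, Chongqing")]
start_times = [t for t, _ in associations]

def overlay_text(timestamp):
    """Location string to draw on the frame at `timestamp`, or None."""
    i = bisect.bisect_right(start_times, timestamp) - 1
    if i < 0:
        return None
    start, location = associations[i]
    return location if timestamp - start <= PRESET_SECONDS else None
```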
As shown in FIG. 2B, when the terminal displays a video frame image 21, a frame of the video resource "Passing Through Your World", it also displays the corresponding filming location information 22, the Eling Erchang cultural-creative park in Chongqing, so that the user learns the filming location directly while watching the video.
It can be seen that the embodiment provides a method for displaying video content that enriches the information carried by a video: the video frame image to be displayed and its filming location information are acquired, and the information is displayed within the frame when the frame is displayed, so that a user interested in a scene reads the filming location directly from the displayed content, without an extra search operation, saving the user's time and meeting the demand for personalized, convenient video resource services.
Further, the terminal may display a favorite control on the video frame image. As shown in FIG. 2B, a favorite control 23 is displayed on the video frame image 21; when touched, it records the filming location information corresponding to the frame into a filming location favorites list, so that the user can later review the information from the list without replaying the video and seeking to the corresponding time point.
For the sake of the viewing experience, the favorite control can be placed at the edge of the video frame image or inside the information display area, and can appear and disappear in sync with the filming location information. The terminal may also keep the control hidden at first and reveal it, when the user intends to collect, through mouse hover or a sliding gesture, after which the favorite operation for the filming location information is triggered through the control.
In actual use, the video playing interface can also carry a favorites-list viewing control; the user triggers a viewing operation through it, and the terminal responds by displaying the filming location favorites list, in which the filming location information the user has collected while watching videos is recorded.
FIG. 4A shows the favorites-list interface. In this example, two filming location entries 41 collected by the user are displayed: the filming location of the movie "Passing Through Your World", the Eling Erchang cultural-creative park in Chongqing, and the filming location of the TV drama "Langya Bang 2", the southern Yandang Mountains in Wenzhou.
Alongside each collected entry, the time 42 at which it was collected, i.e. the collection date, can be displayed for easy review: in FIG. 4A the "Passing Through Your World" location was collected on September 28, 2018 and the "Langya Bang 2" location on July 11, 2018. The interface may further carry a filter control 43 and/or a sort control 44 with which the user filters and sorts the collected locations by factors such as time and distance; for example, the locations collected within the last year can be filtered out and displayed in order of collection date from newest to oldest.
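A minimal sketch of the filter and sort controls follows, assuming favorites are stored as simple records with a collection date; the record layout is an assumption for illustration, mirroring the FIG. 4A example:

```python
from datetime import date, timedelta

favorites = [  # sample records mirroring FIG. 4A
    {"location": "Eling Erchang cultural-creative park, Chongqing", "collected": date(2018, 9, 28)},
    {"location": "Southern Yandang Mountains, Wenzhou", "collected": date(2018, 7, 11)},
]

one_year_ago = date.today() - timedelta(days=365)
recent = [f for f in favorites if f["collected"] >= one_year_ago]  # filter: last year
recent.sort(key=lambda f: f["collected"], reverse=True)            # sort: newest first
```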
In some implementations, the terminal may further display an electronic-card creation control 45 associated with each filming location entry 41; when touched, it creates an electronic card, such as a postcard or a greeting card, from the associated filming location information. FIG. 4A takes a postcard as the example: when the user triggers the control 45 corresponding to "Passing Through Your World" by clicking or tapping, the terminal generates a postcard from the image of the Eling Erchang cultural-creative park.
Further, the terminal may display a photo creation control 46 associated with the filming location information; when touched, it creates a photo from the location image corresponding to the associated information and an image specified by the user. The user-specified image may be a selfie, producing the effect of the user having been photographed at the filming location, or any other image the user chooses, such as a celebrity or an animal. In the example of FIG. 4A, the user touches the control 46 to trigger the photo creation operation and uploads a selfie, and the terminal composites the location image with the uploaded selfie to generate a photo of the user at the filming location.
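The compositing itself can be sketched with Pillow as a simple paste of the selfie onto the location image; the sizing, the placement, and the assumption of a cut-out selfie with transparency are illustrative choices, since the text only requires that the two images be synthesized:

```python
from PIL import Image

location_img = Image.open("location.jpg").convert("RGBA")
selfie = Image.open("selfie.png").convert("RGBA")  # ideally a cut-out with transparency

# scale the selfie and place it near the lower-right corner of the location image
selfie = selfie.resize((location_img.width // 4, location_img.height // 4))
offset = (location_img.width - selfie.width - 40,
          location_img.height - selfie.height - 20)
location_img.alpha_composite(selfie, dest=offset)
location_img.convert("RGB").save("composited_photo.jpg")
```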
Considering that the user may wish to travel to the filming location, the terminal may further display a travel-guide viewing control 47 associated with the filming location information; when touched, it displays the travel guide for that location. In the example of FIG. 4A, when the user touches the control 47 for the Eling Erchang cultural-creative park, the filming location of "Passing Through Your World", the terminal responds by displaying the travel guide for the park.
It should be noted that the travel guide may be pre-stored in a database or crawled from the network by the terminal; this embodiment places no limit on the source.
When the user is interested in several filming locations and intends to visit them, the terminal may generate a corresponding route map. Specifically, the terminal may display a route generation control 48; when touched, it generates and displays a navigation route according to the filming location information.
The terminal also displays a location selection control 49 on the display interface; the user selects one or more filming locations with it and then triggers the route generation control, whereupon the terminal generates and displays a navigation route through the selected locations.
In one case, the user selects a single filming location and also enters a destination address; the terminal then navigates between the selected location and the entered destination and displays the resulting route. In another case, the user selects a single location without entering a destination; the terminal obtains the user's current position through its positioning module and navigates between the current position and the selected location, displaying the final route for the user.
In yet another case, the user selects several filming locations and the terminal navigates through all of them. FIG. 4B shows the effect of the terminal displaying such a route: the user has selected two locations through the selection control, the Chongqing Eling Erchang site and the Chongqing Eling Erchang cultural-creative park, and the terminal displays the navigation route from the former to the latter.
Of course, when the user selects several locations and additionally enters a destination address, the terminal navigates through all the selected locations and the destination, and the final route passes through each of them.
It should also be noted that when several routes exist between the selected locations, the terminal may display all of them for the user to choose from, together with the time and cost each requires, so that the user has richer information from which to pick the route that fits their actual needs.
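Before such a route is handed to a map service, the selected locations can be put into a sensible visiting order; a greedy nearest-neighbour ordering over great-circle distances is one simple possibility (the text names no routing algorithm or map API, so only this ordering step is sketched):

```python
from math import asin, cos, radians, sin, sqrt

def haversine_km(a, b):
    """Great-circle distance between two (lat, lon) points, in kilometres."""
    lat1, lon1, lat2, lon2 = map(radians, (*a, *b))
    h = sin((lat2 - lat1) / 2) ** 2 + cos(lat1) * cos(lat2) * sin((lon2 - lon1) / 2) ** 2
    return 2 * 6371 * asin(sqrt(h))

def order_waypoints(current, locations):
    """Visit the selected filming locations greedily, nearest first."""
    route, here, remaining = [], current, list(locations)
    while remaining:
        nxt = min(remaining, key=lambda p: haversine_km(here, p))
        remaining.remove(nxt)
        route.append(nxt)
        here = nxt
    return route
```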
To make the technical scheme of the application clearer and easier to understand, the video content display method is also described in a concrete scenario, in which a third-party video platform provides a video service for the user; the description refers to the flowchart of FIG. 5A and the scene diagram of FIG. 5B.
In this scenario, the third-party video platform maintains a server 510 on which a video resource database 511 and a video scene information database 512 are created; the former stores video resources and the latter stores video scene information (the two databases may also reside on devices other than the server 510). The platform also provides clients for terminals of different forms, for example clients for PCs, tablets, and Android or iOS smartphones. The client is installed on a terminal 520, and the display method is implemented through the interaction of the running client with the server 510.
As shown in FIG. 5A, the method can be divided into two stages: a preprocessing stage, in which an operator of the video platform runs an automated script through a terminal 520 to enter the data, and a playing stage, in which a user of the platform runs the client on a terminal 520 to play a video and the filming location information is displayed together with the video frame images. The two stages are described in detail below.
The first stage specifically comprises the following steps:
Step one: search for web pages through a search engine, using "filming location" as the keyword.
Step two: with a crawler, extract the <img> tags of a page to obtain location images, extract the text inside book-title marks to obtain video names, and extract the place names in the page to obtain the filming location information.
Step three: manually tag the video name, the filming location information, and the location image based on the pages found.
Steps two and three are executed in parallel as two alternative ways of obtaining the video name, filming location information, and location image: step two runs automatically from a script and is therefore efficient, while step three is performed manually and therefore has a relatively low error rate. In practice, step two can serve as the main channel with step three as a supplementary means.
Step four: enter the video name, filming location information, and location image into the management station through the terminal 520; the server 510 generates the video scene information from them and stores it in the video scene information database 512.
Step five: while encoding a video resource, match each of its frame images against the location images in the video scene information database and locate the time points of the frames that match.
Step six: read the management-table data in a loop and mark, for every video in the video resource database 511, the time points that match a location image.
Thus, the data preprocessing process of the first stage is completed.
The second stage specifically comprises the following steps:
Step seven: when playing a video, the terminal 520 requests the video resource from the server 510; the server 510 returns the resource, the filming location information, and the time points at which the resource matches the location images to the terminal, which decodes the resource into video frame images, displays them, and displays the corresponding filming location information within them.
Step eight: in response to a favorite operation triggered through the favorite control displayed on a video frame image, the terminal 520 records the corresponding filming location information into the filming location favorites list.
Step nine: in response to a viewing operation triggered through the favorites-list viewing control carried on the video playing interface, the terminal 520 displays the filming location favorites list, in which the filming location information collected by the user while watching videos is recorded.
Step ten: the terminal 520 displays an electronic-card creation control associated with the filming location information; the user touches it to trigger the card creation operation, and the terminal 520 responds by creating an electronic card from the associated information.
Step eleven: the terminal 520 displays a photo creation control associated with the filming location information; the user touches it to trigger the photo creation operation, and the terminal 520 responds by compositing the location image corresponding to the associated information with the user's selfie to create a photo of the user at the filming location.
Step twelve: the terminal 520 displays a travel-guide viewing control associated with the filming location information; the user touches it to trigger the viewing operation, and the terminal 520 fetches the corresponding travel guide from the network and displays it.
Step thirteen: the terminal 520 displays a route generation control; the user selects filming location information through the selection control of the favorites-list interface and touches the route generation control, and the terminal 520 responds by generating and displaying a navigation route through the selected locations.
Steps eight to thirteen are optional; steps ten to thirteen may be performed singly or in any combination, and their order can be set according to actual needs, which this embodiment does not limit.
Based on the above specific implementation manner of the method for displaying video content provided by the embodiment of the present application, the embodiment of the present application further provides a corresponding apparatus, and the apparatus provided by the embodiment of the present application will be described below from the perspective of function modularization.
Referring to FIG. 6, a schematic structural diagram of a video content display apparatus is shown; the apparatus 600 includes:
a first acquisition module 610, configured to acquire a video frame image to be displayed;
a second acquisition module 620, configured to acquire the filming location information corresponding to the video frame image;
and a display module 630, configured to display the video frame image and to display the corresponding filming location information within it.
Optionally, referring to FIG. 7, a schematic structural diagram of a video content display apparatus according to an embodiment of the present application, on the basis of the structure shown in FIG. 6 the apparatus 600 further includes:
a matching module 640, configured to match, for each video resource in the video resource database, each frame image of the resource against the location images in the video scene information database, the database storing video scene information that comprises a video name, the filming location information corresponding to the video, and a location image;
and an association module 650, configured to associate and record the playback time point of a video frame image with the filming location information when the frame image matches a location image.
Optionally, referring to FIG. 8, a schematic structural diagram of a video content display apparatus according to an embodiment of the present application, on the basis of the structure shown in FIG. 7 the apparatus 600 further includes:
a search module 660, configured to search the network, through a search engine, for web pages carrying the filming-location keyword;
a crawling module 670, configured to crawl video scene information from those web pages, the information comprising a video name, the filming location information corresponding to the video, and a location image;
and a storage module 680, configured to store the crawled video scene information in the video scene information database.
Optionally, the display module 630 is further configured to:
and displaying a collection control on the video frame image, wherein the collection control is used for recording the view finding place information corresponding to the video frame image into a view finding place collection list during touch control.
Optionally, the display module 630 is further configured to:
and displaying a view finding place favorite list in response to a viewing operation triggered by a view finding place favorite list viewing control carried on the video playing interface by a user, wherein the view finding place favorite list records view finding place information collected by the user when the user views videos.
Optionally, the display module 630 is further configured to:
and displaying an electronic card manufacturing control associated with the framing place information, wherein the electronic card manufacturing control is used for manufacturing an electronic card according to the associated framing place information during touch control.
Optionally, the display module 630 is further configured to:
and displaying a photo making control associated with the view finding information, wherein the photo making control is used for making a photo according to the view finding image corresponding to the associated view finding information and the image specified by the user during touch control.
Optionally, the display module 630 is further configured to:
and displaying a travel strategy viewing control associated with the framing place information, wherein the travel strategy viewing control is used for displaying the travel strategy corresponding to the framing place information during touch control.
Optionally, the display module 630 is further configured to:
and displaying a route generation control, wherein the route generation control is used for generating and displaying a navigation route according to the framing information during touch control.
Based on the method and the device for displaying the video content provided by the embodiment of the present application, an embodiment of the present application further provides an apparatus, and the apparatus provided by the embodiment of the present application is introduced from the perspective of hardware materialization.
An embodiment of the present application provides a video content display device, which may be a terminal. As shown in FIG. 9, for ease of description only the parts relevant to this embodiment are shown; for undisclosed technical details, refer to the method part of the embodiments. The terminal may be any terminal device, including a mobile phone, a tablet computer, a Personal Digital Assistant (PDA), a Point-of-Sale (POS) terminal, a vehicle-mounted computer, and so on; a mobile phone is taken as the example below:
fig. 9 is a block diagram illustrating a partial structure of a mobile phone related to a terminal according to an embodiment of the present disclosure. Referring to fig. 9, the handset includes: radio Frequency (RF) circuit 910, memory 920, input unit 930, display unit 940, sensor 950, audio circuit 960, wireless fidelity (WiFi) module 970, processor 980, and power supply 990. Those skilled in the art will appreciate that the handset configuration shown in fig. 9 is not intended to be limiting and may include more or fewer components than shown, or some components may be combined, or a different arrangement of components.
The following describes each component of the mobile phone in detail with reference to fig. 9:
the RF circuit 910 may be used for receiving and transmitting signals during a message transmission or call, and in particular, for receiving downlink information of a base station and then processing the received downlink information to the processor 980; in addition, the data for designing uplink is transmitted to the base station. In general, RF circuit 910 includes, but is not limited to, an antenna, at least one Amplifier, a transceiver, a coupler, a Low Noise Amplifier (Low Noise Amplifier; LNA), a duplexer, and the like. In addition, RF circuit 910 may also communicate with networks and other devices via wireless communication. The wireless communication may use any communication standard or protocol, including but not limited to Global System for Mobile communications (GSM), general Packet Radio Service (GPRS), code Division Multiple Access (CDMA), wideband Code Division Multiple Access (WCDMA), long Term Evolution (LTE), e-mail, short message Service (Short SMS), and so on.
The memory 920 may be used to store software programs and modules, and the processor 980 performs the various functional applications and data processing of the mobile phone by running the software programs and modules stored in the memory 920. The memory 920 may mainly include a program storage area, which stores an operating system and the application programs required by at least one function (such as sound playing or image playing), and a data storage area, which stores data created according to the use of the phone (such as audio data or a phonebook). Further, the memory 920 may include high-speed random access memory and may also include non-volatile memory, such as at least one magnetic disk storage device, flash memory device, or other non-volatile solid-state storage device.
The input unit 930 may be used to receive input numeric or character information and generate key signal inputs related to user settings and function control of the cellular phone. Specifically, the input unit 930 may include a touch panel 931 and other input devices 932. The touch panel 931, also referred to as a touch screen, may collect a touch operation performed by a user on or near the touch panel 931 (e.g., a user's operation on or near the touch panel 931 using a finger, a stylus, or any other suitable object or accessory), and drive a corresponding connection device according to a preset program. Alternatively, the touch panel 931 may include two parts, a touch detection device and a touch controller. The touch detection device detects the touch direction of a user, detects a signal brought by touch operation and transmits the signal to the touch controller; the touch controller receives touch information from the touch sensing device, converts the touch information into touch point coordinates, sends the touch point coordinates to the processor 980, and can receive and execute commands sent by the processor 980. In addition, the touch panel 931 may be implemented by various types, such as a resistive type, a capacitive type, an infrared ray, and a surface acoustic wave. The input unit 930 may include other input devices 932 in addition to the touch panel 931. In particular, other input devices 932 may include, but are not limited to, one or more of a physical keyboard, function keys (e.g., volume control keys, switch keys, etc.), a trackball, a mouse, a joystick, and the like.
The display unit 940 may be used to display information input by the user or information provided to the user and various menus of the mobile phone. The Display unit 940 may include a Display panel 941, and optionally, the Display panel 941 may be configured by using a Liquid Crystal Display (LCD), an Organic Light-Emitting Diode (OLED), and the like. Further, the touch panel 931 may cover the display panel 941, and when the touch panel 931 detects a touch operation on or near the touch panel 931, the touch panel transmits the touch operation to the processor 980 to determine the type of the touch event, and then the processor 980 provides a corresponding visual output on the display panel 941 according to the type of the touch event. Although in fig. 9, the touch panel 931 and the display panel 941 are two independent components to implement the input and output functions of the mobile phone, in some embodiments, the touch panel 931 and the display panel 941 may be integrated to implement the input and output functions of the mobile phone.
The handset may also include at least one sensor 950, such as a light sensor, motion sensor, and other sensors. Specifically, the light sensor may include an ambient light sensor that adjusts the brightness of the display panel 941 according to the brightness of ambient light, and a proximity sensor that turns off the display panel 941 and/or backlight when the mobile phone is moved to the ear. As one of the motion sensors, the accelerometer sensor can detect the magnitude of acceleration in each direction (generally, three axes), can detect the magnitude and direction of gravity when stationary, and can be used for applications of recognizing the gesture of the mobile phone (such as horizontal and vertical screen switching, related games, magnetometer gesture calibration), vibration recognition related functions (such as pedometer and tapping), and the like; as for other sensors such as a gyroscope, a barometer, a hygrometer, a thermometer, and an infrared sensor, which can be configured on the mobile phone, further description is omitted here.
Audio circuitry 960, speaker 961, microphone 962 may provide an audio interface between a user and a cell phone. The audio circuit 960 may transmit the electrical signal converted from the received audio data to the speaker 961, and convert the electrical signal into a sound signal for output by the speaker 961; on the other hand, the microphone 962 converts the collected sound signal into an electrical signal, converts the electrical signal into audio data after being received by the audio circuit 960, and outputs the audio data to the processor 980 for processing, and then transmits the audio data to, for example, another mobile phone through the RF circuit 910, or outputs the audio data to the memory 920 for further processing.
WiFi belongs to short-distance wireless transmission technology, and the mobile phone can help a user to receive and send e-mails, browse webpages, access streaming media and the like through the WiFi module 970, and provides wireless broadband Internet access for the user. Although fig. 9 shows the WiFi module 970, it is understood that it does not belong to the essential constitution of the handset, and can be omitted entirely as needed within the scope not changing the essence of the invention.
The processor 980 is a control center of the mobile phone, connects various parts of the entire mobile phone using various interfaces and lines, and performs various functions of the mobile phone and processes data by operating or executing software programs and/or modules stored in the memory 920 and calling data stored in the memory 920, thereby integrally monitoring the mobile phone. Alternatively, processor 980 may include one or more processing units; preferably, the processor 980 may integrate an application processor, which primarily handles operating systems, user interfaces, applications, etc., and a modem processor, which primarily handles wireless communications. It will be appreciated that the modem processor described above may not be integrated into the processor 980.
The handset also includes a power supply 990 (such as a battery) for supplying power to the various components. Preferably, the power supply is logically connected to the processor 980 through a power management system, so that charging, discharging, and power consumption are managed through that system.
Although not shown, the mobile phone may further include a camera, a Bluetooth module, and the like, which are not described here.
In the embodiment of the present application, the processor 980 included in the terminal further has the following functions, illustrated by the sketch after this list:
acquiring a video frame image to be displayed;
acquiring framing place information corresponding to the video frame image;
and displaying the video frame image, and displaying the framing place information corresponding to the video frame image in the video frame image.
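For illustration only, the following minimal Python sketch shows how these three steps could fit together on the terminal side. Everything in it is an assumption made for the sketch (the FramingPlaceInfo structure, the lookup by play time point, the string stand-in for rendering), not the patent's implementation or any real player API.

# Minimal illustrative sketch of the terminal-side flow; all names are hypothetical.
from dataclasses import dataclass
from typing import List, Optional

@dataclass
class FramingPlaceInfo:
    place_name: str    # framing place (filming location) name
    start_ms: int      # play time point at which the frame matched
    duration_ms: int   # preset display duration, to avoid long display

def lookup_framing_place(play_ms: int,
                         index: List[FramingPlaceInfo]) -> Optional[FramingPlaceInfo]:
    # Step 2: acquire the framing place information active at this time point.
    for info in index:
        if info.start_ms <= play_ms < info.start_ms + info.duration_ms:
            return info
    return None

def display_frame(play_ms: int, index: List[FramingPlaceInfo]) -> str:
    # Steps 1 and 3: display the frame, overlaying the place info when present.
    info = lookup_framing_place(play_ms, index)
    if info is None:
        return f"[{play_ms} ms] frame displayed"
    return f"[{play_ms} ms] frame displayed, caption: {info.place_name}"

if __name__ == "__main__":
    index = [FramingPlaceInfo("Example Old Town", start_ms=12_000, duration_ms=3_000)]
    for t in (11_000, 12_500, 16_000):
        print(display_frame(t, index))

In a real player the caption would be drawn over the decoded frame rather than returned as a string; the duration_ms field models the preset display duration described in the claims below.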
Optionally, the processor 980 is further configured to execute the steps of any implementation of the video content display method provided in the embodiments of the present application.
An embodiment of the present application further provides a computer-readable storage medium for storing a computer program, where the computer program is used to execute any implementation of the video content display method described in the foregoing embodiments.
Embodiments of the present application further provide a computer program product including instructions which, when run on a computer, cause the computer to perform any implementation of the video content display method according to the foregoing embodiments.
It is clear to those skilled in the art that, for convenience and brevity of description, the specific working processes of the systems, apparatuses, and units described above may be found in the corresponding processes of the foregoing method embodiments and are not repeated here.
In the several embodiments provided in the present application, it should be understood that the disclosed system, apparatus, and method may be implemented in other ways. For example, the apparatus embodiments described above are merely illustrative: the division into units is only one kind of logical division, and other divisions are possible in practice; multiple units or components may be combined or integrated into another system, or some features may be omitted or not executed. In addition, the couplings, direct couplings, or communication connections shown or discussed may be indirect couplings or communication connections through interfaces, devices, or units, and may be electrical, mechanical, or of another form.
The units described as separate parts may or may not be physically separate, and the parts shown as units may or may not be physical units; they may be located in one place or distributed over a plurality of network units. Some or all of the units may be selected according to actual needs to achieve the purpose of the solution of this embodiment.
In addition, the functional units in the embodiments of the present application may be integrated into one processing unit, each unit may exist alone physically, or two or more units may be integrated into one unit. The integrated unit may be implemented in the form of hardware or in the form of a software functional unit.
If the integrated unit is implemented in the form of a software functional unit and sold or used as a stand-alone product, it may be stored in a computer-readable storage medium. Based on this understanding, the part of the technical solution of the present application that in essence contributes over the prior art, or all or part of the technical solution, may be embodied in the form of a software product. The software product is stored in a storage medium and includes instructions for causing a computer device (which may be a personal computer, a server, or a network device) to execute all or part of the steps of the methods of the embodiments of the present application. The aforementioned storage medium includes any medium that can store program code, such as a USB flash drive, a removable hard disk, a Read-Only Memory (ROM), a Random Access Memory (RAM), a magnetic disk, or an optical disc.
It should be understood that, in the present application, "at least one" means one or more, and "a plurality" means two or more. "And/or" describes an association between associated objects and indicates that three relationships are possible; for example, "A and/or B" may mean: only A, only B, or both A and B, where A and B may each be singular or plural. The character "/" generally indicates an "or" relationship between the objects before and after it. "At least one of the following" and similar expressions refer to any combination of the listed items, including any combination of single or plural items. For example, "at least one of a, b, or c" may mean: a, b, c, "a and b", "a and c", "b and c", or "a and b and c", where a, b, and c may each be singular or plural.
The above embodiments are intended only to illustrate the technical solutions of the present application, not to limit them. Although the present application has been described in detail with reference to the foregoing embodiments, those of ordinary skill in the art should understand that the technical solutions described in the foregoing embodiments may still be modified, or some of their technical features may be replaced by equivalents, without departing from the spirit and scope of the corresponding technical solutions of the embodiments of the present application.

Claims (6)

1. A method for displaying video content, comprising:
searching, by using a search engine and with "framing place" (i.e., filming location) as the keyword, for webpages in the network that carry the keyword;
crawling <img> tags in the webpages through a crawler tool to obtain framing place images, crawling text enclosed in title marks (《》) or quotation marks in the webpages to obtain a video name, and crawling place names in the webpages to obtain framing place information corresponding to the video;
sending the video name, the framing place information, and the framing place images to a server, so that the server generates video scene information according to the video name, the framing place information, and the framing place images, and stores the video scene information in a video scene information database;
for each video resource in a video resource database, matching each frame image in the video resource against the framing place images in the video scene information database during encoding of the video resource; when a video frame image matches a framing place image, locating the time point of the matched video frame image, recording the association between the playing time point of the video frame image and the framing place information, and establishing an association relationship between the video resource and the framing place information;
cyclically reading data from the management platform, and marking, for all videos in the video resource database, the time points at which they match the framing place images;
when a video is played, requesting the video resource from the server, so that the server returns the video resource, the framing place information, and the time points at which the video resource matches the framing place images;
decoding the video resource to obtain a video frame image to be displayed;
acquiring framing place information corresponding to the video frame image;
displaying the video frame image, and displaying the framing place information corresponding to the video frame image in the video frame image, wherein, when displayed, the framing place information fades in and out along with the corresponding video frame image, or gradually grows larger in the image and then disappears; and when corresponding framing place information exists for a plurality of consecutive frames and is the same information, displaying the framing place information only within a preset duration, so as to avoid displaying it for too long;
calling up, in response to a mouse hover or gesture slide operation triggered by a user, a favorite control, and displaying the favorite control on the video frame image, wherein the favorite control is used, when touched, to record the framing place information corresponding to the video frame image into a framing place favorites list, and the favorite control appears and disappears in synchronization with the framing place information;
displaying the framing place favorites list in response to a viewing operation triggered by the user on a favorites-list viewing control carried on the video playing interface, wherein the framing place favorites list records the framing place information collected by the user while watching videos;
filtering and/or sorting the collected framing place information by time and distance factors, in response to the user triggering a filtering and/or sorting control in the framing place favorites list;
displaying an electronic card creation control associated with the framing place information, wherein the electronic card creation control is used, when touched, to create an electronic card from the framing place image corresponding to the associated framing place information;
and displaying a photo creation control associated with the framing place information, wherein the photo creation control is used, when touched, to composite the framing place image corresponding to the associated framing place information with a user-specified image to create a photo.
2. The method of claim 1, further comprising:
and displaying a travel guide viewing control associated with the framing place information, wherein the travel guide viewing control is used, when touched, to display the travel guide corresponding to the framing place information.
3. The method of claim 1, further comprising:
and displaying a route generation control, wherein the route generation control is used, when touched, to generate and display a navigation route according to the framing place information.
4. A display device for video content, comprising:
the matching module is used for searching, by using a search engine and with "framing place" as the keyword, for webpages in the network that carry the keyword; crawling <img> tags in the webpages through a crawler tool to obtain framing place images, crawling text enclosed in title marks or quotation marks in the webpages to obtain a video name, and crawling place names in the webpages to obtain framing place information corresponding to the video; sending the video name, the framing place information, and the framing place images to a server, so that the server generates video scene information according to them and stores the video scene information in a video scene information database; and, for each video resource in a video resource database, matching each frame image in the video resource against the framing place images in the video scene information database during encoding of the video resource;
the association module is used for locating, when a video frame image matches a framing place image, the time point of the matched video frame image, recording the association between the playing time point of the video frame image and the framing place information, and establishing an association relationship between the video resource and the framing place information; and for cyclically reading data from the management platform and marking, for all videos in the video resource database, the time points at which they match the framing place images;
the device is further used for requesting the video resource from the server when a video is played, so that the server returns the video resource, the framing place information, and the time points at which the video resource matches the framing place images;
the first acquisition module is used for obtaining a video frame image to be displayed by decoding the video resource;
the second acquisition module is used for acquiring the framing place information corresponding to the video frame image;
the display module is used for displaying the video frame image and displaying the framing place information corresponding to the video frame image in the video frame image, wherein, when displayed, the framing place information fades in and out along with the corresponding video frame image, or gradually grows larger in the image and then disappears; and when corresponding framing place information exists for a plurality of consecutive frames and is the same information, the framing place information is displayed only within a preset duration, so as to avoid displaying it for too long;
calling up, in response to a mouse hover or gesture slide operation triggered by a user, a favorite control, and displaying the favorite control on the video frame image, wherein the favorite control is used, when touched, to record the framing place information corresponding to the video frame image into a framing place favorites list, and the favorite control appears and disappears in synchronization with the framing place information;
displaying the framing place favorites list in response to a viewing operation triggered by the user on a favorites-list viewing control carried on the video playing interface, wherein the framing place favorites list records the framing place information collected by the user while watching videos;
filtering and/or sorting the collected framing place information by time and distance factors, in response to the user triggering a filtering and/or sorting control in the framing place favorites list;
displaying an electronic card creation control associated with the framing place information, wherein the electronic card creation control is used, when touched, to create an electronic card from the framing place image corresponding to the associated framing place information;
and displaying a photo creation control associated with the framing place information, wherein the photo creation control is used, when touched, to composite the framing place image corresponding to the associated framing place information with a user-specified image to create a photo.
5. A terminal device, comprising a processor and a memory:
the memory is used for storing a computer program;
the processor is configured to perform the method of any one of claims 1 to 3 in accordance with the computer program.
6. A computer-readable storage medium, characterized in that the computer-readable storage medium is used to store a computer program for performing the method of any one of claims 1 to 3.
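For illustration only, the following minimal Python sketch outlines the data-collection and frame-matching pipeline recited in claim 1. It is a sketch under stated assumptions: the regular expressions stand in for a real crawler tool and HTML parser, the Hamming-distance test on perceptual hashes stands in for whatever image matching the encoder actually performs, and every name (VideoSceneInfo, extract_scene_info, annotate_video) is hypothetical.

# Illustrative sketch of the claim 1 pipeline; simplified stand-ins throughout.
import re
from dataclasses import dataclass
from typing import Dict, List, Tuple

IMG_TAG = re.compile(r'<img[^>]+src="([^"]+)"', re.IGNORECASE)
TITLE_MARKS = re.compile(r'《([^》]+)》|"([^"]+)"')  # title marks or quotation marks

@dataclass(frozen=True)
class VideoSceneInfo:
    video_name: str   # crawled from title-marked or quoted text
    place_name: str   # framing place information
    image_url: str    # framing place image

def extract_scene_info(html: str, known_places: List[str]) -> List[VideoSceneInfo]:
    # Crawl one webpage: <img> tags yield framing place images, marked text
    # yields a video name, and known place names yield framing place information.
    images = IMG_TAG.findall(html)
    names = [a or b for a, b in TITLE_MARKS.findall(html)]
    places = [p for p in known_places if p in html]
    return [VideoSceneInfo(n, p, u) for n in names for p in places for u in images]

def frames_match(frame_hash: int, image_hash: int, threshold: int = 2) -> bool:
    # Placeholder similarity test: Hamming distance between perceptual hashes.
    return bin(frame_hash ^ image_hash).count("1") <= threshold

def annotate_video(frame_hashes: List[Tuple[int, int]],   # (play_ms, hash) pairs
                   scene_db: Dict[int, VideoSceneInfo]    # image hash -> scene info
                   ) -> List[Tuple[int, VideoSceneInfo]]:
    # During encoding, match each frame against the framing place images and
    # record the (play time point, framing place information) association.
    matches = []
    for play_ms, frame_hash in frame_hashes:
        for image_hash, info in scene_db.items():
            if frames_match(frame_hash, image_hash):
                matches.append((play_ms, info))
                break
    return matches

if __name__ == "__main__":
    page = '<p>《Example Movie》 was shot in Old Town.</p><img src="http://example.com/a.jpg">'
    print(extract_scene_info(page, ["Old Town"]))

A production system would, as the claims describe, persist the VideoSceneInfo records in the video scene information database and store the matched time points alongside the video resource, so that the server can return both to the terminal at playback time.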
CN201910451804.6A 2019-05-28 2019-05-28 Video content display method, device, equipment and medium Active CN110166815B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201910451804.6A CN110166815B (en) 2019-05-28 2019-05-28 Video content display method, device, equipment and medium

Publications (2)

Publication Number Publication Date
CN110166815A CN110166815A (en) 2019-08-23
CN110166815B true CN110166815B (en) 2023-03-10

Family

ID=67629692

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201910451804.6A Active CN110166815B (en) 2019-05-28 2019-05-28 Video content display method, device, equipment and medium

Country Status (1)

Country Link
CN (1) CN110166815B (en)

Families Citing this family (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113704553B (en) * 2020-05-22 2024-04-16 上海哔哩哔哩科技有限公司 Video view finding place pushing method and system
CN112950951B (en) * 2021-01-29 2023-05-02 浙江大华技术股份有限公司 Intelligent information display method, electronic device and storage medium

Citations (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2001339594A (en) * 2000-05-25 2001-12-07 Fujitsu Ltd Image processing system
CN101795314A (en) * 2009-12-23 2010-08-04 惠州Tcl移动通信有限公司 Mobile communication terminal
CN103294804A (en) * 2013-05-30 2013-09-11 佛山电视台南海分台 Method and system for augmenting acquisition and interaction of scenic resort information
CN104123316A (en) * 2013-04-28 2014-10-29 腾讯科技(深圳)有限公司 Resource collection method, device and facility
CN105468679A (en) * 2015-11-13 2016-04-06 中国人民解放军国防科学技术大学 Tourism information processing and plan providing method
WO2017118754A1 (en) * 2016-01-06 2017-07-13 Robert Bosch Gmbh Interactive map informational lens
CN107147924A (en) * 2016-03-01 2017-09-08 腾讯科技(深圳)有限公司 Method for processing video frequency, device and system and data processing method and device
CN108572969A (en) * 2017-03-09 2018-09-25 阿里巴巴集团控股有限公司 The method and device of geography information point recommended information is provided

Family Cites Families (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US8826325B2 (en) * 2012-12-08 2014-09-02 Joao Redol Automated unobtrusive ancilliary information insertion into a video
CN106534734A (en) * 2015-09-11 2017-03-22 腾讯科技(深圳)有限公司 Method and device for playing video and displaying map, and data processing method and system
CN106375870B (en) * 2016-08-31 2019-09-17 北京旷视科技有限公司 Video labeling method and device
US11416714B2 (en) * 2017-03-24 2022-08-16 Revealit Corporation Method, system, and apparatus for identifying and revealing selected objects from video



Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant