CN111859159A - Information pushing method, video processing method and equipment - Google Patents

Information pushing method, video processing method and equipment

Info

Publication number
CN111859159A
CN111859159A CN202010777528.5A
Authority
CN
China
Prior art keywords
item
information
coordinate
appearing
article
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202010777528.5A
Other languages
Chinese (zh)
Other versions
CN111859159B (en)
Inventor
崔英林
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Shanghai Lianshang Network Technology Co Ltd
Original Assignee
Shanghai Lianshang Network Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Shanghai Lianshang Network Technology Co Ltd filed Critical Shanghai Lianshang Network Technology Co Ltd
Priority to CN202010777528.5A priority Critical patent/CN111859159B/en
Publication of CN111859159A publication Critical patent/CN111859159A/en
Application granted granted Critical
Publication of CN111859159B publication Critical patent/CN111859159B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F16/00 Information retrieval; Database structures therefor; File system structures therefor
    • G06F16/90 Details of database functions independent of the retrieved data types
    • G06F16/95 Retrieval from the web
    • G06F16/953 Querying, e.g. by the use of web search engines
    • G06F16/9535 Search customisation based on user profiles and personalisation
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F16/00 Information retrieval; Database structures therefor; File system structures therefor
    • G06F16/70 Information retrieval of video data; Database structures therefor; File system structures therefor
    • G06F16/73 Querying
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F16/00 Information retrieval; Database structures therefor; File system structures therefor
    • G06F16/70 Information retrieval of video data; Database structures therefor; File system structures therefor
    • G06F16/78 Retrieval characterised by using metadata, e.g. metadata not derived from the content or metadata generated manually

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Databases & Information Systems (AREA)
  • Data Mining & Analysis (AREA)
  • Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Multimedia (AREA)
  • Computational Linguistics (AREA)
  • Library & Information Science (AREA)
  • Information Retrieval, Db Structures And Fs Structures Therefor (AREA)

Abstract

The embodiments of the present application disclose an information pushing method, a video processing method, and related devices. One specific implementation of the information pushing method includes: performing code stream conversion on video data to obtain a video stream and positioning information for the items appearing in the video stream; in response to receiving an item search instruction from a user, determining the positioning information of the searched item; determining a playing time point based on the positioning information of the searched item; jumping to the video frame corresponding to the playing time point and starting playback; and querying push information for the searched item and presenting it. In this implementation, the positioning information of each item is pre-embedded in the video stream. When a user's item search instruction is recognized, playback jumps directly to the image containing the searched item, and its push information is presented automatically. The user can thus review the scene in which the item appeared, obtain detailed information about the item of interest, and conveniently purchase it, saving operation cost for the user.

Description

Information pushing method, video processing method and equipment
Technical Field
The embodiments of the present application relate to the field of computer technology, and in particular to information pushing and video processing methods and devices.
Background
With the rapid development of the internet, video applications support increasingly diverse functions, such as live broadcasting and video on demand, attracting more users who watch for longer periods. Various items, such as clothing, accessories, and food, often appear in videos. If a user wants to review an item of interest that appeared earlier in a video, the video must be replayed from the beginning until the item is seen again. And if the user wants detailed information about the item, the user has to move the video application to the background, open a search or shopping application, and type in the item's name.
Disclosure of Invention
The embodiment of the application provides an information pushing method, a video processing method and equipment.
In a first aspect, an embodiment of the present application provides an information pushing method, including: performing code stream conversion on video data to obtain a video stream and positioning information for the items appearing in the video stream; in response to receiving an item search instruction from a user, determining the positioning information of the searched item; determining a playing time point based on the positioning information of the searched item; jumping to the video frame corresponding to the playing time point and starting playback; and querying push information for the searched item and presenting it.
In some embodiments, receiving an item search instruction from a user comprises: collecting voice information of a user; recognizing the voice information, and determining the name of an article contained in the voice information; and if the item name is matched with the appearing item of the video stream, determining the matched appearing item as a search item.
In some embodiments, determining the playing time point based on the positioning information of the searched item includes: extracting the appearance video frame and the appearance time point of the searched item from its positioning information; and determining the start time point of the slice to which the appearance video frame belongs as the playing time point. Jumping to the video frame corresponding to the playing time point and starting playback then includes: jumping to the playing time point and fast-forwarding the slice to the appearance time point.
In some embodiments, extracting the appearance video frame and the appearance time point of the searched item from its positioning information includes: querying an item name table for the item identifier corresponding to the name of the searched item, where the item name table stores the correspondence between item identifiers and item names; and querying a positioning information table for the appearance video frame and the appearance time point corresponding to that item identifier, where the positioning information table stores the correspondence between item identifiers and positioning information.
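The two-table lookup described above can be sketched in a few lines. This is a minimal illustration only: the table names, item identifiers, and field names below are invented for the example and are not specified by the patent.

```python
# Item name table: item name -> item identifier (illustrative values).
ITEM_NAME_TABLE = {
    "brand A watch": "item-001",
    "brand B clothing": "item-002",
}

# Positioning information table: item identifier -> positioning information.
POSITIONING_INFO_TABLE = {
    "item-001": {"appearance_frame": 1502, "appearance_time_s": 62.5},
    "item-002": {"appearance_frame": 3240, "appearance_time_s": 135.0},
}

def locate_item(item_name):
    """Return (appearance_frame, appearance_time_s) for a searched item,
    or None if the name is not in the item name table."""
    item_id = ITEM_NAME_TABLE.get(item_name)
    if item_id is None:
        return None
    info = POSITIONING_INFO_TABLE[item_id]
    return info["appearance_frame"], info["appearance_time_s"]
```

The indirection through the item identifier lets several display names map to the same item without duplicating positioning records.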
In some embodiments, the method further comprises: if the search item appears in multiple slices of the video stream, a jump is made to the slice with the earliest start time, or multiple slices are presented in a list and a jump is made to the slice selected by the user.
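When the searched item appears in several slices, the selection rule above (jump to the earliest slice unless the user picks one from a list) can be sketched as follows; the tuple layout is an assumption made for the example.

```python
def slice_to_jump(slices, selected_id=None):
    """Pick the slice to jump to.

    slices: list of (slice_id, start_time_s) tuples in which the
    searched item appears. If the user selected a slice from the
    presented list, jump to it; otherwise jump to the slice with
    the earliest start time."""
    if selected_id is not None:
        for slice_id, start_time_s in slices:
            if slice_id == selected_id:
                return slice_id, start_time_s
    return min(slices, key=lambda s: s[1])
```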
In some embodiments, the positioning information comprises coordinate information; and presenting the push information, including: determining a display area in the appearing video frame based on the coordinate information of the searched article, and displaying the push information of the searched article in the display area.
In some embodiments, the coordinate information is a percentage coordinate; and determining a display area in the appearing video frame based on the coordinate information of the searched item, including: calculating dot matrix coordinates of the searched articles based on the resolution of the playing equipment and the percentage coordinates of the searched articles; and setting the area corresponding to the dot matrix coordinates as a display area.
In some embodiments, calculating the dot matrix coordinates of the searched item based on the resolution of the playback device and the item's percentage coordinates includes: if the coordinate system of the percentage coordinates is the same as the screen coordinate system of the playback device, multiplying the horizontal and vertical pixel values of the playback device's resolution by the corresponding horizontal and vertical values of the percentage coordinates to obtain the dot matrix coordinates of the searched item.
In some embodiments, the calculation further includes: if the coordinate system of the percentage coordinates differs from the screen coordinate system of the playback device, first converting the percentage coordinates into the screen coordinate system to obtain converted percentage coordinates, and then multiplying the horizontal and vertical pixel values of the playback device's resolution by the corresponding converted coordinate values to obtain the dot matrix coordinates of the searched item.
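The percentage-to-pixel conversion above is a plain multiplication, plus an optional coordinate-system conversion. The sketch below assumes, purely for illustration, that the only difference between coordinate systems is the vertical origin (bottom-left vs. top-left); the patent does not specify which conversion is needed.

```python
def percent_to_pixel(percent_xy, resolution, flip_y=False):
    """Convert a percentage coordinate (x, y, each in [0, 1]) into a
    pixel ("dot matrix") coordinate on a device with the given
    (width, height) resolution.

    flip_y=True models one hypothetical coordinate-system conversion:
    the percentage coordinate's origin is bottom-left while the screen
    origin is top-left, so y becomes 1 - y before scaling."""
    x, y = percent_xy
    if flip_y:
        y = 1.0 - y
    width, height = resolution
    return round(width * x), round(height * y)
```

A coordinate stored as `(0.5, 0.25)` therefore lands on different pixels on a 1080p screen and a 4K screen, which is exactly why the patent stores percentages rather than device pixels.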
In a second aspect, an embodiment of the present application provides a video processing method, including: performing item identification on a video stream to determine the items appearing in it; acquiring positioning information for each appearing item; and adding the positioning information of the appearing items to the corresponding video frame protocol to generate video data.
In some embodiments, obtaining the positioning information of an appearing item includes: performing position recognition on the video stream to determine the coordinate information of the appearing item; and adding that coordinate information to the item's positioning information.
In some embodiments, performing position recognition on the video stream and determining the coordinate information of the appearing item includes: trial-playing the video stream on a trial-broadcast device; performing position recognition on the video stream to obtain the dot matrix coordinates of the appearing item; and determining the coordinate information of the appearing item based on those dot matrix coordinates.
In some embodiments, determining coordinate information of the appearing item based on the lattice coordinates of the appearing item includes: and correspondingly dividing the horizontal coordinate value and the vertical coordinate value of the dot matrix coordinate of the appearing article with the horizontal pixel value and the vertical pixel value of the resolution of the trial-broadcast equipment to obtain the percentage coordinate of the appearing article.
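The normalization step above is the inverse of the playback-side multiplication: divide the pixel coordinate found on the trial-broadcast device by that device's resolution. A minimal sketch, with illustrative names:

```python
def pixel_to_percent(pixel_xy, resolution):
    """Divide the dot matrix (pixel) coordinate of an appearing item
    by the trial-broadcast device's (width, height) resolution to get
    a device-independent percentage coordinate in [0, 1] x [0, 1]."""
    px, py = pixel_xy
    width, height = resolution
    return px / width, py / height
```

Storing the percentage rather than the raw pixel position is what allows the same embedded positioning information to be rendered correctly on playback devices with different resolutions.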
In some embodiments, obtaining location information of the emerging item includes: determining a slice start time point of an appearance video frame in which an article appears; the slice start time point at which the article appears is added to the location information of the appearing article.
In some embodiments, adding the positioning information of the appearing items to the corresponding video frame protocol includes: extending the network abstraction layer information of the corresponding video frame protocol based on the positioning information of the appearing items.
In a third aspect, an embodiment of the present application provides an information pushing apparatus, including: a code stream conversion unit configured to perform code stream conversion on video data to obtain a video stream and positioning information for the items appearing in it; a slice determination unit configured to determine the positioning information of a searched item in response to receiving an item search instruction from a user; a time determination unit configured to determine a playing time point based on the positioning information of the searched item; a playback jump unit configured to jump to the video frame corresponding to the playing time point and start playback; and an information presentation unit configured to query the push information of the searched item and present it.
In some embodiments, the slice determination unit is further configured to: collecting voice information of a user; recognizing the voice information, and determining the name of an article contained in the voice information; and if the item name is matched with the appearing item of the video stream, determining the matched appearing item as a search item.
In some embodiments, the time determination unit comprises: an extracting subunit configured to extract an appearance video frame and an appearance time point of a search item from positioning information of the search item; a determination subunit configured to determine a slice start time point at which the video frame appears, as a play time point; and the jump play unit is further configured to: and jumping to the playing time point to play the slice quickly to the appearance time point.
In some embodiments, the extraction subunit is further configured to: inquiring an article identifier corresponding to the article name of the searched article in an article name table, wherein the article name table is used for storing the corresponding relation between the article identifier and the article name; and inquiring the appearance video frame and the appearance time point corresponding to the item identification of the searched item in a positioning information table, wherein the positioning information table is used for storing the corresponding relation between the item identification and the positioning information.
In some embodiments, the apparatus further comprises: and a slice skipping unit configured to skip to a slice with an earliest start time or to list the plurality of slices and skip to a slice selected by a user if the search item appears in the plurality of slices of the video stream.
In some embodiments, the positioning information comprises coordinate information; and the information presentation unit includes: and the information presentation subunit is configured to determine a display area in the appearing video frame based on the coordinate information of the searched article, and display the push information of the searched article in the display area.
In some embodiments, the coordinate information is a percentage coordinate; and the information presentation subunit comprises: a calculation module configured to calculate dot matrix coordinates of the search item based on a resolution of the playback device and the percentage coordinates of the search item; and the setting module is configured to set the area corresponding to the dot matrix coordinates as a display area.
In some embodiments, the computing module is further configured to: and if the coordinate system of the percentage coordinate is the same as the screen coordinate system of the playing equipment, correspondingly multiplying the horizontal pixel value and the vertical pixel value of the resolution of the playing equipment by the horizontal coordinate value and the vertical coordinate value of the percentage coordinate of the searched article to obtain the dot matrix coordinate of the searched article.
In some embodiments, the computing module is further configured to: if the coordinate system of the percentage coordinate is different from the screen coordinate system of the playing equipment, converting the coordinate system of the percentage coordinate to obtain a conversion percentage coordinate under the screen coordinate system; and correspondingly multiplying the horizontal pixel value and the vertical pixel value of the resolution of the playing equipment with the horizontal coordinate value and the vertical coordinate value of the conversion percentage coordinate of the searched article to obtain the dot matrix coordinate of the searched article.
In a fourth aspect, an embodiment of the present application provides a video processing apparatus, including: a determining unit configured to perform item identification on a video stream and determine the items appearing in it; an acquisition unit configured to acquire positioning information of the appearing items; and an adding unit configured to add the positioning information of the appearing items to the corresponding video frame protocol to generate video data.
In some embodiments, the acquisition unit comprises: a determining subunit configured to perform position recognition on the video stream and determine the coordinate information of the appearing item; and an adding subunit configured to add the coordinate information of the appearing item to its positioning information.
In some embodiments, the determining subunit comprises: a trial-broadcast module configured to trial-play the video stream on a trial-broadcast device; a recognition module configured to perform position recognition on the video stream to obtain the dot matrix coordinates of the appearing item; and a determining module configured to determine the coordinate information of the appearing item based on those dot matrix coordinates.
In some embodiments, the determination module is further configured to: and correspondingly dividing the horizontal coordinate value and the vertical coordinate value of the dot matrix coordinate of the appearing article with the horizontal pixel value and the vertical pixel value of the resolution of the trial-broadcast equipment to obtain the percentage coordinate of the appearing article.
In some embodiments, the obtaining unit is further configured to: determining a slice start time point of an appearance video frame in which an article appears; the slice start time point at which the article appears is added to the location information of the appearing article.
In some embodiments, the adding unit is further configured to: extend the network abstraction layer information of the corresponding video frame protocol based on the positioning information of the appearing items.
In a fifth aspect, an embodiment of the present application provides a computer device, including: one or more processors; and a storage device having one or more programs stored thereon. The one or more programs, when executed by the one or more processors, cause the one or more processors to implement a method as described in any implementation of the first aspect or any implementation of the second aspect.
In a sixth aspect, the present application provides a computer-readable medium, on which a computer program is stored, which when executed by a processor implements the method described in any of the implementation manners in the first aspect or implements the method described in any of the implementation manners in the second aspect.
According to the information pushing and video processing methods and devices provided by the embodiments of the present application, code stream conversion is first performed on video data to obtain a video stream and positioning information for the items appearing in it; then, in response to receiving an item search instruction from a user, the positioning information of the searched item is determined; next, a playing time point is determined based on that positioning information; and finally, playback jumps to the video frame corresponding to the playing time point, while the push information of the searched item is queried and presented. Because the positioning information of each item is pre-embedded, the moment a user's item search instruction is recognized, playback jumps directly to the image of the searched item and its push information is presented automatically. The user can thus review the scene in which the item appeared, obtain detailed information about the item of interest, and conveniently purchase it, saving operation cost.
Drawings
Other features, objects and advantages of the present application will become more apparent upon reading of the following detailed description of non-limiting embodiments thereof, made with reference to the accompanying drawings in which:
FIG. 1 is an exemplary system architecture to which the present application may be applied;
FIG. 2 is a flow diagram of one embodiment of an information push method according to the present application;
FIG. 3 is a flow diagram of yet another embodiment of an information push method according to the present application;
FIG. 4 is a flow diagram for one embodiment of a video processing method according to the present application;
FIG. 5 is a schematic block diagram of a computer system suitable for use in implementing the computer device of an embodiment of the present application.
Detailed Description
The present application will be described in further detail with reference to the following drawings and examples. It is to be understood that the specific embodiments described herein are merely illustrative of the relevant invention and not restrictive of the invention. It should be noted that, for convenience of description, only the portions related to the related invention are shown in the drawings.
It should be noted that the embodiments and features of the embodiments in the present application may be combined with each other without conflict. The present application will be described in detail below with reference to the embodiments with reference to the attached drawings.
Fig. 1 shows an exemplary system architecture 100 to which embodiments of the information pushing and video processing methods of the present application may be applied.
As shown in fig. 1, devices 101, 102 and network 103 may be included in system architecture 100. Network 103 is the medium used to provide communication links between devices 101, 102. Network 103 may include various connection types, such as wired, wireless communication links, or fiber optic cables, to name a few.
The devices 101 and 102 may be hardware or software that supports network connectivity to provide various network services. When a device is hardware, it can be any of a variety of electronic devices, including but not limited to smartphones, tablets, laptops, desktop computers, and servers; it may be implemented either as a distributed group of devices or as a single device. When a device is software, it can be installed on any of the electronic devices listed above and implemented either as multiple pieces of software or software modules (for example, to provide a distributed service) or as a single piece of software or module. No specific limitation is imposed here.
In practice, a device may provide a respective network service by installing a respective client application or server application. After the device has installed the client application, it may be embodied as a client in network communications. Accordingly, after the server application is installed, it may be embodied as a server in network communications.
As an example, in fig. 1, device 101 is embodied as a client and device 102 is embodied as a server. For example, the device 101 may be a client of a video-class application, and the device 102 may be a server of the video-class application.
It should be noted that the information pushing method and the video processing method provided in the embodiments of the present application may be executed by the devices 101 and 102, respectively. When the device 101 executes the information pushing method, it acts as a playback device. When the device 102 executes the video processing method, it acts as a trial-broadcast device.
It should be understood that the number of networks and devices in fig. 1 is merely illustrative. There may be any number of networks and devices, as desired for implementation.
With continued reference to FIG. 2, a flow 200 of one embodiment of an information push method according to the present application is shown. The information pushing method comprises the following steps:
Step 201, performing code stream conversion on the video data to obtain the video stream and the positioning information of the items appearing in it.
In this embodiment, the execution body of the information pushing method (for example, the device 101 shown in fig. 1) may obtain video data from a background server of the video-class application (for example, the device 102 shown in fig. 1), and perform code stream conversion on the video data to obtain the video stream and the positioning information of the items appearing in it.
The video data may include the video stream and the positioning information for the items appearing in it. The video stream is playable data, including but not limited to a television show, a movie, a live broadcast, or a short video. The positioning information is non-playable data used to locate the appearing items in the video stream, including but not limited to the item name, the item identifier, the identifier of the slice to which the item's appearance video frame belongs, the item's appearance time point, and the slice start time point. An appearing item may be any item shown in the video stream, such as clothing, accessories, or food. The slice to which an appearance video frame belongs comprises a series of consecutive video frames in the video stream, with the item appearing in at least one of them.
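The positioning information enumerated above can be pictured as one record per appearing item. The following sketch is only one possible shape; every field name and value is illustrative, not specified by the patent.

```python
from dataclasses import dataclass

@dataclass
class PositioningInfo:
    """Hypothetical record for the non-playable positioning
    information carried alongside the video stream."""
    item_id: str              # item identifier
    item_name: str            # item name
    slice_id: str             # slice the appearance frame belongs to
    slice_start_s: float      # start time point of that slice
    appearance_time_s: float  # time point at which the item appears
    percent_coord: tuple      # (x, y) percentage coordinate, each in [0, 1]

info = PositioningInfo(
    item_id="item-001",
    item_name="brand A watch",
    slice_id="slice-7",
    slice_start_s=60.0,
    appearance_time_s=62.5,
    percent_coord=(0.4, 0.3),
)
```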
Not every video frame contains an appearing item, and not every appearing item has positioning information. Therefore, code stream encoding is applied only to the video frames containing items that have positioning information: the video frame protocol is modified, and the positioning information is added to the original video frame protocol. Because video data carrying this non-playable data cannot be played directly, it must undergo code stream conversion that separates the playable video stream from the non-playable positioning information. The conversion may use either static or dynamic transcoding.
It should be noted that when the video frame protocol is modified, the modification differs between protocol formats. Taking H.264 as an example, adding the positioning information is supported by extending the NAL (Network Abstraction Layer) information of the video frame protocol. A NAL unit may include a NAL Header, a NAL Extension, and a NAL Payload. The NAL Header may be used to store basic information about the video frame, the NAL Payload to store the binary stream of the video frame, and the NAL Extension to store the positioning information. The NAL Extension is inserted into the video frame without changing the data stored in the NAL Payload, forming a new video frame.
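The insertion described above can be sketched as a byte-level operation: the extension carrying the positioning information is placed between the original header and the untouched payload. This is a schematic illustration of the layout only; real H.264 NAL unit syntax (emulation-prevention bytes, start codes, and so on) is considerably more involved and is not modeled here.

```python
def insert_nal_extension(nal_header: bytes, nal_payload: bytes,
                         positioning_info: bytes) -> bytes:
    """Form a new NAL unit by inserting an extension that carries the
    positioning information between the header and the payload, leaving
    the payload bytes unchanged (illustrative layout only)."""
    return nal_header + positioning_info + nal_payload
```

On the playback side, code stream conversion does the reverse: it strips the extension back out, yielding the playable frame and the non-playable positioning record.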
It should be noted that the playback device generally performs stream conversion while playing the video stream. Therefore, the positioning information of the items appearing in the video stream is obtained progressively during playback, and may be stored in a positioning information table for easy lookup.
Step 202, in response to receiving an item search instruction from a user, determining the positioning information of the searched item.
In this embodiment, the user may issue an item search instruction to the execution body. Upon receiving the instruction, the execution body may determine the positioning information of the searched item.
Generally, the user may issue an item search instruction either during a replay of the video stream or during its first playback.
If the user issues the item search instruction while replaying the video stream, the positioning information of all items appearing in the video stream will already have been stored in the positioning information table during the first playback. Therefore, during a replay, the item searched by the user may appear in a video frame that has already been replayed or in one that has not yet been replayed.
If the user issues the item search instruction during the first playback of the video stream, only the positioning information of items appearing in frames already played will have been stored in the positioning information table. Thus, the searched item can only be one appearing in a video frame that has already been played.
Generally, a user may issue an item search instruction in a variety of ways. A convenient one is to speak the name of the searched item. In that case, the execution body collects the user's voice information, recognizes it, and determines the item name it contains. If the item name matches an item appearing in the video stream, the matched item is taken as the searched item; if not, the execution body continues collecting the user's voice information. Since the positioning information of items appearing in played frames is stored in the positioning information table, that table usually holds entries for several items, and the appearing item that matches the item name in the user's voice information is the searched item. For example, if the user says "watch" and the items in the positioning information table are a brand A watch, brand B clothing, and brand C shoes, only the brand A watch matches "watch" and is therefore the user's searched item.
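The matching step in the "watch" example above can be sketched as follows. The substring match is an assumption made for illustration; the patent only requires that the recognized name match an appearing item, without specifying how.

```python
def match_search_item(spoken_name, appearing_items):
    """Return the appearing item whose name matches the item name
    recognized from the user's voice information, or None, which
    signals that voice collection should continue."""
    for item in appearing_items:
        if spoken_name in item:  # simple substring match (an assumption)
            return item
    return None
```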
Step 203, determining a playing time point based on the positioning information of the searched article.
In this embodiment, the execution subject may determine the playing time point based on the location information of the searched item.
In general, when every video frame is a complete picture, the execution body may use the appearance time point of the search item contained in the positioning information as the playing time point. However, for coding-efficiency reasons, most video frames are not complete pictures. In this case, the start time of the slice containing the video frame in which the search item appears must be used as the playing time point. Within a slice, the first video frame is encoded as a complete picture, and each subsequent frame is encoded based on its changes relative to the previous frame. For example, if the second frame adds an item compared with the first frame, only the added item's information is encoded.
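The choice of playing time point just described can be sketched as follows (the field names of the positioning record are assumptions, not from the patent):

```python
def playing_time_point(record, all_frames_complete):
    """Choose the time to jump to: the item's appearance time point
    when every frame is a complete picture, otherwise the start time
    of the slice containing the frame in which the item appears."""
    if all_frames_complete:
        return record["appearance_time"]
    return record["slice_start_time"]

# A hypothetical positioning record: the item appears at 125 s,
# inside a slice that starts at 120 s.
record = {"appearance_time": 125.0, "slice_start_time": 120.0}
```

Jumping to the slice start guarantees the decoder begins from a complete picture, at the cost of a few seconds of lead-in before the item appears.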
And step 204, jumping to the video frame corresponding to the playing time point and starting playing.
In this embodiment, the execution body may jump to the video frame corresponding to the playing time point and start playing.
Generally, when the playing time point is the appearance time point of the search item, the first video frame jumped to can present the search item. When the playing time point is the start time point of the slice containing the frame in which the search item appears, the first frame jumped to cannot present the search item; the item is presented once playback reaches its appearance time point.
Step 205, inquiring the push information of the searched item, and presenting the push information.
In this embodiment, the execution subject may query push information of a search item and present the push information.
Generally, the execution body may query the push information of the search item in various ways. For example, if push information for a large number of items is stored locally, the push information of the search item may be looked up locally. As another example, if the video application integrates a search or shopping function, a push information acquisition request for the search item may be sent to the video application's background server, which returns the item's push information. As yet another example, such a request may be sent to the background server of a search application or shopping application, which returns the push information of the search item.
The push information may be a link through which the user can browse detailed information about the search item, or a link for purchasing it. In general, the push information may be presented on the currently playing video frame, in particular near the search item. The user can then act on the push information to view the item's details or purchase it.
In some embodiments, in the case that the positioning information includes coordinate information, the execution body may determine a display area in the appearing video frame based on the coordinate information of the search item, and display push information of the search item in the display area. The area corresponding to the coordinate information may be a display area.
The information pushing method provided by this embodiment of the application first performs code stream conversion on video data to obtain a video stream and the positioning information of items appearing in it; then, in response to receiving a user's item search instruction, determines the positioning information of the searched item; next, determines a playing time point based on that positioning information; and finally jumps to the video frame corresponding to the playing time point, starts playing, queries the push information of the searched item, and presents it. By pre-embedding the items' positioning information, the method can, upon recognizing the user's item search instruction, quickly jump to the picture in which the searched item appears and automatically present its push information. The user can thus both review the scene in which the item appears and obtain detailed information about it, making it convenient to purchase an item of interest and saving the user operating effort.
With further reference to fig. 3, shown is a flow 300 that is yet another embodiment of an information push method according to the present application. The information pushing method comprises the following steps:
step 301, performing code stream conversion on the video data to obtain the video stream and the positioning information of the appearing objects of the video stream.
Step 302, in response to receiving an item search instruction from a user, determining location information of a searched item.
In this embodiment, the specific operations of steps 301 and 302 have been described in detail in steps 201 and 202 of the embodiment shown in fig. 2, and are not repeated here.
Step 303, extracting the appearance video frame and the appearance time point of the searched article from the positioning information of the searched article.
In this embodiment, an execution subject of the information push method (for example, the device 101 shown in fig. 1) may extract an appearance video frame and an appearance time point of a search item from positioning information of the search item.
In some embodiments, in the case that the location information includes an item name, an appearance video frame and an appearance time point of the item, the execution body may directly query the appearance video frame and the appearance time point of the search item from the location information table based on the name of the search item. The positioning information table may be configured to store positioning information of an appearing article of a video stream obtained by performing stream conversion on video data.
In some embodiments, where the positioning information does not include an item name, the positioning information table cannot be queried directly with the name of the search item. To facilitate the user's voice queries, an item name table is created. The item name table stores the correspondence between item identifiers and item names; the positioning information table stores the correspondence between item identifiers and positioning information. The two tables are related through the item identifier. Specifically, the execution body may first query the item name table for the item identifier corresponding to the name of the search item, and then query the positioning information table for the appearance video frame and appearance time point corresponding to that identifier. For ease of retrieval, entries for the same item appearing in different slices may be stored in consolidated form, as shown in Tables 1 and 2. Table 1 is a positioning information table and Table 2 is an item name table.
[Table 1: positioning information table — published as an image in the original document]

[Table 2: item name table — published as an image in the original document]
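The two-table lookup described above can be sketched as follows (all table contents and field names here are hypothetical, since the patent's Tables 1 and 2 are published only as images):

```python
# Item name table (Table 2 analogue): item name -> item identifier.
item_name_table = {
    "brand A watch": "item_001",
    "brand C shoes": "item_003",
}

# Positioning information table (Table 1 analogue): item identifier ->
# positioning records; the same item may appear in several slices,
# so entries are consolidated into one list per identifier.
positioning_table = {
    "item_001": [{"frame": 3000, "appearance_time": 125.0}],
    "item_003": [{"frame": 5400, "appearance_time": 225.0}],
}

def lookup_positioning(item_name):
    """Query the item identifier by name, then the positioning
    records (appearance video frame and time point) by identifier."""
    item_id = item_name_table.get(item_name)
    if item_id is None:
        return []
    return positioning_table.get(item_id, [])
```

The item identifier is the join key that relates the two tables, exactly as the text describes.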
Step 304, determining the slice start time point of the video frame as the playing time point.
In this embodiment, the execution body may determine a slice start time point at which a video frame appears as a play time point.
And step 305, jumping to a playing time point and rapidly playing the slice to an appearance time point.
In this embodiment, the execution body may jump to the playing time point and play the slice quickly up to the appearance time point. In general, playback from the slice start time point to the appearance time point is accelerated so as to present the search item quickly; and starting playback from the slice start time point ensures that the picture of the search item presented to the user is complete.
In some embodiments, when the search item appears in multiple slices of the video stream, the execution body may by default jump to the slice with the earliest start time, or it may list the multiple slices and jump to the one selected by the user.
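The slice choice just described might be sketched as follows (field names are assumptions for illustration, not from the patent):

```python
def choose_slice(slices, user_choice=None):
    """Default to the slice with the earliest start time; if the
    user has picked one from a listed set, jump to that instead."""
    if user_choice is not None:
        return user_choice
    return min(slices, key=lambda s: s["start_time"])

# Two hypothetical slices in which the same search item appears.
slices = [{"start_time": 300.0}, {"start_time": 120.0}]
```

With no user choice, the slice starting at 120 s is selected.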
Step 306, push information of the search item is queried.
In this embodiment, the specific operation of step 306 is described in detail in step 205 in the embodiment shown in fig. 2, and is not described herein again.
Step 307, calculating the dot matrix coordinates of the searched article based on the resolution of the playing device and the percentage coordinates of the searched article.
In this embodiment, in the case where the coordinate information is a percentage coordinate, the execution body may calculate a dot matrix coordinate of the search item based on the resolution of the playback device and the percentage coordinate of the search item.
Since different playback devices have different screen resolutions, the coordinate information in the positioning information is a percentage coordinate in order to accommodate the different screen resolutions. The dot matrix coordinates are required for determining the display area, and therefore, the percentage coordinates need to be converted into corresponding dot matrix coordinates. Specifically, the executing body may multiply the horizontal pixel value and the vertical pixel value of the resolution of the playing device by the horizontal coordinate value and the vertical coordinate value of the percentage coordinate of the searched article, so as to obtain the dot matrix coordinate of the searched article.
For example, suppose a video stream is played on a playing device with resolution A×B. If the percentage coordinate of the search item is (x/a, y/b), the dot matrix coordinate of the search item is (x·A/a, y·B/b). Here a, b, A and B are positive integers, x is a positive integer not greater than a, y is a positive integer not greater than b, x/a and y/b are positive numbers not greater than 1, and x·A/a and y·B/b are (rounded to) positive integers.
Generally, the coordinate system of the percentage coordinates is the same as the screen coordinate system of the playing device: the upper-left corner is the origin, rightward is the positive direction of the horizontal axis, and downward is the positive direction of the vertical axis. In that case, the horizontal and vertical pixel values of the playing device's resolution are multiplied directly by the corresponding horizontal and vertical coordinate values of the search item's percentage coordinate to obtain its dot matrix coordinate. In the special case where the coordinate system of the percentage coordinates differs from the screen coordinate system of the playing device, the percentage coordinate must first be converted into the screen coordinate system to obtain a converted percentage coordinate; the horizontal and vertical pixel values of the playing device's resolution are then multiplied by the corresponding coordinate values of the converted percentage coordinate to obtain the search item's dot matrix coordinate.
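The percentage-to-dot-matrix conversion, including the coordinate-system conversion for the special case, can be sketched as follows (the assumed special case here is a bottom-left origin; the patent does not specify which alternative coordinate system might occur):

```python
def to_dot_matrix(percent_x, percent_y, width, height, origin_top_left=True):
    """Convert a percentage coordinate into a dot matrix (pixel)
    coordinate for a playing device of resolution width x height.
    If the percentage coordinate system does not share the screen's
    top-left origin (assumed here to be bottom-left instead), first
    convert it into the screen coordinate system."""
    if not origin_top_left:
        # Flip the vertical axis: bottom-left origin -> top-left origin.
        percent_y = 1.0 - percent_y
    # Multiply resolution by percentage coordinates, rounding to pixels.
    return round(percent_x * width), round(percent_y * height)

# A playing device of resolution 1280 x 720 and an item at (0.5, 0.25).
```

Because the input is a percentage, the same positioning record yields correct pixel positions on any screen resolution.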
And 308, setting an area corresponding to the dot matrix coordinates as a display area, and displaying push information of the searched article in the display area.
In this embodiment, the execution body may set an area corresponding to the dot coordinates as a display area, and display push information of the searched article in the display area. In this way, the push information is caused to be presented in the vicinity of the searched item.
As can be seen from fig. 3, compared with the embodiment corresponding to fig. 2, the flow 300 of the information pushing method in this embodiment highlights the step of determining the playing time point and the step of determining the display area. The scheme described in this embodiment therefore starts playing from the slice start time point, which ensures that the picture of the search item presented to the user is complete. Moreover, because the coordinate information in the positioning information is a percentage coordinate converted into the corresponding dot matrix coordinate, the scheme adapts to the different screen resolutions of different playing devices.
With continued reference to FIG. 4, a flow 400 of one embodiment of a video processing method according to the present application is shown. The video processing method comprises the following steps:
step 401, performing item identification on the video stream, and determining the appearing items of the video stream.
In this embodiment, an executing entity (for example, the device 101 shown in fig. 1) of the video processing method may perform item identification on the video stream, and determine an appearance item of the video stream.
In general, the execution body may determine the items appearing in the video stream in various ways. In some embodiments, a person skilled in the art may perform item recognition on the video stream and input the recognition result to the execution body. In other embodiments, the execution body may split the video stream into a series of video frames and perform item recognition on each frame to determine the items appearing in the video stream.
Step 402, obtaining the positioning information of the object.
In this embodiment, the execution body may obtain the positioning information of the appearing item. The positioning information of an appearing item is non-playable data used to locate the item within the video stream.
In some embodiments, the positioning information may include coordinate information. Specifically, the execution body may perform position recognition on the video stream, determine the coordinate information of an appearing item, and add that coordinate information to the item's positioning information. The coordinate information may be determined by simulating trial playback of the video stream on a trial-playback device: first, the video stream is trial-played on the trial-playback device; then, position recognition is performed on the video stream to obtain the dot matrix coordinate of the appearing item; finally, the coordinate information of the appearing item is determined based on its dot matrix coordinate.
In general, when the screen resolution of most playing devices is the same as that of the trial-playback device, the coordinate information may simply be the dot matrix coordinate. In practice, however, different playing devices have different screen resolutions, and to adapt to them the coordinate information in the positioning information is a percentage coordinate. Specifically, the horizontal and vertical coordinate values of the appearing item's dot matrix coordinate are divided by the corresponding horizontal and vertical pixel values of the trial-playback device's resolution to obtain the item's percentage coordinate.
For example, suppose a video stream is trial-played on a standard device with resolution a×b. If the dot matrix coordinate of an appearing item captured on the trial-playback device is (x, y), the percentage coordinate of the item is (x/a, y/b). Here a and b are positive integers, x is a positive integer not greater than a, y is a positive integer not greater than b, and x/a and y/b are positive numbers not greater than 1.
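The inverse conversion performed at authoring time — dot matrix coordinates on the trial-playback device into device-independent percentage coordinates — is a single division per axis (resolution and coordinates below are hypothetical):

```python
def to_percentage(x, y, trial_width, trial_height):
    """Divide the dot matrix coordinate captured on the trial-playback
    device by its resolution to obtain a device-independent
    percentage coordinate stored in the positioning information."""
    return x / trial_width, y / trial_height

# Trial device of resolution 1280 x 720; item captured at pixel (320, 180).
```

Any playing device can later multiply these percentages by its own resolution to recover pixel positions.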
It should be noted that the resolution of the trial-playback device should be chosen to match the resolution and aspect ratio of the video — for example, a 16:9 device at 720p or above for 16:9 video, or a 4:3 device for 4:3 video. In this way, conversion error can be reduced as much as possible.
In some embodiments, the execution body may further determine a slice start time point of an appearance video frame in which the item appears, and add the slice start time point in which the item appears to the positioning information of the appearing item.
And 403, adding the positioning information of the object to the corresponding video frame protocol to generate video data.
In this embodiment, the execution subject may add the location information of the appearing item to the corresponding video frame protocol to generate video data.
Generally, the video frame protocol is modified during code-stream encoding of the video frame in which the item with positioning information appears, and the positioning information is added within the original video frame protocol. The way the protocol is modified differs by protocol format. Taking H.264 as an example, positioning information can be added by extending the NAL information of the video frame protocol corresponding to the appearing item. A NAL unit may include a NAL Header, a NAL Extension, and a NAL payload. The NAL Header may be used to store basic information of the video frame; the NAL payload may be used to store the binary stream of the video frame; and the NAL Extension may be used to store the positioning information. The NAL Extension is inserted into the video frame without changing the data stored in the NAL payload, forming a new video frame.
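The header/extension/payload layout described here is the patent's own scheme rather than part of the standard H.264 NAL unit syntax, so the following is only an illustrative byte-level sketch; the length-prefixed framing of the extension and all values are assumptions:

```python
def insert_extension(nal_header: bytes, nal_payload: bytes,
                     positioning_info: bytes) -> bytes:
    """Form a new NAL-like unit of the shape the patent describes:
    the original header, then an extension carrying the positioning
    information (length-prefixed here, an assumed framing), then the
    unchanged payload bytes."""
    extension = len(positioning_info).to_bytes(2, "big") + positioning_info
    return nal_header + extension + nal_payload

# Hypothetical one-byte header, three-byte payload, and a small
# positioning record serialized as JSON text.
new_nal = insert_extension(b"\x65", b"\x00\x01\x02", b'{"item":"watch"}')
```

The key property, matching the text, is that the payload bytes are carried through unchanged while the extension is spliced in between header and payload.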
According to the video processing method provided by the embodiment of the application, firstly, article identification is carried out on a video stream, and articles appearing in the video stream are determined; then, acquiring the positioning information of the appearing article; and finally, adding the positioning information of the object to the corresponding video frame protocol to generate video data, thereby realizing the addition of the non-playable data in the video stream.
Referring now to FIG. 5, shown is a block diagram of a computer system 500 suitable for use in implementing a computing device (e.g., device 101 shown in FIG. 1) of an embodiment of the present application. The computer device shown in fig. 5 is only an example, and should not bring any limitation to the function and the scope of use of the embodiments of the present application.
As shown in fig. 5, the computer system 500 includes a Central Processing Unit (CPU)501 that can perform various appropriate actions and processes according to a program stored in a Read Only Memory (ROM)502 or a program loaded from a storage section 508 into a Random Access Memory (RAM) 503. In the RAM 503, various programs and data necessary for the operation of the system 500 are also stored. The CPU 501, ROM 502, and RAM 503 are connected to each other via a bus 504. An input/output (I/O) interface 505 is also connected to bus 504.
The following components are connected to the I/O interface 505: an input portion 506 including a keyboard, a mouse, and the like; an output portion 507 including a display such as a Cathode Ray Tube (CRT), a Liquid Crystal Display (LCD), and the like, and a speaker; a storage portion 508 including a hard disk and the like; and a communication section 509 including a network interface card such as a LAN card, a modem, or the like. The communication section 509 performs communication processing via a network such as the internet. The driver 510 is also connected to the I/O interface 505 as necessary. A removable medium 511 such as a magnetic disk, an optical disk, a magneto-optical disk, a semiconductor memory, or the like is mounted on the drive 510 as necessary, so that a computer program read out therefrom is mounted into the storage section 508 as necessary.
In particular, according to an embodiment of the present disclosure, the processes described above with reference to the flowcharts may be implemented as computer software programs. For example, embodiments of the present disclosure include a computer program product comprising a computer program embodied on a computer readable medium, the computer program comprising program code for performing the method illustrated in the flow chart. In such an embodiment, the computer program may be downloaded and installed from a network through the communication section 509, and/or installed from the removable medium 511. The computer program performs the above-described functions defined in the method of the present application when executed by the Central Processing Unit (CPU) 501.
It should be noted that the computer readable medium described herein can be a computer readable signal medium or a computer readable storage medium or any combination of the two. A computer readable storage medium may be, for example, but not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any combination of the foregoing. More specific examples of the computer readable storage medium may include, but are not limited to: an electrical connection having one or more wires, a portable computer diskette, a hard disk, a Random Access Memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing. In the present application, a computer readable storage medium may be any tangible medium that can contain, or store a program for use by or in connection with an instruction execution system, apparatus, or device. In this application, however, a computer readable signal medium may include a propagated data signal with computer readable program code embodied therein, for example, in baseband or as part of a carrier wave. Such a propagated data signal may take many forms, including, but not limited to, electro-magnetic, optical, or any suitable combination thereof. A computer readable signal medium may also be any computer readable medium that is not a computer readable storage medium and that can communicate, propagate, or transport a program for use by or in connection with an instruction execution system, apparatus, or device. Program code embodied on a computer readable medium may be transmitted using any appropriate medium, including but not limited to: wireless, wire, fiber optic cable, RF, etc., or any suitable combination of the foregoing.
Computer program code for carrying out operations for aspects of the present application may be written in any combination of one or more programming languages, including an object oriented programming language such as Java, Smalltalk, C + + or the like and conventional procedural programming languages, such as the "C" programming language or similar programming languages. The program code may execute entirely on the user's computer, partly on the user's computer, as a stand-alone software package, partly on the user's computer and partly on a remote computer or entirely on the remote computer or electronic device. In the case of a remote computer, the remote computer may be connected to the user's computer through any type of network, including a Local Area Network (LAN) or a Wide Area Network (WAN), or the connection may be made to an external computer (for example, through the Internet using an Internet service provider).
The flowchart and block diagrams in the figures illustrate the architecture, functionality, and operation of possible implementations of systems, methods and computer program products according to various embodiments of the present application. In this regard, each block in the flowchart or block diagrams may represent a module, segment, or portion of code, which comprises one or more executable instructions for implementing the specified logical function(s). It should also be noted that, in some alternative implementations, the functions noted in the block may occur out of the order noted in the figures. For example, two blocks shown in succession may, in fact, be executed substantially concurrently, or the blocks may sometimes be executed in the reverse order, depending upon the functionality involved. It will also be noted that each block of the block diagrams and/or flowchart illustration, and combinations of blocks in the block diagrams and/or flowchart illustration, can be implemented by special purpose hardware-based systems which perform the specified functions or acts, or combinations of special purpose hardware and computer instructions.
The units described in the embodiments of the present application may be implemented by software or by hardware. The described units may also be provided in a processor, which may be described, for example, as: a processor including a code stream conversion unit, a slice determination unit, a time determination unit, a play skip unit, and an information presentation unit. The names of these units do not in themselves limit the units; for example, the code stream conversion unit may also be described as a "unit that performs code stream conversion on video data to obtain a video stream and the positioning information of items appearing in it". As another example, a processor may be described as including a determination unit, an acquisition unit, and an addition unit. Again, the names of these units do not limit the units themselves; for example, the determination unit may also be described as a "unit that performs item recognition on a video stream and determines the items appearing in it".
As another aspect, the present application also provides a computer-readable medium, which may be contained in the computer device described in the above embodiments; or may exist separately and not be incorporated into the computer device. The computer readable medium carries one or more programs which, when executed by the computing device, cause the computing device to: performing code stream conversion on the video data to obtain video streams and positioning information of the video streams on which the objects appear; in response to receiving an item search instruction of a user, determining location information of a searched item; determining a playing time point based on the positioning information of the searched article; skipping to the video frame corresponding to the playing time point to start playing; querying push information for the search item, and presenting the push information. Or cause the computer device to: carrying out article identification on the video stream, and determining the articles appearing in the video stream; acquiring positioning information of the appearing article; and adding the positioning information of the object to be appeared into the corresponding video frame protocol to generate video data.
The above description is only a preferred embodiment of the application and is illustrative of the principles of the technology employed. It will be appreciated by those skilled in the art that the scope of the invention herein disclosed is not limited to the particular combination of features described above, but also encompasses other arrangements formed by any combination of the above features or their equivalents without departing from the spirit of the invention. For example, the above features may be replaced with (but not limited to) features having similar functions disclosed in the present application.

Claims (17)

1. An information push method, comprising:
performing code stream conversion on video data to obtain video streams and positioning information of the video streams where objects appear;
in response to receiving an item search instruction of a user, determining location information of a searched item;
determining a playing time point based on the positioning information of the searched article;
skipping to the video frame corresponding to the playing time point to start playing;
and inquiring the push information of the searched article, and presenting the push information.
2. The method of claim 1, wherein the receiving an item search instruction of a user comprises:
collecting voice information of the user;
recognizing the voice information, and determining the name of the article contained in the voice information;
and if the item name is matched with the appearing item of the video stream, determining the matched appearing item as the search item.
3. The method of claim 1, wherein the determining a play time point based on the location information of the search item comprises:
extracting an appearance video frame and an appearance time point of the search item from the positioning information of the search item;
determining the slice starting time point of the video frame as the playing time point; and
the skipping to the video frame corresponding to the playing time point to start playing comprises the following steps:
and jumping to the playing time point to quickly play the slice to the appearance time point.
4. The method of claim 3, wherein said extracting the video frame of occurrence and the time point of occurrence of the search item from the location information of the search item comprises:
inquiring an item identifier corresponding to the item name of the searched item in an item name table, wherein the item name table is used for storing the corresponding relation between the item identifier and the item name;
and inquiring the appearance video frame and the appearance time point corresponding to the item identifier of the searched item in a positioning information table, wherein the positioning information table is used for storing the corresponding relation between the item identifier and the positioning information.
5. The method of claim 3, wherein the method further comprises:
if the search item appears in multiple slices of the video stream, skipping to the slice with the earliest start time, or listing the multiple slices and skipping to the slice selected by the user.
6. The method of claim 1, wherein the positioning information comprises coordinate information; and
the presenting the push information includes:
determining a display area in the appearing video frame based on the coordinate information of the search item, and displaying the push information of the search item in the display area.
7. The method of claim 6, wherein the coordinate information is a percentage coordinate; and
the determining a display area in the appearing video frame based on the coordinate information of the search item includes:
calculating dot matrix coordinates of the searched article based on the resolution of the playing device and the percentage coordinates of the searched article;
and setting the area corresponding to the dot matrix coordinate as the display area.
8. The method of claim 7, wherein said calculating dot matrix coordinates of the search item based on a resolution of the playback device and percentage coordinates of the search item comprises:
and if the coordinate system of the percentage coordinate is the same as the screen coordinate system of the playing device, correspondingly multiplying the horizontal pixel value and the vertical pixel value of the resolution of the playing device by the horizontal coordinate value and the vertical coordinate value of the percentage coordinate of the searched article to obtain the dot matrix coordinate of the searched article.
9. The method of claim 8, wherein the calculating dot matrix coordinates of the search item based on a resolution of the playback device and percentage coordinates of the search item further comprises:
if the coordinate system of the percentage coordinates is different from the screen coordinate system of the playing device, converting the percentage coordinates to obtain converted percentage coordinates in the screen coordinate system;
and multiplying the horizontal pixel value and the vertical pixel value of the resolution of the playing device by the horizontal coordinate value and the vertical coordinate value, respectively, of the converted percentage coordinates of the search item to obtain the dot matrix coordinates of the search item.
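The percentage-to-dot-matrix conversion in claims 8 and 9 amounts to scaling by the device resolution, with an optional coordinate-system conversion first. A minimal sketch, assuming the only difference between the two coordinate systems is the vertical origin (top-left vs. bottom-left); the function name and rounding are assumptions:

```python
def percent_to_pixel(px, py, width, height, same_origin=True):
    """Convert percentage coordinates to dot matrix (pixel) coordinates.

    If the percentage coordinates were recorded in a coordinate system that
    differs from the screen's (here assumed to differ only in vertical
    origin), flip the vertical axis first, then scale by the resolution.
    """
    if not same_origin:
        py = 1.0 - py  # convert into the screen coordinate system
    return round(width * px), round(height * py)
```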
10. A video processing method, comprising:
performing item identification on a video stream, and determining an appearing item of the video stream;
acquiring positioning information of the appearing item;
and adding the positioning information of the appearing item into a corresponding video frame protocol to generate video data.
11. The method of claim 10, wherein said obtaining location information of said appearing item comprises:
performing position identification on the video stream, and determining coordinate information of the appearing item;
and adding the coordinate information of the appearing item into the positioning information of the appearing item.
12. The method of claim 11, wherein said identifying the location of the video stream, determining the coordinate information of the appearing item, comprises:
simulating trial playback of the video stream on a trial playback device;
performing position identification on the video stream to obtain dot matrix coordinates of the appearing item;
and determining the coordinate information of the appearing item based on the dot matrix coordinates of the appearing item.
13. The method of claim 12, wherein said determining the coordinate information of the appearing item based on the dot matrix coordinates of the appearing item comprises:
and dividing the horizontal coordinate value and the vertical coordinate value of the dot matrix coordinates of the appearing item by the horizontal pixel value and the vertical pixel value, respectively, of the resolution of the trial playback device to obtain the percentage coordinates of the appearing item.
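The division in claim 13 is the inverse of the playback-side scaling: dot matrix coordinates divided by the trial playback resolution yield device-independent percentage coordinates. A minimal sketch (function name is an assumption):

```python
def pixel_to_percent(x, y, width, height):
    """Divide dot matrix coordinates by the trial playback resolution to
    obtain device-independent percentage coordinates."""
    return x / width, y / height
```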
14. The method of claim 10, wherein said obtaining location information of said appearing item comprises:
determining a slice start time point of an appearance video frame of the appearing item;
adding a slice start time point of the appearing item to the positioning information of the appearing item.
15. The method according to one of claims 10 to 14, wherein said adding location information of said appearing item to a corresponding video frame protocol comprises:
and expanding the network abstraction layer information of the corresponding video frame protocol based on the positioning information of the appearing item.
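One way to picture the extension in claim 15 is serializing the positioning information into a byte payload that could be carried alongside a frame's network abstraction layer (NAL) data, for example as user data. The start code, header byte, and field layout below are purely illustrative assumptions, not a standard format and not the patent's specified encoding:

```python
import json
import struct

def build_positioning_payload(item_id, x_percent, y_percent, slice_start):
    """Serialize item positioning info into an illustrative NAL-style payload.

    Layout (hypothetical): 4-byte Annex-B-style start code, one placeholder
    header byte, a 2-byte big-endian payload length, then a JSON body.
    """
    body = json.dumps({
        "item_id": item_id,
        "x": x_percent,          # percentage coordinates of the appearing item
        "y": y_percent,
        "slice_start": slice_start,  # slice start time point, in seconds
    }).encode("utf-8")
    return b"\x00\x00\x00\x01" + b"\x06" + struct.pack(">H", len(body)) + body
```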
16. A computer device, comprising:
one or more processors;
a storage device on which one or more programs are stored;
wherein the one or more programs, when executed by the one or more processors, cause the one or more processors to implement the method of any one of claims 1-9, or the method of any one of claims 10-15.
17. A computer-readable storage medium, on which a computer program is stored which, when being executed by a processor, carries out the method of any one of claims 1 to 9, or carries out the method of any one of claims 10 to 15.
CN202010777528.5A 2020-08-05 2020-08-05 Information pushing and video processing method and equipment Active CN111859159B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010777528.5A CN111859159B (en) 2020-08-05 2020-08-05 Information pushing and video processing method and equipment

Publications (2)

Publication Number Publication Date
CN111859159A true CN111859159A (en) 2020-10-30
CN111859159B CN111859159B (en) 2024-07-05

Family

ID=72971300

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010777528.5A Active CN111859159B (en) 2020-08-05 2020-08-05 Information pushing and video processing method and equipment

Country Status (1)

Country Link
CN (1) CN111859159B (en)

Citations (11)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20080154889A1 (en) * 2006-12-22 2008-06-26 Pfeiffer Silvia Video searching engine and methods
CN103081488A (en) * 2010-06-29 2013-05-01 高通股份有限公司 Signaling video samples for trick mode video representations
CN105100911A (en) * 2014-05-06 2015-11-25 夏普株式会社 Intelligent multimedia system and method
CN105100944A (en) * 2014-04-30 2015-11-25 广州市动景计算机科技有限公司 Article information outputting method and device
WO2017045462A1 (en) * 2015-09-16 2017-03-23 乐视控股(北京)有限公司 Method and system for presenting information related to goods in video stream
WO2017088415A1 (en) * 2015-11-25 2017-06-01 乐视控股(北京)有限公司 Method, apparatus and electronic device for video content retrieval
WO2018232795A1 (en) * 2017-06-19 2018-12-27 网宿科技股份有限公司 Video player client, system, and method for live broadcast video synchronization
CN109348275A (en) * 2018-10-30 2019-02-15 百度在线网络技术(北京)有限公司 Method for processing video frequency and device
CN109697245A (en) * 2018-12-05 2019-04-30 百度在线网络技术(北京)有限公司 Voice search method and device based on video web page
CN110149558A (en) * 2018-08-02 2019-08-20 腾讯科技(深圳)有限公司 A kind of video playing real-time recommendation method and system based on content recognition
CN111163367A (en) * 2020-01-08 2020-05-15 百度在线网络技术(北京)有限公司 Information search method, device, equipment and medium based on playing video

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
乐嘉锦; 姚岚: "Research on Full-Text Search of Sports Video Information Based on Solr" (基于Solr的体育视频信息全文搜索研究), 计算机工程 (Computer Engineering), no. 24
黄鹤; 孟广仕: "Design of a Content-Based Video Retrieval System" (一种基于内容的视频检索系统设计), 科技创新与应用 (Technology Innovation and Application), no. 01


Similar Documents

Publication Publication Date Title
US9961398B2 (en) Method and device for switching video streams
CN105721620B (en) Video information method for pushing and device and video information exhibit method and apparatus
CN110198432B (en) Video data processing method and device, computer readable medium and electronic equipment
CN111523566A (en) Target video clip positioning method and device
CN109640129B (en) Video recommendation method and device, client device, server and storage medium
JP3540721B2 (en) Object information providing method and system
US9946731B2 (en) Methods and systems for analyzing parts of an electronic file
WO2017080173A1 (en) Nature information recognition-based push system and method and client
US20170132267A1 (en) Pushing system and method based on natural information recognition, and a client end
CN105872717A (en) Video processing method and system, video player and cloud server
US20150195626A1 (en) Augmented media service providing method, apparatus thereof, and system thereof
WO2022028177A1 (en) Information pushing method, video processing method, and device
US8559724B2 (en) Apparatus and method for generating additional information about moving picture content
US20230093621A1 (en) Search result display method, readable medium, and terminal device
CN105657514A (en) Method and apparatus for playing video key information on mobile device browser
JP2006285654A (en) Article information retrieval system
CN105828103A (en) Video processing method and player
CN111246304A (en) Video processing method and device, electronic equipment and computer readable storage medium
CN114245229B (en) Short video production method, device, equipment and storage medium
CN109241344B (en) Method and apparatus for processing information
JP2010268103A (en) Client terminal and computer program for moving picture distribution service
CN112784103A (en) Information pushing method and device
EP4291996A1 (en) A system for accessing a web page
CN113298589A (en) Commodity information processing method and device, and information acquisition method and device
CN111784478A (en) Method and apparatus for price comparison of items

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant