CN109151599B - Video processing method and device - Google Patents


Info

Publication number
CN109151599B
CN109151599B
Authority
CN
China
Prior art keywords
information
retrieved
user
video stream
retrieval instruction
Prior art date
Legal status
Active
Application number
CN201811005371.3A
Other languages
Chinese (zh)
Other versions
CN109151599A (en
Inventor
姚淼
Current Assignee
Beijing Baidu Netcom Science and Technology Co Ltd
Original Assignee
Beijing Baidu Netcom Science and Technology Co Ltd
Priority date
Filing date
Publication date
Application filed by Beijing Baidu Netcom Science and Technology Co Ltd filed Critical Beijing Baidu Netcom Science and Technology Co Ltd
Priority to CN201811005371.3A priority Critical patent/CN109151599B/en
Publication of CN109151599A publication Critical patent/CN109151599A/en
Application granted granted Critical
Publication of CN109151599B publication Critical patent/CN109151599B/en

Classifications

    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00 Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/40 Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N21/47 End-user applications
    • H04N21/475 End-user interface for inputting end-user data, e.g. personal identification number [PIN], preference data
    • H04N21/478 Supplemental services, e.g. displaying phone caller identification, shopping application
    • H04N21/482 End-user interface for program selection
    • H04N21/4828 End-user interface for program selection for searching program descriptors

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Human Computer Interaction (AREA)
  • Information Retrieval, Db Structures And Fs Structures Therefor (AREA)
  • User Interface Of Digital Computer (AREA)

Abstract

The present invention provides a video processing method and device. The method includes: while a video stream is being played, receiving a retrieval instruction input by a user, the retrieval instruction indicating a retrieved object; acquiring first information of the retrieved object according to the retrieved object; and displaying the first information on the interface on which the video stream is played. Because the retrieval instruction can be received, the retrieval performed, and the result displayed on the playing interface without interrupting playback, the method solves the problem in conventional search schemes that the user must stop the video to search, which degrades the user experience. The user can query content of interest in real time while watching the video stream, improving the user experience.

Description

Video processing method and device
Technical Field
The present invention relates to the field of computer technologies, and in particular, to a video processing method and apparatus.
Background
With the development of computer technology, smart devices have become increasingly powerful. Users can watch videos on a smart device and can also use it to search a specific database or the network for content they are interested in.
While watching a video on a smart device, a user may become interested in a person or object appearing in the video. At present, the user has to stop playback and look up that person or object with a conventional search engine, which interrupts normal playback and makes it difficult to satisfy the user's need for an immediate query. Conventional search schemes during video playback therefore provide a poor user experience.
Disclosure of Invention
The present invention provides a video processing method and device that solve the problem of poor user experience caused by searching during video playback.
A first aspect of the present invention provides a video processing method, including:
receiving a retrieval instruction input by a user when a video stream is played, wherein the retrieval instruction is used for indicating a retrieved object;
acquiring first information of the retrieved object according to the retrieved object, wherein the first information is information retrieved according to the retrieved object;
and displaying the first information on an interface for playing the video stream.
Optionally, the retrieval instruction includes feature information of the retrieved object; before the acquiring of the first information of the retrieved object according to the retrieved object, the video processing method further includes:
determining the retrieved object in the played video stream according to the feature information of the retrieved object.
Optionally, after determining the retrieved object in the played video stream according to the feature information of the retrieved object, the video processing method further includes:
displaying the retrieved object on an interface that plays the video stream;
receiving first confirmation information input by the user, wherein the first confirmation information indicates confirmation that the first information of the retrieved object is to be retrieved.
Optionally, the retrieval instruction includes feature information of the retrieved object; before the acquiring of the first information of the retrieved object according to the retrieved object, the video processing method further includes:
acquiring at least one candidate object in the played video stream according to the feature information of the retrieved object;
displaying each candidate object on the interface for playing the video stream;
receiving second confirmation information input by the user, wherein the second confirmation information determines one of the candidate objects as the retrieved object.
Optionally, the video processing method further includes:
acquiring second information of the retrieved object, wherein the second information is information of an associated object having an association relationship with the retrieved object, and the associated object is an object appearing in the played video stream;
the displaying the first information on the interface for playing the video stream includes:
displaying the first information and the second information on the interface for playing the video stream.
Optionally, the receiving a retrieval instruction input by a user includes:
receiving a voice retrieval instruction input by the user; or
receiving a text retrieval instruction input by the user; or
receiving a gesture retrieval instruction input by the user.
Optionally, the receiving a retrieval instruction input by a user includes:
receiving the retrieval instruction sent by the user through a terminal.
Optionally, the displaying the first information on the interface for playing the video stream includes:
displaying the retrieved object and the first information on an interface on which the video stream is played.
Optionally, the retrieval instruction further indicates a retrieval keyword; the acquiring of the first information of the retrieved object according to the retrieved object includes:
acquiring the first information of the retrieved object according to the retrieved object and the retrieval keyword, wherein the first information is information retrieved according to the retrieved object and the retrieval keyword.
Optionally, the first information and the second information are augmented reality (AR) information.
A second aspect of the present invention provides a video processing apparatus comprising:
a receiver, configured to receive a retrieval instruction input by a user when playing a video stream, where the retrieval instruction is used to indicate a retrieved object;
a processor, configured to acquire first information of the retrieved object according to the retrieved object, where the first information is information retrieved according to the retrieved object;
a display, configured to display the first information on an interface for playing the video stream.
Optionally, the retrieval instruction includes feature information of the retrieved object; the processor is further configured to determine the retrieved object in the played video stream according to the feature information of the retrieved object.
Optionally, the display is further configured to display the retrieved object on an interface for playing the video stream;
the receiver is further configured to receive first confirmation information input by the user, wherein the first confirmation information indicates confirmation that the first information of the retrieved object is to be retrieved.
Optionally, the retrieval instruction includes feature information of the retrieved object; the processor is further configured to obtain at least one candidate object in the played video stream according to the feature information of the retrieved object;
the display is further configured to display each candidate object on the interface for playing the video stream;
the receiver is further configured to receive second confirmation information input by the user, wherein the second confirmation information determines one of the candidate objects as the retrieved object.
Optionally, the processor is further configured to acquire second information of the retrieved object, where the second information is information of an associated object having an association relationship with the retrieved object, and the associated object is an object appearing in the played video stream;
the display is specifically configured to display the first information and the second information on an interface on which the video stream is played.
Optionally, the receiver is specifically configured to:
receive a voice retrieval instruction input by the user; or
receive a text retrieval instruction input by the user; or
receive a gesture retrieval instruction input by the user.
Optionally, the receiver is specifically configured to receive the retrieval instruction sent by the user through the terminal.
Optionally, the display is specifically configured to display the retrieved object and the first information on an interface for playing the video stream.
Optionally, the retrieval instruction further indicates a retrieval keyword; the processor is specifically configured to acquire the first information of the retrieved object according to the retrieved object and the retrieval keyword, where the first information is information retrieved according to the retrieved object and the retrieval keyword.
Optionally, the first information and the second information are AR information.
A third aspect of the present invention provides a video processing apparatus comprising: at least one processor and memory;
the memory stores computer-executable instructions;
the at least one processor executes computer-executable instructions stored by the memory to cause the video processing apparatus to perform the video processing method described above.
A fourth aspect of the present invention provides a computer-readable storage medium having stored thereon computer-executable instructions that, when executed by a processor, implement the above-described video processing method.
The present invention provides a video processing method and device. The method includes: while a video stream is being played, receiving a retrieval instruction input by a user, the retrieval instruction indicating a retrieved object; acquiring first information of the retrieved object according to the retrieved object; and displaying the first information on the interface on which the video stream is played. Because the retrieval instruction can be received, the retrieval performed, and the result displayed on the playing interface without interrupting playback, the method solves the problem in conventional search schemes that the user must stop the video to search, which degrades the user experience. The user can query content of interest in real time while watching the video stream, improving the user experience.
Drawings
Fig. 1 is a first schematic flowchart of a video processing method according to the present invention;
Fig. 2 is a schematic interface diagram of a video processing method according to the present invention;
Fig. 3 is a second schematic flowchart of a video processing method according to the present invention;
Fig. 4 is a third schematic flowchart of a video processing method according to the present invention;
Fig. 5 is a fourth schematic flowchart of a video processing method according to the present invention;
Fig. 6 is a first schematic structural diagram of a video processing apparatus according to the present invention;
Fig. 7 is a schematic structural diagram of a video processing apparatus according to a second embodiment of the present invention.
Detailed Description
To make the objects, technical solutions, and advantages of the present invention clearer, the technical solutions in the embodiments of the present invention are described below clearly and completely with reference to those embodiments. All other embodiments that a person skilled in the art can derive from the embodiments given herein without creative effort shall fall within the protection scope of the present invention.
Fig. 1 is a first schematic flowchart of a video processing method according to the present invention. The executing entity of the method shown in Fig. 1 may be a video processing apparatus, or an application/software running on the video processing apparatus. The video processing apparatus may be a smart device such as a mobile phone, a television, or a tablet, and the application/software on the video processing apparatus may be a video playing application. The executing entity may be implemented by any combination of software and/or hardware. As shown in Fig. 1, the video processing method provided in this embodiment may include:
S101, receiving a retrieval instruction input by a user while the video stream is being played.
Wherein the retrieval instruction is for indicating the retrieved object.
In this embodiment, a user plays a video stream on a smart device. The smart device may be any electronic device with a display screen, and the video stream may be a television program, a movie, a TV series, an advertisement, an online course video, or the like. While watching the video stream, the user may encounter people or objects of interest. For example, a user watching a movie may become interested in one of the actors and want to learn about the actor's other works; a user watching an online course video may become interested in the teaching materials mentioned in the course. To obtain such information, the user currently has to stop playback and search for the content of interest with a conventional search engine, which interferes with normal playback and results in a poor user experience.
To solve the above problem, this embodiment receives a retrieval instruction input by the user while the video stream is playing. The retrieval instruction indicates the content the user wants to look up; specifically, it indicates the retrieved object.
It should be noted that the retrieval instruction may indicate the retrieved object directly, for example by naming it, or indirectly, for example by describing it. The retrieval instruction may even describe features the retrieved object does not possess, so that an object lacking those features is obtained as the retrieved object.
The retrieved object may be content that appears in the video stream, such as people in the video stream, things, subtitle content, or the subject matter of the video stream.
The retrieved object may also be content that does not appear in the video stream but is related to content that does. The relation may be sharing the same or corresponding characteristics: for example, when the user sees clothing worn by an actor in the video stream, the retrieved object may be "images of all other actors/any other person/a specific actor wearing that clothing", and by looking at how the clothing looks on different people the user can decide whether it would suit them.
It should be noted that the video stream may be played in any manner, such as live streaming or on-demand playback.
Optionally, in any embodiment of the present invention, receiving a retrieval instruction input by a user includes:
receiving a voice retrieval instruction input by a user; or
receiving a text retrieval instruction input by a user; or
receiving a gesture retrieval instruction input by a user.
For example, the retrieval instruction input by the user may be a voice instruction. After receiving the voice instruction, the smart device may perform speech recognition and semantic recognition on it and retrieve according to the recognition result. Optionally, the smart device may instead send the voice instruction to a back-end server, which performs the speech recognition and semantic recognition over the network and returns the recognition result to the smart device. Optionally, a sound collector such as a microphone of the smart device receives the voice retrieval instruction input by the user; the sound collector may be built into the smart device or provided separately from it. When the user inputs a voice retrieval instruction, the video processing steps of the present invention are triggered.
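The following is a minimal sketch of this voice-instruction path in Python. The speech recognizer and the semantic parse are hypothetical stand-ins (a real device would call a local recognizer or a back-end server); only the shape of the data flow from audio to a structured retrieval instruction is illustrated.

    from dataclasses import dataclass, field

    @dataclass
    class RetrievalInstruction:
        retrieved_object: str          # what the user wants looked up
        keywords: list = field(default_factory=list)

    def speech_to_text(audio: bytes) -> str:
        # Hypothetical ASR stand-in; a real device would run a local or
        # server-side recognizer here.
        return "red long-necked vase, who makes it"

    def parse_instruction(text: str) -> RetrievalInstruction:
        # Very naive semantic parse: the first clause names the retrieved
        # object, the remaining clauses are treated as retrieval keywords.
        parts = [p.strip() for p in text.split(",") if p.strip()]
        return RetrievalInstruction(parts[0], parts[1:])

    instruction = parse_instruction(speech_to_text(b"...raw pcm..."))
    print(instruction.retrieved_object, instruction.keywords)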
For example, the retrieval instruction input by the user may be a text instruction. After receiving the text instruction input by the user, the smart device performs the retrieval according to it. Optionally, a text input area is provided in the interface on which the smart device plays the video stream (the playing interface) for acquiring the text, symbols, and so on input by the user. The text input area may also be provided on a collection terminal separate from the smart device. When the user taps the text input area, the video processing steps of the present invention are triggered. Fig. 2 is a schematic interface diagram of the video processing method provided by the present invention, illustrated with a mobile phone as the smart device.
For example, the retrieval instruction input by the user may be a gesture retrieval instruction. As shown in Fig. 2, the playing interface is provided with a gesture input area, which uses the camera of the smart device to capture the user's gestures. When the user taps the gesture input area and the smart device turns on the camera, the video processing steps are triggered. The camera can also be used to take a picture, in which case the retrieval instruction may be a picture retrieval instruction, or to scan a two-dimensional code, in which case the retrieval instruction is obtained by scanning the two-dimensional code.
Optionally, when the smart device includes a touch input module, the retrieval instruction may also be a sliding or handwriting operation performed by the user on the touch input module. For example, the user may draw a circle on the phone's touchscreen around an actor or object in the video, and the circled actor or object is the retrieved object. The touch input module may also be provided separately from the smart device.
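A minimal sketch of the circling operation, assuming the touch points arrive as (x, y) coordinates already mapped to the video frame: the circled points are reduced to a bounding box whose image patch can then serve as the retrieved object.

    def gesture_to_bbox(points):
        # Axis-aligned bounding box (x0, y0, x1, y1) enclosing the circled
        # touch points; the enclosed patch of the current frame is taken as
        # the retrieved object.
        xs = [p[0] for p in points]
        ys = [p[1] for p in points]
        return min(xs), min(ys), max(xs), max(ys)

    bbox = gesture_to_bbox([(120, 80), (180, 75), (200, 140), (130, 150)])
    print(bbox)  # (120, 75, 200, 150)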
It should be noted that while the retrieval instruction is being received, the video stream can continue to play normally and there is no need to exit the video playing interface.
Further, for any of the above forms of retrieval instruction, receiving the retrieval instruction input by the user may include:
receiving the retrieval instruction sent by the user through a terminal.
For example, the terminal is provided separately from the smart device that contains the video display interface. When the smart device is a television, the terminal may be a smart speaker, a mobile phone, or the like that is bound to the television.
Optionally, in any embodiment of the present invention, the retrieval instruction further indicates a retrieval keyword, and the acquiring of the first information of the retrieved object according to the retrieved object includes:
acquiring the first information of the retrieved object according to the retrieved object and the retrieval keyword, wherein the first information is information retrieved according to the retrieved object and the retrieval keyword.
For example, the retrieval instruction may indicate a retrieval keyword in addition to the retrieved object. When the retrieved object is a character in the video, the retrieval keyword may be the character's ending in the story or the resume of the actor playing the character. When the retrieved object is an item in the video, the retrieval keyword may be the item's color, position, shape, and so on.
S102, acquiring first information of the retrieved object according to the retrieved object.
The first information is information retrieved according to the retrieved object.
In this embodiment, after the retrieved object specified by the user has been determined, an existing search engine may be invoked to retrieve the first information of the retrieved object from the network or a database. Optionally, information about the retrieved object may instead be sent to a background server, which performs the retrieval. Optionally, the first information may be the several most recent results among all the retrieved information, or the several results with the largest number of clicks.
Depending on the retrieved object, the first information may be text, voice, video, web page links, and so on.
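A minimal sketch of step S102 under stated assumptions: the search back end is mocked by a local function returning result records with timestamps and click counts, standing in for the existing search engine or background server mentioned above; the query string, ranking fields, and result format are all illustrative.

    def mock_search(query: str):
        # Stand-in for an existing search engine or background server.
        return [
            {"title": "Result A", "timestamp": 1693400000, "clicks": 1200},
            {"title": "Result B", "timestamp": 1693500000, "clicks": 300},
            {"title": "Result C", "timestamp": 1693300000, "clicks": 4500},
        ]

    def get_first_information(retrieved_object, keywords=None, top_n=2,
                              rank_by="timestamp"):
        # Combine the retrieved object with any retrieval keywords, then keep
        # either the N most recent results or the N most-clicked ones.
        query = " ".join([retrieved_object] + list(keywords or []))
        results = mock_search(query)
        return sorted(results, key=lambda r: r[rank_by], reverse=True)[:top_n]

    print(get_first_information("red long-necked vase", ["price"], rank_by="clicks"))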
S103, displaying the first information on an interface for playing the video stream.
In this embodiment, the first information is displayed on the interface on which the video stream is played, in whatever form the first information takes. Displaying the first information on the playing interface avoids the poor user experience caused by stopping playback and leaving the playing interface. Optionally, playback may either be paused or continue while the first information is displayed. Optionally, if the video stream is live, the played video stream may be recorded starting from the moment the user's retrieval instruction is received. For example, the first information may be displayed together with the video stream on the playing interface, either superimposed on the video stream or in a display area separate from it.
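A sketch of step S103, assuming the first information is composited onto each decoded frame so that playback never has to stop; OpenCV is used here only as an illustrative rendering layer, and the panel geometry and styling are arbitrary.

    import cv2
    import numpy as np

    def overlay_first_information(frame, lines, alpha=0.6):
        # Draw a semi-transparent panel listing the retrieval results in the
        # top-left corner of the frame and return the composited frame.
        panel = frame.copy()
        height = 28 * (len(lines) + 1)
        cv2.rectangle(panel, (10, 10), (360, 10 + height), (0, 0, 0), -1)
        for i, line in enumerate(lines):
            cv2.putText(panel, line, (20, 40 + 28 * i),
                        cv2.FONT_HERSHEY_SIMPLEX, 0.6, (255, 255, 255), 1)
        return cv2.addWeighted(panel, alpha, frame, 1 - alpha, 0)

    frame = np.zeros((480, 854, 3), dtype=np.uint8)   # stand-in decoded frame
    shown = overlay_first_information(frame, ["Actor: ...", "Other works: ..."])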
Optionally, in any embodiment of the present invention, displaying the first information on the interface for playing the video stream includes:
displaying the retrieved object and the first information on the interface for playing the video stream.
For example, the retrieved object may be displayed together with the first information, so that the user can check that the retrieved object is correct and re-input the retrieval instruction if it is not. In one implementation, displaying the retrieved object may mean displaying the text converted from a voice instruction, displaying a text instruction, or displaying an image frame of the retrieved object from the video stream.
The video processing method provided by this embodiment includes: while a video stream is being played, receiving a retrieval instruction input by a user, the retrieval instruction indicating a retrieved object; acquiring first information of the retrieved object according to the retrieved object; and displaying the first information on the interface on which the video stream is played. Because the retrieval instruction can be received, the retrieval performed, and the result displayed on the playing interface without interrupting playback, the method solves the problem in conventional search schemes that the user must stop the video to search, which degrades the user experience. The user can query content of interest in real time while watching the video stream, improving the user experience.
Based on the embodiment shown in Fig. 1, a possible implementation of determining the retrieved object when the retrieval instruction includes feature information of the retrieved object is described below with reference to Fig. 3. Fig. 3 is a second schematic flowchart of the video processing method provided by the present invention. As shown in Fig. 3, the video processing method provided by this embodiment may include:
S201, receiving a retrieval instruction input by a user while the video stream is being played.
The implementation of S201 in this embodiment may specifically refer to the related description of S101 in the foregoing embodiment, and is not described herein again.
S202, determining the retrieved object in the played video stream according to the feature information of the retrieved object.
In this embodiment, the retrieval instruction includes feature information of the retrieved object; for example, the retrieval instruction may include: red, long-necked, vase. The retrieved object can then be determined in the played video stream based on this feature information. Optionally, when the video stream is live, the video processing method further includes storing video of a preset duration or preset size, and the retrieved object is determined in the stored played video.
For example, when the retrieved object is determined in the video according to the feature information, algorithms such as feature extraction, video content extraction, or deep neural networks may be used. The retrieval may also be performed over the tags of the video; a video tag indicates the content of the video and may consist of several keywords.
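A minimal sketch of the tag-based route, assuming each object appearing in the stored video carries a keyword tag list (the tag data below is made up for illustration); a real system could instead run feature extraction or a deep network over the stored frames.

    def find_retrieved_object(feature_words, video_tags):
        # Return the tagged object whose keywords overlap most with the
        # feature information from the retrieval instruction, or None.
        best, best_score = None, 0
        for obj, tags in video_tags.items():
            score = len(set(feature_words) & set(tags))
            if score > best_score:
                best, best_score = obj, score
        return best

    video_tags = {
        "vase_01":  ["red", "long-necked", "vase", "table"],
        "chair_02": ["wooden", "chair", "brown"],
    }
    print(find_retrieved_object(["red", "long-necked", "vase"], video_tags))  # vase_01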
Optionally, when the retrieved object cannot be determined, a reminder message is displayed on the playing interface to indicate that the retrieval instruction could not be recognized and to prompt the user to input it again.
Optionally, after the determining of the retrieved object in S202, the video processing method further includes:
S2021, displaying the retrieved object on the interface for playing the video stream.
For example, after the retrieved object has been determined, it may be displayed on the playing interface so that the user can check whether it is correct.
S2022, receiving first confirmation information input by the user.
The first confirmation information indicates confirmation that the first information of the retrieved object should be retrieved.
For example, the user judges from the displayed retrieved object whether the retrieval instruction was understood correctly. When the first confirmation information input by the user is received, the retrieved object is determined to be correct.
It should be noted that, like the retrieval instruction, the first confirmation information may be voice information, text information, gesture information, and so on.
Optionally, when negative information input by the user is received, the current video processing procedure ends and the device waits for the user's next retrieval instruction.
S203, acquiring first information of the retrieved object according to the retrieved object.
S204, displaying the first information on the interface for playing the video stream.
The implementation of S203-S204 in this embodiment may specifically refer to the description related to S102-S103 in the embodiment shown in fig. 1, and is not repeated herein.
In this embodiment, the retrieval instruction includes feature information of the retrieved object. After the retrieval instruction input by the user is received, the retrieved object is determined in the played video stream according to its feature information, and the first information of the retrieved object is then acquired and displayed. Because the retrieved object is determined from feature information, the user can input retrieval instructions in many forms, user operation is simplified, and the retrieved object can still be determined from a description of its features even when the user cannot name it explicitly, improving the user experience.
Based on the embodiment shown in Fig. 1, another possible implementation of determining the retrieved object when the retrieval instruction includes feature information of the retrieved object is described below with reference to Fig. 4. Fig. 4 is a third schematic flowchart of the video processing method provided by the present invention. As shown in Fig. 4, the video processing method provided by this embodiment may include:
S301, receiving a retrieval instruction input by a user while the video stream is being played.
The implementation of S301 in this embodiment may specifically refer to the related description of S101 in the foregoing embodiment, and is not described herein again.
S302, acquiring at least one candidate object in the played video stream according to the feature information of the retrieved object.
S303, displaying each candidate object on the interface for playing the video stream.
S304, receiving second confirmation information input by the user.
The second confirmation information determines one of the candidate objects as the retrieved object.
For example, when the retrieval instruction includes feature information of the retrieved object, the feature information may be ambiguous and several objects in the video may match it. In this case, the matching objects can be offered to the user as candidates, and the user determines which one is the intended retrieved object.
Displaying the candidate objects on the playing interface makes it convenient for the user to pick the retrieved object from among them.
For example, the second confirmation information in this embodiment may select the retrieved object from the candidate objects by voice, text, gesture, and so on.
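The candidate flow can be sketched as a small extension of the tag-matching sketch above: every sufficiently matching object is returned as a candidate, and the user's second confirmation simply selects one of them (the tag data is again assumed example data).

    def find_candidate_objects(feature_words, video_tags, min_overlap=1):
        # All tagged objects whose keywords overlap the feature words.
        return [obj for obj, tags in video_tags.items()
                if len(set(feature_words) & set(tags)) >= min_overlap]

    def confirm_candidate(candidates, choice_index):
        # The second confirmation information picks one candidate.
        return candidates[choice_index]

    candidates = find_candidate_objects(
        ["red", "vase"],
        {"vase_01": ["red", "vase"], "vase_02": ["blue", "vase"]})
    retrieved_object = confirm_candidate(candidates, 0)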
S305, acquiring first information of the retrieved object according to the retrieved object.
S306, displaying the first information on the interface for playing the video stream.
The implementation of S305 to S306 in this embodiment may specifically refer to the description related to S102 to S103 in the embodiment shown in fig. 1, and is not repeated herein.
In this embodiment, the retrieval instruction includes feature information of the retrieved object. The candidate objects determined from the feature information are displayed on the playing interface, the retrieved object is determined from the second confirmation information input by the user, and the first information of the retrieved object is then acquired and displayed. Because the retrieved object is determined from feature information, the user can input retrieval instructions in many forms, user operation is simplified, and the retrieved object can still be determined from a description of its features even when the user cannot name it explicitly, improving the user experience.
On the basis of any of the above embodiments, the video processing method provided by the present invention is further described below with reference to Fig. 5. Fig. 5 is a fourth schematic flowchart of the video processing method provided by the present invention. As shown in Fig. 5, the video processing method provided by this embodiment includes:
S401, receiving a retrieval instruction input by a user while the video stream is being played.
S402, acquiring first information of the retrieved object according to the retrieved object.
The implementation of S401 to S402 in this embodiment may specifically refer to the description related to S101 to S102 in the embodiment shown in fig. 1, and is not repeated herein.
S403, acquiring second information of the retrieved object.
The second information is information of an associated object having an association relationship with the retrieved object; the associated object is an object that appears in the played video stream.
S402 and S403 in this embodiment may be executed simultaneously or sequentially, which is not limited in this disclosure.
For example, when the retrieved object does not appear in the video stream, an associated object of the retrieved object may be determined in the video stream while the first information is being retrieved. For instance, when the retrieved object is actor A, actor B, who has worked with or attended the same school as actor A, may be identified in the video stream, actor B is taken as the associated object, and second information about actor B is acquired. The second information may be the association between actor A and actor B, or retrieval information about actor B.
Alternatively, the retrieved object may itself be an object that appears in the video stream. In this case, the associated object may be the object in the video stream with the highest retrieval frequency in the search engine.
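A minimal sketch of how the associated object might be chosen, assuming a table of known relations and per-object retrieval frequencies (both made up for illustration): a related in-stream object is preferred, and otherwise the in-stream object with the highest retrieval frequency is used.

    def pick_associated_object(retrieved_object, stream_objects, relations, frequency):
        # Prefer an in-stream object with a known relation to the retrieved
        # object; otherwise fall back to the most frequently retrieved one.
        related = [o for o in stream_objects if (retrieved_object, o) in relations]
        if related:
            return related[0]
        return max(stream_objects, key=lambda o: frequency.get(o, 0))

    stream_objects = ["actor_B", "vase_01"]
    relations = {("actor_A", "actor_B"): "co-starred in film X"}
    print(pick_associated_object("actor_A", stream_objects, relations,
                                 {"actor_B": 900, "vase_01": 40}))   # actor_B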
S404, displaying the first information and the second information on an interface for playing the video stream.
The implementation of S404 in this embodiment may specifically refer to the related description of S103 in the foregoing embodiment, which is not described herein again.
In this embodiment, while the information of the retrieved object is being acquired, an associated object of the retrieved object may be determined in the video stream, the information of the associated object acquired, and the information of the retrieved object and of the associated object displayed together on the playing interface. This embodiment thus provides richer retrieval results for the retrieval instruction input by the user.
In any of the above embodiments, the first information and the second information retrieved according to the retrieved object may be AR information.
Fig. 6 is a schematic structural diagram of a video processing apparatus according to the first embodiment of the present invention, as shown in fig. 6, the video processing apparatus includes:
a receiver 501, configured to receive a retrieval instruction input by a user when playing a video stream, where the retrieval instruction is used to indicate a retrieved object;
a processor 502, configured to obtain first information of the retrieved object according to the retrieved object, where the first information is information retrieved according to the retrieved object;
and a display 503 for displaying the first information on an interface for playing the video stream.
For example, the receiver 501 may be a voice collecting device such as a microphone, or may be a remote controller, a mobile phone, or other terminal separately provided from the video processing device.
The video processing apparatus provided in this embodiment is similar to the video processing method in terms of the principle and technical effect, and is not described herein again.
Optionally, the retrieval instruction includes feature information of the retrieved object; the processor 502 is further configured to determine the retrieved object in the played video stream according to the feature information of the retrieved object.
Optionally, the display 503 is further configured to display the retrieved object on an interface for playing the video stream;
the receiver 501 is further configured to receive first confirmation information input by the user, wherein the first confirmation information indicates confirmation that the first information of the retrieved object is to be retrieved.
Optionally, the retrieval instruction includes feature information of the retrieved object; the processor 502 is further configured to obtain at least one candidate object in the played video stream according to the feature information of the retrieved object;
the display 503 is further configured to display each candidate object on an interface for playing the video stream;
the receiver 501 is further configured to receive second confirmation information input by the user, wherein the second confirmation information determines one of the candidate objects as the retrieved object.
Optionally, the processor 502 is further configured to acquire second information of the retrieved object, where the second information is information of an associated object having an association relationship with the retrieved object, and the associated object is an object appearing in the played video stream;
the display 503 is specifically configured to display the first information and the second information on an interface for playing the video stream.
Optionally, the receiver 501 is specifically configured to:
receive a voice retrieval instruction input by the user; or
receive a text retrieval instruction input by the user; or
receive a gesture retrieval instruction input by the user.
Optionally, the receiver 501 is specifically configured to receive the retrieval instruction sent by the user through the terminal.
Optionally, the display 503 is specifically configured to display the retrieved object and the first information on an interface for playing the video stream.
Optionally, the retrieval instruction further indicates a retrieval keyword; the processor 502 is specifically configured to acquire the first information of the retrieved object according to the retrieved object and the retrieval keyword, where the first information is information retrieved according to the retrieved object and the retrieval keyword.
Optionally, the first information and the second information are AR information.
Fig. 7 is a second schematic structural diagram of a video processing apparatus according to the present invention. The apparatus may be a terminal device such as a smartphone, a tablet computer, or a computer. As shown in Fig. 7, the video processing apparatus includes: a memory 601 and at least one processor 602.
A memory 601 for storing program instructions.
The processor 602 is configured to implement the video processing method in this embodiment when the program instructions are executed, and specific implementation principles may be referred to in the foregoing embodiments, which are not described herein again.
The video processing apparatus may further comprise an input/output interface 603.
The input/output interface 603 may include a separate output interface and input interface, or may be an integrated interface combining input and output. The output interface is used to output data and the input interface is used to acquire input data; the output data is a general term for what the method embodiments output, and the input data is a general term for what the method embodiments take as input.
The present invention also provides a readable storage medium storing execution instructions; when at least one processor of the video processing apparatus executes the instructions, the video processing method of the above embodiments is implemented.
The present invention also provides a program product comprising execution instructions stored in a readable storage medium. The at least one processor of the video processing apparatus may read the execution instruction from the readable storage medium, and the execution of the execution instruction by the at least one processor causes the video processing apparatus to implement the video processing method provided in the various embodiments described above.
In the embodiments provided in the present invention, it should be understood that the disclosed apparatus and method may be implemented in other ways. For example, the above-described apparatus embodiments are merely illustrative, and for example, the division of the units is only one logical division, and other divisions may be realized in practice, for example, a plurality of units or components may be combined or integrated into another system, or some features may be omitted, or not executed. In addition, the shown or discussed mutual coupling or direct coupling or communication connection may be an indirect coupling or communication connection through some interfaces, devices or units, and may be in an electrical, mechanical or other form.
The units described as separate parts may or may not be physically separate, and parts displayed as units may or may not be physical units, may be located in one place, or may be distributed on a plurality of network units. Some or all of the units can be selected according to actual needs to achieve the purpose of the solution of the embodiment.
In addition, functional units in the embodiments of the present invention may be integrated into one processing unit, or each unit may exist alone physically, or two or more units are integrated into one unit. The integrated unit can be realized in a form of hardware, or in a form of hardware plus a software functional unit.
The integrated unit implemented in the form of a software functional unit may be stored in a computer readable storage medium. The software functional unit is stored in a storage medium and includes several instructions to enable a computer device (which may be a personal computer, a server, or a network device) or a processor (processor) to execute some steps of the methods according to the embodiments of the present invention. And the aforementioned storage medium includes: a U disk, a removable hard disk, a Read-Only Memory (ROM), a Random Access Memory (RAM), a magnetic disk or an optical disk, and other various media capable of storing program codes.
In the foregoing embodiments of the network device or the terminal device, it should be understood that the Processor may be a Central Processing Unit (CPU), or may be another general-purpose Processor, a Digital Signal Processor (DSP), an Application Specific Integrated Circuit (ASIC), or the like. A general purpose processor may be a microprocessor or the processor may be any conventional processor or the like. The steps of a method disclosed in connection with the present application may be embodied directly in a hardware processor, or in a combination of the hardware and software modules in the processor.
Finally, it should be noted that: the above embodiments are only used to illustrate the technical solution of the present invention, and not to limit the same; while the invention has been described in detail and with reference to the foregoing embodiments, it will be understood by those skilled in the art that: the technical solutions described in the foregoing embodiments may still be modified, or some or all of the technical features may be equivalently replaced; and the modifications or the substitutions do not make the essence of the corresponding technical solutions depart from the scope of the technical solutions of the embodiments of the present invention.

Claims (16)

1. A video processing method, comprising:
receiving a retrieval instruction input by a user when a video stream is played, wherein the retrieval instruction is used for indicating a retrieved object, the retrieval instruction comprises feature information of the retrieved object, and the retrieved object is an object which does not appear in the video stream;
acquiring first information of the retrieved object according to the retrieved object, wherein the first information is information retrieved according to the retrieved object;
displaying the first information on an interface for playing the video stream;
the method further comprising:
acquiring second information of the retrieved object, wherein the second information is information of an associated object having an association relationship with the retrieved object, and the associated object is an object appearing in the played video stream;
wherein the displaying the first information on the interface for playing the video stream comprises:
displaying the first information and the second information on the interface for playing the video stream.
2. The method of claim 1, further comprising:
displaying the retrieved object on an interface that plays the video stream;
receiving first confirmation information input by the user, wherein the first confirmation information indicates confirmation that the first information of the retrieved object is to be retrieved.
3. The method of claim 1 or 2, wherein the receiving of the user-input retrieval instruction comprises:
receiving a voice retrieval instruction input by the user; or
receiving a text retrieval instruction input by the user; or
receiving a gesture retrieval instruction input by the user.
4. The method of claim 1, wherein receiving a user-entered retrieval instruction comprises:
receiving the retrieval instruction sent by the user through a terminal.
5. The method according to claim 1 or 2, wherein the displaying the first information on the interface for playing the video stream comprises:
displaying the retrieved object and the first information on an interface on which the video stream is played.
6. The method of claim 1 or 2, wherein the retrieval instruction is further used for indicating a retrieval keyword, and the acquiring first information of the retrieved object according to the retrieved object comprises:
acquiring the first information of the retrieved object according to the retrieved object and the retrieval keyword, wherein the first information is information retrieved according to the retrieved object and the retrieval keyword.
7. The method of claim 1, wherein the first information and the second information are AR information.
8. A video processing apparatus, comprising:
a receiver, configured to receive a retrieval instruction input by a user when a video stream is played, wherein the retrieval instruction is used for indicating a retrieved object, the retrieval instruction comprises feature information of the retrieved object, and the retrieved object is an object which does not appear in the video stream;
a processor, configured to acquire first information of the retrieved object according to the retrieved object, where the first information is information retrieved according to the retrieved object;
a display that displays the first information on an interface on which the video stream is played;
the processor is further configured to acquire second information of the retrieved object, wherein the second information is information of an associated object having an association relationship with the retrieved object, and the associated object is an object appearing in the played video stream;
the display is specifically configured to display the first information and the second information on an interface on which the video stream is played.
9. The apparatus of claim 8, wherein the display is further configured to display the retrieved object on an interface that plays the video stream;
the receiver is further configured to receive first confirmation information input by the user, wherein the first confirmation information indicates confirmation that the first information of the retrieved object is to be retrieved.
10. The apparatus according to claim 8 or 9, wherein the receiver is specifically configured to:
receive a voice retrieval instruction input by the user; or
receive a text retrieval instruction input by the user; or
receive a gesture retrieval instruction input by the user.
11. The apparatus according to claim 8, wherein the receiver is specifically configured to receive the retrieval instruction sent by the user through the terminal.
12. The apparatus according to claim 8 or 9, wherein the display is specifically configured to display the retrieved object and the first information on an interface playing the video stream.
13. The apparatus of claim 12, wherein the retrieval instruction is further used for indicating a retrieval keyword; the processor is specifically configured to acquire the first information of the retrieved object according to the retrieved object and the retrieval keyword; and the first information is information retrieved according to the retrieved object and the retrieval keyword.
14. The apparatus of claim 8, wherein the first information and the second information are AR information.
15. A video processing apparatus, comprising: at least one processor and memory;
the memory stores computer-executable instructions;
the at least one processor executing the computer-executable instructions stored by the memory causes the video processing device to perform the method of any of claims 1-7.
16. A computer-readable storage medium having computer-executable instructions stored thereon which, when executed by a processor, implement the method of any one of claims 1-7.
CN201811005371.3A 2018-08-30 2018-08-30 Video processing method and device Active CN109151599B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201811005371.3A CN109151599B (en) 2018-08-30 2018-08-30 Video processing method and device

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201811005371.3A CN109151599B (en) 2018-08-30 2018-08-30 Video processing method and device

Publications (2)

Publication Number Publication Date
CN109151599A CN109151599A (en) 2019-01-04
CN109151599B true CN109151599B (en) 2020-10-09

Family

ID=64829583

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201811005371.3A Active CN109151599B (en) 2018-08-30 2018-08-30 Video processing method and device

Country Status (1)

Country Link
CN (1) CN109151599B (en)

Families Citing this family (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111314759B (en) * 2020-03-02 2021-08-10 腾讯科技(深圳)有限公司 Video processing method and device, electronic equipment and storage medium
CN113490064A (en) * 2020-09-11 2021-10-08 青岛海信电子产业控股股份有限公司 Video playing method and device and server

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101566990A (en) * 2008-04-25 2009-10-28 李奕 Search method and search system embedded into video
CN103365959A (en) * 2013-06-03 2013-10-23 深圳市爱渡飞科技有限公司 Voice search method and voice search device
CN105069005A (en) * 2015-06-24 2015-11-18 青岛海尔智能家电科技有限公司 Data searching method and data searching device
KR20170103442A (en) * 2016-03-04 2017-09-13 (주)아이티유 Real Time Video Contents Transaction System and Method Using GPS
CN107205172A (en) * 2016-03-18 2017-09-26 百度在线网络技术(北京)有限公司 A kind of method and device that search is initiated based on video content

Family Cites Families (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103544266B (en) * 2013-10-16 2017-05-31 北京奇虎科技有限公司 A kind of method and device for searching for suggestion word generation
CN104102723B (en) * 2014-07-21 2017-07-25 百度在线网络技术(北京)有限公司 Search for content providing and search engine
CN104598630A (en) * 2015-02-05 2015-05-06 北京航空航天大学 Event indexing and retrieval method and device
CN105205689A (en) * 2015-08-26 2015-12-30 深圳市万音达科技有限公司 Method and system for recommending commercial tenant
CN105279252B (en) * 2015-10-12 2017-12-26 广州神马移动信息科技有限公司 Excavate method, searching method, the search system of related term
CN105787102B (en) * 2016-03-18 2019-04-26 北京搜狗科技发展有限公司 Searching method, device and the device for search
CN108170731A (en) * 2017-12-13 2018-06-15 腾讯科技(深圳)有限公司 Data processing method, device, computer storage media and server


Also Published As

Publication number Publication date
CN109151599A (en) 2019-01-04

Similar Documents

Publication Publication Date Title
CN107278374B (en) Interactive advertisement display method, terminal and smart city interactive system
US11520824B2 (en) Method for displaying information, electronic device and system
CN107818180B (en) Video association method, video display device and storage medium
CN103686344B (en) Strengthen video system and method
US11409817B2 (en) Display apparatus and method of controlling the same
US20220147741A1 (en) Video cover determining method and device, and storage medium
CN106021496A (en) Video search method and video search device
US20150012840A1 (en) Identification and Sharing of Selections within Streaming Content
JP2003157288A (en) Method for relating information, terminal equipment, server device, and program
JP6474393B2 (en) Music playback method, apparatus and terminal device based on face album
CN102710991A (en) Information processing apparatus, information processing method, and program
CN110602516A (en) Information interaction method and device based on live video and electronic equipment
CN113569037A (en) Message processing method and device and readable storage medium
CN112672208B (en) Video playing method, device, electronic equipment, server and system
EP2835981A1 (en) Information processing device, information processing method, and program
CN112291614A (en) Video generation method and device
CN113612830A (en) Information pushing method and device, terminal equipment and storage medium
CN110446104A (en) Method for processing video frequency, device and storage medium
JP2014068290A (en) Image processing apparatus, image processing method, and program
CN109151599B (en) Video processing method and device
CN110475158B (en) Video learning material providing method and device, electronic equipment and readable medium
CN113901241B (en) Page display method and device, electronic equipment and storage medium
CN111542817A (en) Information processing device, video search method, generation method, and program
CN113688260A (en) Video recommendation method and device
CN113486212A (en) Search recommendation information generation and display method, device, equipment and storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant