CN116909447A - Video searching method and device - Google Patents


Info

Publication number
CN116909447A
Authority
CN
China
Prior art keywords
interface
application
video
selected object
displaying
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202310903756.6A
Other languages
Chinese (zh)
Inventor
王文洲 (Wang Wenzhou)
徐洁 (Xu Jie)
王媛 (Wang Yuan)
何远银 (He Yuanyin)
Current Assignee
Beijing Youku Technology Co Ltd
Original Assignee
Beijing Youku Technology Co Ltd
Priority date
Filing date
Publication date
Application filed by Beijing Youku Technology Co Ltd
Priority claimed from application CN202310903756.6A
Publication of CN116909447A
Legal status: Pending

Classifications

    • G — PHYSICS
    • G06 — COMPUTING; CALCULATING OR COUNTING
    • G06F — ELECTRIC DIGITAL DATA PROCESSING
    • G06F 3/00 — Input arrangements for transferring data to be processed into a form capable of being handled by the computer; output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F 3/01 — Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F 3/048 — Interaction techniques based on graphical user interfaces [GUI]
    • G06F 3/0481 — Interaction techniques based on specific properties of the displayed interaction object or a metaphor-based environment, e.g. interaction with desktop elements like windows or icons, or assisted by a cursor's changing behaviour or appearance
    • G06F 3/04817 — Interaction techniques using icons
    • G — PHYSICS
    • G06 — COMPUTING; CALCULATING OR COUNTING
    • G06F — ELECTRIC DIGITAL DATA PROCESSING
    • G06F 16/00 — Information retrieval; database structures therefor; file system structures therefor
    • G06F 16/70 — Information retrieval of video data
    • G06F 16/73 — Querying
    • G06F 16/738 — Presentation of query results
    • G — PHYSICS
    • G06 — COMPUTING; CALCULATING OR COUNTING
    • G06F — ELECTRIC DIGITAL DATA PROCESSING
    • G06F 3/00 — Input arrangements for transferring data to be processed into a form capable of being handled by the computer; output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F 3/01 — Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F 3/048 — Interaction techniques based on graphical user interfaces [GUI]
    • G06F 3/0484 — Interaction techniques for the control of specific functions or operations, e.g. selecting or manipulating an object, an image or a displayed text element, setting a parameter value or selecting a range
    • G06F 3/04842 — Selection of displayed objects or displayed text elements

Abstract

The present disclosure relates to a video search method and apparatus. The method includes: during presentation of a first interface of a first application, in response to detecting a search trigger operation on a selected object, displaying a second interface of a second application, and displaying, in the second interface, the second application's video search results for the selected object. With a simple operation, the user can invoke the second application to perform a video search while browsing the first application; the operation takes little time and the user experience is good. The boundary between applications is broken down, giving users a direct-jump video search experience.

Description

Video searching method and device
Technical Field
The present disclosure relates to the technical field of video search, and in particular to a video search method and apparatus.
Background
The barriers to interaction between different applications (apps) on terminal devices such as mobile phones are high, which blocks the direct connection between people and information and lowers the efficiency with which information circulates and is used. As a result, viewing more video information is costly for the user and the experience falls short. How to provide a video search scheme based on interaction between applications is a technical problem to be solved.
Disclosure of Invention
In view of this, the present disclosure provides a video searching method and apparatus.
According to an aspect of the present disclosure, there is provided a video searching method, the method including:
during presentation of a first interface of a first application, in response to detecting a search trigger operation on a selected object, presenting a second interface of a second application,
and displaying, in the second interface, the second application's video search results for the selected object.
In this way, while the user is browsing the first application, the second application can be controlled, based on the user's search trigger operation on the selected object, to display a second interface containing video search results. With a simple operation, the user can invoke the second application to perform a video search without leaving the first application; the operation takes little time and the user experience is good. The boundary between applications is broken down, giving users a direct-jump video search experience.
In one possible implementation, during presentation of a first interface of a first application, displaying a second interface of a second application in response to detecting a search trigger operation on a selected object includes:
during presentation of the first interface of the first application, if a selection operation on an object to be selected in the first interface is detected, displaying an identifier of the second application;
if a trigger operation on the selected object determined by the selection operation is detected, displaying the second interface;
wherein the search trigger operation includes the selection operation and the trigger operation.
In this way, displaying the identifier of the second application prompts the user that a video search for the selected object can be performed with the second application. The user may then choose whether to initiate a video search for the selected object by issuing a trigger operation, as needed.
In one possible implementation, determining that a trigger operation on the selected object of the selection operation is detected includes:
if a drag operation on the selected object is detected and the drag end point falls within the trigger range of the identifier, determining that the trigger operation on the selected object is detected.
In one possible implementation, during presentation of a first interface of a first application, displaying a second interface of a second application in response to detecting a search trigger operation on a selected object includes:
during presentation of the first interface of the first application, displaying the second interface if both a selection operation on an object to be selected in the first interface and a trigger operation on the selected object of that selection operation are detected, where the search trigger operation includes the selection operation and the trigger operation,
and the trigger operation includes at least one of:
a double-click operation detected on the selected object;
a long-press operation detected on the selected object;
a sliding operation detected on the selected object whose sliding distance exceeds a distance threshold.
In this way, the implementation of the trigger operation can be configured according to actual needs, meeting different users' preferences for how the trigger operation is performed.
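The three trigger variants above can be sketched as follows. This is an illustrative model only, not the patent's implementation; the `Gesture` type and the threshold values are assumptions chosen for the example.

```python
from dataclasses import dataclass

# Assumed thresholds for illustration; in practice these would be
# configurable, as the disclosure notes.
DISTANCE_THRESHOLD = 80.0   # minimum slide distance, in pixels
LONG_PRESS_MS = 500         # minimum long-press duration, in milliseconds

@dataclass
class Gesture:
    kind: str           # "double_click" | "long_press" | "slide"
    duration_ms: int = 0
    distance: float = 0.0

def is_trigger_operation(g: Gesture) -> bool:
    """Return True if the gesture on the selected object counts as a
    search trigger operation under the three variants described."""
    if g.kind == "double_click":
        return True
    if g.kind == "long_press":
        return g.duration_ms >= LONG_PRESS_MS
    if g.kind == "slide":
        return g.distance > DISTANCE_THRESHOLD
    return False
```

Each variant maps to one branch, so a product could enable any subset of the three according to user settings.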
In one possible implementation, presenting the identifier of the second application includes at least one of the following:
displaying a floating window above the first interface, and displaying an icon of the second application in the floating window;
displaying an icon of the second application in a side region of the interface;
displaying a trigger window overlaid on the first interface, and displaying the icon of the second application in the trigger window.
In this way, the requirements of different users for displaying the identification of the second application can be met.
In one possible implementation, the method further includes:
if the object to be selected is text, determining the selected portion of the text as the selected object according to the selection operation;
if the object to be selected is an image, determining the selected text and/or targets in the image as the selected object according to the selection operation;
and if the object to be selected is a short video, determining at least one of the short video's title, text in its video frames, and targets in the video as the selected object according to the selection operation.
Therefore, different ways of determining the selected object are used for different types of objects to be selected, improving the accuracy of the determination.
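The per-type determination above can be expressed as a simple dispatch. This is a minimal sketch under assumed data shapes (the dictionary keys such as `selected_text`, `frame_texts`, and `targets` are hypothetical, not from the patent):

```python
def determine_selected_object(candidate: dict, selection: dict) -> list:
    """Map the user's selection operation on a candidate object to the
    selected object(s), depending on the candidate's content type."""
    ctype = candidate["type"]
    if ctype == "text":
        # the selected portion of the text becomes the selected object
        return [selection["selected_text"]]
    if ctype == "image":
        # selected text and/or recognized targets in the image
        return selection.get("texts", []) + selection.get("targets", [])
    if ctype == "short_video":
        # any of: title, text in a video frame, targets in the video
        parts = []
        if "title" in selection:
            parts.append(selection["title"])
        parts += selection.get("frame_texts", [])
        parts += selection.get("targets", [])
        return parts
    raise ValueError(f"unsupported candidate type: {ctype}")
```

Keeping the type check in one place makes it easy to add further content types later without touching the trigger-detection logic.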
In one possible implementation, the method further includes:
determining keywords that characterize the selected object according to the selected object;
and generating a video search request for the selected object according to the keywords and sending it to the second application, so that the second application can perform a video search based on the keywords in the request.
In this way, the second application can search based on the keywords, improving the speed, efficiency, and accuracy of its video search.
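A request of this kind could be serialized in any inter-app messaging format; the JSON layout below is purely an assumption for illustration (the patent does not specify a wire format):

```python
import json

def build_video_search_request(selected_object: str, keywords: list) -> str:
    """Package keywords characterizing the selected object into a request
    string that the second application can parse and search on."""
    return json.dumps({
        "action": "video_search",          # assumed action name
        "selected_object": selected_object,
        "keywords": keywords,
    })

def parse_video_search_request(payload: str) -> list:
    """The second application's side: extract the keywords to search on."""
    return json.loads(payload)["keywords"]
```

Sending keywords rather than the raw selected content keeps the request small and lets the second application go straight to its search backend.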
In one possible implementation, the second interface includes a video details page interface or a video search results page interface.
According to another aspect of the present disclosure, there is provided a video search apparatus including:
the operation detection module, configured to display a second interface of a second application in response to detecting a search trigger operation on a selected object during presentation of a first interface of a first application;
and the result display module, configured to display, in the second interface, the second application's video search results for the selected object.
In one possible implementation, the operation detection module includes:
the identifier presentation sub-module, configured to display the identifier of the second application if a selection operation on an object to be selected in the first interface is detected during presentation of the first interface of the first application;
the first operation detection sub-module, configured to display the second interface if it is determined that a trigger operation on the selected object of the selection operation is detected;
wherein the search trigger operation includes the selection operation and the trigger operation.
In one possible implementation, determining that a trigger operation on the selected object of the selection operation is detected includes: if a drag operation on the selected object is detected and the drag end point falls within the trigger range of the identifier, determining that the trigger operation on the selected object is detected.
In one possible implementation, the operation detection module includes:
a second operation detection sub-module, configured to display the second interface if both a selection operation on an object to be selected in the first interface and a trigger operation on the selected object of that selection operation are detected during presentation of the first interface of the first application, where the search trigger operation includes the selection operation and the trigger operation,
and the trigger operation includes at least one of:
a double-click operation detected on the selected object;
a long-press operation detected on the selected object;
a sliding operation detected on the selected object whose sliding distance exceeds a distance threshold.
In one possible implementation, presenting the identifier of the second application includes at least one of the following:
displaying a floating window above the first interface, and displaying an icon of the second application in the floating window;
displaying an icon of the second application in a side region of the interface;
displaying a trigger window overlaid on the first interface, and displaying the icon of the second application in the trigger window.
In one possible implementation, the apparatus further includes:
the first determining module, configured to determine the selected portion of the text as the selected object according to the selection operation if the object to be selected is text;
the second determining module, configured to determine the selected text and/or targets in the image as the selected object according to the selection operation if the object to be selected is an image;
and the third determining module, configured to determine at least one of the short video's title, text in its video frames, and targets in the video as the selected object according to the selection operation if the object to be selected is a short video.
In one possible implementation, the apparatus further includes:
the keyword determining module, configured to determine keywords that characterize the selected object according to the selected object;
and the request generation module, configured to generate a video search request for the selected object according to the keywords and send it to the second application, so that the second application can perform a video search based on the keywords in the request.
In one possible implementation, the second interface includes a video details page interface or a video search results page interface.
The beneficial effects of the video search apparatus provided above are the same as those of the video search method provided above, and are not repeated here.
According to another aspect of the present disclosure, there is provided a video search apparatus including: a processor; a memory for storing processor-executable instructions; wherein the processor is configured to implement the above-described method when executing the instructions stored by the memory.
According to another aspect of the present disclosure, there is provided a non-transitory computer readable storage medium having stored thereon computer program instructions, wherein the computer program instructions, when executed by a processor, implement the above-described method.
According to another aspect of the present disclosure, there is provided a computer program product comprising a computer readable code, or a non-transitory computer readable storage medium carrying computer readable code, which when run in a processor of an electronic device, performs the above method.
Other features and aspects of the present disclosure will become apparent from the following detailed description of exemplary embodiments, which proceeds with reference to the accompanying drawings.
Drawings
The accompanying drawings, which are incorporated in and constitute a part of this specification, illustrate exemplary embodiments, features and aspects of the present disclosure and together with the description, serve to explain the principles of the disclosure.
Fig. 1 shows a flowchart of a video search method according to an embodiment of the present disclosure.
Fig. 2-4 show a flow diagram of a video search method according to an embodiment of the present disclosure.
Fig. 5 shows a block diagram of a video search device according to an embodiment of the present disclosure.
Fig. 6 is a block diagram illustrating an apparatus 800 for video searching according to an exemplary embodiment.
Detailed Description
Various exemplary embodiments, features and aspects of the disclosure will be described in detail below with reference to the drawings. In the drawings, like reference numbers indicate identical or functionally similar elements. Although various aspects of the embodiments are illustrated in the accompanying drawings, the drawings are not necessarily drawn to scale unless specifically indicated.
The word "exemplary" is used herein to mean "serving as an example, embodiment, or illustration." Any embodiment described herein as "exemplary" is not necessarily to be construed as preferred or advantageous over other embodiments.
In addition, numerous specific details are set forth in the following detailed description in order to provide a better understanding of the present disclosure. It will be understood by those skilled in the art that the present disclosure may be practiced without some of these specific details. In some instances, methods, means, elements, and circuits well known to those skilled in the art have not been described in detail in order not to obscure the present disclosure.
In the related art, a user can enter keywords in a video application to search for videos and then find an interesting video to watch. However, while browsing an application other than the video application, if the user comes across interesting content and wants to watch related videos, the user must exit the current application, open the video application, and enter keywords there to search. The required operations are cumbersome and time-consuming, and the user experience is poor.
To solve this technical problem, the present disclosure provides a video search method and apparatus, which can trigger a second application to perform a video search based on an object selected by the user while the user browses a first application, so that the second application displays a second interface with the video search results. With a simple operation, the user can invoke the second application to search for videos without leaving the first application; the operation takes little time and the user experience is good. The boundary between applications is broken down, giving users a direct-jump video search experience.
Fig. 1 shows a flowchart of a video search method according to an embodiment of the present disclosure. Figs. 2-4 show schematic diagrams of the video search method according to embodiments of the present disclosure. As shown in fig. 1, the method includes steps S101 and S102 and may be applied to a terminal device. The implementation of the video search method provided by the present disclosure is illustrated below with reference to figs. 1-4. The selected objects in figs. 2-4 are text, an image, and a short video, respectively, illustrating the video search method for different types of selected objects.
In step S101, in a process of presenting a first interface of a first application, a second interface of a second application is presented in response to detecting a search trigger operation for a selected object.
In a possible implementation, in step S101, if a search trigger operation on a selected object is detected, a video search request for the selected object may be sent to the second application, so that the second application performs a video search based on the request.
In some embodiments, the first application may be any application that presents text, images (including still images, moving images, and other types), short videos, or other content to the user, and the first interface may be any interactive interface that the first application can present during use. The application may be software serving various user needs: for example, social software for communicating through text, voice, and video; image processing software with which users can view and edit images; short-video social software with which users can view, edit, and publish short videos; or office software with which users can view and edit text content. Those skilled in the art may set the software type of the first application according to actual needs, which is not limited by the present disclosure.
The search trigger operation may be any form of operation capable of triggering the second application to perform a video search for the selected object and present the video search results. In one possible implementation, the search trigger operation may include a selection operation and a trigger operation. The selection operation determines the selected object from the objects to be selected presented in the first interface, and the trigger operation triggers the second application's video search for the selected object. In some embodiments, the first interface may display candidate objects that the user can select for video search. The object to be selected may be text, an image, a short video, and so on. The selection operation is an operation issued by the user to select the object to be selected in the first interface; it may be the same or different for different types of objects, and may be a click, a long press, or a similar operation on the object to be selected, which is not limited by the present disclosure. The object the user intends to select can be determined from the selection operation, and the selected object is then determined. For example, as shown in fig. 2, the first interface T11 is a chat interface of instant messaging social software, in which the objects to be selected may be text or moving pictures (i.e., animated images) exchanged between users, and the selection operation may be a long press issued by the user on an object to be selected such as "actor W" or a "moving picture". The object to be selected shown in the first interface T12 of fig. 3 is an image containing a target, the object shown in the first interface T13 of fig. 4 is a short video, and the selection operation may be a long press, double click, click, or similar operation issued by the user on the "short video" or the "image containing a target".
In one possible implementation, step S101 may include an "identifier presentation step" and a "search trigger step", which are further described below.
The "identifier presentation step" may include: during presentation of the first interface of the first application, if a selection operation on an object to be selected in the first interface is detected, presenting the identifier of the second application. In this way, the user may be prompted that a video search for the object he or she selected can be performed with the second application.
In one possible implementation, presenting the identification of the second application may include at least one of the following ways one-three. In this way, the requirements of different users for displaying the identification of the second application can be met.
Mode one: and displaying a floating window above the first interface, and displaying the icon of the second application in the floating window. In this implementation, as shown in fig. 2, a floating window M may be displayed directly above the first interface T11, and further, an icon Q of the second application may be displayed in the floating window M.
Mode two: displaying the icon of the second application in a side region of the interface. In some embodiments, as shown in fig. 3, the entire first interface T12 may be divided into a main area S1 and a side area S2; the content that the first interface previously presented continues to be displayed in the main area S1, while the icon Q of the second application is presented to the user in the side area S2. The side area S2 may be located on the left, right, top, or bottom of the first interface T12, and its position may be adjusted according to the user's settings, which is not limited by the present disclosure. In some embodiments, the whole display interface may instead be divided into the first interface and a side area: the first interface continues to display its previous content, and the icon of the second application is displayed in the side area of the whole display interface. The side area may likewise be located on the left, right, top, or bottom of the whole display interface, which is not limiting to the present disclosure.
Mode three: and displaying a trigger window at the upper layer of the first interface, and displaying the icon of the second application in the trigger window. In this implementation, as shown in fig. 4, a trigger window may be displayed directly above the first interface T13, and further, an icon Q of the second application may be displayed in the trigger window.
It will be appreciated that the above implementations of displaying the identifier of the second application are merely a few illustrative examples provided by the embodiments of this disclosure; those skilled in the art may set the implementation according to actual needs, which is not limited by the present disclosure.
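Mode two's main-area/side-area split can be sketched as a layout computation. The rectangles, ratio, and function name below are assumptions for illustration; the patent leaves the concrete layout to the implementer.

```python
def split_interface(width, height, side="right", ratio=0.15):
    """Divide the full interface into a main area S1 (existing content)
    and a side area S2 (second application's icon). Rectangles are
    (x, y, w, h); `side` and `ratio` are illustrative parameters."""
    if side in ("right", "left"):
        s2_w = int(width * ratio)
        if side == "right":
            s1 = (0, 0, width - s2_w, height)
            s2 = (width - s2_w, 0, s2_w, height)
        else:
            s2 = (0, 0, s2_w, height)
            s1 = (s2_w, 0, width - s2_w, height)
    else:  # "top" or "bottom"
        s2_h = int(height * ratio)
        if side == "bottom":
            s1 = (0, 0, width, height - s2_h)
            s2 = (0, height - s2_h, width, s2_h)
        else:
            s2 = (0, 0, width, s2_h)
            s1 = (0, s2_h, width, height - s2_h)
    return s1, s2
```

Because the side can be any of the four edges, the user-adjustable placement described above reduces to picking the `side` argument.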
In some embodiments, the method may further include: displaying a search trigger prompt while the icon of the second application is displayed. The search trigger prompt may be displayed directly in the icon of the second application or elsewhere in the first interface, and may take the form of text, voice, animation, and so on; those skilled in the art may set the display mode of the prompt according to actual needs. In this way, the user can trigger a video search for the selected object based on the prompt. In some embodiments, the prompt may also tell the user how to issue the trigger operation, so that the user can determine simply and intuitively how to operate in order to have the second application perform a video search for the selected object.
The "search trigger step" may include: if a trigger operation on the selected object determined by the selection operation is detected, displaying the second interface, for example the second interface T21 shown in figs. 2-3 or the second interface T22 shown in fig. 4. The "search trigger step" may further include: if such a trigger operation is detected, sending a video search request for the selected object to the second application. The selected object is determined from the selection operation and the object to be selected. In this way, the user can choose whether to initiate a video search for the selected object by issuing a trigger operation, as needed.
In this implementation, in the "search trigger step", determining that a trigger operation on the selected object determined by the selection operation is detected may include: if a drag operation on the selected object is detected and the drag end point falls within the trigger range of the identifier, determining that the trigger operation on the selected object is detected. The trigger range of the identifier may include at least the area in which the identifier of the second application is presented. For example, as shown in fig. 2, if it is detected that the user drags the selected object "actor W" from its original position to the region where the icon Q is located and then stops, it can be determined that the drag end point is within the trigger range of the icon Q, and therefore that the trigger operation on the selected object is detected. Optionally, the border of the trigger range may be displayed near the icon Q, and a guiding indicator such as a path for dragging the selected object into the trigger range may be displayed to guide the user. In some embodiments, the trigger operation in the "search trigger step" may also be a single click, double click, long press, or similar operation on the selected object, which is not limited by this disclosure. For example, as shown in fig. 4, if a long press by the user on the selected object "short video" is detected, it may be determined in the "search trigger step" that the trigger operation on the selected object is detected.
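The drag-to-icon check above amounts to a point-in-rectangle hit test. A minimal sketch, assuming screen coordinates and the type names below (both are illustrative, not from the patent):

```python
from dataclasses import dataclass

@dataclass
class Rect:
    x: float
    y: float
    w: float
    h: float

    def contains(self, px, py):
        """True if point (px, py) lies inside this rectangle."""
        return self.x <= px <= self.x + self.w and self.y <= py <= self.y + self.h

def drag_triggers_search(icon_trigger_range, drag_end):
    """The trigger operation is detected when the drag of the selected
    object ends inside the identifier's trigger range."""
    return icon_trigger_range.contains(*drag_end)
```

Note the trigger range may be larger than the icon itself (e.g. the icon rectangle plus a margin), which makes the drop target easier to hit.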
In one possible implementation, step S101 may include a "shortcut search step".
The "shortcut search step" includes: during presentation of the first interface of the first application, displaying the second interface if both a selection operation on an object to be selected in the first interface and a trigger operation on the selected object of that selection operation are detected, where the search trigger operation includes the selection operation and the trigger operation.
In this implementation, the trigger operation described in the "shortcut search step" includes at least one of the following: a double-click operation detected on the selected object; a long-press operation detected on the selected object; or a sliding operation detected on the selected object whose sliding distance exceeds a distance threshold. The distance threshold may be set according to actual needs and/or user selection, which is not limited by the present disclosure. In some embodiments, the direction and/or route of the sliding operation may be preset; when a sliding operation whose direction and/or route matches the preset one is detected and whose sliding distance exceeds the distance threshold, it may be determined that the trigger operation on the selected object is detected. In this way, the implementation of the trigger operation can be configured according to actual needs, meeting different users' preferences.
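The preset-direction slide check above can be sketched with a cosine test. The unit-vector representation of the preset direction and the tolerance value are assumptions made for this example, not details given in the patent:

```python
import math

def slide_matches(start, end, preset_dir, distance_threshold, cos_tol=0.9):
    """A slide triggers the search only if its direction roughly matches
    the preset direction (given as a unit vector) and its distance
    exceeds the threshold."""
    dx, dy = end[0] - start[0], end[1] - start[1]
    dist = math.hypot(dx, dy)
    if dist <= distance_threshold:
        return False
    # cosine of the angle between the slide and the preset direction
    cos = (dx * preset_dir[0] + dy * preset_dir[1]) / dist
    return cos >= cos_tol
```

A cosine tolerance of 0.9 accepts slides within roughly 25 degrees of the preset direction, which keeps the gesture forgiving without firing on unrelated scrolls.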
It will be appreciated that the above determination of the detection of the trigger operation, and the implementation of the selection operation are just a few illustrative examples provided by the embodiments of the present disclosure, and those skilled in the art may set the implementation of the selection operation and the trigger operation according to actual needs, which is not limited by the present disclosure.
In one possible implementation, the method may further include: determining the selected object from among the objects to be selected according to the type of the object to be selected and the selection operation. If the object to be selected is text, as shown in fig. 2, the selected portion of the text is determined as the selected object according to the selection operation. For example, if a single-finger touch on text is used as the selection operation, the text in the region touched by the finger may be taken as the selected object. As shown in fig. 3, if the object to be selected is an image, the selected text and/or target in the image is determined as the selected object according to the selection operation. The target may be any person, object, logo, or the like in the image. For example, if a single-finger touch on an image is used as the selection operation, the text and/or target in the image area touched by the finger may be taken as the selected object, or the text and/or target automatically identified in the touched image may be taken as the selected object; that is, the finger only needs to touch the image, without deliberately touching the text or target within it. As shown in fig. 4, if the object to be selected is a short video, at least one of the title of the short video, text in a video frame of the short video, and a target in the short video is determined as the selected object according to the selection operation. For example, if a single-finger touch on a short video playing interface, cover, title, or the like is used as the selection operation, the text and/or target in the area touched by the finger may be taken as the selected object, or the text and/or target automatically identified in the short video corresponding to the touched area may be taken as the selected object.
In this way, different determination modes are used for different types of objects to be selected, improving the accuracy of determining the selected object.
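The type-dependent determination above amounts to a dispatch on the candidate object's type. The following sketch illustrates that dispatch under assumed data shapes (`candidate` as a dict with a `"type"` key, a placeholder recognizer standing in for OCR / object detection); none of these names come from the disclosure itself.

```python
def recognize_in_image(image, point):
    # Stand-in for an OCR / object-detection call on the touched region;
    # a real implementation would return the recognized text or target label.
    return "recognized-target"

def determine_selected_object(candidate, selection):
    """Resolve the selected object according to the candidate's type
    and the user's selection operation."""
    kind = candidate["type"]
    if kind == "text":
        # The selected portion of the text becomes the selected object.
        start, end = selection["range"]
        return candidate["content"][start:end]
    if kind == "image":
        # Text/targets recognized in the touched image region.
        return recognize_in_image(candidate["content"], selection["point"])
    if kind == "short_video":
        # Prefer the title; fall back to recognition on the touched frame.
        return candidate.get("title") or recognize_in_image(
            candidate.get("frame"), selection["point"])
    raise ValueError(f"unsupported candidate type: {kind}")
```

For the short-video case, a fuller implementation could return several candidates (title, frame text, targets) and let later steps pick among them.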
In one possible implementation, the method further includes: determining keywords representing characteristics of the selected object according to the selected object; and generating a video search request for the selected object according to the keywords and sending it to the second application, so that the second application can perform a video search based on the keywords in the request. In some embodiments, the keywords may be anything usable for the video search, such as the name of a video program, episode, or movie, an actor's name, a character's name, a director's name, or a video genre, which is not limited by this disclosure. The second application can thus search based on the keywords, improving the speed, efficiency, and accuracy of its video search. In some embodiments, the terminal device may determine the keywords representing the characteristics of the selected object via a system software function call; alternatively, the first application itself may determine them. For example, selected text may be recognized by a text recognition technique and used as keywords, or a target in the selected image may be recognized by an image recognition technique and the recognition result used as keywords. The implementation of keyword determination may be set by those skilled in the art according to actual needs, and this disclosure is not limited thereto.
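Once the keywords are determined, the video search request is just a small message carrying them to the second application. The sketch below assembles such a request as JSON; the field names and JSON encoding are illustrative assumptions, since the disclosure does not specify a wire format.

```python
import json

def build_search_request(keywords, target_app="second_application"):
    """Assemble a video search request carrying the keywords.
    Field names here are illustrative, not the patent's actual format."""
    if not keywords:
        raise ValueError("at least one keyword is required")
    return json.dumps({
        "action": "video_search",
        "target": target_app,
        "keywords": list(keywords),
    })
```

On an Android-style platform, the equivalent step would typically be an inter-application intent or deep link rather than a raw JSON payload.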
In some embodiments, the method may further include: after the keywords representing the characteristics of the selected object are determined, displaying them to the user in the first interface, adjusting them according to the user's operations, and then generating the video search request from the adjusted keywords. The user's adjustment of the keywords includes modifying them; when there are multiple keywords, some may also be deleted. In this way the video search request better matches the user's search intent, and the content displayed on the second interface better matches the user's needs.
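The modify-and-delete adjustment described above can be sketched as a small pure function applied before the request is generated; the parameter names are assumptions for illustration.

```python
def adjust_keywords(keywords, edits=None, remove=None):
    """Apply user adjustments before the search request is generated.
    `edits` maps an existing keyword to its replacement; `remove` lists
    keywords the user deleted."""
    edits = edits or {}
    remove = set(remove or [])
    adjusted = [edits.get(k, k) for k in keywords if k not in remove]
    if not adjusted:
        raise ValueError("all keywords were removed; nothing to search for")
    return adjusted
```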
In step S102, the video search results of the second application for the selected object are displayed in the second interface. In some embodiments, the second interface may be a video detail page interface (e.g., the second interface T22 shown in fig. 4) or a video search results page interface (e.g., the second interface T21 shown in figs. 2 and 3). The video detail page interface may be an interface presenting detailed information about a particular video. The video search results page interface includes multiple video thumbnails matching the keywords, together with brief content descriptions such as the video titles corresponding to those thumbnails.
In this implementation, upon receiving the video search request carrying the keywords, the second application performs a video search based on them and then displays the results to the user in the second interface, ordered, for example, by degree of match with the keywords from high to low, or by video popularity from high to low. In some embodiments, when the match between a video found by the search and the keywords exceeds a matching threshold (which may be set according to actual needs), the video detail page interface of that video may be used directly as the video search result interface. The video detail page can thus be shown to the user directly, letting the user quickly watch the desired video, speeding up the search, and providing a better search experience.
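The threshold-based choice between a detail page and a results page can be expressed as a small routing function. The sketch below assumes scores normalized to [0, 1] and an illustrative threshold of 0.9; both are assumptions, as the disclosure only says the threshold may be set according to actual needs.

```python
def route_search_results(results, match_threshold=0.9):
    """Decide which second interface to show.
    `results` is a list of (video_id, score) pairs.  A single strong
    match goes straight to its detail page; otherwise show a results
    page ordered by match score from high to low."""
    if not results:
        return ("results_page", [])
    ranked = sorted(results, key=lambda r: r[1], reverse=True)
    best_id, best_score = ranked[0]
    if best_score >= match_threshold:
        # Match exceeds the threshold: jump directly to the detail page.
        return ("detail_page", best_id)
    return ("results_page", [vid for vid, _ in ranked])
```

Secondary ordering by popularity, as mentioned above, could be added by sorting on a (score, popularity) tuple instead.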
In some embodiments, keywords may also be presented in the second interface, where the user may adjust the keywords to further conduct the video search.
Fig. 5 shows a block diagram of a video search device according to an embodiment of the present disclosure. As shown in fig. 5, the apparatus includes an operation detection module 41 and a result display module 42.
The operation detection module 41 is configured to, in response to detecting a search trigger operation for a selected object, display a second interface of a second application during a process of displaying the first interface of the first application.
And a result display module 42, configured to display, in the second interface, a video search result of the second application for the selected object.
In one possible implementation, the operation detection module 41 may include: an identifier display sub-module, configured to display the identifier of the second application if, during the display of the first interface of the first application, a selection operation for an object to be selected in the first interface is detected; and a first operation detection sub-module, configured to display the second interface if it is determined that a trigger operation for the selected object of the selection operation is detected; wherein the search trigger operation includes the selection operation and the trigger operation. Determining that a trigger operation for the selected object of the selection operation is detected may include: if a drag operation for the selected object is detected and the drag end point is within the trigger range of the identifier, determining that the trigger operation for the selected object is detected.
In one possible implementation, the operation detection module 41 may include:
a second operation detection sub-module, configured to display the second interface if, during the display of the first interface of the first application, a selection operation for an object to be selected in the first interface and a trigger operation for the selected object of that selection operation are both detected, wherein the search trigger operation includes the selection operation and the trigger operation,
the triggering operation includes at least one of:
detecting a double click operation for the selected object;
detecting a long press operation for the selected object;
a sliding operation is detected for the selected object and a sliding distance exceeds a distance threshold.
In one possible implementation, the presenting the identifier of the second application includes at least one of the following ways:
displaying a floating window above the first interface, and displaying an icon of the second application in the floating window;
displaying an icon of the second application in a side region of the interface;
and displaying a trigger window at the upper layer of the first interface, and displaying the icon of the second application in the trigger window.
In one possible implementation, the apparatus further includes:
the first determining module is used for determining the selected part in the text as the selected object according to the selecting operation if the object to be selected is the text;
the second determining module is used for determining the selected characters and/or targets in the image as the selected objects according to the selection operation if the object to be selected is an image;
and the third determining module is used for determining at least one of the title of the short video, the characters in the video frames of the short video and the targets in the short video as the selected object according to the selecting operation if the object to be selected is the short video.
In one possible implementation, the apparatus further includes:
the keyword determining module is used for determining keywords representing the characteristics of the selected object according to the selected object;
and the request generation module is used for generating a video search request aiming at the selected object according to the keywords and sending the video search request to the second application so that the second application can conduct video search based on the keywords in the video search request.
In one possible implementation, the second interface includes a video details page interface or a video search results page interface.
For the implementation and beneficial effects of each module and sub-module of the video search apparatus, reference may be made to the description of the corresponding steps in the video search method above; they are not repeated here for brevity.
In some embodiments, functions or modules included in an apparatus provided by the embodiments of the present disclosure may be used to perform a method described in the foregoing method embodiments, and specific implementations thereof may refer to descriptions of the foregoing method embodiments, which are not repeated herein for brevity.
It should be noted that, although the video search method and apparatus are described above by way of the foregoing embodiments, those skilled in the art will understand that the present disclosure is not limited thereto. In fact, each step and each module may be flexibly configured according to personal preference and/or the actual application scenario, as long as the technical scheme of the present disclosure is satisfied.
The disclosed embodiments also provide a computer readable storage medium having stored thereon computer program instructions which, when executed by a processor, implement the above-described method. The computer readable storage medium may be a volatile or nonvolatile computer readable storage medium.
The embodiment of the disclosure also provides an electronic device, which comprises: a processor; a memory for storing processor-executable instructions; wherein the processor is configured to implement the above-described method when executing the instructions stored by the memory.
Embodiments of the present disclosure also provide a computer program product comprising computer readable code, or a non-transitory computer readable storage medium carrying computer readable code, which when run in a processor of an electronic device, performs the above method.
Fig. 6 is a block diagram illustrating an apparatus 800 for video searching according to an exemplary embodiment. For example, apparatus 800 may be a mobile phone, computer, digital broadcast terminal, messaging device, game console, tablet device, medical device, exercise device, personal digital assistant, or the like.
Referring to fig. 6, apparatus 800 may include one or more of the following components: a processing component 802, a memory 804, a power component 806, a multimedia component 808, an audio component 810, an input/output interface 812 (I/O interface), a sensor component 814, and a communication component 816.
The processing component 802 generally controls overall operation of the apparatus 800, such as operations associated with display, telephone calls, data communications, camera operations, and recording operations. The processing component 802 may include one or more processors 820 to execute instructions to perform all or part of the steps of the methods described above. Further, the processing component 802 can include one or more modules that facilitate interactions between the processing component 802 and other components. For example, the processing component 802 can include a multimedia module to facilitate interaction between the multimedia component 808 and the processing component 802.
The memory 804 is configured to store various types of data to support operations at the apparatus 800. Examples of such data include instructions for any application or method operating on the device 800, contact data, phonebook data, messages, pictures, videos, and the like. The memory 804 may be implemented by any type or combination of volatile or nonvolatile memory devices such as Static Random Access Memory (SRAM), electrically erasable programmable read-only memory (EEPROM), erasable programmable read-only memory (EPROM), programmable read-only memory (PROM), read-only memory (ROM), magnetic memory, flash memory, magnetic or optical disk.
The power supply component 806 provides power to the various components of the device 800. The power components 806 may include a power management system, one or more power sources, and other components associated with generating, managing, and distributing power for the device 800.
The multimedia component 808 includes a screen that provides an output interface between the device 800 and the user. In some embodiments, the screen may include a liquid crystal display (LCD) and a touch panel (TP). If the screen includes a touch panel, the screen may be implemented as a touch screen to receive input signals from the user. The touch panel includes one or more touch sensors to sense touches, swipes, and gestures on the touch panel. The touch sensors may sense not only the boundary of a touch or swipe action, but also the duration and pressure associated with it. In some embodiments, the multimedia component 808 includes a front camera and/or a rear camera. The front camera and/or the rear camera may receive external multimedia data when the apparatus 800 is in an operational mode, such as a photographing mode or a video mode. Each front and rear camera may be a fixed optical lens system or have focal length and optical zoom capability.
The audio component 810 is configured to output and/or input audio signals. For example, the audio component 810 includes a Microphone (MIC) configured to receive external audio signals when the device 800 is in an operational mode, such as a call mode, a recording mode, and a voice recognition mode. The received audio signals may be further stored in the memory 804 or transmitted via the communication component 816. In some embodiments, audio component 810 further includes a speaker for outputting audio signals.
Input/output interface 812 provides an interface between processing component 802 and peripheral interface modules, which may be keyboards, click wheels, buttons, etc. These buttons may include, but are not limited to: homepage button, volume button, start button, and lock button.
The sensor assembly 814 includes one or more sensors for providing status assessments of various aspects of the apparatus 800. For example, the sensor assembly 814 may detect an on/off state of the device 800 and the relative positioning of components, such as the display and keypad of the device 800; the sensor assembly 814 may also detect a change in position of the device 800 or of a component of the device 800, the presence or absence of user contact with the device 800, the orientation or acceleration/deceleration of the device 800, and a change in temperature of the device 800. The sensor assembly 814 may include a proximity sensor configured to detect the presence of nearby objects without any physical contact. The sensor assembly 814 may also include a light sensor, such as a CMOS or CCD image sensor, for use in imaging applications. In some embodiments, the sensor assembly 814 may also include an acceleration sensor, a gyroscope sensor, a magnetic sensor, a pressure sensor, or a temperature sensor.
The communication component 816 is configured to facilitate wired or wireless communication between the apparatus 800 and other devices. The device 800 may access a wireless network based on a communication standard, such as WiFi, 2G, or 3G, or a combination thereof. In one exemplary embodiment, the communication component 816 receives broadcast signals or broadcast-related information from an external broadcast management system via a broadcast channel. In one exemplary embodiment, the communication component 816 further includes a near field communication (NFC) module to facilitate short-range communication. For example, the NFC module may be implemented based on radio frequency identification (RFID) technology, infrared data association (IrDA) technology, ultra-wideband (UWB) technology, Bluetooth (BT) technology, and other technologies.
In an exemplary embodiment, the apparatus 800 may be implemented by one or more Application Specific Integrated Circuits (ASICs), digital Signal Processors (DSPs), digital Signal Processing Devices (DSPDs), programmable Logic Devices (PLDs), field Programmable Gate Arrays (FPGAs), controllers, microcontrollers, microprocessors, or other electronic elements for executing the methods described above.
In an exemplary embodiment, a non-transitory computer readable storage medium is also provided, such as memory 804 including computer program instructions executable by processor 820 of apparatus 800 to perform the above-described methods.
The present disclosure may be a system, method, and/or computer program product. The computer program product may include a computer readable storage medium having computer readable program instructions embodied thereon for causing a processor to implement aspects of the present disclosure.
The computer readable storage medium may be a tangible device that can hold and store instructions for use by an instruction execution device. The computer readable storage medium may be, for example, but not limited to, an electronic storage device, a magnetic storage device, an optical storage device, an electromagnetic storage device, a semiconductor storage device, or any suitable combination of the foregoing. More specific examples (a non-exhaustive list) of the computer readable storage medium include the following: a portable computer disk, a hard disk, random access memory (RAM), read-only memory (ROM), erasable programmable read-only memory (EPROM or flash memory), static random access memory (SRAM), portable compact disc read-only memory (CD-ROM), a digital versatile disc (DVD), a memory stick, a floppy disk, a mechanically encoded device such as a punch card or a groove structure having instructions recorded thereon, and any suitable combination of the foregoing. A computer readable storage medium, as used herein, is not to be construed as a transitory signal per se, such as a radio wave or other freely propagating electromagnetic wave, an electromagnetic wave propagating through a waveguide or other transmission medium (e.g., optical pulses through a fiber optic cable), or an electrical signal transmitted through a wire.
The computer readable program instructions described herein may be downloaded from a computer readable storage medium to a respective computing/processing device or to an external computer or external storage device over a network, such as the internet, a local area network, a wide area network, and/or a wireless network. The network may include copper transmission cables, fiber optic transmissions, wireless transmissions, routers, firewalls, switches, gateway computers and/or edge servers. The network interface card or network interface in each computing/processing device receives computer readable program instructions from the network and forwards the computer readable program instructions for storage in a computer readable storage medium in the respective computing/processing device.
Computer program instructions for performing the operations of the present disclosure may be assembly instructions, instruction set architecture (ISA) instructions, machine instructions, machine-related instructions, microcode, firmware instructions, state-setting data, or source or object code written in any combination of one or more programming languages, including object-oriented programming languages such as Smalltalk and C++, and conventional procedural programming languages such as the "C" language or similar programming languages. The computer readable program instructions may be executed entirely on the user's computer, partly on the user's computer, as a stand-alone software package, partly on the user's computer and partly on a remote computer, or entirely on a remote computer or server. In the case of a remote computer, the remote computer may be connected to the user's computer through any kind of network, including a local area network (LAN) or a wide area network (WAN), or may be connected to an external computer (for example, through the Internet using an Internet service provider). In some embodiments, aspects of the present disclosure are implemented by personalizing electronic circuitry, such as programmable logic circuitry, field programmable gate arrays (FPGAs), or programmable logic arrays (PLAs), with state information of the computer readable program instructions, such that the circuitry can execute the computer readable program instructions.
Various aspects of the present disclosure are described herein with reference to flowchart illustrations and/or block diagrams of methods, apparatus (systems) and computer program products according to embodiments of the disclosure. It will be understood that each block of the flowchart illustrations and/or block diagrams, and combinations of blocks in the flowchart illustrations and/or block diagrams, can be implemented by computer-readable program instructions.
These computer readable program instructions may be provided to a processor of a general purpose computer, special purpose computer, or other programmable data processing apparatus to produce a machine, such that the instructions, which execute via the processor of the computer or other programmable data processing apparatus, create means for implementing the functions/acts specified in the flowchart and/or block diagram block or blocks. These computer readable program instructions may also be stored in a computer readable storage medium that can direct a computer, programmable data processing apparatus, and/or other devices to function in a particular manner, such that the computer readable medium having the instructions stored therein includes an article of manufacture including instructions which implement the function/act specified in the flowchart and/or block diagram block or blocks.
The computer readable program instructions may also be loaded onto a computer, other programmable data processing apparatus, or other devices to cause a series of operational steps to be performed on the computer, other programmable apparatus or other devices to produce a computer implemented process such that the instructions which execute on the computer, other programmable apparatus or other devices implement the functions/acts specified in the flowchart and/or block diagram block or blocks.
The flowcharts and block diagrams in the figures illustrate the architecture, functionality, and operation of possible implementations of systems, methods and computer program products according to various embodiments of the present disclosure. In this regard, each block in the flowchart or block diagrams may represent a module, segment, or portion of instructions, which comprises one or more executable instructions for implementing the specified logical function(s). In some alternative implementations, the functions noted in the block may occur out of the order noted in the figures. For example, two blocks shown in succession may, in fact, be executed substantially concurrently, or the blocks may sometimes be executed in the reverse order, depending upon the functionality involved. It will also be noted that each block of the block diagrams and/or flowchart illustration, and combinations of blocks in the block diagrams and/or flowchart illustration, can be implemented by special purpose hardware-based systems which perform the specified functions or acts, or combinations of special purpose hardware and computer instructions.
The foregoing description of the embodiments of the present disclosure has been presented for purposes of illustration and description, and is not intended to be exhaustive or limited to the embodiments disclosed. Many modifications and variations will be apparent to those of ordinary skill in the art without departing from the scope and spirit of the various embodiments described. The terminology used herein was chosen in order to best explain the principles of the embodiments, the practical application, or the technical improvements in the marketplace, or to enable others of ordinary skill in the art to understand the embodiments disclosed herein.

Claims (11)

1. A video search method, the method comprising:
in response to detecting a search trigger operation for a selected object during presentation of a first interface at a first application, presenting a second interface at a second application,
and displaying video search results of the second application aiming at the selected object in the second interface.
2. The method of claim 1, wherein presenting a second interface of a second application in response to detecting a search trigger operation for a selected object during presentation of a first interface at a first application comprises:
in the process of displaying a first interface of a first application, if a selection operation for an object to be selected in the first interface is detected, displaying an identifier of a second application;
if the trigger operation of the selected object aiming at the selection operation is detected, displaying the second interface;
wherein the search triggering operation includes the selecting operation and the triggering operation.
3. The method of claim 2, wherein determining that a trigger operation is detected for a selected object of the selection operation comprises:
if a drag operation for the selected object is detected and the drag end point is within the trigger range of the identifier, determining that the trigger operation for the selected object is detected.
4. The method of claim 1, wherein presenting a second interface of a second application in response to detecting a search trigger operation for a selected object during presentation of a first interface at a first application comprises:
in the process of displaying a first interface of a first application, displaying the second interface if a selection operation for an object to be selected in the first interface and a trigger operation for the selected object of the selection operation are both detected, wherein the search trigger operation comprises the selection operation and the trigger operation,
The triggering operation includes at least one of:
double-click operation for the selected object;
a long press operation for the selected object;
a sliding operation for the selected object and a sliding distance exceeding a distance threshold.
5. The method of claim 2, wherein presenting the identity of the second application comprises at least one of:
displaying a floating window above the first interface, and displaying an icon of the second application in the floating window;
displaying an icon of the second application in a side region of the interface;
and displaying a trigger window at the upper layer of the first interface, and displaying the icon of the second application in the trigger window.
6. The method according to claim 2 or 4, characterized in that the method further comprises:
if the object to be selected is text, determining the selected portion of the text as the selected object according to the selection operation;
if the object to be selected is an image, determining the selected characters and/or targets in the image as the selected object according to the selection operation;
and if the object to be selected is a short video, determining at least one of a title of the short video, characters in a video frame of the short video and a target in the short video as the selected object according to the selection operation.
7. The method according to claim 1, wherein the method further comprises:
determining keywords representing characteristics of the selected object according to the selected object;
and generating a video search request aiming at the selected object according to the keywords and sending the video search request to the second application so that the second application can conduct video search based on the keywords in the video search request.
8. The method of claim 1, wherein the second interface comprises a video details page interface or a video search results page interface.
9. A video search device, the device comprising:
the operation detection module is used for responding to detection of search triggering operation aiming at a selected object in the process of displaying the first interface of the first application, and displaying the second interface of the second application;
and the result display module is used for displaying video search results of the second application aiming at the selected object in the second interface.
10. A video search apparatus, comprising:
a processor;
a memory for storing processor-executable instructions;
wherein the processor is configured to implement the method of any one of claims 1 to 8 when executing the instructions stored by the memory.
11. A non-transitory computer readable storage medium having stored thereon computer program instructions, which when executed by a processor, implement the method of any of claims 1 to 8.
CN202310903756.6A 2023-07-21 2023-07-21 Video searching method and device Pending CN116909447A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202310903756.6A CN116909447A (en) 2023-07-21 2023-07-21 Video searching method and device

Publications (1)

Publication Number Publication Date
CN116909447A true CN116909447A (en) 2023-10-20

Family

ID=88357857

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202310903756.6A Pending CN116909447A (en) 2023-07-21 2023-07-21 Video searching method and device

Country Status (1)

Country Link
CN (1) CN116909447A (en)

Similar Documents

Publication Publication Date Title
CN105955607B (en) Content sharing method and device
US10296201B2 (en) Method and apparatus for text selection
US20200007944A1 (en) Method and apparatus for displaying interactive attributes during multimedia playback
WO2022022196A1 (en) Bullet screen posting method, bullet screen displaying method and electronic device
US10425807B2 (en) Method and apparatus for controlling interface display
US20190073124A1 (en) Method and apparatus for controlling application
CN114003326B (en) Message processing method, device, equipment and storage medium
CN111381739B (en) Application icon display method and device, electronic equipment and storage medium
US20200012701A1 (en) Method and apparatus for recommending associated user based on interactions with multimedia processes
CN106354504B (en) Message display method and device
CN107277628B (en) video preview display method and device
CN110968364B (en) Method and device for adding shortcut plugins and intelligent device
CN108320208B (en) Vehicle recommendation method and device
CN108495168B (en) Bullet screen information display method and device
WO2019095821A1 (en) Interface display method and apparatus
WO2019095913A1 (en) Interface display method and apparatus
US20170052693A1 (en) Method and device for displaying a target object
CN112584222A (en) Video processing method and device for video processing
WO2018188410A1 (en) Feedback response method and apparatus
CN109947506B (en) Interface switching method and device and electronic equipment
CN108803892B (en) Method and device for calling third party application program in input method
CN109783171B (en) Desktop plug-in switching method and device and storage medium
WO2019095817A1 (en) Interface display method and apparatus
CN113988021A (en) Content interaction method and device, electronic equipment and storage medium
CN109756783B (en) Poster generation method and device

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination