WO2022042389A1 - Method and Apparatus for Displaying Search Results, Readable Medium, and Electronic Device - Google Patents

Method and Apparatus for Displaying Search Results, Readable Medium, and Electronic Device

Info

Publication number
WO2022042389A1
WO2022042389A1 · PCT/CN2021/113162 · CN2021113162W
Authority
WO
WIPO (PCT)
Prior art keywords
entity object
multimedia
target entity
multimedia content
information
Prior art date
Application number
PCT/CN2021/113162
Other languages
English (en)
French (fr)
Inventor
吴怡雯 (Wu Yiwen)
郦橙 (Li Cheng)
Original Assignee
北京字节跳动网络技术有限公司 (Beijing ByteDance Network Technology Co., Ltd.)
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Beijing ByteDance Network Technology Co., Ltd. (北京字节跳动网络技术有限公司)
Priority to EP21860227.4A priority Critical patent/EP4148597A4/en
Priority to JP2022577246A priority patent/JP2023530964A/ja
Publication of WO2022042389A1 publication Critical patent/WO2022042389A1/zh
Priority to US18/060,543 priority patent/US11928152B2/en

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F16/00Information retrieval; Database structures therefor; File system structures therefor
    • G06F16/70Information retrieval; Database structures therefor; File system structures therefor of video data
    • G06F16/73Querying
    • G06F16/738Presentation of query results
    • G06F16/739Presentation of query results in form of a video summary, e.g. the video summary being a video sequence, a composite still image or having synthesized frames
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F16/00Information retrieval; Database structures therefor; File system structures therefor
    • G06F16/40Information retrieval; Database structures therefor; File system structures therefor of multimedia data, e.g. slideshows comprising image and additional audio data
    • G06F16/43Querying
    • G06F16/438Presentation of query results
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F16/00Information retrieval; Database structures therefor; File system structures therefor
    • G06F16/40Information retrieval; Database structures therefor; File system structures therefor of multimedia data, e.g. slideshows comprising image and additional audio data
    • G06F16/43Querying
    • G06F16/432Query formulation
    • G06F16/434Query formulation using image data, e.g. images, photos, pictures taken by a user
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F16/00Information retrieval; Database structures therefor; File system structures therefor
    • G06F16/70Information retrieval; Database structures therefor; File system structures therefor of video data
    • G06F16/78Retrieval characterised by using metadata, e.g. metadata not derived from the content or metadata generated manually
    • G06F16/783Retrieval characterised by using metadata, e.g. metadata not derived from the content or metadata generated manually using metadata automatically derived from the content
    • G06F16/7837Retrieval characterised by using metadata, e.g. metadata not derived from the content or metadata generated manually using metadata automatically derived from the content using objects detected or recognised in the video content
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F16/00Information retrieval; Database structures therefor; File system structures therefor
    • G06F16/90Details of database functions independent of the retrieved data types
    • G06F16/95Retrieval from the web
    • G06F16/953Querying, e.g. by the use of web search engines
    • G06F16/9538Presentation of query results

Definitions

  • the present disclosure relates to the field of electronic information technology, for example, to a method, apparatus, readable medium and electronic device for displaying search results.
  • the present disclosure provides a method for displaying search results, the method comprising:
  • the at least one multimedia resource is displayed.
  • the present disclosure provides a method for displaying search results, the method comprising:
  • search result information includes: at least one multimedia resource showing the use effect of the target entity object, or at least one set of multimedia data, or at least one piece of resource indication information, wherein the multimedia data is image frame data in which the target entity object is used in the multimedia content, and the resource indication information is used to indicate the segment of the multimedia content in which the target entity object appears;
  • the present disclosure provides an apparatus for displaying search results, the apparatus comprising:
  • the acquisition module is configured to acquire, in response to a search instruction for the target entity object, at least one multimedia resource showing the use effect of the target entity object, wherein the multimedia resource is obtained by processing the multimedia content corresponding to the target entity object;
  • the presentation module is configured to present the at least one multimedia resource.
  • the present disclosure provides an apparatus for displaying search results, the apparatus comprising:
  • a receiving module configured to receive a search instruction for the target entity object and determine the multimedia content corresponding to the target entity object;
  • a determining module configured to determine search result information according to the multimedia content corresponding to the target entity object, where the search result information includes: at least one multimedia resource showing the use effect of the target entity object, or at least one set of multimedia data, or at least one piece of resource indication information, wherein the multimedia data is image frame data in which the target entity object is used in the multimedia content, and the resource indication information is used to indicate the segments of the multimedia content in which the target entity object appears;
  • the sending module is configured to send the search result information to the terminal device.
  • the present disclosure provides a computer-readable medium storing a computer program, and when the computer program is executed by a processing apparatus, the above-mentioned first method of the present disclosure is implemented.
  • the present disclosure provides an electronic device including:
  • a storage device storing a computer program
  • a processing device is configured to execute the computer program in the storage device, so as to implement the above-mentioned first method of the present disclosure.
  • the present disclosure provides a computer-readable medium storing a computer program, which, when executed by a processing apparatus, implements the above-mentioned second method of the present disclosure.
  • the present disclosure provides an electronic device including:
  • a storage device storing a computer program
  • a processing device is configured to execute the computer program in the storage device, so as to implement the above-mentioned second method of the present disclosure.
  • Fig. 1 is a schematic diagram of a deployment of a terminal device and a server
  • FIG. 2 is a flowchart of a method for displaying search results according to an exemplary embodiment
  • FIG. 3 is a schematic diagram showing search results according to an exemplary embodiment
  • FIG. 4 is a flowchart illustrating another method for displaying search results according to an exemplary embodiment
  • FIG. 5 is a flowchart illustrating another method for displaying search results according to an exemplary embodiment
  • FIG. 6 is a flowchart illustrating another method for displaying search results according to an exemplary embodiment
  • FIG. 7 is a flowchart illustrating another method for displaying search results according to an exemplary embodiment
  • FIG. 8 is a schematic diagram showing search results according to an exemplary embodiment
  • FIG. 9 is a flowchart of a method for displaying search results according to an exemplary embodiment
  • FIG. 10 is a flowchart illustrating another method for displaying search results according to an exemplary embodiment
  • FIG. 11 is a flowchart illustrating another method for displaying search results according to an exemplary embodiment
  • FIG. 12 is a flowchart illustrating another method for displaying search results according to an exemplary embodiment
  • FIG. 13 is a flowchart illustrating another method for displaying search results according to an exemplary embodiment
  • FIG. 14 is a block diagram of an apparatus for displaying search results according to an exemplary embodiment
  • Fig. 15 is a block diagram of another apparatus for displaying search results according to an exemplary embodiment
  • Fig. 16 is a block diagram of another apparatus for displaying search results according to an exemplary embodiment
  • Fig. 17 is a block diagram of another apparatus for displaying search results according to an exemplary embodiment
  • FIG. 18 is a block diagram of an apparatus for displaying search results according to an exemplary embodiment
  • Fig. 19 is a block diagram of an electronic device according to an exemplary embodiment.
  • steps of the method embodiments of the present disclosure may be performed in different orders and/or in parallel. Furthermore, method embodiments may include additional steps and/or omit the illustrated steps. The scope of the present disclosure is not limited in this regard.
  • the term "including" and variations thereof are open-ended inclusions, i.e., "including but not limited to".
  • the term "based on" means "based at least in part on".
  • the term “one embodiment” means “at least one embodiment”; the term “another embodiment” means “at least one additional embodiment”; the term “some embodiments” means “at least some embodiments”. Relevant definitions of other terms will be given in the description below.
  • the application scenario may include a terminal device and a server, and data transmission may be performed between the terminal device and the server.
  • the terminal device may include, but is not limited to, mobile terminals such as a mobile phone, a notebook computer, a digital broadcast receiver, a personal digital assistant (PDA), a tablet computer (Portable Android Device, PAD), a portable multimedia player (Personal Multimedia Player, PMP), and an in-vehicle terminal (e.g., an in-vehicle navigation terminal), and stationary terminals such as a digital television (TV) and a desktop computer.
  • the server may include, but is not limited to, an entity server, a server cluster, or a cloud server, for example. In an implementation scenario, one or more terminal devices may be included, as shown in FIG. 1 .
  • Fig. 2 is a flow chart of a method for displaying search results according to an exemplary embodiment. As shown in Fig. 2 , the method includes the following steps.
  • Step 101: In response to a search instruction for the target entity object, acquire at least one multimedia resource showing the use effect of the target entity object.
  • the multimedia resource is obtained by processing the multimedia content corresponding to the target entity object.
  • the execution body of the method may be a terminal device.
  • a search instruction can be input through the terminal device, and the search instruction includes a search keyword corresponding to the target entity object.
  • the entity object may be understood as an object included in the multimedia content (e.g., a video), and the target entity object may be any entity object.
  • the entity objects included in the multimedia content can be divided into two categories: content entity objects and operation entity objects.
  • the content entity object is used to indicate the content included in one or more image frames in the multimedia content, for example, it can be: people, cars, mountains, lakes, seas, cats, dogs, rabbits, and so on.
  • the content entity object can also be used to indicate the emotional style of the content included in one or more image frames in the multimedia content, for example: highlight content, warm content, moving content, etc. For example, if the content contained in multiple image frames belongs to the warm style, the content entity object contained in these image frames is "warm content". The emotional style of the content contained in the image frames may be automatically recognized by the terminal device, or may be marked by the user as required, which is not limited in the present disclosure.
  • the operation entity object is used to indicate operations included in one or more image frames in the multimedia content; for example, it may include: beauty special effects, skin grinding effects, baby props, cat face props, specified subtitles, specified background music, etc. Taking the target entity object "baby props" and the search instruction "how to use baby props" as an example, the search instruction includes the search keyword "baby props" corresponding to the target entity object.
  • the terminal device can send the search instruction to the server; after receiving the search instruction, the server analyzes it, obtains the search keyword included in the search instruction, and determines the target entity object matching the search keyword. It can be understood that multiple entity objects are pre-stored on the server: the matching degree between the search keyword and each entity object is determined, and the one or more entity objects with the highest matching degree are used as the target entity object. There can be one or more search keywords, and similarly, there can be one or more target entity objects.
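  • Purely for illustration, keyword-to-entity matching of the kind described above might be sketched in Python as follows; the entity list and the string-similarity measure are assumptions, not part of the disclosure:

```python
from difflib import SequenceMatcher

# Hypothetical list of entity objects pre-stored on the server
# (names taken from the examples in this disclosure).
ENTITY_OBJECTS = ["rabbit ears", "cat face prop", "baby props", "beauty special effect"]

def match_target_entities(keyword, entities, top_k=1):
    """Rank the pre-stored entity objects by similarity to the search
    keyword and return the one(s) with the highest matching degree."""
    scored = sorted(
        entities,
        key=lambda e: SequenceMatcher(None, keyword.lower(), e.lower()).ratio(),
        reverse=True,
    )
    return scored[:top_k]

print(match_target_entities("rabbit ear", ENTITY_OBJECTS))  # ['rabbit ears']
```

A production system would more likely use an inverted index or embedding search, but the ranking-by-matching-degree shape is the same.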
  • the server can search for the multimedia content corresponding to the target entity object from a large number of multimedia contents pre-stored on the server.
  • the multimedia content corresponding to the target entity object can be understood as multimedia content that uses the target entity object, or multimedia content in which the target entity object appears.
  • the server may determine the search result information based on the multimedia content corresponding to the target entity object, and send the search result information to the terminal device.
  • the terminal device may acquire at least one multimedia resource capable of displaying the use effect of the target entity object according to the search result information.
  • the search result information may include: at least one multimedia resource showing the use effect of the target entity object.
  • the terminal device may directly obtain the multimedia resource from the search result information.
  • the search result may include at least one set of multimedia data, and the multimedia data is image frame data in which the target entity object is used in the multimedia content.
  • the terminal device may generate a multimedia resource showing the use effect of the target entity object according to the multimedia data.
  • the search result may include at least one resource indication information, and the resource indication information is used to indicate a segment of the target entity object that appears in the multimedia content.
  • the terminal device may obtain the multimedia data from the server according to the resource indication information, and then generate, according to the multimedia data, a multimedia resource showing the use effect of the target entity object.
  • Multimedia content is a multimedia file with complete content and a relatively long duration, such as a video or audio file.
  • the multimedia resource comes from the part of the multimedia content that is closely related to the target entity object, such as a dynamic image or a short video clip.
  • the multimedia resource may be generated according to one or more image frames in which the target entity object is used in the multimedia content, and can visually display the use effect of the target entity object. Taking the target entity object "cat face prop" and a video as the multimedia content as an example, if the multimedia content uses the "cat face prop" from 10 s to 27 s, the multimedia resource can be a dynamic image generated from the image frames of the multimedia content between 10 s and 27 s.
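  • As a hedged sketch of the 10 s–27 s example, converting a usage time range into the frame indices to extract might look like this (the fixed frame rate is an assumption):

```python
def time_range_to_frames(start_s, end_s, fps):
    """Convert the usage time range of the target entity object into
    the indices of the image frames to extract from the content."""
    return list(range(int(start_s * fps), int(end_s * fps) + 1))

# "cat face prop" used from 10 s to 27 s at an assumed 30 fps:
frames = time_range_to_frames(10, 27, fps=30)
print(frames[0], frames[-1], len(frames))  # 300 810 511
```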
  • multimedia resources contain a small amount of data and can intuitively display the use effect of the target entity object, reducing the consumption of data traffic.
  • the multimedia content is stored on the server; it may be uploaded to the server in advance by a user, or captured by the server from the network.
  • the present disclosure does not limit the source of the multimedia content, and the format of the multimedia content can be, for example, a video file (for example: .avi, .mp4, .wmv, .rmvb, .3gp, .mov, or .asf files) or an audio file (for example: .mp3, .wav, or .flac files).
  • the multimedia resource is obtained by processing the multimedia content corresponding to the target entity object, for example, it may be a dynamic image or a video clip.
  • a dynamic image can be understood as an image obtained by extracting a certain number of image frames from the multimedia content and splicing them in a preset time sequence, for example, a dynamic image in .gif format.
  • a video clip can be understood as a video obtained by cutting a certain number of image frames from the multimedia content and splicing them in a preset time sequence; it can be, for example, a video file in .avi, .mp4, .wmv, .rmvb, .3gp, .mov, .asf, or another format.
  • Step 102: Display the at least one multimedia resource.
  • the at least one multimedia resource may be displayed on a display interface (e.g., a display screen) of the terminal device as the display result corresponding to the search instruction. All multimedia resources can be displayed on the display interface, or a specified number (for example, 3) of multimedia resources can be displayed. Different display modes can be determined for multimedia resources of different formats. For example, if the multimedia resource is a video, the multimedia resource may be displayed in a first preset manner, and the first preset manner includes any one of loop playback, reverse-order playback, double-speed playback, and window playback. Loop playback can be understood as playing the video repeatedly.
  • Playing in reverse order can be understood as playing the video from the last image frame to the first image frame; the video can also be played repeatedly in reverse order.
  • Double-speed playback can be understood as adjusting the playback speed of the video, for example, it can be 0.5 times, 1.25 times, 2 times, and so on.
  • Window playback can be understood as creating a new window on the display interface and displaying the video in this window. For example, if three videos are displayed on the display interface, when the user's finger moves to the position where the first video is displayed, a new window pops up to display the first video.
  • if the multimedia resource is a dynamic image, the multimedia resource may be displayed in a second preset manner.
  • the second preset manner includes: enlarged display or reduced display.
  • Enlarged display can be understood as enlarging each image frame included in the dynamic image and displaying them in sequence, and reduced display can be understood as reducing each image frame included in the dynamic image and displaying them in sequence. For example, three dynamic images are displayed on the display interface; when the user's finger moves to the position where the second dynamic image is displayed, the second dynamic image is displayed enlarged or reduced.
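  • The preset display manners above could map onto player or viewer parameters roughly as in this hypothetical Python dispatch; all keys and values here are illustrative, not from the disclosure:

```python
def display_config(resource_format, manner):
    """Map a preset display manner to illustrative player/viewer
    parameters; the parameter names are hypothetical."""
    video_manners = {          # first preset manner (video resources)
        "loop": {"repeat": True},
        "reverse": {"direction": -1},
        "double_speed": {"rate": 2.0},
        "window": {"popup": True},
    }
    image_manners = {          # second preset manner (dynamic images)
        "enlarged": {"scale": 2.0},
        "reduced": {"scale": 0.5},
    }
    table = video_manners if resource_format == "video" else image_manners
    return table[manner]

print(display_config("video", "double_speed"))  # {'rate': 2.0}
print(display_config("gif", "enlarged"))        # {'scale': 2.0}
```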
  • the terminal device sends the search instruction to the server.
  • the server analyzes the search instruction "effect of rabbit ears" and obtains the search keyword "rabbit ears".
  • the server finds multimedia contents using "rabbit ears", including video 1, video 2, and video 3.
  • the server determines the search result information of "rabbit ears” according to video 1, video 2 and video 3, and sends the search result information to the terminal device.
  • the terminal device obtains multimedia resources that can show the effect of using the "rabbit ears" according to the search result information.
  • the multimedia resource may include A.gif corresponding to video 1, B.gif corresponding to video 2, and C.gif corresponding to video 3, and the multimedia resources are displayed on the display screen of the terminal device.
  • A.gif, B.gif, and C.gif may be displayed on the display screen, as shown in FIG. 3. Since A.gif, B.gif, and C.gif can display the use effect of "rabbit ears", users can intuitively view the use effect of the "rabbit ears" prop, which effectively improves the effectiveness of the search; users also do not need to click to watch the complete video 1, video 2, and video 3 separately, which greatly reduces the consumption of data traffic.
  • the present disclosure acquires at least one multimedia resource showing the use effect of the target entity object in response to a search instruction for the target entity object, where the multimedia resource is obtained by processing the multimedia content corresponding to the target entity object, and displays the multimedia resource. According to the search instructions for different target entity objects, the present disclosure obtains, from the multimedia content corresponding to the target entity objects, multimedia resources that can display the use effect of the target entity objects, and displays these multimedia resources without displaying the complete multimedia content, which can improve the effectiveness of the search and reduce the consumption of data traffic.
  • Fig. 4 is a flow chart of another method for displaying search results according to an exemplary embodiment. As shown in Fig. 4 , the method may further include the following steps.
  • Step 103 Record at least one entity object used in the multimedia content, and the usage time information of each entity object.
  • Step 104 Send the multimedia content and attribute information of the multimedia content to the server.
  • the attribute information of the multimedia content includes identification information and usage time information of each entity object.
  • the terminal device can use various entity objects in the multimedia content during or after shooting the multimedia content; for example, subtitles or background music can be added to the multimedia content, special effects or props can be used in the video, and content tags corresponding to different content in the multimedia content can also be added directly.
  • the terminal device can record each kind of entity object used in the multimedia content and the usage time information corresponding to each kind of entity object.
  • the usage time information may include: the time at which the entity object is used in the multimedia content (for example: a start time and an end time), or the sequence numbers of the image frames in which the entity object is used in the multimedia content (for example: a start frame sequence number and an end frame sequence number).
  • the terminal device can take the identification information of each entity object (the identification information can uniquely identify the corresponding entity object) and the usage time information of each entity object as the attribute information of the multimedia content, and send them to the server together with the multimedia content.
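  • Purely as an illustration, the attribute information sent with the multimedia content might be packaged like this in Python; the field names and the entity id are assumptions:

```python
import json

def build_attribute_info(entity_usages):
    """Package the identification information and usage time information
    of each entity object as the multimedia content's attribute
    information (field names are illustrative)."""
    return {
        "entities": [
            {"entity_id": eid, "start_s": start, "end_s": end}
            for eid, (start, end) in entity_usages.items()
        ]
    }

# Hypothetical entity id for the "rabbit ears" prop, used from 10 s to 27 s:
attr = build_attribute_info({"prop_rabbit_ears": (10, 27)})
print(json.dumps(attr))
```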
  • the server can determine the mapping relationship between the identification information of the entity objects and the image frame information according to the identification information of each entity object and the usage time information of each entity object, wherein the image frame information indicates the segments in which each entity object appears in the multimedia content.
  • a plurality of records are stored in the mapping relationship, and each record includes an entity object and the segments in which the entity object appears in the multimedia content.
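  • The mapping described above might be built server-side along these lines; this Python sketch and its tuple layout are assumptions for illustration, not the disclosed implementation:

```python
from collections import defaultdict

def build_entity_frame_mapping(uploads):
    """Build the server-side mapping from entity identification
    information to image frame information; each record pairs an
    entity with a segment of a content item."""
    mapping = defaultdict(list)
    for content_id, entity_id, start_s, end_s in uploads:
        mapping[entity_id].append({"content": content_id, "segment": (start_s, end_s)})
    return mapping

m = build_entity_frame_mapping([
    ("video1", "skin_grinding_effect", 5, 25),
    ("video2", "skin_grinding_effect", 0, 12),
])
print(len(m["skin_grinding_effect"]))  # 2
```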
  • The following describes different implementations of acquiring the multimedia resource in step 101:
  • the multimedia resource may be generated by the server: the server determines, from the pre-stored identification information of multiple entity objects, the target entity object identification information corresponding to the target entity object, and determines the image frame information of the target entity object according to the mapping relationship between entity object identification information and image frame information. The image frame information of the target entity object can indicate the segment where the target entity object appears in the multimedia content.
  • the image frame information can be understood as the time information corresponding to the image frames in which the target entity object is used in the multimedia content (for example: 5 s to 25 s), or the frame sequence number information corresponding to those image frames (for example: frames 20 to 50).
  • based on the image frame information of the target entity object, the server extracts, from the multimedia content, the segments in which the target entity object appears, thereby generating a multimedia resource capable of displaying the use effect of the target entity object.
  • the terminal device can directly obtain multimedia resources from the server.
  • the image frame information of the target entity object can indicate continuous image frames in the multimedia content (which can be understood as all the image frames using the target entity object), or non-consecutive image frames (which can be understood as key image frames using the target entity object).
  • the server can filter out the image frames using the "skin grinding effect" from video 1, and use these image frames to make a dynamic image in .gif format or a video clip in .wmv format; the terminal device can obtain the dynamic image or video clip from the server and display it.
  • another implementation of step 101 is shown in FIG. 5, which may include:
  • Step 1011: Acquire multimedia data, where the multimedia data includes image frame data in which the target entity object is used in the multimedia content corresponding to the target entity object.
  • Step 1012: Based on the multimedia data, generate a multimedia resource showing the use effect of the target entity object.
  • the multimedia resource may also be generated by the terminal device according to multimedia data sent by the server.
  • the server may determine, from the pre-stored identification information of the multiple entity objects, the target entity object identification information corresponding to the target entity object, determine, according to the mapping relationship between entity object identification information and image frame information, the image frame data in which the target entity object is used in the multimedia content, and send the image frame data to the terminal device as multimedia data.
  • the image frame information can be understood as the time information corresponding to the image frames in which the target entity object is used in the multimedia content (for example: 5 s to 25 s), or the frame sequence number information corresponding to those image frames (for example: frames 20 to 50).
  • the multimedia data may also include sequence information of image frames using the target entity object in the multimedia content.
  • the image frame data may be understood as one or more image frames of the target entity object used in the multimedia content.
  • the sequence information can be understood as the order of the image frames in which the target entity object is used in the multimedia content; for example, it may be the time order of those image frames (for example: the time at which each image frame appears in the multimedia content), or the order of the frame sequence numbers corresponding to those image frames (for example: the frame sequence number of each image frame in the multimedia content).
  • the image frame data can be continuous image frames in the multimedia content (which can be understood as all the image frames using the target entity object), or non-consecutive image frames (which can be understood as key image frames using the target entity object).
  • the terminal device may generate a multimedia resource showing the use effect of the target entity object based on the multimedia data.
  • the multimedia resource can be generated by splicing the image frame data included in the multimedia data according to the sequence information.
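  • The splicing step might be sketched as follows in Python; representing frames as placeholder strings and sequence information as frame serial numbers is an assumption for illustration:

```python
def splice_frames(multimedia_data):
    """Order the received image frame data by the accompanying
    sequence information (frame serial numbers here) to assemble
    the multimedia resource, as the terminal device would."""
    pairs = sorted(zip(multimedia_data["sequence"], multimedia_data["frames"]))
    return [frame for _, frame in pairs]

data = {"frames": ["frame_30", "frame_20", "frame_25"], "sequence": [30, 20, 25]}
print(splice_frames(data))  # ['frame_20', 'frame_25', 'frame_30']
```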
  • the server can filter out the image frames using the "beauty special effect" from video 2, and send these image frames and their sequence information to the terminal device as multimedia data. According to the received image frames and their sequence information, the terminal device makes a dynamic image in .gif format or a video clip in .mp4 format as the multimedia resource.
  • another implementation of step 101 is shown in FIG. 6, which may include:
  • Step 1013: Obtain resource indication information corresponding to the target entity object, where the resource indication information is used to indicate the segment of the multimedia content in which the target entity object appears.
  • Step 1014: Acquire multimedia data according to the resource indication information, where the multimedia data includes image frame data in which the target entity object is used in the multimedia content.
  • Step 1015: According to the multimedia data, generate a multimedia resource showing the use effect of the target entity object.
  • the multimedia resource may also be generated by the terminal device acquiring resource indication information from the server, acquiring multimedia data from the server according to the resource indication information, and finally generating based on the multimedia data.
  • the server may determine the target entity object identification information corresponding to the target entity object from the pre-stored identification information of the multiple entity objects, and determine the resource indication information according to the mapping relationship between the entity object identification information and the image frame information.
  • the image frame information can be understood as the time information corresponding to the image frames in which the target entity object is used in the multimedia content (for example: 5 s to 25 s), or the frame sequence number information corresponding to those image frames (for example: frames 20 to 50).
  • the resource indication information can indicate the segment in which the target entity object appears in the multimedia content.
  • the resource indication information may be the time information corresponding to the image frames in which the target entity object is used in the multimedia content (for example: the 5th to 10th seconds of the multimedia content), or the frame number information corresponding to those image frames (for example: the 15th to 30th frames of the multimedia content).
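  • As an illustrative Python sketch, resource indication information of either form could be modeled as a small data structure; the class and field names are hypothetical:

```python
from dataclasses import dataclass
from typing import Optional, Tuple

@dataclass
class ResourceIndication:
    """A segment of the multimedia content in which the target entity
    object appears, given either as a time range in seconds or as a
    frame-number range."""
    content_id: str
    time_range_s: Optional[Tuple[float, float]] = None
    frame_range: Optional[Tuple[int, int]] = None

# Time-based indication: the 5th to 10th seconds of video 3.
ind = ResourceIndication("video3", time_range_s=(5.0, 10.0))
print(ind)
```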
  • the terminal device may acquire multimedia data from the server according to the resource indication information.
  • the multimedia data includes image frame data of the target entity object used in the multimedia content.
  • the multimedia data may also include sequence information of image frames using the target entity object in the multimedia content.
  • the image frame data may be understood as one or more image frames of the target entity object used in the multimedia content.
  • the sequence information can be understood as the order of the image frames that use the target entity object in the multimedia content; for example, it can be the time sequence of those image frames (for example: the time at which each image frame appears in the multimedia content), or the frame serial number sequence corresponding to those image frames (for example: the frame serial number of each image frame in the multimedia content).
  • the image frame data can be consecutive image frames in the multimedia content (which can be understood as all the image frames that use the target entity object), or non-consecutive image frames (which can be understood as key image frames that use the target entity object).
  • the terminal device may generate a multimedia resource showing the use effect of the target entity object based on the multimedia data.
  • the multimedia resource can be obtained by splicing the image frame data included in the multimedia data according to the sequence information.
  • the server directly determines the image frame data that uses the target entity object in the multimedia content, and directly sends the image frame data to the terminal device.
  • the terminal device may obtain, from the server, the image frames that use the target entity object in the multimedia content, segment by segment or frame by frame.
  • the server can filter out, from video 3, the segments in which the "sea" appears, and send the resource indication information corresponding to these segments to the terminal device.
  • For example, the resource indication information can be the time information of the image frames in which the "sea" appears in video 3: the 15th to the 100th second, which means that the "sea" appears in every image frame from the 15th second to the 100th second of video 3.
  • the terminal device obtains, from the server according to the resource indication information, the image frames from the 15th to the 100th second of video 3 together with their frame serial numbers, and makes these image frames into a dynamic image in .gif format, or a video clip in .avi format, in the order of the frame serial numbers, as the multimedia resource.
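  • The splicing step above can be sketched minimally as follows (illustrative only; in practice the ordered frames would then be encoded into a .gif or .avi file with a media library):

```python
def splice_frames(frame_records):
    """frame_records: list of (frame_serial_number, frame_data) pairs
    fetched from the server, possibly out of order. Returns the frame
    data ordered by serial number, ready to be encoded as a clip."""
    return [data for _, data in sorted(frame_records, key=lambda r: r[0])]

# frames may arrive out of order; splicing restores the sequence
clip = splice_frames([(17, "f17"), (15, "f15"), (16, "f16")])
```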
  • the terminal device or the server splices the image frame data included in the multimedia data according to the sequence information to obtain the multimedia resource.
  • Alternatively, only part of the image frame data included in the multimedia data, that is, part of the image frames that use the target entity object in the multimedia content, may be spliced.
  • For example, if the image frames that use the target entity object are the 21st to the 75th image frames, these 55 image frames can be combined into a video clip in chronological order, or the 47th to the 56th image frames in the middle can be taken and combined into a dynamic image.
  • the image frames that use the target entity object may be discontinuous in the multimedia content, that is, the target entity object may be used multiple times in the multimedia content.
  • For example, if the image frames that use the target entity object are the 10th to the 22nd image frames and the 153rd to the 170th image frames, then the 10th to the 22nd image frames and the 153rd to the 170th image frames can be combined into a dynamic image; it is also possible to select only the 15th to the 20th image frames, or only the 160th to the 170th image frames, and combine them into a dynamic image.
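  • Handling such discontinuous frames amounts to grouping the frame serial numbers into consecutive runs; a minimal sketch (function name is hypothetical, not part of the disclosure):

```python
def group_runs(serial_numbers):
    """Group frame serial numbers into consecutive runs,
    e.g. frames 10..22 and 153..170 -> [(10, 22), (153, 170)]."""
    runs = []
    for n in sorted(serial_numbers):
        if runs and n == runs[-1][1] + 1:
            runs[-1] = (runs[-1][0], n)   # extend the current run
        else:
            runs.append((n, n))           # start a new run
    return runs

serials = list(range(10, 23)) + list(range(153, 171))
runs = group_runs(serials)
```

Each run can then be turned into its own dynamic image, or a sub-range of a run can be selected, as in the examples above.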
  • Fig. 7 is a flowchart of another method for displaying search results according to an exemplary embodiment. As shown in Fig. 7 , step 102 may include the following steps.
  • Step 1021 Obtain the object identifier corresponding to the target entity object.
  • Step 1022 Display the object identifier corresponding to the target entity object in the first area of the display interface, and automatically play at least one multimedia resource in the second area of the display interface.
  • the terminal device may also obtain the object identifier corresponding to the target entity object from the server.
  • the object identifier can be understood as an icon corresponding to the target entity object.
  • the terminal device can also acquire the link corresponding to the target entity object, and the object identifier is associated with the link.
  • the terminal device can simultaneously display the multimedia resource and the object identifier corresponding to the target entity object in the display interface.
  • the object identifier corresponding to the target entity object may be displayed in the first area of the display interface of the terminal device, and at least one multimedia resource is automatically played in the second area of the display interface.
  • the first area and the second area may be different areas on the display interface.
  • the method may also include:
  • Step 105 In response to the triggering operation for the object identifier, jump to the multimedia shooting scene corresponding to the target entity object.
  • the user may click the object identifier corresponding to the target entity object to issue a trigger operation for the object identifier.
  • the terminal device can jump to the multimedia shooting scene corresponding to the target entity object through the link associated with the object identifier.
  • the user can not only visually check the use effect of the target entity object through the multimedia resources, but also quickly jump to the multimedia shooting scene corresponding to the target entity object through the object identifier, which improves the effectiveness of the search and the convenience of operation.
  • the server finds the multimedia contents using the "rabbit ear prop", including video 1, video 2 and video 3.
  • the terminal device obtains multimedia resources capable of displaying the effect of the "rabbit ear prop", including A.gif corresponding to video 1, B.gif corresponding to video 2, and C.gif corresponding to video 3.
  • the terminal device also obtains the object identifier corresponding to the "rabbit ear prop".
  • the object identifier corresponding to the "rabbit ear prop" is displayed in the first area of the display screen of the terminal device, and A.gif, B.gif, and C.gif are automatically played in the second area, as shown in FIG. 8 .
  • the terminal device can jump to the multimedia shooting scene using the "rabbit ear prop".
  • step 101 may be:
  • a multimedia content may include various entity objects, and there may be an association relationship between the entity objects, so two or more entity objects that have an association relationship may be associated with each other.
  • Entity objects with an association relationship may be entity objects that users frequently use at the same time, or may be multiple entity objects in the same group of entity objects.
  • users usually use "baby props" and "rabbit ear props” at the same time, so it can be determined that there is a relationship between “baby props” and "rabbit ear props”.
  • both "beauty effects” and “skinning effects” belong to a group of entity objects used for portrait processing, and it can also be determined that there is a relationship between “beauty effects” and “skinning effects”.
  • At least one multimedia resource showing the use effect of the entity object set can be acquired, wherein the entity object set includes the target entity object and the associated entity object that has an associated relationship with the target entity object.
  • the user can simultaneously view the use effect of the target entity object and the associated entity object through the multimedia resource, which improves the effectiveness of the search.
  • step 101 may be:
  • while the multimedia resources are obtained, the associated multimedia resources showing the use effect of the associated entity object are also obtained.
  • the associated entity object is an entity object that has an associated relationship with the target entity object.
  • step 102 can be as follows:
  • the multimedia resources are displayed in the third area of the display interface, and the associated multimedia resources are displayed in the fourth area of the display interface.
  • the terminal device may also acquire an associated multimedia resource showing the usage effect of the associated entity object while acquiring the multimedia resource showing the usage effect of the target entity object, where the associated entity object is an entity object that has an associated relationship with the target entity object.
  • An entity object with an associated relationship may be an entity object frequently used by users at the same time, or may be a plurality of entity objects in a group of entity objects.
  • the terminal device can simultaneously display the multimedia resource and the associated multimedia resource on the display interface.
  • the multimedia resources may be displayed in the third area of the display interface of the terminal device, and the associated multimedia resources may be displayed in the fourth area of the display interface.
  • the third area and the fourth area may be different areas on the display interface.
  • To sum up, in response to a search instruction for a target entity object, the present disclosure acquires at least one multimedia resource showing the use effect of the target entity object, where the multimedia resource is obtained by processing the multimedia content corresponding to the target entity object, and then displays the multimedia resource. For the search instructions of different target entity objects, the present disclosure obtains, according to the multimedia content corresponding to each target entity object, multimedia resources that can display its use effect, and displays the multimedia resources without displaying the complete multimedia content, which can improve the effectiveness of the search and reduce the consumption of data traffic.
  • Fig. 9 is a flowchart of a method for displaying search results according to an exemplary embodiment. As shown in Fig. 9 , the method includes the following steps.
  • Step 201 Receive a search instruction for the target entity object, and determine the multimedia content corresponding to the target entity object.
  • the execution body of the method may be a server.
  • a search instruction may be input through a terminal device, and the search instruction includes a search keyword corresponding to the target entity object.
  • the terminal device can send the search instruction to the server, and after receiving the search instruction, the server analyzes the search instruction to obtain the search keyword included in the search instruction, thereby determining the target entity object matching the search keyword.
  • For example, if there are multiple entity objects pre-stored on the server, the matching degree between the search keyword and each entity object is determined respectively, and the one or more entity objects with the highest matching degree are used as the target entity object.
  • For example, if the search instruction is "how to use baby props", the server can analyze it, determine that the search keyword included in the search instruction is "baby props", and thus determine that the target entity object is the "baby prop".
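  • The matching-degree selection described above can be sketched as follows; `SequenceMatcher` here is only an illustrative stand-in for whatever matching model the server actually uses, and the entity object names are taken from the examples in this disclosure:

```python
from difflib import SequenceMatcher

def best_matches(keyword, entity_objects, top_n=1):
    """Score each pre-stored entity object against the search keyword
    and return the one(s) with the highest matching degree."""
    scored = sorted(
        entity_objects,
        key=lambda name: SequenceMatcher(None, keyword, name).ratio(),
        reverse=True,
    )
    return scored[:top_n]

objects = ["baby prop", "rabbit ear prop", "cat face prop"]
target = best_matches("baby props", objects)
```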
  • After the server determines the target entity object, it can search for the multimedia content corresponding to the target entity object from the large number of multimedia contents pre-stored on the server.
  • the multimedia content stored on the server can be pre-uploaded by the user to the server, or captured by the server from the network.
  • This disclosure does not limit the source of the multimedia content, and the format of the multimedia content can be a video file (for example: .avi, .mp4, .wmv, .rmvb, .3gp, .mov, or .asf files).
  • Step 202 Determine search result information according to the multimedia content corresponding to the target entity object.
  • the search result information includes: at least one multimedia resource showing the use effect of the target entity object, or at least one set of multimedia data, or at least one resource indication information.
  • the multimedia data is image frame data that uses the target entity object in the multimedia content, and the resource indication information is used to indicate the segment in which the target entity object appears in the multimedia content.
  • Step 203 Send the search result information to the terminal device.
  • the server may determine the search result information based on the multimedia content corresponding to the target entity object, and send the search result information to the terminal device.
  • the search result information can be divided into three categories: the first category is at least one multimedia resource showing the use effect of the target entity object, the second category is at least one set of multimedia data, and the third category is at least one resource indication information. The following describes how to determine the three types of search result information.
  • the server can determine the target entity object identification information corresponding to the target entity object from the pre-stored identification information of multiple entity objects, and determine the image frame information of the target entity object according to the mapping relationship between the entity object identification information and the image frame information.
  • the image frame information of the target entity object can indicate the segment where the target entity object appears in the multimedia content.
  • the mapping relationship can be understood as storing a plurality of records, where each record includes an entity object and the segment in which that entity object appears in the multimedia content.
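  • A minimal sketch of such a mapping relationship follows (the identifiers and field names are hypothetical, chosen for illustration only):

```python
# Each record pairs an entity object identifier with the segment(s)
# in which it appears in the multimedia content.
mapping = {
    "rabbit_ear_prop": [{"start_s": 5, "end_s": 25}],
    "cat_face_prop":   [{"start_s": 10, "end_s": 27}],
}

def image_frame_info(entity_object_id):
    """Look up the segments in which the given entity object appears;
    an unknown identifier yields no segments."""
    return mapping.get(entity_object_id, [])
```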
  • the image frame information can be understood as the time information corresponding to the image frames that use the target entity object in the multimedia content (for example: 5 s to 25 s), or the frame serial number information corresponding to those image frames (for example: frames 20 to 50).
  • Based on the image frame information of the target entity object, the server extracts, from the multimedia content, the segments in which the target entity object appears, so as to generate the multimedia resource showing the use effect of the target entity object.
  • the server can send the multimedia resource to the terminal device, so that the terminal device can display the multimedia resource.
  • the image frame information of the target entity object can indicate consecutive image frames in the multimedia content (which can be understood as all the image frames that use the target entity object), or non-consecutive image frames (which can be understood as key image frames that use the target entity object).
  • the server can determine the target entity object identification information corresponding to the target entity object from the pre-stored identification information of multiple entity objects, determine, according to the mapping relationship between the entity object identification information and the image frame information, the image frame data that uses the target entity object in the multimedia content, and send the image frame data to the terminal device as the multimedia data.
  • the image frame information can be understood as the time information corresponding to the image frames that use the target entity object in the multimedia content (for example: 5 s to 25 s), or the frame serial number information corresponding to those image frames (for example: frames 20 to 50).
  • the multimedia data may also include sequence information of image frames using the target entity object in the multimedia content.
  • the image frame data may be understood as one or more image frames of the target entity object used in the multimedia content.
  • the sequence information can be understood as the sequence of image frames using the target entity object in the multimedia content.
  • for example, it can be the time sequence of the image frames that use the target entity object in the multimedia content (for example: the time at which each image frame appears in the multimedia content), or the frame serial number sequence corresponding to those image frames (for example: the frame serial number of each image frame in the multimedia content).
  • the image frame data can be consecutive image frames in the multimedia content (which can be understood as all the image frames that use the target entity object), or non-consecutive image frames (which can be understood as key image frames that use the target entity object).
  • the terminal device may, based on the multimedia data, generate and display multimedia resources showing the use effect of the target entity object.
  • the multimedia resource can be generated by splicing the image frame data included in the multimedia data according to the sequence information.
  • the server can determine the target entity object identification information corresponding to the target entity object from the pre-stored identification information of multiple entity objects, and determine the resource indication information according to the mapping relationship between the entity object identification information and the image frame information.
  • the image frame information can be understood as the time information corresponding to the image frames that use the target entity object in the multimedia content (for example: 5 s to 25 s), or the frame serial number information corresponding to those image frames (for example: frames 20 to 50).
  • the resource indication information can indicate the segment in which the target entity object appears in the multimedia content.
  • the resource indication information may be the time information corresponding to the image frames that use the target entity object in the multimedia content (for example: the 5th to the 10th second of the multimedia content), or the frame serial number information corresponding to those image frames (for example: the 15th to the 30th frames of the multimedia content).
  • the terminal device may acquire multimedia data from the server according to the resource indication information.
  • the multimedia data includes image frame data of the target entity object used in the multimedia content.
  • the multimedia data may also include sequence information of image frames using the target entity object in the multimedia content.
  • the image frame data may be understood as one or more image frames of the target entity object used in the multimedia content.
  • the sequence information can be understood as the order of the image frames that use the target entity object in the multimedia content; for example, it can be the time sequence of those image frames (for example: the time at which each image frame appears in the multimedia content), or the frame serial number sequence corresponding to those image frames (for example: the frame serial number of each image frame in the multimedia content).
  • the image frame data can be consecutive image frames in the multimedia content (which can be understood as all the image frames that use the target entity object), or non-consecutive image frames (which can be understood as key image frames that use the target entity object).
  • the terminal device may, based on the multimedia data, generate and display multimedia resources showing the use effect of the target entity object.
  • the multimedia resource can be obtained by splicing the image frame data included in the multimedia data according to the sequence information.
  • the multimedia content in the above embodiments can be understood as a multimedia file with complete content and a longer duration, for example, a video or audio file; the multimedia resource comes from the part of the multimedia content that is closely related to the target entity object, such as a dynamic image or a shorter video clip.
  • the multimedia resource may be generated according to one or more image frames using the target entity object in the multimedia content, and can visually display the use effect of the target entity object.
  • Take the target entity object being the "cat face prop" and the multimedia content being a video as an example.
  • If the "cat face prop" is used in the multimedia content from the 10th to the 27th second,
  • the multimedia resource can be a dynamic image generated from the image frames of the multimedia content from the 10th to the 27th second.
  • multimedia resources contain a small amount of data and can intuitively display the use effect of the target entity object.
  • the terminal device can display multimedia resources without displaying complete multimedia contents, which can improve the effectiveness of search and reduce the consumption of data traffic.
  • Fig. 10 is a flowchart of another method for displaying search results according to an exemplary embodiment. As shown in Fig. 10 , the method may further include the following steps.
  • Step 204 Receive multimedia content corresponding to any entity object and attribute information of the multimedia content corresponding to each entity object.
  • the attribute information includes identification information and usage time information of each entity object.
  • the terminal device can use various entity objects in the multimedia content during or after the shooting of the multimedia content; for example, subtitles or background music can be added to the multimedia content, special effects or props can be used in the video, and content tags corresponding to different content in the multimedia content can also be added directly.
  • the terminal device can record each kind of entity object used in the multimedia content and the usage time information corresponding to each kind of entity object.
  • the usage time information may include: the time at which the entity object is used in the multimedia content (for example: the start time and the end time), or the serial numbers of the image frames in which the entity object is used in the multimedia content (for example: the start frame serial number and the end frame serial number).
  • the terminal device can take the identification information of each entity object (the identification information can uniquely identify the corresponding entity object) and the usage time information of each entity object as the attribute information of the multimedia content, and send it to the server together with the multimedia content.
  • the server can determine the mapping relationship between the entity object identification information and the image frame information according to the identification information of each entity object and the usage time information of each entity object, wherein the image frame information indicates the segment in which each entity object appears in the multimedia content. It can be understood that a plurality of records are stored in the mapping relationship, and each record includes an entity object and the segment in which that entity object appears in the multimedia content.
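  • The construction of the mapping relationship from the reported attribute information can be sketched as follows (the field names and identifiers are hypothetical, chosen only to mirror the examples in this disclosure):

```python
def build_mapping(attribute_info):
    """attribute_info: list of {"id": ..., "usage": (start, end)} entries
    reported by the terminal device for one piece of multimedia content.
    Returns a mapping from entity object id to its list of segments;
    an entity object used multiple times gets multiple segments."""
    mapping = {}
    for entry in attribute_info:
        mapping.setdefault(entry["id"], []).append(entry["usage"])
    return mapping

attrs = [
    {"id": "subtitle", "usage": (0, 30)},
    {"id": "rabbit_ear_prop", "usage": (5, 25)},
    {"id": "rabbit_ear_prop", "usage": (40, 50)},
]
m = build_mapping(attrs)
```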
  • Fig. 11 is a flowchart showing another method for displaying search results according to an exemplary embodiment. As shown in Fig. 11 , step 202 can be implemented by the following steps.
  • Step 2021 Obtain entity object information of the multimedia content.
  • the entity object information includes: entity object identification information used in the multimedia content and a mapping relationship between the entity object identification information and image frame information, and the image frame information represents the segment of each entity object that appears in the multimedia content.
  • Step 2022 Select the target entity object identification information corresponding to the target entity object from the entity object identification information, and determine the image frame information of the target entity object according to the mapping relationship.
  • Step 2023 Generate multimedia resources based on the multimedia content and the image frame information of the target entity object.
  • the server may acquire entity object information of the multimedia content, wherein the entity object information includes entity object identification information used in the multimedia content and a mapping relationship between the entity object identification information and the image frame information.
  • From the entity object identification information, the target entity object identification information corresponding to the target entity object is determined, and then the image frame information of the target entity object is determined according to the mapping relationship between the entity object identification information and the image frame information.
  • the image frame information of the target entity object can indicate the segment where the target entity object appears in the multimedia content.
  • Based on the image frame information of the target entity object, the server extracts, from the multimedia content, the segments in which the target entity object appears, thereby generating the multimedia resource showing the use effect of the target entity object.
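  • Steps 2021 to 2023 can be sketched as follows (an illustrative example; the data layout is hypothetical, and the selected frames would still need to be encoded into a dynamic image or video clip by a media library):

```python
def generate_resource(content_frames, mapping, target_id):
    """content_frames: {frame_serial_number: frame_data} for the whole
    multimedia content; mapping: entity object id -> list of
    (start, end) frame runs. Returns the frames in which the target
    entity object appears, in order."""
    selected = []
    for start, end in mapping.get(target_id, []):
        for i in range(start, end + 1):
            if i in content_frames:
                selected.append(content_frames[i])
    return selected

content_frames = {i: f"frame_{i}" for i in range(1, 101)}
mapping = {"rabbit_ear_prop": [(15, 17), (60, 61)]}
resource = generate_resource(content_frames, mapping, "rabbit_ear_prop")
```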
  • Fig. 12 is a flowchart showing another method for displaying search results according to an exemplary embodiment. As shown in Fig. 12 , step 202 can be implemented by the following steps.
  • Step 2024 Obtain entity object information of the multimedia content.
  • the entity object information includes: entity object identification information used in the multimedia content and a mapping relationship between the entity object identification information and image frame information, and the image frame information represents the segment of each entity object that appears in the multimedia content.
  • Step 2025 Select, from the entity object identification information, the target entity object identification information corresponding to the target entity object and the associated entity object identification information corresponding to the associated entity object, and determine, according to the mapping relationship, the image frame information of the target entity object and the image frame information of the associated entity object, where the associated entity object is an entity object that has an association relationship with the target entity object.
  • Step 2026 Generate multimedia resources based on the multimedia content, the image frame information of the target entity object, and the image frame information of the associated entity object.
  • a multimedia content may include various entity objects, and there may be an association relationship between the entity objects, so two or more entity objects that have an association relationship may be associated with each other.
  • Entity objects with an association relationship may be entity objects that users frequently use at the same time, or may be multiple entity objects in the same group of entity objects.
  • users usually use "baby props" and "rabbit ear props” at the same time, so it can be determined that there is a relationship between “baby props” and "rabbit ear props”.
  • both "beauty effects” and “skinning effects” belong to a group of entity objects used for portrait processing, and it can also be determined that there is a relationship between “beauty effects” and “skinning effects”.
  • the entity object information of the multimedia content can be obtained, wherein the entity object information includes the entity object identification information used in the multimedia content and the mapping relationship between the entity object identification information and the image frame information.
  • From the entity object identification information, the target entity object identification information corresponding to the target entity object and the associated entity object identification information corresponding to the associated entity object are determined.
  • Then, according to the mapping relationship, the image frame information of the target entity object and the image frame information of the associated entity object are determined.
  • the image frame information of the target entity object can indicate the segment of the target entity object that appears in the multimedia content
  • the image frame information of the associated entity object can indicate the segment in which the associated entity object appears in the multimedia content.
  • Based on the image frame information of the target entity object and the image frame information of the associated entity object, the server extracts, from the multimedia content, the segments in which the target entity object appears and the segments in which the associated entity object appears, so as to generate a multimedia resource showing the use effect of the target entity object and the associated entity object.
  • In this way, when the terminal device displays the multimedia resource, the user can view the use effect of the target entity object and the associated entity object at the same time, which improves the effectiveness of the search. It should be noted that between the segments in which the target entity object appears in the multimedia content and the segments in which the associated entity object appears, there may be duplicate image frames, that is, there may be image frames in which both the target entity object and the associated entity object appear.
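  • One way to avoid including such duplicate image frames twice is to merge overlapping segments before extraction; a minimal sketch (not part of the disclosure, segments given as (start, end) frame serial numbers):

```python
def merge_segments(segments):
    """Merge overlapping or adjacent (start, end) segments so that image
    frames in which both the target entity object and the associated
    entity object appear are extracted only once."""
    merged = []
    for start, end in sorted(segments):
        if merged and start <= merged[-1][1] + 1:
            # overlaps or touches the previous segment: extend it
            merged[-1] = (merged[-1][0], max(merged[-1][1], end))
        else:
            merged.append((start, end))
    return merged

# the target entity object appears in frames 10-40,
# the associated entity object in frames 30-60
combined = merge_segments([(10, 40), (30, 60)])
```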
  • Fig. 13 is a flowchart of another method for displaying search results according to an exemplary embodiment. As shown in Fig. 13 , the method may further include the following steps.
  • Step 205 Determine an associated entity object that has an associated relationship with the target entity object, and determine associated multimedia content corresponding to the associated entity object.
  • the server analyzes the search instruction to obtain the search keyword included in the search instruction, thereby determining the target entity object matching the search keyword, and then determining the associated entity object that has an association relationship with the target entity object.
  • the associated multimedia content can be understood as the multimedia content using the associated entity object, or the multimedia content in which the associated entity object appears.
  • Step 206 Determine the associated search result information according to the associated multimedia content, where the associated search result information includes: at least one associated multimedia resource showing the use effect of the associated entity object, or at least one set of associated multimedia data, or at least one associated resource indication information.
  • the associated multimedia data includes image frame data in the associated multimedia content using the label of the associated entity object, and the associated resource indication information is used to indicate a segment of the associated entity object that appears in the associated multimedia content.
  • Step 207 Send the associated search result information to the terminal device.
  • the server may determine the associated search result information based on the associated multimedia content corresponding to the associated entity object, and send the associated search result information to the terminal device.
  • the associated search result information can be divided into three categories: the first category is at least one associated multimedia resource showing the use effect of the associated entity object, the second category is at least one set of associated multimedia data, and the third category is at least one associated resource indication information.
  • the associated multimedia resource is determined in the same manner as the multimedia resource showing the use effect of the target entity object is generated, the associated multimedia data is generated in the same manner as the multimedia data corresponding to the target entity object, and the associated resource indication information is generated in the same manner as the resource indication information; details are not repeated here.
  • the terminal device can acquire the multimedia resource for displaying the usage effect of the target entity object according to the search result information, and acquire the associated multimedia resource for displaying the usage effect of the associated entity object according to the associated search result information.
  • the terminal device displays multimedia resources and associated multimedia resources, the user can view the use effect of the target entity object through the multimedia resource, and view the use effect of the associated entity object through the associated multimedia resource, which improves the effectiveness of the search.
  • the present disclosure receives a search instruction for a target entity object, determines multimedia content corresponding to the target entity object, and determines search result information according to the multimedia content corresponding to the target entity object.
  • the search result information may include at least one multimedia resource showing the use effect of the target entity object, or at least one set of multimedia data, or at least one resource indication information.
  • For the search instructions of different target entity objects, the present disclosure determines search result information according to the multimedia content corresponding to the target entity object and sends the search result information to the terminal device, so that the terminal device can obtain and display, according to the search result information, multimedia resources capable of showing the use effect of the target entity object without displaying the complete multimedia content, which can improve the effectiveness of the search and reduce the consumption of data traffic.
  • Fig. 14 is a block diagram of an apparatus for displaying search results according to an exemplary embodiment. As shown in Fig. 14, the apparatus 300 may include:
  • the obtaining module 301 is configured to obtain at least one multimedia resource showing the use effect of the target entity object in response to the search instruction for the target entity object.
  • the multimedia resource is obtained by processing the multimedia content corresponding to the target entity object.
  • the presentation module 302 is configured to present at least one multimedia resource.
  • the presentation module 302 may determine different presentation modes according to the type of the multimedia resource. For example, if the multimedia resource is a video, the multimedia resource is displayed according to a first preset manner, which includes any one of loop playback, reverse-order playback, double-speed playback, and window playback. If the multimedia resource is a dynamic image, the multimedia resource is displayed according to a second preset manner, which includes enlarged display or reduced display.
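The mode selection of the presentation module can be sketched as below. The preset names and the fallback defaults are illustrative assumptions, not values given in the disclosure.

```python
# Hypothetical preset manners, mirroring the two cases described above.
FIRST_PRESET = {"loop", "reverse", "double_speed", "window"}   # videos
SECOND_PRESET = {"enlarged", "reduced"}                        # dynamic images

def choose_presentation(resource_type, preference):
    """Pick a display mode: videos use the first preset manner,
    dynamic images the second; an unknown preference falls back to
    an assumed default within the applicable preset."""
    if resource_type == "video":
        presets, default = FIRST_PRESET, "loop"
    elif resource_type == "dynamic_image":
        presets, default = SECOND_PRESET, "enlarged"
    else:
        raise ValueError(f"unsupported resource type: {resource_type}")
    return preference if preference in presets else default
```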
  • FIG. 15 is a block diagram of another apparatus for displaying search results according to an exemplary embodiment. As shown in FIG. 15 , the apparatus 300 further includes:
  • the recording module 303 is configured to record at least one kind of entity object used in the multimedia content, and usage time information of each kind of entity object.
  • the sending module 304 is configured to send the multimedia content and attribute information of the multimedia content to the server.
  • the attribute information of the multimedia content includes identification information and usage time information of each entity object.
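A minimal sketch of the attribute information that the recording module 303 might assemble before the sending module 304 transmits it with the multimedia content. The input format (`usage_log` mapping an entity object identifier to start/end timestamps) and the field names are assumptions for illustration.

```python
def build_attribute_info(usage_log):
    """Assemble attribute information for a piece of multimedia content:
    identification information and usage time information per entity
    object. `usage_log` maps an entity object id to (start, end)
    timestamps in seconds -- a hypothetical input format."""
    return {
        "entity_objects": [
            {"id": object_id, "start_s": start, "end_s": end}
            for object_id, (start, end) in usage_log.items()
        ]
    }

info = build_attribute_info({"sticker_42": (0.0, 3.5), "filter_7": (2.0, 9.0)})
```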
  • the obtaining module 301 may be configured to perform the following steps:
  • Step 1) Acquire multimedia data, where the multimedia data includes image frame data of the target entity object in the multimedia content corresponding to the target entity object.
  • Step 2 Based on the multimedia data, a multimedia resource showing the use effect of the target entity object is generated.
  • the obtaining module 301 may be configured to perform the following steps:
  • Step 3 Acquire resource indication information corresponding to the target entity object, where the resource indication information is used to indicate a segment of the target entity object that appears in the multimedia content.
  • Step 4) Acquire multimedia data according to the resource indication information, where the multimedia data includes image frame data of the target entity object used in the multimedia content.
  • Step 5 According to the multimedia data, a multimedia resource showing the use effect of the target entity object is generated.
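Steps 3 to 5 can be sketched as follows, under the simplifying assumption that the resource indication information is a pair of frame indices and that `frames` stands in for decoded image frame data, one item per frame.

```python
def generate_resource(frames, indication):
    """Hypothetical sketch of steps 3-5: `indication` gives the
    (start, end) frame indices of the segment in which the target
    entity object appears; the matching image frame data is sliced
    out of the full content and packaged as the multimedia resource."""
    start, end = indication
    clip = frames[start:end + 1]   # image frame data using the object
    return {"type": "video", "frames": clip}

resource = generate_resource(list(range(100)), (20, 24))
```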
  • FIG. 16 is a block diagram of another apparatus for displaying search results according to an exemplary embodiment.
  • the displaying module 302 may include:
  • the obtaining sub-module 3021 is configured to obtain the object identifier corresponding to the target entity object.
  • the display sub-module 3022 is configured to display the object identifier corresponding to the target entity object in the first area of the display interface, and automatically play at least one multimedia resource in the second area of the display interface.
  • Fig. 17 is a block diagram of another apparatus for displaying search results according to an exemplary embodiment. As shown in Fig. 17 , the apparatus 300 further includes:
  • the jumping module 305 is configured to jump to the multimedia shooting scene corresponding to the target entity object in response to the triggering operation for the object identification.
  • In one embodiment, the obtaining module 301 may be configured to acquire at least one multimedia resource showing the use effect of an entity object set, where the entity object set includes the target entity object and an associated entity object.
  • In another embodiment, the obtaining module 301 may be configured to:
  • obtain the multimedia resources, and obtain associated multimedia resources showing the use effect of the associated entity object.
  • the associated entity object is an entity object that has an associated relationship with the target entity object.
  • the presentation module 302 can be set to:
  • the multimedia resources are displayed in the third area of the display interface, and the associated multimedia resources are displayed in the fourth area of the display interface.
  • In summary, in response to a search instruction for a target entity object, the present disclosure acquires at least one multimedia resource showing the use effect of the target entity object, the multimedia resource being obtained by processing the multimedia content corresponding to the target entity object, and displays the multimedia resource. For the search instructions of different target entity objects, the present disclosure obtains, according to the corresponding multimedia content, multimedia resources capable of showing the use effect of the target entity objects, and displays them without displaying the complete multimedia content, which can improve the effectiveness of the search and reduce the consumption of data traffic.
  • Fig. 18 is a block diagram of an apparatus for displaying search results according to an exemplary embodiment. As shown in Fig. 18, the apparatus 400 can be applied to a server, including:
  • the receiving module 401 is configured to receive a search instruction for the target entity object, and determine the multimedia content corresponding to the target entity object.
  • the determining module 402 is configured to determine search result information according to the multimedia content corresponding to the target entity object, and the search result information includes: at least one multimedia resource showing the use effect of the target entity object, or at least one set of multimedia data, or at least one resource indication information.
  • the multimedia data is image frame data using the tag of the target entity object in the multimedia content, and the resource indication information is used to indicate a segment of the target entity object that appears in the multimedia content.
  • the sending module 403 is configured to send the search result information to the terminal device.
  • the receiving module 401 is further configured to receive multimedia content corresponding to any entity object and attribute information of the multimedia content corresponding to each entity object.
  • the attribute information includes identification information and usage time information of each entity object.
  • the determining module 402 may be configured to perform the following steps:
  • Step A) Obtaining entity object information of the multimedia content.
  • the entity object information includes: entity object identification information used in the multimedia content and a mapping relationship between the entity object identification information and image frame information, and the image frame information represents the segment of each entity object that appears in the multimedia content.
  • Step B) Select the target entity object identification information corresponding to the target entity object in the entity object identification information, and determine the image frame information of the target entity object according to the mapping relationship.
  • Step C) Based on the multimedia content and the image frame information of the target entity object, a multimedia resource is generated.
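Steps A and B can be sketched as a lookup over the entity object information. The dictionary layout (`"ids"` and `"id_to_frames"` keys) is a hypothetical representation of the identification information and the mapping relationship.

```python
def frame_info_for(entity_object_info, target_id):
    """Hypothetical sketch of steps A-B: check whether the target
    entity object's identification appears among the identification
    info recorded for the content, then resolve its image frame info
    (the segment in which it appears) through the mapping relationship."""
    if target_id not in entity_object_info["ids"]:
        return None   # object never used in this multimedia content
    return entity_object_info["id_to_frames"][target_id]

info = {
    "ids": ["sticker_42", "filter_7"],
    "id_to_frames": {"sticker_42": (0, 90), "filter_7": (60, 300)},
}
```

The returned frame range then drives step C, in which the multimedia resource is cut from the multimedia content.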
  • the determining module 402 may be configured to perform the following steps:
  • Step D) Obtaining entity object information of the multimedia content.
  • the entity object information includes: entity object identification information used in the multimedia content and a mapping relationship between the entity object identification information and image frame information, and the image frame information represents the segment of each entity object that appears in the multimedia content.
  • Step E) Select, from the entity object identification information, the target entity object identification information corresponding to the target entity object and the associated entity object identification information corresponding to the associated entity object, and determine, according to the mapping relationship, the image frame information of the target entity object and the image frame information of the associated entity object, where the associated entity object is an entity object that has an associated relationship with the target entity object.
  • Step F) Based on the multimedia content, the image frame information of the target entity object and the image frame information of the associated entity object, a multimedia resource is generated.
  • the determining module 402 may also be set to:
  • An associated entity object that has an associated relationship with the target entity object is determined, and an associated multimedia content corresponding to the associated entity object is determined.
  • the associated search result information is determined according to the associated multimedia content, and the associated search result information includes: at least one associated multimedia resource showing the use effect of the associated entity object, or at least one set of associated multimedia data, or at least one associated resource indication information.
  • the associated multimedia data includes image frame data in the associated multimedia content using the label of the associated entity object, and the associated resource indication information is used to indicate a segment of the associated entity object that appears in the associated multimedia content.
  • the sending module 403 may also be configured to send the associated search result information to the terminal device.
  • the present disclosure receives a search instruction for a target entity object, determines multimedia content corresponding to the target entity object, and determines search result information according to the multimedia content corresponding to the target entity object.
  • the search result information may include at least one multimedia resource showing the use effect of the target entity object, or at least one set of multimedia data, or at least one resource indication information.
  • For the search instructions of different target entity objects, the present disclosure determines the search result information according to the corresponding multimedia content and sends the search result information to the terminal device, so that the terminal device can obtain and display, according to the search result information, multimedia resources capable of showing the use effect of the target entity object without displaying the complete multimedia content, which can improve the effectiveness of the search and reduce the consumption of data traffic.
  • Terminal devices in the embodiments of the present disclosure may include, but are not limited to, mobile terminals such as mobile phones, notebook computers, digital broadcast receivers, PDAs, PADs, PMPs, and in-vehicle terminals (e.g., in-vehicle navigation terminals), as well as stationary terminals such as digital TVs and desktop computers.
  • the electronic device shown in FIG. 19 is only an example, and should not impose any limitation on the function and scope of use of the embodiments of the present disclosure.
  • the electronic device 600 may include a processing device (e.g., a central processing unit, a graphics processor, etc.) 601, which may perform various appropriate actions and processes based on a program stored in a read-only memory (ROM) 602 or a program loaded from a storage device 608 into a random access memory (RAM) 603. The RAM 603 also stores various programs and data required for the operation of the electronic device 600.
  • the processing device 601, the ROM 602, and the RAM 603 are connected to each other through a bus 604.
  • An Input/Output (I/O) interface 605 is also connected to the bus 604 .
  • the following devices can be connected to the I/O interface 605: input devices 606 including, for example, a touch screen, touch pad, keyboard, mouse, camera, microphone, accelerometer, gyroscope, etc.; output devices 607 including, for example, a liquid crystal display (LCD), speakers, a vibrator, etc.; storage devices 608 including, for example, a magnetic tape, a hard disk, etc.; and a communication device 609.
  • Communication means 609 may allow electronic device 600 to communicate wirelessly or by wire with other devices to exchange data.
  • Although FIG. 19 shows an electronic device 600 having various devices, it is not required to implement or have all of the illustrated devices; more or fewer devices may be implemented or provided instead.
  • embodiments of the present disclosure include a computer program product comprising a computer program carried on a non-transitory computer readable medium, the computer program containing program code for performing the method illustrated in the flowchart.
  • the computer program may be downloaded and installed from the network via the communication device 609, or from the storage device 608, or from the ROM 602.
  • When the computer program is executed by the processing device 601, the above-mentioned functions defined in the methods of the embodiments of the present disclosure are executed.
  • the computer-readable medium described above in the present disclosure may be a computer-readable signal medium or a computer-readable storage medium, or any combination of the two.
  • the computer-readable storage medium can be, for example, but not limited to, an electrical, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus or device, or a combination of any of the above.
  • Computer readable storage media may include, but are not limited to: electrical connections with one or more wires, portable computer disks, hard disks, RAM, ROM, Erasable Programmable Read-Only Memory (EPROM), Flash memory, optical fiber, portable Compact Disc Read-Only Memory (CD-ROM), optical storage device, magnetic storage device, or any suitable combination of the above.
  • a computer-readable storage medium may be any tangible medium that contains or stores a program that can be used by or in conjunction with an instruction execution system, apparatus, or device.
  • a computer-readable signal medium may include a data signal propagated in baseband or as part of a carrier wave with computer-readable program code embodied thereon. Such propagated data signals may take a variety of forms, including but not limited to electromagnetic signals, optical signals, or any suitable combination of the foregoing.
  • a computer-readable signal medium can also be any computer-readable medium other than a computer-readable storage medium that can transmit, propagate, or transport the program for use by or in connection with the instruction execution system, apparatus, or device.
  • the program code embodied on the computer-readable medium may be transmitted by any suitable medium, including but not limited to: electric wire, optical fiber cable, radio frequency (RF), etc., or any suitable combination of the above.
  • Terminal devices and servers can communicate using any currently known or future developed network protocol, such as the HyperText Transfer Protocol (HTTP), and can be interconnected with digital data communication in any form or medium (e.g., a communication network). Examples of communication networks include local area networks (LANs), wide area networks (WANs), internetworks (e.g., the Internet), and peer-to-peer networks (e.g., ad hoc peer-to-peer networks), as well as any currently known or future developed network.
  • the above-mentioned computer-readable medium may be included in the above-mentioned electronic device; or may exist alone without being assembled into the electronic device.
  • the above-mentioned computer-readable medium carries one or more programs, and when the one or more programs are executed by the electronic device, the electronic device is caused to: in response to a search instruction for a target entity object, acquire at least one multimedia resource showing the use effect of the target entity object, where the multimedia resource is obtained by processing the multimedia content corresponding to the target entity object; and display the at least one multimedia resource.
  • the above-mentioned computer-readable medium carries one or more programs, and when the one or more programs are executed by the electronic device, the electronic device is caused to: receive a search instruction for a target entity object and determine the multimedia content corresponding to the target entity object; determine search result information according to the multimedia content corresponding to the target entity object, the search result information including at least one multimedia resource showing the use effect of the target entity object, or at least one set of multimedia data, or at least one piece of resource indication information, where the multimedia data is image frame data using the label of the target entity object in the multimedia content, and the resource indication information is used to indicate the segment in which the target entity object appears in the multimedia content; and send the search result information to the terminal device.
  • Computer program code for performing operations of the present disclosure may be written in one or more programming languages or a combination thereof, including but not limited to object-oriented programming languages such as Java, Smalltalk, and C++, as well as conventional procedural programming languages such as the "C" language or similar programming languages.
  • the program code may execute entirely on the user's computer, partly on the user's computer, as a stand-alone software package, partly on the user's computer and partly on a remote computer, or entirely on the remote computer or server.
  • the remote computer may be connected to the user's computer through any kind of network, including a LAN or WAN, or may be connected to an external computer (eg, using an Internet service provider to connect through the Internet).
  • each block in the flowchart or block diagrams may represent a module, segment, or portion of code that contains one or more executable instructions for implementing the specified logical functions.
  • the functions noted in the blocks may occur out of the order noted in the figures. For example, two blocks shown in succession may, in fact, be executed substantially concurrently, or the blocks may sometimes be executed in the reverse order, depending upon the functionality involved.
  • each block of the block diagrams and/or flowchart illustrations, and combinations of blocks in the block diagrams and/or flowchart illustrations, can be implemented in dedicated hardware-based systems that perform the specified functions or operations, or can be implemented in a combination of dedicated hardware and computer instructions.
  • the modules involved in the embodiments of the present disclosure may be implemented in software or hardware. Wherein, the name of the module does not constitute a limitation of the module itself in one case, for example, the acquisition module may also be described as "a module for acquiring multimedia resources".
  • exemplary types of hardware logic components include: field programmable gate arrays (FPGAs), application specific integrated circuits (ASICs), application specific standard products (ASSPs), systems on chip (SOCs), complex programmable logic devices (CPLDs), and so on.
  • a machine-readable medium may be a tangible medium that may contain or store a program for use by or in connection with the instruction execution system, apparatus or device.
  • the machine-readable medium may be a machine-readable signal medium or a machine-readable storage medium.
  • Machine-readable media may include, but are not limited to, electronic, magnetic, optical, electromagnetic, infrared, or semiconductor systems, apparatuses, or devices, or any suitable combination of the foregoing.
  • Machine-readable storage media include electrical connections based on one or more wires, portable computer disks, hard disks, RAM, ROM, EPROM, flash memory, optical fibers, portable CD-ROMs, optical storage devices, magnetic storage devices, or any suitable combination of the above.
  • Example 1 provides a method for displaying search results, including: in response to a search instruction for a target entity object, acquiring at least one multimedia resource displaying the use effect of the target entity object ; wherein, the multimedia resource is obtained by processing the multimedia content corresponding to the target entity object; and the at least one multimedia resource is displayed.
  • Example 2 provides the method of Example 1, further comprising: recording at least one entity object used in the multimedia content, and usage time information of each entity object; and sending the multimedia content and the attribute information of the multimedia content to the server; wherein the attribute information of the multimedia content includes the identification information and usage time information of each entity object.
  • Example 3 provides the method of Example 1, wherein the acquiring at least one multimedia resource showing the use effect of the target entity object includes: acquiring multimedia data, the multimedia data including image frame data in which the target entity object is used in the multimedia content corresponding to the target entity object; and generating, based on the multimedia data, the multimedia resource showing the use effect of the target entity object.
  • Example 4 provides the method of Example 1, wherein the acquiring at least one multimedia resource showing the use effect of the target entity object includes: acquiring resource indication information corresponding to the target entity object, the resource indication information being used to indicate the segment in which the target entity object appears in the multimedia content; acquiring multimedia data according to the resource indication information, the multimedia data including image frame data in which the target entity object is used in the multimedia content; and generating, according to the multimedia data, the multimedia resource showing the use effect of the target entity object.
  • Example 5 provides any one of the methods in Examples 1 to 4, and the displaying the at least one multimedia resource includes: acquiring an object identifier corresponding to the target entity object; The object identifier corresponding to the target entity object is displayed in the first area of the display interface, and at least one of the multimedia resources is automatically played in the second area of the display interface.
  • Example 6 provides the method of Example 5, further comprising: jumping to a multimedia shooting scene corresponding to the target entity object in response to a trigger operation for the object identifier.
  • Example 7 provides the method of any one of Examples 1 to 4, wherein the acquiring at least one multimedia resource showing the use effect of the target entity object includes: acquiring at least one multimedia resource showing the use effect of an entity object set, where the entity object set includes the target entity object and an associated entity object, and the associated entity object is an entity object that has an associated relationship with the target entity object.
  • Example 8 provides the method of any one of Examples 1 to 4, wherein the acquiring at least one multimedia resource showing the use effect of the target entity object includes: acquiring the multimedia resources and an associated multimedia resource showing the use effect of an associated entity object, the associated entity object being an entity object that has an associated relationship with the target entity object; and the displaying the at least one multimedia resource includes: displaying the multimedia resources in a third area of the display interface, and displaying the associated multimedia resources in a fourth area of the display interface.
  • Example 9 provides any one of the methods in Examples 1 to 4, wherein the displaying the at least one multimedia resource includes: if the multimedia resource is a video, displaying the multimedia resource The resources are displayed according to the first preset mode, and the first preset mode includes: any one of loop playback, reverse order playback, double-speed playback, and window playback; if the multimedia resource is a dynamic image, the multimedia resource is The display is performed according to a second preset manner, and the second preset manner includes: enlarged display or reduced display.
  • Example 10 provides a method for displaying search results, including: receiving a search instruction for a target entity object, and determining multimedia content corresponding to the target entity object; determining search result information according to the multimedia content corresponding to the target entity object, the search result information including: at least one multimedia resource showing the use effect of the target entity object, or at least one set of multimedia data, or at least one piece of resource indication information, wherein the multimedia data is image frame data using the label of the target entity object in the multimedia content, and the resource indication information is used to indicate the segment in which the target entity object appears in the multimedia content; and sending the search result information to the terminal device.
  • Example 11 provides the method of Example 10, further comprising: receiving multimedia content corresponding to any entity object, and attribute information of the multimedia content corresponding to each entity object; wherein, the The attribute information includes identification information and usage time information of each of the entity objects.
  • Example 12 provides the method of Example 10, wherein the determining search result information according to the multimedia content corresponding to the target entity object includes: acquiring entity object information of the multimedia content;
  • the entity object information includes: entity object identification information used in the multimedia content and a mapping relationship between the entity object identification information and image frame information, where the image frame information indicates the segment in which each entity object appears in the multimedia content; selecting, from the entity object identification information, the target entity object identification information corresponding to the target entity object, and determining the image frame information of the target entity object according to the mapping relationship;
  • and generating the multimedia resource based on the multimedia content and the image frame information of the target entity object.
  • Example 13 provides the method of Example 10, wherein the determining search result information according to the multimedia content corresponding to the target entity object includes: acquiring entity object information of the multimedia content;
  • the entity object information includes: entity object identification information used in the multimedia content and a mapping relationship between the entity object identification information and image frame information, where the image frame information indicates the segment in which each entity object appears in the multimedia content;
  • selecting, from the entity object identification information, the target entity object identification information corresponding to the target entity object and the associated entity object identification information corresponding to the associated entity object, and determining, according to the mapping relationship, the image frame information of the target entity object and the image frame information of the associated entity object, the associated entity object being an entity object that has an associated relationship with the target entity object; and generating the multimedia resource based on the multimedia content, the image frame information of the target entity object, and the image frame information of the associated entity object.
  • Example 14 provides the method of any one of Examples 10 to 13, further comprising, after the receiving the search instruction for the target entity object: determining an associated entity object that has an associated relationship with the target entity object, and determining associated multimedia content corresponding to the associated entity object; determining associated search result information according to the associated multimedia content, the associated search result information including: at least one associated multimedia resource showing the use effect of the associated entity object, or at least one group of associated multimedia data, or at least one piece of associated resource indication information, wherein the associated multimedia data includes image frame data using the label of the associated entity object in the associated multimedia content, and the associated resource indication information is used to indicate the segment in which the associated entity object appears in the associated multimedia content; and sending the associated search result information to the terminal device.
  • Example 15 provides an apparatus for displaying search results, comprising: an obtaining module configured to, in response to a search instruction for a target entity object, obtain at least one device displaying the target entity object A multimedia resource using effects; wherein, the multimedia resource is obtained by processing based on the multimedia content corresponding to the target entity object; the display module is configured to display the at least one multimedia resource.
  • Example 16 provides an apparatus for displaying search results, including: a receiving module configured to receive a search instruction for a target entity object, and determine multimedia content corresponding to the target entity object; a determining module, configured to determine search result information according to the multimedia content corresponding to the target entity object, where the search result information includes: at least one multimedia resource showing the use effect of the target entity object, or at least one set of multimedia data, or at least one resource indication information; wherein the multimedia data is image frame data using the label of the target entity object in the multimedia content, and the resource indication information is used to indicate that the target entity object is in the multimedia content The segment appearing in the; the sending module is configured to send the search result information to the terminal device.
  • Example 17 provides a computer-readable medium storing a computer program that, when executed by a processing apparatus, implements Examples 1 to 9, or, Examples 10 to 14 the method.
  • Example 18 provides an electronic device, comprising: a storage device storing a computer program; and a processing device configured to execute the computer program in the storage device to implement the example 1 to Example 9, or alternatively, the methods described in Examples 10 to 14.

Abstract

The present disclosure relates to a method and apparatus for displaying search results, a readable medium, and an electronic device. The method for displaying search results includes: in response to a search instruction for a target entity object, acquiring at least one multimedia resource showing the usage effect of the target entity object, where the multimedia resource is obtained by processing the multimedia content corresponding to the target entity object; and displaying the at least one multimedia resource.

Description

Method and Apparatus for Displaying Search Results, Readable Medium, and Electronic Device
This application claims priority to Chinese Patent Application No. 202010879294.5, filed with the Chinese Patent Office on August 27, 2020, the entire contents of which are incorporated herein by reference.
Technical Field
The present disclosure relates to the field of electronic information technology, for example, to a method and apparatus for displaying search results, a readable medium, and an electronic device.
Background
With the continuous development of terminal technology and image processing technology, a user shooting a video with a terminal device can process the video in a variety of ways, for example by adding subtitles or background music, or by applying special effects or props to the video. When choosing a target effect object to use, the user may search for videos in which that target effect object is used, as a reference. After the user issues a search instruction, the search results presented are usually the cover images of such videos. Because video covers are often generated at random, the user cannot obtain useful information from them, and the effectiveness of the search is low. Moreover, the user has to watch other videos in full to understand the actual effect of the target processing, which consumes data traffic.
Summary
The present disclosure provides a method for displaying search results, the method comprising:
in response to a search instruction for a target entity object, acquiring at least one multimedia resource showing the usage effect of the target entity object, where the multimedia resource is obtained by processing the multimedia content corresponding to the target entity object; and
displaying the at least one multimedia resource.
The present disclosure provides a method for displaying search results, the method comprising:
receiving a search instruction for a target entity object, and determining the multimedia content corresponding to the target entity object;
determining search result information according to the multimedia content corresponding to the target entity object, where the search result information includes: at least one multimedia resource showing the usage effect of the target entity object, or at least one group of multimedia data, or at least one piece of resource indication information, where the multimedia data is image frame data in which the tag of the target entity object is used in the multimedia content, and the resource indication information is used to indicate the segment in which the target entity object appears in the multimedia content; and
sending the search result information to a terminal device.
The present disclosure provides an apparatus for displaying search results, the apparatus comprising:
an acquiring module configured to, in response to a search instruction for a target entity object, acquire at least one multimedia resource showing the usage effect of the target entity object, where the multimedia resource is obtained by processing the multimedia content corresponding to the target entity object; and
a display module configured to display the at least one multimedia resource.
The present disclosure provides an apparatus for displaying search results, the apparatus comprising:
a receiving module configured to receive a search instruction for a target entity object and determine the multimedia content corresponding to the target entity object;
a determining module configured to determine search result information according to the multimedia content corresponding to the target entity object, where the search result information includes: at least one multimedia resource showing the usage effect of the target entity object, or at least one group of multimedia data, or at least one piece of resource indication information, where the multimedia data is image frame data in which the tag of the target entity object is used in the multimedia content, and the resource indication information is used to indicate the segment in which the target entity object appears in the multimedia content; and
a sending module configured to send the search result information to a terminal device.
The present disclosure provides a computer-readable medium storing a computer program that, when executed by a processing apparatus, implements the first method described above.
The present disclosure provides an electronic device, comprising:
a storage apparatus storing a computer program; and
a processing apparatus configured to execute the computer program in the storage apparatus to implement the first method described above.
The present disclosure provides a computer-readable medium storing a computer program that, when executed by a processing apparatus, implements the second method described above.
The present disclosure provides an electronic device, comprising:
a storage apparatus storing a computer program; and
a processing apparatus configured to execute the computer program in the storage apparatus to implement the second method described above.
Brief Description of the Drawings
FIG. 1 is a schematic diagram of a deployment of a terminal device and a server;
FIG. 2 is a flowchart of a method for displaying search results according to an exemplary embodiment;
FIG. 3 is a schematic diagram of displaying search results according to an exemplary embodiment;
FIG. 4 is a flowchart of another method for displaying search results according to an exemplary embodiment;
FIG. 5 is a flowchart of another method for displaying search results according to an exemplary embodiment;
FIG. 6 is a flowchart of another method for displaying search results according to an exemplary embodiment;
FIG. 7 is a flowchart of another method for displaying search results according to an exemplary embodiment;
FIG. 8 is a schematic diagram of displaying search results according to an exemplary embodiment;
FIG. 9 is a flowchart of a method for displaying search results according to an exemplary embodiment;
FIG. 10 is a flowchart of another method for displaying search results according to an exemplary embodiment;
FIG. 11 is a flowchart of another method for displaying search results according to an exemplary embodiment;
FIG. 12 is a flowchart of another method for displaying search results according to an exemplary embodiment;
FIG. 13 is a flowchart of another method for displaying search results according to an exemplary embodiment;
FIG. 14 is a block diagram of an apparatus for displaying search results according to an exemplary embodiment;
FIG. 15 is a block diagram of another apparatus for displaying search results according to an exemplary embodiment;
FIG. 16 is a block diagram of another apparatus for displaying search results according to an exemplary embodiment;
FIG. 17 is a block diagram of another apparatus for displaying search results according to an exemplary embodiment;
FIG. 18 is a block diagram of an apparatus for displaying search results according to an exemplary embodiment;
FIG. 19 is a block diagram of an electronic device according to an exemplary embodiment.
Detailed Description
Embodiments of the present disclosure are described below with reference to the accompanying drawings. Although some embodiments of the present disclosure are shown in the drawings, the present disclosure can be implemented in many forms and should not be construed as limited to the embodiments set forth here; rather, these embodiments are provided for an understanding of the present disclosure. The drawings and embodiments of the present disclosure are for exemplary purposes only and are not intended to limit the protection scope of the present disclosure.
The multiple steps recited in the method implementations of the present disclosure may be performed in different orders and/or in parallel. In addition, the method implementations may include additional steps and/or omit performing the illustrated steps. The scope of the present disclosure is not limited in this respect.
The term "comprising" and its variants as used herein are open-ended, that is, "comprising but not limited to". The term "based on" means "at least partially based on". The term "one embodiment" means "at least one embodiment"; the term "another embodiment" means "at least one additional embodiment"; the term "some embodiments" means "at least some embodiments". Relevant definitions of other terms will be given in the description below.
Concepts such as "first" and "second" mentioned in the present disclosure are only used to distinguish different apparatuses, modules, or units, and are not used to limit the order of, or the interdependence between, the functions performed by these apparatuses, modules, or units.
The modifiers "one" and "multiple" mentioned in the present disclosure are illustrative rather than restrictive, and unless the context indicates otherwise should be understood as "one or more".
The names of the messages or information exchanged between multiple apparatuses in the implementations of the present disclosure are for illustrative purposes only and are not used to limit the scope of these messages or information.
Before introducing the method and apparatus for displaying search results, the readable medium, and the electronic device provided by the present disclosure, the application scenario involved in multiple embodiments of the present disclosure is introduced. The application scenario may include a terminal device and a server, between which data can be transmitted. The terminal device may include, but is not limited to, mobile terminals such as mobile phones, laptop computers, digital broadcast receivers, personal digital assistants (PDAs), tablet computers (Portable Android Devices, PADs), portable multimedia players (Personal Multimedia Players, PMPs), and vehicle-mounted terminals (for example, vehicle-mounted navigation terminals), as well as fixed terminals such as digital televisions (TVs) and desktop computers. The server may include, but is not limited to, a physical server, a server cluster, or a cloud server. An implementation scenario may include one or more terminal devices, as shown in FIG. 1.
FIG. 2 is a flowchart of a method for displaying search results according to an exemplary embodiment. As shown in FIG. 2, the method includes the following steps.
In step 101, in response to a search instruction for a target entity object, at least one multimedia resource showing the usage effect of the target entity object is acquired, where the multimedia resource is obtained by processing the multimedia content corresponding to the target entity object.
For example, the method may be executed by a terminal device. When a user wants to search for a target entity object, the user can input a search instruction through the terminal device, the search instruction including a search keyword corresponding to the target entity object. An entity object can be understood as an object included in multimedia content (for example, a video), and the target entity object may be any kind of entity object. The entity objects included in multimedia content can be divided into two categories: content entity objects and operation entity objects. A content entity object indicates the content included in one or more image frames of the multimedia content, for example: a person, a car, a mountain, a lake, the sea, a cat, a dog, a rabbit, and so on. A content entity object may also indicate the emotional style of the content included in one or more image frames of the multimedia content, for example: highlight content, heart-warming content, touching content, and so on. For example, if the content contained in multiple image frames belongs to a heart-warming style, the content entity object included in these image frames is "heart-warming content". The emotional style of the content included in an image frame may be identified automatically by the terminal device or marked by the user as needed, which is not limited in the present disclosure. An operation entity object indicates an operation included in one or more image frames of the multimedia content, and may include, for example: a beauty effect, a skin-smoothing effect, a baby prop, a cat face prop, a specified subtitle, specified background music, and so on. Taking the target entity object "baby prop" and the search instruction "how to use the baby prop" as an example, the search instruction includes the search keyword "baby prop" corresponding to the target entity object.
The terminal device can send the search instruction to the server. After receiving the search instruction, the server analyzes it to obtain the search keyword included in the search instruction and determines the target entity object matching the search keyword. This can be understood as follows: multiple entity objects are pre-stored on the server, the degree of match between the search keyword and each entity object is determined, and the one or more entity objects with the highest degree of match are taken as the target entity object. There may be one or more search keywords, and likewise one or more target entity objects.
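As an illustrative sketch (not taken from the patent), the server-side step of ranking pre-stored entity objects by their degree of match with the search keyword could look like the following; the entity object names and the overlap-based scoring are assumptions made purely for illustration:

```python
def match_entity_objects(keyword, entity_objects, top_k=1):
    """Rank pre-stored entity objects by a naive character-overlap
    similarity to the search keyword and return the best matches."""
    def score(name):
        # crude score: shared characters divided by the longer string's length
        shared = len(set(keyword) & set(name))
        return shared / max(len(keyword), len(name))
    ranked = sorted(entity_objects, key=score, reverse=True)
    return ranked[:top_k]

# hypothetical entity objects pre-stored on the server
objects = ["baby prop", "cat face prop", "rabbit ears prop", "beauty effect"]
print(match_entity_objects("rabbit ears", objects))  # → ['rabbit ears prop']
```

A production system would of course use tokenization and a proper relevance model; the point is only that "degree of match" reduces to scoring each stored entity object against the extracted keyword and keeping the top one or more.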
After determining the target entity object, the server can search the large amount of multimedia content pre-stored on the server for the multimedia content corresponding to the target entity object; this can be understood as multimedia content in which the target entity object is used, or in which the target entity object appears. The server can determine search result information based on the multimedia content corresponding to the target entity object and send the search result information to the terminal device.
The terminal device can acquire, according to the search result information, at least one multimedia resource capable of showing the usage effect of the target entity object. The search result information may include at least one multimedia resource showing the usage effect of the target entity object, in which case the terminal device can obtain the multimedia resource directly from the search result information. Alternatively, the search result may include at least one group of multimedia data, the multimedia data being image frame data in which the tag of the target entity object is used in the multimedia content, in which case the terminal device can generate, according to the multimedia data, the multimedia resource showing the usage effect of the target entity object. Alternatively again, the search result may include at least one piece of resource indication information, the resource indication information being used to indicate the segment in which the target entity object appears in the multimedia content, in which case the terminal device can acquire the multimedia data from the server according to the resource indication information and then generate, according to the multimedia data, the multimedia resource showing the usage effect of the target entity object.
The multimedia content is a complete and relatively long multimedia file, for example a video or audio file, while the multimedia resource comes from the part of the multimedia content closely related to the target entity object, for example an animated image or a short video clip. The multimedia resource may be generated from one or more image frames of the multimedia content in which the target entity object is used, and can intuitively show the usage effect of the target entity object. Taking the target entity object "cat face prop" and video multimedia content as an example, if the "cat face prop" is used in the multimedia content from 10 s to 27 s, the multimedia resource may be an animated image generated from the image frames between 10 s and 27 s of the multimedia content. Compared with the multimedia content, the multimedia resource contains a very small amount of data and can intuitively show the usage effect of the target entity object, reducing the consumption of data traffic.
The multimedia content is stored on the server; it may have been uploaded to the server by users in advance or crawled from the network by the server, and the present disclosure does not limit its source. The format of the multimedia content may be, for example, a video file (e.g., .avi, .mp4, .wmv, .rmvb, .3gp, .mov, or .asf) or an audio file (e.g., .mp3, .wav, or .flac). The multimedia resource is obtained by processing the multimedia content corresponding to the target entity object and may be, for example, an animated image or a video clip. An animated image can be understood as one obtained by capturing a certain number of image frames from the multimedia content and splicing them in a preset time order, for example an animated image in .gif format. A video clip can be understood as a video obtained by capturing a certain number of image frames from the multimedia content and splicing them in a preset time order, for example a video file in .avi, .mp4, .wmv, .rmvb, .3gp, .mov, .asf, or another format.
In step 102, the at least one multimedia resource is displayed.
As an example, after the at least one multimedia resource is obtained, it can be displayed on the display interface (for example, the display screen) of the terminal device as the display result corresponding to the search instruction. The display interface may show all of the multimedia resources, or a specified number (for example, 3) of them. Different display manners can be determined for multimedia resources of different formats. For example, if the multimedia resource is a video, it can be displayed in a first preset manner, the first preset manner including any one of loop playback, reverse playback, speed-adjusted playback, and window playback. Loop playback can be understood as playing the video repeatedly. Reverse playback can be understood as playing the video from the last image frame to the first image frame, and the video may also be played in reverse repeatedly. Speed-adjusted playback can be understood as adjusting the playback speed of the video, for example to 0.5x, 1.25x, or 2x speed. Window playback can be understood as creating a new window on the display interface and displaying the video in that window; for example, if three videos are shown on the display interface, when the user's finger moves to the position where the first video is shown, a new window pops up to display the first video. If the multimedia resource is an animated image, it can be displayed in a second preset manner, the second preset manner including enlarged display or reduced display. Enlarged display can be understood as enlarging each image frame included in the animated image and displaying them in sequence, and reduced display as reducing each image frame included in the animated image and displaying them in sequence. For example, if three animated images are shown on the display interface, when the user's finger moves to the position where the second animated image is shown, the second animated image is displayed enlarged or reduced.
Taking the search instruction "effect of rabbit ears" as an example, the terminal device sends the search instruction to the server. The server analyzes "effect of rabbit ears" and obtains the search keyword "rabbit ears". Among the pre-stored videos, the server finds the multimedia content in which "rabbit ears" is used, including video 1, video 2, and video 3. From video 1, video 2, and video 3, the server determines the search result information for "rabbit ears" and sends it to the terminal device. The terminal device acquires, according to the search result information, the multimedia resources capable of showing the usage effect of "rabbit ears"; for example, the multimedia resources may include A.gif corresponding to video 1, B.gif corresponding to video 2, and C.gif corresponding to video 3. The multimedia resources are displayed on the display screen of the terminal device. In this embodiment of the present disclosure, A.gif, B.gif, and C.gif may be displayed on the display screen, as shown in FIG. 3. Since A.gif, B.gif, and C.gif can show the usage effect of "rabbit ears", the user can intuitively see the usage effect of the "rabbit ears prop", which effectively improves the effectiveness of the search; moreover, the user does not need to open and watch video 1, video 2, and video 3 in full, which greatly reduces the consumption of data traffic.
To sum up, in response to a search instruction for a target entity object, the present disclosure acquires at least one multimedia resource showing the usage effect of the target entity object, the multimedia resource being obtained by processing the multimedia content corresponding to the target entity object, and displays the at least one multimedia resource. For search instructions for different target entity objects, the present disclosure obtains, from the multimedia content corresponding to the target entity object, multimedia resources capable of showing the usage effect of the target entity object and displays them, without needing to display the complete multimedia content, which can improve the effectiveness of the search and reduce the consumption of data traffic.
FIG. 4 is a flowchart of another method for displaying search results according to an exemplary embodiment. As shown in FIG. 4, the method may further include the following steps.
In step 103, at least one kind of entity object used in multimedia content, and the usage time information of each kind of entity object, are recorded.
In step 104, the multimedia content and the attribute information of the multimedia content are sent to the server, where the attribute information of the multimedia content includes the identification information and usage time information of each kind of entity object.
In one implementation, during or after the shooting of multimedia content, the terminal device can use multiple kinds of entity objects in the multimedia content, for example by adding subtitles or background music, applying special effects or props to the video, or directly adding content tags corresponding to different content in the multimedia content. The terminal device can record each kind of entity object used in the multimedia content and the usage time information corresponding to it. The usage time information may include the time at which the entity object is used in the multimedia content (for example, a start time and an end time), or the numbers of the image frames in which the entity object is used (for example, a start frame number and an end frame number). The terminal device can take the identification information of each kind of entity object (identification information that can uniquely identify the corresponding entity object) and the usage time information of each kind of entity object as the attribute information of the multimedia content, and send it to the server together with the multimedia content. After receiving them, the server can determine, according to the identification information and the usage time information of each kind of entity object, a mapping relationship between entity object identification information and image frame information, where the image frame information indicates the segment in which each kind of entity object appears in the multimedia content. The mapping relationship stores multiple records, each of which includes one entity object and the segment in which that entity object appears in the multimedia content.
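As a minimal sketch of the mapping relationship just described (the field names `entity_id`, `start`, and `end` are assumptions for illustration, not names from the patent), the server could fold the uploaded attribute information into a dictionary keyed by entity object identification information:

```python
def build_mapping(attribute_info):
    """Build the mapping from entity object identification info to the
    image frame info (segments) in which that entity object appears."""
    mapping = {}
    for record in attribute_info:
        segment = (record["start"], record["end"])  # usage time info
        mapping.setdefault(record["entity_id"], []).append(segment)
    return mapping

# hypothetical attribute information for one piece of multimedia content
attrs = [
    {"entity_id": "rabbit_ears", "start": 10, "end": 27},
    {"entity_id": "beauty_effect", "start": 0, "end": 27},
    {"entity_id": "rabbit_ears", "start": 40, "end": 55},
]
print(build_mapping(attrs)["rabbit_ears"])  # → [(10, 27), (40, 55)]
```

Each key then plays the role of one "record" in the mapping relationship: an entity object plus the segments in which it appears; the same structure works whether the segments are expressed as times or as frame numbers.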
The implementation of step 101 is described below for different scenarios of acquiring the multimedia resource.
Scenario one: the multimedia resource may be generated by the server. Among the identification information of multiple pre-stored entity objects, the server determines the target entity object identification information corresponding to the target entity object, and determines the image frame information of the target entity object according to the mapping relationship between entity object identification information and image frame information. The image frame information of the target entity object can indicate the segment in which the target entity object appears in the multimedia content. The image frame information can be understood as the time information corresponding to the image frames of the multimedia content in which the target entity object is used (for example, 5 s to 25 s), or the frame number information corresponding to those image frames (for example, frame 20 to frame 50). Based on the image frame information of the target entity object, the server extracts from the multimedia content the segment in which the target entity object appears, thereby generating a multimedia resource capable of showing the usage effect of the target entity object. In this scenario, the terminal device can acquire the multimedia resource directly from the server. The image frame information of the target entity object may indicate contiguous image frames of the multimedia content (which can be understood as all the image frames in which the target entity object is used) or non-contiguous image frames (which can be understood as the key image frames in which the target entity object is used).
Taking video 1 as the target video and "skin-smoothing effect" as the target entity object, the server can filter out from video 1 the image frames in which the "skin-smoothing effect tag" appears and use these image frames to produce an animated image in .gif format or a video clip in .wmv format, and the terminal device can acquire the animated image or video clip from the server and display it.
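For scenario one, translating a time-based segment of image frame information into concrete frame indices is straightforward once the frame rate of the content is known; the 30 fps figure below is an assumption for illustration, not something fixed by the patent:

```python
def segment_to_frames(start_s, end_s, fps=30):
    """Convert a time segment (in seconds) into the inclusive range of
    frame indices that make up that segment of the multimedia content."""
    first = int(start_s * fps)
    last = int(end_s * fps) - 1  # frames strictly before end_s
    return list(range(first, last + 1))

frames = segment_to_frames(10, 27, fps=30)
print(frames[0], frames[-1], len(frames))  # → 300 809 510
```

The server would then read exactly these frames out of the stored content and encode them as the animated image or short clip.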
Scenario two: the implementation of step 101, as shown in FIG. 5, may include:
Step 1011: acquiring multimedia data, the multimedia data including image frame data in which the target entity object is used in the multimedia content corresponding to the target entity object.
Step 1012: generating, based on the multimedia data, the multimedia resource showing the usage effect of the target entity object.
As an example, the multimedia resource may also be generated by the terminal device from multimedia data sent by the server. Among the identification information of multiple pre-stored entity objects, the server can determine the target entity object identification information corresponding to the target entity object, determine, according to the mapping relationship between entity object identification information and image frame information, the image frame data in the multimedia content in which the target entity object is used, and send that image frame data to the terminal device as multimedia data. The image frame information can be understood as the time information corresponding to the image frames of the multimedia content in which the target entity object is used (for example, 5 s to 25 s), or the frame number information corresponding to those image frames (for example, frame 20 to frame 50). The multimedia data may also include order information of the image frames of the multimedia content in which the target entity object is used. The image frame data can be understood as one or more image frames of the multimedia content in which the target entity object is used. The order information can be understood as the order of those image frames, for example their time order (e.g., the time at which each image frame appears in the multimedia content) or their frame number order (e.g., the frame number of each image frame in the multimedia content). The image frame data may be contiguous image frames of the multimedia content (which can be understood as all the image frames in which the target entity object is used) or non-contiguous image frames (which can be understood as the key image frames in which the target entity object is used).
After acquiring the multimedia data, the terminal device can generate, based on it, the multimedia resource showing the usage effect of the target entity object; for example, the image frame data included in the multimedia data can be spliced according to the order information to generate the multimedia resource.
Taking video 2 as the multimedia content and "beauty effect" as the target entity object, the server can filter out from video 2 the image frames in which the "beauty effect" is used and send these image frames, together with their order information, to the terminal device as multimedia data. According to the received image frames and their order information, the terminal device produces an animated image in .gif format or a video clip in .mp4 format as the multimedia resource.
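A minimal sketch of how the terminal might reassemble the received image frame data according to the order information before encoding it (the frame payloads and frame numbers here are hypothetical placeholders):

```python
def assemble_resource(frame_data, order_info):
    """Return the frames sorted by their order info (e.g. frame numbers),
    ready to be encoded as an animated image or a short video clip."""
    paired = sorted(zip(order_info, frame_data))
    return [frame for _, frame in paired]

# frames may arrive out of order; order_info carries their frame numbers
data = ["frame_c", "frame_a", "frame_b"]
order = [22, 20, 21]
print(assemble_resource(data, order))  # → ['frame_a', 'frame_b', 'frame_c']
```

Splicing by time order works the same way, with per-frame timestamps in place of frame numbers.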
Scenario three: the implementation of step 101, as shown in FIG. 6, may include:
Step 1013: acquiring resource indication information corresponding to the target entity object, the resource indication information being used to indicate the segment in which the target entity object appears in the multimedia content.
Step 1014: acquiring, according to the resource indication information, multimedia data, the multimedia data including image frame data in which the target entity object is used in the multimedia content.
Step 1015: generating, according to the multimedia data, the multimedia resource showing the usage effect of the target entity object.
As an example, the multimedia resource may also be generated by the terminal device by first acquiring resource indication information from the server, then acquiring multimedia data from the server according to the resource indication information, and finally generating the resource from the multimedia data. Among the identification information of multiple pre-stored entity objects, the server can determine the target entity object identification information corresponding to the target entity object and determine the resource indication information according to the mapping relationship between entity object identification information and image frame information. The image frame information can be understood as the time information corresponding to the image frames of the multimedia content in which the target entity object is used (for example, 5 s to 25 s), or the frame number information corresponding to those image frames (for example, frame 20 to frame 50). The resource indication information can indicate the segment in which the target entity object appears in the multimedia content; for example, it may be the time information corresponding to the image frames in which the target entity object is used (e.g., the 5th to the 10th second of the multimedia content), or the frame number information corresponding to those image frames (e.g., frame 15 to frame 30 of the multimedia content).
After acquiring the resource indication information, the terminal device can acquire multimedia data from the server according to it. The multimedia data includes image frame data in which the target entity object is used in the multimedia content, and may also include order information of those image frames. The image frame data can be understood as one or more image frames of the multimedia content in which the target entity object is used. The order information can be understood as the order of those image frames, for example their time order (e.g., the time at which each image frame appears in the multimedia content) or their frame number order (e.g., the frame number of each image frame in the multimedia content). The image frame data may be contiguous image frames of the multimedia content (which can be understood as all the image frames in which the target entity object is used) or non-contiguous image frames (which can be understood as the key image frames in which the target entity object is used). After acquiring the multimedia data, the terminal device can generate, based on it, the multimedia resource showing the usage effect of the target entity object; for example, the image frame data included in the multimedia data can be spliced according to the order information to obtain the multimedia resource.
The difference from scenario two is that in scenario two the server directly determines the image frame data in the multimedia content in which the target entity object is used and sends that image frame data directly to the terminal device, whereas in scenario three the terminal device can acquire from the server, segment by segment and frame by frame, the image frames of the multimedia content in which the target entity object is used.
Taking video 3 as the multimedia content and "the sea" as the target entity object, the server can filter out from video 3 the segments in which "the sea" appears and send the resource indication information corresponding to these segments to the terminal device. The resource indication information may be, for example, the time information of the image frames of video 3 in which "the sea" appears: 15 s to 100 s, indicating that "the sea" appears in all image frames of video 3 from the 15th to the 100th second. According to the resource indication information, the terminal device acquires from the server the image frames of video 3 between the 15th and the 100th second, together with their frame numbers, and produces from these image frames, in frame number order, an animated image in .gif format or a video clip in .avi format as the multimedia resource.
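Because scenario three lets the terminal fetch the frames segment by segment, the resource indication information naturally decomposes into a series of smaller requests. A sketch of that request planning, with a hypothetical chunk size (the patent does not prescribe one):

```python
def plan_segment_requests(start_frame, end_frame, chunk=16):
    """Split a resource-indication frame range into chunked requests so
    the terminal can fetch the frames from the server piece by piece."""
    requests = []
    frame = start_frame
    while frame <= end_frame:
        requests.append((frame, min(frame + chunk - 1, end_frame)))
        frame += chunk
    return requests

print(plan_segment_requests(15, 30, chunk=8))  # → [(15, 22), (23, 30)]
```

Each returned pair would become one fetch from the server, after which the frames are spliced exactly as in scenario two.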
In the embodiments of the present disclosure, the way in which the terminal device or the server splices the image frame data included in the multimedia data according to the order information to obtain the multimedia resource may be: combining all of the image frame data included in the multimedia data (that is, all the image frames of the multimedia content in which the target entity object is used) into an animated image or a video clip, or combining part of the image frame data included in the multimedia data (that is, part of the image frames of the multimedia content in which the target entity object is used) into an animated image or a video clip. The present disclosure does not limit this.
For example, if the image frames of the multimedia content in which the target entity object is used are frames 21 to 75, these 55 image frames can be combined in time order into a video clip, or the middle frames 47 to 56 can be taken and combined into an animated image.
The image frames in which the target entity object is used may be non-contiguous in the multimedia content, that is, the target entity object may be used multiple times in the multimedia content. For example, if the image frames of the multimedia content in which the target entity object is used are frames 10 to 22 and frames 153 to 170, then frames 10 to 22 together with frames 153 to 170 can be selected and combined into an animated image, or frames 15 to 20 can be selected and combined into an animated image, or frames 160 to 170 can be selected and combined into an animated image.
FIG. 7 is a flowchart of another method for displaying search results according to an exemplary embodiment. As shown in FIG. 7, step 102 may include the following steps.
Step 1021: acquiring an object identifier corresponding to the target entity object.
Step 1022: displaying the object identifier corresponding to the target entity object in a first area of the display interface, and automatically playing the at least one multimedia resource in a second area of the display interface.
For example, in addition to displaying the multimedia resources, the terminal device can acquire from the server the object identifier corresponding to the target entity object. The object identifier can be understood as an icon corresponding to the target entity object. While acquiring the object identifier corresponding to the target entity object from the server, the terminal device can also acquire a link corresponding to the target entity object, the object identifier being associated with the link. The terminal device can display the multimedia resource and the object identifier corresponding to the target entity object on the display interface at the same time; for example, the object identifier corresponding to the target entity object can be displayed in a first area of the display interface of the terminal device, and the at least one multimedia resource automatically played in a second area of the display interface, where the first area and the second area may be different areas of the display interface.
The method may further include:
Step 105: in response to a trigger operation on the object identifier, jumping to the multimedia shooting scene corresponding to the target entity object.
As an example, after viewing the usage effect of the target entity object through the multimedia resource, the user can issue a trigger operation on the object identifier by tapping the object identifier corresponding to the target entity object. In response to the trigger operation, the terminal device can jump, through the link associated with the object identifier, to the multimedia shooting scene corresponding to the target entity object. In this way, the user can both intuitively view the usage effect of the target entity object through the multimedia resource and quickly jump, through the object identifier corresponding to the target entity object, to the multimedia shooting scene corresponding to the target entity object, which improves the effectiveness of the search and the convenience of operation.
Taking the target entity object "rabbit ears prop" as an example: among the pre-stored multimedia content, the server finds the multimedia content in which the "rabbit ears prop" is used, including video 1, video 2, and video 3. The terminal device acquires the multimedia resources capable of showing the usage effect of the "rabbit ears prop", including A.gif corresponding to video 1, B.gif corresponding to video 2, and C.gif corresponding to video 3, and also acquires the object identifier corresponding to the "rabbit ears prop". The object identifier corresponding to the "rabbit ears prop" is displayed in a first area of the display screen of the terminal device, and A.gif, B.gif, and C.gif are automatically played in a second area, as shown in FIG. 8. When the user taps the object identifier corresponding to the "rabbit ears prop" and issues a trigger operation, the terminal device can jump to the multimedia shooting scene in which the "rabbit ears prop" is used.
In one application scenario, the implementation of step 101 may be:
acquiring at least one multimedia resource showing the usage effect of an entity object set, the entity object set including the target entity object and an associated entity object, the associated entity object being an entity object that has an association relationship with the target entity object.
For example, multimedia content may include multiple kinds of entity objects, and association relationships may exist between entity objects, in which case two or more entity objects having an association relationship can be associated with each other. Entity objects with an association relationship may be entity objects that users often use at the same time, or multiple entity objects belonging to the same group of entity objects. For example, users usually use the "baby prop" and the "rabbit ears prop" at the same time, so it can be determined that the "baby prop" and the "rabbit ears prop" have an association relationship. Alternatively, the "beauty effect" and the "skin-smoothing effect" both belong to a group of entity objects for portrait processing, so it can also be determined that the "beauty effect" and the "skin-smoothing effect" have an association relationship. In this way, when acquiring multimedia resources, at least one multimedia resource showing the usage effect of an entity object set can be acquired, the entity object set including the target entity object and the associated entity object that has an association relationship with the target entity object. The user can thus view the usage effects of the target entity object and the associated entity object at the same time through the multimedia resource, which improves the effectiveness of the search.
In another application scenario, the implementation of step 101 may be:
acquiring the multimedia resource, and an associated multimedia resource showing the usage effect of an associated entity object, the associated entity object being an entity object that has an association relationship with the target entity object.
The implementation of step 102 may be:
displaying the multimedia resource in a third area of the display interface, and displaying the associated multimedia resource in a fourth area of the display interface.
As an example, while acquiring the multimedia resource showing the usage effect of the target entity object, the terminal device may also acquire an associated multimedia resource showing the usage effect of an associated entity object, the associated entity object being an entity object that has an association relationship with the target entity object. Entity objects with an association relationship may be entity objects that users often use at the same time, or multiple entity objects belonging to the same group of entity objects.
In this way, the terminal device can display the multimedia resource and the associated multimedia resource on the display interface at the same time; for example, the multimedia resource can be displayed in a third area of the display interface of the terminal device, and the associated multimedia resource in a fourth area of the display interface, where the third area and the fourth area may be different areas of the display interface. The user can thus view the usage effect of the target entity object through the multimedia resource and the usage effect of the associated entity object through the associated multimedia resource, which improves the effectiveness of the search.
To sum up, in response to a search instruction for a target entity object, the present disclosure acquires at least one multimedia resource showing the usage effect of the target entity object, the multimedia resource being obtained by processing the multimedia content corresponding to the target entity object, and displays the at least one multimedia resource. For search instructions for different target entity objects, the present disclosure obtains, from the multimedia content corresponding to the target entity object, multimedia resources capable of showing the usage effect of the target entity object and displays them, without needing to display the complete multimedia content, which can improve the effectiveness of the search and reduce the consumption of data traffic.
FIG. 9 is a flowchart of a method for displaying search results according to an exemplary embodiment. As shown in FIG. 9, the method includes the following steps.
In step 201, a search instruction for a target entity object is received, and the multimedia content corresponding to the target entity object is determined.
For example, the method may be executed by a server. When a user wants to search for a target entity object, the user can input a search instruction through the terminal device, the search instruction including a search keyword corresponding to the target entity object. The terminal device can send the search instruction to the server; after receiving it, the server analyzes it to obtain the search keyword included in the search instruction and thereby determines the target entity object matching the search keyword. This can be understood as follows: multiple entity objects are pre-stored on the server, the degree of match between the search keyword and each entity object is determined, and the one or more entity objects with the highest degree of match are taken as the target entity object. There may be one or more search keywords, and likewise one or more target entity objects. For example, taking the target entity object "baby prop" and the search instruction "how to use the baby prop", the server can analyze "how to use the baby prop", determine that the search instruction includes the search keyword "baby prop", and thereby determine that the target entity object is the "baby prop".
After determining the target entity object, the server can search the large amount of multimedia content pre-stored on the server for the multimedia content corresponding to the target entity object, which can be understood as multimedia content in which the target entity object is used, or in which the target entity object appears. The multimedia content stored on the server may have been uploaded to the server by users in advance or crawled from the network by the server, and the present disclosure does not limit its source; the format of the multimedia content may be a video file (e.g., .avi, .mp4, .wmv, .rmvb, .3gp, .mov, or .asf).
In step 202, search result information is determined according to the multimedia content corresponding to the target entity object, the search result information including: at least one multimedia resource showing the usage effect of the target entity object, or at least one group of multimedia data, or at least one piece of resource indication information, where the multimedia data is image frame data in which the tag of the target entity object is used in the multimedia content, and the resource indication information is used to indicate the segment in which the target entity object appears in the multimedia content.
In step 203, the search result information is sent to the terminal device.
As an example, the server can determine search result information based on the multimedia content corresponding to the target entity object and send the search result information to the terminal device. The search result information can be divided into three types: the first type is at least one multimedia resource showing the usage effect of the target entity object, the second type is at least one group of multimedia data, and the third type is at least one piece of resource indication information. The ways of determining the three types of search result information are described below.
For the first type, among the identification information of multiple pre-stored entity objects, the server can determine the target entity object identification information corresponding to the target entity object, and determine the image frame information of the target entity object according to the mapping relationship between entity object identification information and image frame information. The image frame information of the target entity object can indicate the segment in which the target entity object appears in the multimedia content. The mapping relationship can be understood as storing multiple records, each of which includes one entity object and the segment in which that entity object appears in the multimedia content. The image frame information can be understood as the time information corresponding to the image frames of the multimedia content in which the target entity object is used (for example, 5 s to 25 s), or the frame number information corresponding to those image frames (for example, frame 20 to frame 50). Based on the image frame information of the target entity object, the server extracts from the multimedia content the segment in which the target entity object appears and can thereby generate a multimedia resource showing the usage effect of the target entity object. The server can send the multimedia resource to the terminal device so that the terminal device displays it. The image frame information of the target entity object may indicate contiguous image frames of the multimedia content (which can be understood as all the image frames in which the target entity object is used) or non-contiguous image frames (which can be understood as the key image frames in which the target entity object is used).
For the second type, among the identification information of multiple pre-stored entity objects, the server can determine the target entity object identification information corresponding to the target entity object, determine, according to the mapping relationship between entity object identification information and image frame information, the image frame data in the multimedia content in which the target entity object is used, and send that image frame data to the terminal device as multimedia data. The image frame information can be understood as the time information corresponding to the image frames of the multimedia content in which the target entity object is used (for example, 5 s to 25 s), or the frame number information corresponding to those image frames (for example, frame 20 to frame 50). The multimedia data may also include order information of the image frames of the multimedia content in which the target entity object is used. The image frame data can be understood as one or more image frames of the multimedia content in which the target entity object is used. The order information can be understood as the order of those image frames, for example their time order (e.g., the time at which each image frame appears in the multimedia content) or their frame number order (e.g., the frame number of each image frame in the multimedia content). The image frame data may be contiguous image frames (which can be understood as all the image frames in which the target entity object is used) or non-contiguous image frames (which can be understood as the key image frames in which the target entity object is used). After acquiring the multimedia data, the terminal device can generate, based on it, the multimedia resource showing the usage effect of the target entity object and display it; for example, the image frame data included in the multimedia data can be spliced according to the order information to generate the multimedia resource.
For the third type, among the identification information of multiple pre-stored entity objects, the server can determine the target entity object identification information corresponding to the target entity object and determine the resource indication information according to the mapping relationship between entity object identification information and image frame information. The image frame information can be understood as the time information corresponding to the image frames of the multimedia content in which the target entity object is used (for example, 5 s to 25 s), or the frame number information corresponding to those image frames (for example, frame 20 to frame 50). The resource indication information can indicate the segment in which the target entity object appears in the multimedia content; for example, it may be the time information corresponding to the image frames in which the target entity object is used (e.g., the 5th to the 10th second of the multimedia content), or the frame number information corresponding to those image frames (e.g., frame 15 to frame 30 of the multimedia content). After acquiring the resource indication information, the terminal device can acquire multimedia data from the server according to it. The multimedia data includes image frame data in which the target entity object is used in the multimedia content, and may also include order information of those image frames. The image frame data can be understood as one or more image frames of the multimedia content in which the target entity object is used, and the order information as the order of those image frames, for example their time order (e.g., the time at which each image frame appears in the multimedia content) or their frame number order (e.g., the frame number of each image frame in the multimedia content). The image frame data may be contiguous or non-contiguous image frames. After acquiring the multimedia data, the terminal device can generate, based on it, the multimedia resource showing the usage effect of the target entity object and display it; for example, the image frame data included in the multimedia data can be spliced according to the order information to obtain the multimedia resource.
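The three kinds of search result information the server may return can be sketched as a small tagged structure; all names below are illustrative assumptions, not terms fixed by the patent:

```python
from dataclasses import dataclass
from typing import List, Tuple, Union

@dataclass
class MultimediaResource:      # type 1: a ready-made resource (e.g. a GIF)
    uri: str

@dataclass
class MultimediaData:          # type 2: raw frame data plus order info
    frames: List[bytes]
    order: List[int]

@dataclass
class ResourceIndication:      # type 3: segments where the object appears
    segments: List[Tuple[int, int]]

SearchResult = Union[MultimediaResource, MultimediaData, ResourceIndication]

def needs_terminal_assembly(result: SearchResult) -> bool:
    """Only type 1 can be displayed directly; types 2 and 3 require the
    terminal device to generate the multimedia resource itself."""
    return not isinstance(result, MultimediaResource)

print(needs_terminal_assembly(MultimediaResource("a.gif")))  # → False
```

This mirrors the division of labor described above: type 1 moves all the processing to the server, type 2 splits it, and type 3 additionally leaves the frame fetching to the terminal.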
The multimedia content in the above embodiments can be understood as a complete and relatively long multimedia file, for example a video or audio file, while the multimedia resource comes from the part of the multimedia content closely related to the target entity object, for example an animated image or a short video clip. The multimedia resource may be generated from one or more image frames of the multimedia content in which the target entity object is used and can intuitively show the usage effect of the target entity object. Taking the target entity object "cat face prop" and video multimedia content as an example, if the "cat face prop" is used in the multimedia content from 10 s to 27 s, the multimedia resource may be an animated image generated from the image frames between 10 s and 27 s of the multimedia content. Compared with the multimedia content, the multimedia resource contains a very small amount of data and can intuitively show the usage effect of the target entity object. In this way, the terminal device can display the multimedia resource without displaying the complete multimedia content, which can improve the effectiveness of the search and reduce the consumption of data traffic.
FIG. 10 is a flowchart of another method for displaying search results according to an exemplary embodiment. As shown in FIG. 10, the method may further include the following step.
In step 204, multimedia content corresponding to any entity object, and the attribute information of the multimedia content corresponding to each kind of entity object, are received, where the attribute information includes the identification information and usage time information of each kind of entity object.
In one implementation, during or after the shooting of multimedia content, the terminal device can use multiple kinds of entity objects in the multimedia content, for example by adding subtitles or background music, applying special effects or props to the video, or directly adding content tags corresponding to different content in the multimedia content. The terminal device can record each kind of entity object used in the multimedia content and the usage time information corresponding to it. The usage time information may include the time at which the entity object is used in the multimedia content (for example, a start time and an end time), or the numbers of the image frames in which the entity object is used (for example, a start frame number and an end frame number). The terminal device can take the identification information of each kind of entity object (identification information that can uniquely identify the corresponding entity object) and the usage time information of each kind of entity object as the attribute information of the multimedia content and send it to the server together with the multimedia content. After receiving the multimedia content and its attribute information, the server can determine, according to the identification information and the usage time information of each kind of entity object, the mapping relationship between entity object identification information and image frame information, where the image frame information indicates the segment in which each kind of entity object appears in the multimedia content. This can be understood as the mapping relationship storing multiple records, each of which includes one entity object and the segment in which that entity object appears in the multimedia content.
FIG. 11 is a flowchart of another method for displaying search results according to an exemplary embodiment. As shown in FIG. 11, step 202 may be implemented by the following steps.
Step 2021: acquiring entity object information of the multimedia content, where the entity object information includes: the entity object identification information used in the multimedia content and the mapping relationship between the entity object identification information and image frame information, the image frame information indicating the segment in which each kind of entity object appears in the multimedia content.
Step 2022: selecting, from the entity object identification information, the target entity object identification information corresponding to the target entity object, and determining the image frame information of the target entity object according to the mapping relationship.
Step 2023: generating the multimedia resource based on the multimedia content and the image frame information of the target entity object.
The server can acquire the entity object information of the multimedia content, where the entity object information includes the entity object identification information used in the multimedia content and the mapping relationship between the entity object identification information and image frame information. From the identification information of the entity objects, the target entity object identification information corresponding to the target entity object is determined, and the image frame information of the target entity object is then determined according to the mapping relationship between entity object identification information and image frame information. The image frame information of the target entity object can indicate the segment in which the target entity object appears in the multimedia content. Based on the image frame information of the target entity object, the server extracts from the multimedia content the segment in which the target entity object appears, thereby generating the multimedia resource showing the usage effect of the target entity object.
FIG. 12 is a flowchart of another method for displaying search results according to an exemplary embodiment. As shown in FIG. 12, step 202 may be implemented by the following steps.
Step 2024: acquiring entity object information of the multimedia content, where the entity object information includes: the entity object identification information used in the multimedia content and the mapping relationship between the entity object identification information and image frame information, the image frame information indicating the segment in which each kind of entity object appears in the multimedia content.
Step 2025: selecting, from the entity object identification information, the target entity object identification information corresponding to the target entity object and the associated entity object identification information corresponding to an associated entity object, and determining, according to the mapping relationship, the image frame information of the target entity object and the image frame information of the associated entity object, the associated entity object being an entity object that has an association relationship with the target entity object.
Step 2026: generating the multimedia resource based on the multimedia content, the image frame information of the target entity object, and the image frame information of the associated entity object.
For example, multimedia content may include multiple kinds of entity objects, and association relationships may exist between entity objects, in which case two or more entity objects having an association relationship can be associated with each other. Entity objects with an association relationship may be entity objects that users often use at the same time, or multiple entity objects belonging to the same group of entity objects. For example, users usually use the "baby prop" and the "rabbit ears prop" at the same time, so it can be determined that the "baby prop" and the "rabbit ears prop" have an association relationship. Alternatively, the "beauty effect" and the "skin-smoothing effect" both belong to a group of entity objects for portrait processing, so it can also be determined that the "beauty effect" and the "skin-smoothing effect" have an association relationship.
In this way, when determining the multimedia resource, the server can acquire the entity object information of the multimedia content, where the entity object information includes the entity object identification information used in the multimedia content and the mapping relationship between the entity object identification information and image frame information. From the identification information of the entity objects, the target entity object identification information corresponding to the target entity object and the associated entity object identification information corresponding to the associated entity object are determined. According to the mapping relationship between entity object identification information and image frame information, the image frame information of the target entity object and the image frame information of the associated entity object are determined. The image frame information of the target entity object can indicate the segment in which the target entity object appears in the multimedia content, and the image frame information of the associated entity object can indicate the segment in which the associated entity object appears in the multimedia content.
Based on the image frame information of the target entity object and the image frame information of the associated entity object, the server extracts from the multimedia content the segment in which the target entity object appears and the segment in which the associated entity object appears, thereby generating a multimedia resource showing the usage effects of the target entity object and the associated entity object. When the terminal device displays the multimedia resource, the user can view the usage effects of the target entity object and the associated entity object at the same time, which improves the effectiveness of the search. There may be overlapping image frames between the extracted segment in which the target entity object appears and the extracted segment in which the associated entity object appears, that is, there may be image frames in which both the target entity object and the associated entity object appear.
FIG. 13 is a flowchart of another method for displaying search results according to an exemplary embodiment. As shown in FIG. 13, the method may further include the following steps.
In step 205, an associated entity object that has an association relationship with the target entity object is determined, and the associated multimedia content corresponding to the associated entity object is determined.
In another application scenario, after receiving the search instruction, the server analyzes it to obtain the search keyword included in the search instruction, thereby determines the target entity object matching the search keyword, then determines the associated entity object that has an association relationship with the target entity object, and determines the associated multimedia content corresponding to the associated entity object. The associated multimedia content can be understood as multimedia content in which the associated entity object is used, or in which the associated entity object appears.
In step 206, associated search result information is determined according to the associated multimedia content, the associated search result information including: at least one associated multimedia resource showing the usage effect of the associated entity object, or at least one group of associated multimedia data, or at least one piece of associated resource indication information, where the associated multimedia data includes image frame data in which the tag of the associated entity object is used in the associated multimedia content, and the associated resource indication information is used to indicate the segment in which the associated entity object appears in the associated multimedia content.
In step 207, the associated search result information is sent to the terminal device.
The server can determine associated search result information based on the associated multimedia content corresponding to the associated entity object and send the associated search result information to the terminal device. The associated search result information can be divided into three types: the first type is at least one associated multimedia resource showing the usage effect of the associated entity object, the second type is at least one group of associated multimedia data, and the third type is at least one piece of associated resource indication information. The associated multimedia resource is determined in the same way as the multimedia resource showing the usage effect of the target entity object is generated, the associated multimedia data is generated in the same way as the multimedia data corresponding to the target entity object, and the associated resource indication information is generated in the same way as the resource indication information corresponding to the target entity object, which will not be repeated here.
In this way, the terminal device can acquire, according to the search result information, the multimedia resource for showing the usage effect of the target entity object, and, according to the associated search result information, the associated multimedia resource for showing the usage effect of the associated entity object. When the terminal device displays the multimedia resource and the associated multimedia resource, the user can view the usage effect of the target entity object through the multimedia resource and the usage effect of the associated entity object through the associated multimedia resource, which improves the effectiveness of the search.
To sum up, the present disclosure receives a search instruction for a target entity object, determines the multimedia content corresponding to the target entity object, and determines search result information according to the multimedia content corresponding to the target entity object, where the search result information may include at least one multimedia resource showing the usage effect of the target entity object, or at least one group of multimedia data, or at least one piece of resource indication information, and sends the search result information to the terminal device. For search instructions for different target entity objects, the present disclosure determines search result information according to the multimedia content corresponding to the target entity object and sends it to the terminal device, so that the terminal device can acquire, according to the search result information, a multimedia resource capable of showing the usage effect of the target entity object and display it, without needing to display the complete multimedia content, which can improve the effectiveness of the search and reduce the consumption of data traffic.
FIG. 14 is a block diagram of an apparatus for displaying search results according to an exemplary embodiment. As shown in FIG. 14, the apparatus 300 may include:
an acquiring module 301 configured to, in response to a search instruction for a target entity object, acquire at least one multimedia resource showing the usage effect of the target entity object, where the multimedia resource is obtained by processing the multimedia content corresponding to the target entity object; and
a display module 302 configured to display the at least one multimedia resource.
In one application scenario, the display module 302 can determine different display manners according to the kind of multimedia resource. For example, if the multimedia resource is a video, the multimedia resource is displayed in a first preset manner, the first preset manner including any one of loop playback, reverse playback, speed-adjusted playback, and window playback; if the multimedia resource is an animated image, the multimedia resource is displayed in a second preset manner, the second preset manner including enlarged display or reduced display.
FIG. 15 is a block diagram of another apparatus for displaying search results according to an exemplary embodiment. As shown in FIG. 15, the apparatus 300 further includes:
a recording module 303 configured to record at least one kind of entity object used in multimedia content and the usage time information of each kind of entity object; and
a sending module 304 configured to send the multimedia content and the attribute information of the multimedia content to a server, where the attribute information of the multimedia content includes the identification information and usage time information of each kind of entity object.
In one application scenario, the acquiring module 301 may be configured to perform the following steps:
Step 1) acquiring multimedia data, the multimedia data including image frame data in which the target entity object is used in the multimedia content corresponding to the target entity object; and
Step 2) generating, based on the multimedia data, the multimedia resource showing the usage effect of the target entity object.
In another application scenario, the acquiring module 301 may be configured to perform the following steps:
Step 3) acquiring resource indication information corresponding to the target entity object, the resource indication information being used to indicate the segment in which the target entity object appears in the multimedia content;
Step 4) acquiring, according to the resource indication information, multimedia data, the multimedia data including image frame data in which the target entity object is used in the multimedia content; and
Step 5) generating, according to the multimedia data, the multimedia resource showing the usage effect of the target entity object.
FIG. 16 is a block diagram of another apparatus for displaying search results according to an exemplary embodiment. As shown in FIG. 16, the display module 302 may include:
an acquiring submodule 3021 configured to acquire an object identifier corresponding to the target entity object; and
a display submodule 3022 configured to display the object identifier corresponding to the target entity object in a first area of the display interface and to automatically play the at least one multimedia resource in a second area of the display interface.
FIG. 17 is a block diagram of another apparatus for displaying search results according to an exemplary embodiment. As shown in FIG. 17, the apparatus 300 further includes:
a jump module 305 configured to, in response to a trigger operation on the object identifier, jump to the multimedia shooting scene corresponding to the target entity object.
In one application scenario, the acquiring module 301 may be configured to:
acquire at least one multimedia resource showing the usage effect of an entity object set, the entity object set including the target entity object and an associated entity object, the associated entity object being an entity object that has an association relationship with the target entity object.
In another application scenario, the acquiring module 301 may be configured to:
acquire the multimedia resource, and an associated multimedia resource showing the usage effect of an associated entity object, the associated entity object being an entity object that has an association relationship with the target entity object.
The display module 302 may be configured to:
display the multimedia resource in a third area of the display interface, and display the associated multimedia resource in a fourth area of the display interface.
With regard to the apparatus in the above embodiments, the manner in which the multiple modules perform operations has already been described in the embodiments concerning the method and will not be elaborated here.
To sum up, in response to a search instruction for a target entity object, the present disclosure acquires at least one multimedia resource showing the usage effect of the target entity object, the multimedia resource being obtained by processing the multimedia content corresponding to the target entity object, and displays the at least one multimedia resource. For search instructions for different target entity objects, the present disclosure obtains, from the corresponding multimedia content, multimedia resources capable of showing the usage effect of the target entity object and displays them, without needing to display the complete multimedia content, which can improve the effectiveness of the search and reduce the consumption of data traffic.
FIG. 18 is a block diagram of an apparatus for displaying search results according to an exemplary embodiment. As shown in FIG. 18, the apparatus 400 may be applied to a server and includes:
a receiving module 401 configured to receive a search instruction for a target entity object and determine the multimedia content corresponding to the target entity object;
a determining module 402 configured to determine search result information according to the multimedia content corresponding to the target entity object, the search result information including: at least one multimedia resource showing the usage effect of the target entity object, or at least one group of multimedia data, or at least one piece of resource indication information, where the multimedia data is image frame data in which the tag of the target entity object is used in the multimedia content, and the resource indication information is used to indicate the segment in which the target entity object appears in the multimedia content; and
a sending module 403 configured to send the search result information to the terminal device.
In one application scenario, the receiving module 401 is further configured to receive multimedia content corresponding to any entity object, and the attribute information of the multimedia content corresponding to each kind of entity object, where the attribute information includes the identification information and usage time information of each kind of entity object.
In another application scenario, the determining module 402 may be configured to perform the following steps:
Step A) acquiring entity object information of the multimedia content, where the entity object information includes: the entity object identification information used in the multimedia content and the mapping relationship between the entity object identification information and image frame information, the image frame information indicating the segment in which each kind of entity object appears in the multimedia content;
Step B) selecting, from the entity object identification information, the target entity object identification information corresponding to the target entity object, and determining the image frame information of the target entity object according to the mapping relationship; and
Step C) generating the multimedia resource based on the multimedia content and the image frame information of the target entity object.
In another application scenario, the determining module 402 may be configured to perform the following steps:
Step D) acquiring entity object information of the multimedia content, where the entity object information includes: the entity object identification information used in the multimedia content and the mapping relationship between the entity object identification information and image frame information, the image frame information indicating the segment in which each kind of entity object appears in the multimedia content;
Step E) selecting, from the entity object identification information, the target entity object identification information corresponding to the target entity object and the associated entity object identification information corresponding to an associated entity object, and determining, according to the mapping relationship, the image frame information of the target entity object and the image frame information of the associated entity object, the associated entity object being an entity object that has an association relationship with the target entity object; and
Step F) generating the multimedia resource based on the multimedia content, the image frame information of the target entity object, and the image frame information of the associated entity object.
In another application scenario, the determining module 402 may be further configured to:
determine an associated entity object that has an association relationship with the target entity object, and determine the associated multimedia content corresponding to the associated entity object; and determine associated search result information according to the associated multimedia content, the associated search result information including: at least one associated multimedia resource showing the usage effect of the associated entity object, or at least one group of associated multimedia data, or at least one piece of associated resource indication information, where the associated multimedia data includes image frame data in which the tag of the associated entity object is used in the associated multimedia content, and the associated resource indication information is used to indicate the segment in which the associated entity object appears in the associated multimedia content.
The sending module 403 may be further configured to:
send the associated search result information to the terminal device.
With regard to the apparatus in the above embodiments, the manner in which the multiple modules perform operations has already been described in the embodiments concerning the method and will not be elaborated here.
To sum up, the present disclosure receives a search instruction for a target entity object, determines the multimedia content corresponding to the target entity object, and determines search result information according to the multimedia content corresponding to the target entity object, where the search result information may include at least one multimedia resource showing the usage effect of the target entity object, or at least one group of multimedia data, or at least one piece of resource indication information, and sends the search result information to the terminal device. For search instructions for different target entity objects, the present disclosure determines search result information according to the corresponding multimedia content and sends it to the terminal device, so that the terminal device can acquire, according to the search result information, a multimedia resource capable of showing the usage effect of the target entity object and display it, without needing to display the complete multimedia content, which can improve the effectiveness of the search and reduce the consumption of data traffic.
Referring now to FIG. 19, a schematic structural diagram of an electronic device (for example, the terminal device or the server in FIG. 1) 600 suitable for implementing an embodiment of the present disclosure is shown. The terminal device in the embodiments of the present disclosure may include, but is not limited to, mobile terminals such as mobile phones, laptop computers, digital broadcast receivers, PDAs, PADs, PMPs, and vehicle-mounted terminals (for example, vehicle-mounted navigation terminals), as well as fixed terminals such as digital TVs and desktop computers. The electronic device shown in FIG. 19 is merely an example and should not impose any limitation on the functions and scope of use of the embodiments of the present disclosure.
As shown in FIG. 19, the electronic device 600 may include a processing apparatus (for example, a central processing unit, a graphics processing unit, etc.) 601, which can perform a variety of appropriate actions and processing according to a program stored in a read-only memory (ROM) 602 or a program loaded from a storage apparatus 608 into a random access memory (RAM) 603. The RAM 603 also stores a variety of programs and data required for the operation of the electronic device 600. The processing apparatus 601, the ROM 602, and the RAM 603 are connected to one another through a bus 604. An input/output (I/O) interface 605 is also connected to the bus 604.
The following apparatuses can be connected to the I/O interface 605: input apparatuses 606 including, for example, a touchscreen, a touchpad, a keyboard, a mouse, a camera, a microphone, an accelerometer, and a gyroscope; output apparatuses 607 including, for example, a liquid crystal display (LCD), a speaker, and a vibrator; storage apparatuses 608 including, for example, a magnetic tape and a hard disk; and a communication apparatus 609. The communication apparatus 609 can allow the electronic device 600 to communicate wirelessly or by wire with other devices to exchange data. Although FIG. 19 shows the electronic device 600 with multiple apparatuses, it is not required that all of the illustrated apparatuses be implemented or provided; more or fewer apparatuses may alternatively be implemented or provided.
According to embodiments of the present disclosure, the processes described above with reference to the flowcharts can be implemented as computer software programs. For example, an embodiment of the present disclosure includes a computer program product comprising a computer program carried on a non-transitory computer-readable medium, the computer program containing program code for performing the methods shown in the flowcharts. In such an embodiment, the computer program can be downloaded and installed from the network through the communication apparatus 609, installed from the storage apparatus 608, or installed from the ROM 602. When the computer program is executed by the processing apparatus 601, the above functions defined in the methods of the embodiments of the present disclosure are performed.
The above computer-readable medium of the present disclosure may be a computer-readable signal medium, a computer-readable storage medium, or any combination of the two. A computer-readable storage medium may be, for example, but is not limited to, an electric, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any combination of the above. The computer-readable storage medium may include, but is not limited to: an electrical connection with one or more wires, a portable computer disk, a hard disk, a RAM, a ROM, an erasable programmable read-only memory (EPROM), a flash memory, an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the above. In the present disclosure, a computer-readable storage medium may be any tangible medium containing or storing a program that can be used by, or in combination with, an instruction execution system, apparatus, or device. In the present disclosure, a computer-readable signal medium may include a data signal propagated in baseband or as part of a carrier wave, carrying computer-readable program code. Such a propagated data signal can take many forms, including but not limited to an electromagnetic signal, an optical signal, or any suitable combination of the above. A computer-readable signal medium may also be any computer-readable medium other than a computer-readable storage medium; the computer-readable signal medium can send, propagate, or transmit a program for use by, or in combination with, an instruction execution system, apparatus, or device. The program code contained on a computer-readable medium can be transmitted by any appropriate medium, including but not limited to: a wire, an optical cable, radio frequency (RF), etc., or any suitable combination of the above.
In some implementations, the terminal device and the server can communicate using any currently known or future-developed network protocol such as the HyperText Transfer Protocol (HTTP), and can be interconnected with digital data communication in any form or medium (for example, a communication network). Examples of communication networks include a local area network (LAN), a wide area network (WAN), an internetwork (for example, the Internet), and a peer-to-peer network (for example, an ad hoc peer-to-peer network), as well as any currently known or future-developed network.
The above computer-readable medium may be included in the above electronic device; it may also exist alone without being assembled into the electronic device.
The above computer-readable medium carries one or more programs which, when executed by the electronic device, cause the electronic device to: in response to a search instruction for a target entity object, acquire at least one multimedia resource showing the usage effect of the target entity object, where the multimedia resource is obtained by processing the multimedia content corresponding to the target entity object; and display the at least one multimedia resource.
Alternatively, the above computer-readable medium carries one or more programs which, when executed by the electronic device, cause the electronic device to: receive a search instruction for a target entity object and determine the multimedia content corresponding to the target entity object; determine search result information according to the multimedia content corresponding to the target entity object, the search result information including: at least one multimedia resource showing the usage effect of the target entity object, or at least one group of multimedia data, or at least one piece of resource indication information, where the multimedia data is image frame data in which the tag of the target entity object is used in the multimedia content, and the resource indication information is used to indicate the segment in which the target entity object appears in the multimedia content; and send the search result information to a terminal device.
Computer program code for performing the operations of the present disclosure can be written in one or more programming languages or a combination thereof, the above programming languages including but not limited to object-oriented programming languages such as Java, Smalltalk, and C++, and also including conventional procedural programming languages such as the "C" language or similar programming languages. The program code can be executed entirely on the user's computer, partly on the user's computer, as an independent software package, partly on the user's computer and partly on a remote computer, or entirely on a remote computer or server. In the case involving a remote computer, the remote computer can be connected to the user's computer through any kind of network, including a LAN or WAN, or can be connected to an external computer (for example, through the Internet using an Internet service provider).
The flowcharts and block diagrams in the accompanying drawings illustrate the possible architectures, functions, and operations of systems, methods, and computer program products according to multiple embodiments of the present disclosure. In this regard, each block in the flowcharts or block diagrams can represent a module, a program segment, or part of code, the module, program segment, or part of code containing one or more executable instructions for implementing the specified logical function. In some alternative implementations, the functions marked in the blocks can also occur in an order different from that marked in the drawings. For example, two blocks shown in succession can actually be executed substantially in parallel, or sometimes in the reverse order, depending on the functions involved. It should also be noted that each block in the block diagrams and/or flowcharts, and combinations of blocks in the block diagrams and/or flowcharts, can be implemented by a dedicated hardware-based system that performs the specified functions or operations, or by a combination of dedicated hardware and computer instructions.
The modules described in the embodiments of the present disclosure can be implemented by software or by hardware. The name of a module does not in some cases constitute a limitation on the module itself; for example, the acquiring module can also be described as "a module for acquiring multimedia resources".
The functions described above herein can be performed at least in part by one or more hardware logic components. For example, without limitation, exemplary types of hardware logic components that can be used include: field programmable gate arrays (FPGAs), application-specific integrated circuits (ASICs), application-specific standard parts (ASSPs), systems on chip (SOCs), complex programmable logic devices (CPLDs), and so on.
In the context of the present disclosure, a machine-readable medium may be a tangible medium that can contain or store a program for use by, or in combination with, an instruction execution system, apparatus, or device. The machine-readable medium may be a machine-readable signal medium or a machine-readable storage medium. The machine-readable medium may include, but is not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any suitable combination of the above. Machine-readable storage media include an electrical connection based on one or more wires, a portable computer disk, a hard disk, a RAM, a ROM, an EPROM, a flash memory, an optical fiber, a portable CD-ROM, an optical storage device, a magnetic storage device, or any suitable combination of the above.
According to one or more embodiments of the present disclosure, Example 1 provides a method for displaying search results, comprising: in response to a search instruction for a target entity object, acquiring at least one multimedia resource showing the usage effect of the target entity object, where the multimedia resource is obtained by processing the multimedia content corresponding to the target entity object; and displaying the at least one multimedia resource.
According to one or more embodiments of the present disclosure, Example 2 provides the method of Example 1, further comprising: recording at least one kind of entity object used in multimedia content, and the usage time information of each kind of entity object; and sending the multimedia content and the attribute information of the multimedia content to a server, where the attribute information of the multimedia content includes the identification information and usage time information of each kind of entity object.
According to one or more embodiments of the present disclosure, Example 3 provides the method of Example 1, in which the acquiring of at least one multimedia resource showing the usage effect of the target entity object includes: acquiring multimedia data, the multimedia data including image frame data in which the target entity object is used in the multimedia content corresponding to the target entity object; and generating, based on the multimedia data, the multimedia resource showing the usage effect of the target entity object.
According to one or more embodiments of the present disclosure, Example 4 provides the method of Example 1, in which the acquiring of at least one multimedia resource showing the usage effect of the target entity object includes: acquiring resource indication information corresponding to the target entity object, the resource indication information being used to indicate the segment in which the target entity object appears in the multimedia content; acquiring, according to the resource indication information, multimedia data, the multimedia data including image frame data in which the target entity object is used in the multimedia content; and generating, according to the multimedia data, the multimedia resource showing the usage effect of the target entity object.
According to one or more embodiments of the present disclosure, Example 5 provides the method of any one of Examples 1 to 4, in which the displaying of the at least one multimedia resource includes: acquiring an object identifier corresponding to the target entity object; and displaying the object identifier corresponding to the target entity object in a first area of the display interface, and automatically playing the at least one multimedia resource in a second area of the display interface.
According to one or more embodiments of the present disclosure, Example 6 provides the method of Example 5, further comprising: in response to a trigger operation on the object identifier, jumping to the multimedia shooting scene corresponding to the target entity object.
According to one or more embodiments of the present disclosure, Example 7 provides the method of any one of Examples 1 to 4, in which the acquiring of at least one multimedia resource showing the usage effect of the target entity object includes: acquiring at least one multimedia resource showing the usage effect of an entity object set, the entity object set including the target entity object and an associated entity object, the associated entity object being an entity object that has an association relationship with the target entity object.
According to one or more embodiments of the present disclosure, Example 8 provides the method of any one of Examples 1 to 4, in which the acquiring of at least one multimedia resource showing the usage effect of the target entity object includes: acquiring the multimedia resource, and an associated multimedia resource showing the usage effect of an associated entity object, the associated entity object being an entity object that has an association relationship with the target entity object; and the displaying of the at least one multimedia resource includes: displaying the multimedia resource in a third area of the display interface, and displaying the associated multimedia resource in a fourth area of the display interface.
According to one or more embodiments of the present disclosure, Example 9 provides the method of any one of Examples 1 to 4, in which the displaying of the at least one multimedia resource includes: if the multimedia resource is a video, displaying the multimedia resource in a first preset manner, the first preset manner including any one of loop playback, reverse playback, speed-adjusted playback, and window playback; and if the multimedia resource is an animated image, displaying the multimedia resource in a second preset manner, the second preset manner including enlarged display or reduced display.
According to one or more embodiments of the present disclosure, Example 10 provides a method for displaying search results, comprising: receiving a search instruction for a target entity object, and determining the multimedia content corresponding to the target entity object; determining search result information according to the multimedia content corresponding to the target entity object, the search result information including: at least one multimedia resource showing the usage effect of the target entity object, or at least one group of multimedia data, or at least one piece of resource indication information, where the multimedia data is image frame data in which the tag of the target entity object is used in the multimedia content, and the resource indication information is used to indicate the segment in which the target entity object appears in the multimedia content; and sending the search result information to a terminal device.
According to one or more embodiments of the present disclosure, Example 11 provides the method of Example 10, further comprising: receiving multimedia content corresponding to any entity object, and the attribute information of the multimedia content corresponding to each kind of entity object, where the attribute information includes the identification information and usage time information of each kind of entity object.
According to one or more embodiments of the present disclosure, Example 12 provides the method of Example 10, in which the determining of search result information according to the multimedia content corresponding to the target entity object includes: acquiring entity object information of the multimedia content, the entity object information including: the entity object identification information used in the multimedia content and the mapping relationship between the entity object identification information and image frame information, the image frame information indicating the segment in which each kind of entity object appears in the multimedia content; selecting, from the entity object identification information, the target entity object identification information corresponding to the target entity object, and determining the image frame information of the target entity object according to the mapping relationship; and generating the multimedia resource based on the multimedia content and the image frame information of the target entity object.
According to one or more embodiments of the present disclosure, Example 13 provides the method of Example 10, in which the determining of search result information according to the multimedia content corresponding to the target entity object includes: acquiring entity object information of the multimedia content, the entity object information including: the entity object identification information used in the multimedia content and the mapping relationship between the entity object identification information and image frame information, the image frame information indicating the segment in which each kind of entity object appears in the multimedia content; selecting, from the entity object identification information, the target entity object identification information corresponding to the target entity object and the associated entity object identification information corresponding to an associated entity object, and determining, according to the mapping relationship, the image frame information of the target entity object and the image frame information of the associated entity object, the associated entity object being an entity object that has an association relationship with the target entity object; and generating the multimedia resource based on the multimedia content, the image frame information of the target entity object, and the image frame information of the associated entity object.
According to one or more embodiments of the present disclosure, Example 14 provides the method of any one of Examples 10 to 13, further comprising, after the receiving of the search instruction for the target entity object: determining an associated entity object that has an association relationship with the target entity object, and determining the associated multimedia content corresponding to the associated entity object; determining associated search result information according to the associated multimedia content, the associated search result information including: at least one associated multimedia resource showing the usage effect of the associated entity object, or at least one group of associated multimedia data, or at least one piece of associated resource indication information, where the associated multimedia data includes image frame data in which the tag of the associated entity object is used in the associated multimedia content, and the associated resource indication information is used to indicate the segment in which the associated entity object appears in the associated multimedia content; and sending the associated search result information to the terminal device.
According to one or more embodiments of the present disclosure, Example 15 provides an apparatus for displaying search results, comprising: an acquiring module configured to, in response to a search instruction for a target entity object, acquire at least one multimedia resource showing the usage effect of the target entity object, where the multimedia resource is obtained by processing the multimedia content corresponding to the target entity object; and a display module configured to display the at least one multimedia resource.
According to one or more embodiments of the present disclosure, Example 16 provides an apparatus for displaying search results, comprising: a receiving module configured to receive a search instruction for a target entity object and determine the multimedia content corresponding to the target entity object; a determining module configured to determine search result information according to the multimedia content corresponding to the target entity object, the search result information including: at least one multimedia resource showing the usage effect of the target entity object, or at least one group of multimedia data, or at least one piece of resource indication information, where the multimedia data is image frame data in which the tag of the target entity object is used in the multimedia content, and the resource indication information is used to indicate the segment in which the target entity object appears in the multimedia content; and a sending module configured to send the search result information to a terminal device.
According to one or more embodiments of the present disclosure, Example 17 provides a computer-readable medium storing a computer program that, when executed by a processing apparatus, implements the method of any one of Examples 1 to 9 or Examples 10 to 14.
According to one or more embodiments of the present disclosure, Example 18 provides an electronic device, comprising: a storage apparatus storing a computer program; and a processing apparatus configured to execute the computer program in the storage apparatus to implement the method of any one of Examples 1 to 9 or Examples 10 to 14.
In addition, although multiple operations are depicted in a particular order, this should not be understood as requiring that these operations be performed in the particular order shown or in sequential order. Under certain circumstances, multitasking and parallel processing may be advantageous. Likewise, although the above discussion contains multiple implementation details, these should not be construed as limitations on the scope of the present disclosure. Some features described in the context of separate embodiments can also be implemented in combination in a single embodiment. Conversely, multiple features described in the context of a single embodiment can also be implemented in multiple embodiments separately or in any suitable sub-combination.

Claims (18)

  1. 一种搜索结果的展示方法,包括:
    响应于针对目标实体对象的搜索指令,获取至少一个展示所述目标实体对象的使用效果的多媒体资源,其中,所述多媒体资源是基于所述目标实体对象对应的多媒体内容处理得到的;
    展示所述至少一个多媒体资源。
  2. 根据权利要求1所述的方法,还包括:
    记录多媒体内容中使用的至少一种实体对象,以及每种实体对象的使用时间信息;
    将所述多媒体内容和所述多媒体内容的属性信息发送至服务器,其中,所述多媒体内容的属性信息包括所述每种实体对象的标识信息和使用时间信息。
  3. 根据权利要求1所述的方法,其中,所述获取至少一个展示所述目标实体对象的使用效果的多媒体资源,包括:
    获取多媒体数据,其中,所述多媒体数据包括所述目标实体对象对应的多媒体内容中使用所述目标实体对象的图像帧数据;
    基于所述多媒体数据,生成所述展示所述目标实体对象的使用效果的多媒体资源。
  4. 根据权利要求1所述的方法,其中,所述获取至少一个展示所述目标实体对象的使用效果的多媒体资源,包括:
    获取所述目标实体对象对应的资源指示信息,其中,所述资源指示信息用于指示所述目标实体对象在所述多媒体内容中出现的片段;
    根据所述资源指示信息,获取多媒体数据,其中,所述多媒体数据包括所述多媒体内容中使用所述目标实体对象的图像帧数据;
    根据所述多媒体数据,生成所述展示所述目标实体对象的使用效果的多媒体资源。
  5. 根据权利要求1-4中任一项所述的方法,其中,所述展示所述至少一个多媒体资源,包括:
    获取所述目标实体对象对应的对象标识;
    在展示界面的第一区域内显示所述目标实体对象对应的对象标识,在所述展示界面的第二区域内自动播放所述至少一个多媒体资源。
  6. 根据权利要求5所述的方法,还包括:
    响应于针对所述对象标识的触发操作,跳转到所述目标实体对象对应的多媒体拍摄场景。
  7. 根据权利要求1-4中任一项所述的方法,其中,所述获取至少一个展示所述目标实体对象的使用效果的多媒体资源,包括:
    获取至少一个展示实体对象集的使用效果的多媒体资源,所述实体对象集包括:所述目标实体对象和关联实体对象,所述关联实体对象为与所述目标实体对象存在关联关系的实体对象。
  8. 根据权利要求1-4中任一项所述的方法,其中,所述获取至少一个展示所述目标实体对象的使用效果的多媒体资源,包括:
    获取所述多媒体资源,和展示关联实体对象的使用效果的关联多媒体资源,所述关联实体对象为与所述目标实体对象存在关联关系的实体对象;
    所述展示所述至少一个多媒体资源,包括:
    在展示界面的第三区域内展示所述多媒体资源,在所述展示界面的第四区域内展示所述关联多媒体资源。
  9. 根据权利要求1-4中任一项所述的方法,其中,所述展示所述至少一个多媒体资源,包括:
    在所述多媒体资源为视频的情况下,将所述多媒体资源按照第一预设方式进行展示,其中,所述第一预设方式包括:循环播放、倒序播放、倍速播放、窗口播放中的任一种;
    在所述多媒体资源为动态图的情况下,将所述多媒体资源按照第二预设方式进行展示,其中,所述第二预设方式包括:放大展示或者缩小展示。
  10. A search result display method, comprising:
    receiving a search instruction for a target entity object, and determining multimedia content corresponding to the target entity object;
    determining search result information according to the multimedia content corresponding to the target entity object, the search result information comprising: at least one multimedia resource showing a usage effect of the target entity object, or at least one set of multimedia data, or at least one piece of resource indication information, wherein the multimedia data is image frame data in which a tag of the target entity object is used in the multimedia content, and the resource indication information indicates a segment of the multimedia content in which the target entity object appears; and
    sending the search result information to a terminal device.
  11. The method according to claim 10, further comprising:
    receiving multimedia content corresponding to any entity object, and attribute information of the multimedia content corresponding to each entity object;
    wherein the attribute information comprises identification information and usage time information of each entity object.
  12. The method according to claim 10, wherein the determining search result information according to the multimedia content corresponding to the target entity object comprises:
    acquiring entity object information of the multimedia content, wherein the entity object information comprises: entity object identification information used in the multimedia content, and a mapping relationship between the entity object identification information and image frame information, the image frame information indicating a segment of the multimedia content in which each entity object appears;
    selecting, from the entity object identification information, target entity object identification information corresponding to the target entity object, and determining image frame information of the target entity object according to the mapping relationship; and
    generating the multimedia resource based on the multimedia content and the image frame information of the target entity object.
  13. The method according to claim 10, wherein the determining search result information according to the multimedia content corresponding to the target entity object comprises:
    acquiring entity object information of the multimedia content, wherein the entity object information comprises: entity object identification information used in the multimedia content, and a mapping relationship between the entity object identification information and image frame information, the image frame information indicating a segment of the multimedia content in which each entity object appears;
    selecting, from the entity object identification information, target entity object identification information corresponding to the target entity object and associated entity object identification information corresponding to an associated entity object, and determining image frame information of the target entity object and image frame information of the associated entity object according to the mapping relationship, the associated entity object being an entity object having an association relationship with the target entity object; and
    generating the multimedia resource based on the multimedia content, the image frame information of the target entity object, and the image frame information of the associated entity object.
  14. The method according to any one of claims 10-13, wherein after the receiving a search instruction for a target entity object, the method further comprises:
    determining an associated entity object having an association relationship with the target entity object, and determining associated multimedia content corresponding to the associated entity object;
    determining associated search result information according to the associated multimedia content, the associated search result information comprising: at least one associated multimedia resource showing a usage effect of the associated entity object, or at least one set of associated multimedia data, or at least one piece of associated resource indication information, wherein the associated multimedia data comprises image frame data in which a tag of the associated entity object is used in the associated multimedia content, and the associated resource indication information indicates a segment of the associated multimedia content in which the associated entity object appears; and
    sending the associated search result information to the terminal device.
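The server-side claims above turn on a mapping relationship from entity object identification information to the image frame segments in which each object appears, which is consulted both for the target object and for its associated objects. A minimal Python sketch, assuming a simple dictionary-based mapping; the function and variable names are hypothetical, not taken from the patent:

```python
from typing import Dict, List, Tuple

# Hypothetical entity-object info: a mapping from entity object
# identification information to the (start, end) image frame segments
# in which that object appears in the multimedia content.
EntityObjectInfo = Dict[str, List[Tuple[int, int]]]

def determine_search_result(
    target_id: str,
    entity_info: EntityObjectInfo,
    associations: Dict[str, List[str]],
) -> dict:
    """Server-side lookup sketched from claims 12 and 14 (names are hypothetical)."""
    # Select the target identifier and resolve its image frame information
    # through the mapping relationship.
    target_segments = entity_info.get(target_id, [])
    # Also resolve entity objects having an association relationship with
    # the target, and their own frame segments.
    associated = {
        assoc_id: entity_info.get(assoc_id, [])
        for assoc_id in associations.get(target_id, [])
    }
    return {"target": target_segments, "associated": associated}
```

The returned segments would then be used to cut the multimedia content into the preview resources (or sent as resource indication information for the terminal to do the cutting).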
  15. A search result display apparatus, comprising:
    an acquisition module, configured to, in response to a search instruction for a target entity object, acquire at least one multimedia resource showing a usage effect of the target entity object, wherein the multimedia resource is obtained by processing multimedia content corresponding to the target entity object; and
    a display module, configured to display the at least one multimedia resource.
  16. A search result display apparatus, comprising:
    a receiving module, configured to receive a search instruction for a target entity object and determine multimedia content corresponding to the target entity object;
    a determination module, configured to determine search result information according to the multimedia content corresponding to the target entity object, the search result information comprising: at least one multimedia resource showing a usage effect of the target entity object, or at least one set of multimedia data, or at least one piece of resource indication information, wherein the multimedia data is image frame data in which a tag of the target entity object is used in the multimedia content, and the resource indication information indicates a segment of the multimedia content in which the target entity object appears; and
    a sending module, configured to send the search result information to a terminal device.
  17. A computer-readable medium storing a computer program, wherein the computer program, when executed by a processing apparatus, implements the method according to any one of claims 1-9 or claims 10-14.
  18. An electronic device, comprising:
    a storage apparatus storing a computer program; and
    a processing apparatus, configured to execute the computer program in the storage apparatus to implement the method according to any one of claims 1-9 or claims 10-14.
PCT/CN2021/113162 2020-08-27 2021-08-18 Search result display method and apparatus, readable medium, and electronic device WO2022042389A1 (zh)

Priority Applications (3)

Application Number Priority Date Filing Date Title
EP21860227.4A EP4148597A4 (en) 2020-08-27 2021-08-18 SEARCH RESULT DISPLAY METHOD AND APPARATUS, READABLE MEDIUM AND ELECTRONIC DEVICE
JP2022577246A JP2023530964A (ja) 2020-08-27 2021-08-18 Search result display method, apparatus, readable medium, and electronic device
US18/060,543 US11928152B2 (en) 2020-08-27 2022-11-30 Search result display method, readable medium, and terminal device

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN202010879294.5A CN112015926B (zh) 2020-08-27 2020-08-27 Search result display method and apparatus, readable medium, and electronic device
CN202010879294.5 2020-08-27

Related Child Applications (1)

Application Number Title Priority Date Filing Date
US18/060,543 Continuation US11928152B2 (en) 2020-08-27 2022-11-30 Search result display method, readable medium, and terminal device

Publications (1)

Publication Number Publication Date
WO2022042389A1

Family

ID=73503757

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2021/113162 WO2022042389A1 (zh) 2020-08-27 2021-08-18 搜索结果的展示方法、装置、可读介质和电子设备

Country Status (5)

Country Link
US (1) US11928152B2 (zh)
EP (1) EP4148597A4 (zh)
JP (1) JP2023530964A (zh)
CN (1) CN112015926B (zh)
WO (1) WO2022042389A1 (zh)

Families Citing this family (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112015926B (zh) * 2020-08-27 2022-03-04 北京字节跳动网络技术有限公司 Search result display method and apparatus, readable medium, and electronic device
CN115269889A (zh) * 2021-04-30 2022-11-01 北京字跳网络技术有限公司 Editing template search method and apparatus
CN113778285A (zh) * 2021-09-28 2021-12-10 北京字跳网络技术有限公司 Prop processing method, apparatus, device, and medium
CN116257309A (zh) * 2021-12-10 2023-06-13 北京字跳网络技术有限公司 Content display method, apparatus, electronic device, storage medium, and program product
CN114357202A (zh) * 2022-01-04 2022-04-15 北京字节跳动网络技术有限公司 Tutorial data display method, apparatus, computer device, and storage medium

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106909538A (zh) * 2015-12-21 2017-06-30 腾讯科技(北京)有限公司 Usage effect display method and apparatus
CN109189987A (zh) * 2017-09-04 2019-01-11 优酷网络技术(北京)有限公司 Video search method and apparatus
CN110019933A (zh) * 2018-01-02 2019-07-16 阿里巴巴集团控股有限公司 Video data processing method, apparatus, electronic device, and storage medium
CN110121093A (zh) * 2018-02-06 2019-08-13 优酷网络技术(北京)有限公司 Method and apparatus for searching for a target object in a video
US20190384789A1 (en) * 2018-06-18 2019-12-19 Rovi Guides, Inc. Methods and systems for sharing a user interface of a search engine
CN112015926A (zh) * 2020-08-27 2020-12-01 北京字节跳动网络技术有限公司 Search result display method and apparatus, readable medium, and electronic device

Family Cites Families (13)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2006048605A (ja) * 2004-08-09 2006-02-16 Toshiba Corp Instruction manual retrieval device and instruction manual retrieval method
US9210313B1 (en) * 2009-02-17 2015-12-08 Ikorongo Technology, LLC Display device content selection through viewer identification and affinity prediction
US9141281B2 (en) * 2012-09-28 2015-09-22 Fabrizio Ferri Photography System and method for controlling the progression of multimedia assets on a computing device
CN107071542B (zh) * 2017-04-18 2020-07-28 百度在线网络技术(北京)有限公司 Video segment playback method and apparatus
WO2018194611A1 (en) * 2017-04-20 2018-10-25 Hewlett-Packard Development Company, L.P. Recommending a photographic filter
CN108932253B (zh) * 2017-05-25 2021-01-26 阿里巴巴(中国)有限公司 Multimedia search result display method and apparatus
US20190005699A1 (en) * 2017-06-30 2019-01-03 Intel Corporation Technologies for generating a motion model for a virtual character
CN109729426B (zh) * 2017-10-27 2022-03-01 优酷网络技术(北京)有限公司 Method and apparatus for generating a video cover image
US20200150765A1 (en) * 2018-11-14 2020-05-14 Immersion Corporation Systems and methods for generating haptic effects based on visual characteristics
CN109996091A (zh) * 2019-03-28 2019-07-09 苏州八叉树智能科技有限公司 Method, apparatus, electronic device, and computer-readable storage medium for generating a video cover
CN110602554B (zh) * 2019-08-16 2021-01-29 华为技术有限公司 Cover image determination method, apparatus, and device
CN110856037B (zh) * 2019-11-22 2021-06-22 北京金山云网络技术有限公司 Video cover determination method, apparatus, electronic device, and readable storage medium
CN110909205B (zh) * 2019-11-22 2023-04-07 北京金山云网络技术有限公司 Video cover determination method, apparatus, electronic device, and readable storage medium


Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
See also references of EP4148597A4

Also Published As

Publication number Publication date
EP4148597A4 (en) 2023-11-29
US11928152B2 (en) 2024-03-12
US20230093621A1 (en) 2023-03-23
CN112015926A (zh) 2020-12-01
EP4148597A1 (en) 2023-03-15
CN112015926B (zh) 2022-03-04
JP2023530964A (ja) 2023-07-20

Similar Documents

Publication Publication Date Title
WO2022042389A1 (zh) Search result display method and apparatus, readable medium, and electronic device
WO2021196903A1 (zh) Video processing method and apparatus, readable medium, and electronic device
CN109640129B (zh) Video recommendation method and apparatus, client device, server, and storage medium
WO2020233142A1 (zh) Multimedia file playback method and apparatus, electronic device, and storage medium
WO2023088442A1 (zh) Live-streaming preview method, apparatus, device, program product, and medium
CN109684589B (zh) Method and apparatus for processing client comment data, and computer storage medium
WO2023051294A1 (zh) Prop processing method, apparatus, device, and medium
WO2023165515A1 (zh) Shooting method and apparatus, electronic device, and storage medium
WO2021197023A1 (zh) Multimedia resource screening method and apparatus, electronic device, and computer storage medium
WO2023005831A1 (zh) Resource playback method and apparatus, electronic device, and storage medium
WO2023151589A1 (zh) Video display method and apparatus, electronic device, and storage medium
WO2023016349A1 (zh) Text input method and apparatus, electronic device, and storage medium
WO2023088006A1 (zh) Cloud game interaction method and apparatus, readable medium, and electronic device
WO2023169356A1 (zh) Image processing method and apparatus, device, and storage medium
CN111818383B (zh) Video data generation method, system, apparatus, electronic device, and storage medium
CN109635131B (zh) Multimedia content list display method, push method, apparatus, and storage medium
JP2023525091A (ja) Image special effect configuration method, image recognition method, apparatus, and electronic device
CN110381356B (zh) Audio/video generation method and apparatus, electronic device, and readable medium
WO2023134617A1 (zh) Template selection method and apparatus, electronic device, and storage medium
WO2023279951A1 (zh) Screen-recording video processing method and apparatus, readable medium, and electronic device
WO2023098576A1 (zh) Image processing method and apparatus, device, and medium
KR20090035989A (ko) Content acquisition system and method therefor
WO2022218109A1 (zh) Interaction method and apparatus, electronic device, and computer-readable storage medium
WO2021031909A1 (zh) Data content output method and apparatus, electronic device, and computer-readable medium
WO2022042398A1 (zh) Method and apparatus for determining object adding mode, electronic device, and medium

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application (Ref document number: 21860227; Country of ref document: EP; Kind code of ref document: A1)
ENP Entry into the national phase (Ref document number: 2022577246; Country of ref document: JP; Kind code of ref document: A)
ENP Entry into the national phase (Ref document number: 2021860227; Country of ref document: EP; Effective date: 20221208)
NENP Non-entry into the national phase (Ref country code: DE)