WO2024099101A1 - 素材数据处理方法及相关产品 (Material data processing method and related products) - Google Patents


Info

Publication number
WO2024099101A1
Authority
WO
WIPO (PCT)
Prior art keywords
information
target
multimodal
material data
data
Application number
PCT/CN2023/127013
Other languages
English (en)
French (fr)
Inventor
刘畅
王海涵
刘森
储鹏飞
Original Assignee
Oppo广东移动通信有限公司
Application filed by Oppo广东移动通信有限公司
Publication of WO2024099101A1

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06F: ELECTRIC DIGITAL DATA PROCESSING
    • G06F 16/00: Information retrieval; Database structures therefor; File system structures therefor
    • G06F 16/90: Details of database functions independent of the retrieved data types
    • G06F 16/903: Querying
    • G06F 16/9035: Filtering based on additional data, e.g. user or group profiles
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06F: ELECTRIC DIGITAL DATA PROCESSING
    • G06F 18/00: Pattern recognition
    • G06F 18/20: Analysing
    • G06F 18/22: Matching criteria, e.g. proximity measures
    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04N: PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N 21/00: Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N 21/80: Generation or processing of content or additional data by content creator independently of the distribution process; Content per se
    • H04N 21/81: Monomedia components thereof

Definitions

  • the present application relates to the technical field of electronic equipment, and in particular to a material data processing method and related products.
  • Users often browse all of the video and/or image material in the albums of their own electronic devices or those of other users such as family and friends, manually select the images and/or videos they like, and then obtain that material through the file transfer function of the electronic device or from the other users, after which they can perform secondary editing on it. Selecting material in this way costs users considerable time and effort, and they may still fail to find satisfactory images or videos, resulting in a poor user experience.
  • the embodiments of the present application provide a material data processing method and related products.
  • an embodiment of the present application provides a material data processing method, which is applied to a first device, wherein the first device establishes a communication connection with at least one second device, and the at least one second device is a slave device of the first device; the method includes:
  • in response to a preset instruction for first material data triggered by a user, determining multimodal information corresponding to the first material data, wherein the first material data includes at least one of the following: image data and video data, and the preset instruction is used to instruct the first device to perform a corresponding preset operation;
  • sending a material acquisition request to the at least one second device, wherein the material acquisition request includes the multimodal information, the multimodal information is used by the second device to screen target multimodal information matching the multimodal information, the target multimodal information corresponds to second material data, and the second material data is determined by the corresponding second device according to the multimodal information;
  • receiving at least one material acquisition result sent by the at least one second device, wherein each second device corresponds to one material acquisition result;
  • if any of the material acquisition results indicates that the corresponding second device has the second material data, performing the preset operation on the first material data and the second material data.
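The first-device flow of this aspect can be sketched as follows. This is an illustrative Python sketch only; the names (`MaterialAcquisitionRequest`, `first_device_flow`, the stub slaves) and the in-process "transport" are assumptions for illustration, not part of the disclosure.

```python
from dataclasses import dataclass, field

@dataclass
class MaterialAcquisitionRequest:
    # Multimodal information extracted from the first material data.
    multimodal_info: dict

@dataclass
class MaterialAcquisitionResult:
    has_second_material: bool
    second_material: list = field(default_factory=list)

def first_device_flow(first_material, extract_info, slaves, preset_op):
    """Send a material acquisition request to every slave device and run the
    preset operation on the combined first and second material data."""
    info = extract_info(first_material)
    request = MaterialAcquisitionRequest(multimodal_info=info)
    results = [slave(request) for slave in slaves]  # one result per slave
    combined = list(first_material)
    for r in results:
        if r.has_second_material:
            combined.extend(r.second_material)
    return preset_op(combined)

# Minimal usage with stub slave devices:
def slave_with_match(req):
    return MaterialAcquisitionResult(True, ["img_b.jpg"])

def slave_without_match(req):
    return MaterialAcquisitionResult(False)

out = first_device_flow(
    ["img_a.jpg"],
    extract_info=lambda m: {"scene": "beach"},
    slaves=[slave_with_match, slave_without_match],
    preset_op=lambda materials: sorted(materials),
)
print(out)  # ['img_a.jpg', 'img_b.jpg']
```

Only slaves whose result indicates matching second material contribute to the data the preset operation runs on, mirroring the "if any material acquisition result indicates ..." condition above.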
  • an embodiment of the present application provides a material data processing method, which is applied to a second device, where the second device establishes a communication connection with a first device, and the second device is a slave device of the first device; the method includes:
  • receiving a material acquisition request sent by the first device, wherein the material acquisition request includes multimodal information, and the multimodal information is determined by the first device according to the first material data;
  • determining whether target multimodal information matching the multimodal information exists in a multimodal database;
  • if the target multimodal information exists in the multimodal database, determining second material data corresponding to the multimodal information;
  • sending a material acquisition result to the first device, wherein the material acquisition result either includes the second material data or indicates that it is not provided.
  • an embodiment of the present application provides a material data processing method, which is applied to a first device, wherein the first device establishes a communication connection with at least one second device, and the at least one second device is a slave device of the first device; the method includes:
  • in response to a preset instruction for first material data triggered by a user, determining multimodal information corresponding to the first material data, wherein the first material data includes at least one of the following: image data and video data, and the preset instruction is used to instruct the first device to perform a corresponding preset operation;
  • sending at least one material scene detection request to the at least one second device, wherein each second device corresponds to one material scene detection request, the material scene detection request includes the multimodal information, and the material scene detection request is used to determine whether the corresponding second device has second material data of the same material application scene as the first device, the second material data being determined by the corresponding second device by matching the multimodal information against its multimodal database;
  • receiving at least one material scene detection result sent by the at least one second device, wherein each second device corresponds to one material scene detection result;
  • if any of the material scene detection results indicates that the corresponding second device has second material data of the same material application scene as the first device, sending a material acquisition request to the corresponding second device;
  • receiving at least one material acquisition result sent by the at least one second device, wherein each second device corresponds to one material acquisition result;
  • if any of the material acquisition results indicates that the corresponding second device has the second material data, performing the preset operation on the first material data and the second material data.
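The two-phase variant of this aspect, where a lightweight scene detection round precedes the potentially heavier material acquisition, can be sketched as below. All names (`two_phase_flow`, `StubSlave`, the exact-scene-tag matching) are hypothetical illustrations, not the disclosed implementation.

```python
def two_phase_flow(multimodal_info, slaves):
    """slaves: objects exposing detect(info) -> bool and acquire(info) -> list."""
    acquired = []
    for slave in slaves:
        # Phase 1: ask only whether matching second material data exists.
        if slave.detect(multimodal_info):
            # Phase 2: request the material itself, from matching slaves only.
            acquired.extend(slave.acquire(multimodal_info))
    return acquired

class StubSlave:
    def __init__(self, scene, materials):
        self.scene, self.materials = scene, materials
    def detect(self, info):
        # Exact scene-tag equality stands in for matching against the
        # slave's multimodal database.
        return info.get("scene") == self.scene
    def acquire(self, info):
        return self.materials

slaves = [StubSlave("beach", ["b1.mp4"]), StubSlave("city", ["c1.mp4"])]
print(two_phase_flow({"scene": "beach"}, slaves))  # ['b1.mp4']
```

The design point illustrated: only devices whose detection result is positive receive a material acquisition request, so non-matching devices never transfer material data.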
  • an embodiment of the present application provides a material data processing method, which is applied to a second device, where the second device establishes a communication connection with a first device, and the second device is a slave device of the first device; the method includes:
  • receiving a material scene detection request sent by the first device, wherein the material scene detection request includes multimodal information;
  • if target multimodal information matching the multimodal information exists in the multimodal database, determining second material data corresponding to the multimodal information, and determining that second material data of the same material application scene as that of the first device exists;
  • sending a material scene detection result to the first device, wherein the material scene detection result is used to indicate that the second device has second material data of the same material application scene as the first device;
  • receiving a material acquisition request sent by the first device, wherein the material acquisition request is used by the first device to acquire the second material data;
  • sending a material acquisition result to the first device, wherein the material acquisition result either includes the second material data or indicates that it is not provided.
  • an embodiment of the present application provides a material data processing device, which is applied to a first device, wherein the first device establishes a communication connection with at least one second device, and the at least one second device is a slave device of the first device; the device includes: a determining unit, a sending unit, a receiving unit, and an executing unit, wherein:
  • the determining unit is configured to determine, in response to a preset instruction for first material data triggered by a user, multimodal information corresponding to the first material data, wherein the first material data includes at least one of the following: image data and video data, and the preset instruction is used to instruct the first device to perform a corresponding preset operation;
  • the sending unit is configured to send a material acquisition request to the at least one second device, wherein the material acquisition request includes the multimodal information, the multimodal information is used by the second device to screen target multimodal information matching the multimodal information, the target multimodal information corresponds to second material data, and the second material data is determined by the corresponding second device according to the multimodal information;
  • the receiving unit is configured to receive at least one material acquisition result sent by the at least one second device, wherein each of the second devices corresponds to one material acquisition result;
  • the execution unit is configured to execute the preset operation on the first material data and the second material data if any one of the material acquisition results indicates that the corresponding second device has the second material data.
  • an embodiment of the present application provides a material data processing device, which is applied to a second device, the second device establishes a communication connection with a first device, and the second device is a slave device of the first device; the device includes: a receiving unit, a determining unit and a sending unit, wherein:
  • the receiving unit is configured to receive a material acquisition request sent by the first device, wherein the material acquisition request includes multimodal information, and the multimodal information is determined by the first device according to the first material data;
  • the determining unit is used to determine whether there is target multimodal information matching the multimodal information in the multimodal database;
  • the determining unit is further configured to determine second material data corresponding to the multimodal information if the target multimodal information exists in the multimodal database;
  • the sending unit is configured to send a material acquisition result to the first device, wherein the material acquisition result either includes the second material data or indicates that the second material data is not provided.
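The second-device side described above can be sketched minimally as follows. Matching by an exact scene tag is an assumption for illustration; the disclosure leaves the matching criterion open, and `handle_acquisition_request` and the result dictionary shape are hypothetical names.

```python
def handle_acquisition_request(multimodal_info, multimodal_db):
    """Match incoming multimodal information against a local multimodal
    database; multimodal_db maps a multimodal record (here: a scene tag)
    to the material data it was extracted from."""
    target = multimodal_db.get(multimodal_info.get("scene"))
    if target is None:
        # No target multimodal information matches: report no material.
        return {"has_material": False, "material": []}
    return {"has_material": True, "material": target}

db = {"sunset": ["sunset_01.jpg", "sunset_02.mp4"]}
print(handle_acquisition_request({"scene": "sunset"}, db))
print(handle_acquisition_request({"scene": "indoor"}, db))
```

A real second device would additionally prompt the user before including the material in the result, as the display unit described later in this section does.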
  • an embodiment of the present application provides a material data processing device, which is applied to a first device, wherein the first device establishes a communication connection with at least one second device, and the at least one second device is a slave device of the first device; the device includes: a determining unit, a sending unit, a receiving unit, and an executing unit, wherein:
  • the determining unit is configured to determine, in response to a preset instruction for first material data triggered by a user, multimodal information corresponding to the first material data, wherein the first material data includes at least one of the following: image data and video data, and the preset instruction is used to instruct the first device to perform a corresponding preset operation;
  • the sending unit is used to send at least one material scene detection request to the at least one second device, wherein each second device corresponds to one material scene detection request, the material scene detection request includes the multimodal information, and the material scene detection request is used to determine whether the corresponding second device has second material data of the same material application scene as the first device, the second material data being determined by the corresponding second device by matching the multimodal information against its multimodal database;
  • the receiving unit is configured to receive at least one material scene detection result sent by the at least one second device, wherein each of the second devices corresponds to one material scene detection result;
  • the sending unit is further configured to send the material acquisition request to the corresponding second device if any of the material scene detection results indicates that the corresponding second device has second material data of the same material application scene as that of the first device;
  • the receiving unit is further configured to receive at least one material acquisition result sent by the at least one second device, wherein each of the second devices corresponds to one material acquisition result;
  • the execution unit is configured to execute the preset operation on the first material data and the second material data if any one of the material acquisition results indicates that the corresponding second device has the second material data.
  • an embodiment of the present application provides a material data processing device, which is applied to a second device, the second device establishes a communication connection with a first device, and the second device is a slave device of the first device; the device includes: a receiving unit, a determining unit, a sending unit and a display unit, wherein:
  • the receiving unit is configured to receive a material scene detection request sent by the first device, wherein the material scene detection request includes multimodal information;
  • the determining unit is used to determine whether there is target multimodal information matching the multimodal information in the multimodal database;
  • the determining unit is further configured to determine, if target multimodal information matching the multimodal information exists in the multimodal database, second material data corresponding to the multimodal information, and determine that second material data having the same material application scenario as that of the first device exists;
  • the sending unit is used to send a material scene detection result to the first device, wherein the material scene detection result is used to indicate that the second device has second material data of the same material application scene as the first device;
  • the receiving unit is further configured to receive a material acquisition request sent by the first device, wherein the material acquisition request is used by the first device to acquire the second material data;
  • the display unit is used to display prompt information, wherein the prompt information is used to instruct the user to choose to send or not to send the second material data;
  • the sending unit is configured to send a material acquisition result to the first device in response to the user's choice, wherein the material acquisition result includes the second material data if the user chooses to send it and does not include the second material data otherwise.
  • an embodiment of the present application provides an electronic device, comprising a processor, a memory, a communication interface, and one or more programs, wherein the one or more programs are stored in the memory and are configured to be executed by the processor, and the program includes instructions for executing the steps of any method in the first aspect and/or the second aspect and/or the third aspect and/or the fourth aspect of the embodiment of the present application.
  • an embodiment of the present application provides a computer-readable storage medium, wherein the above-mentioned computer-readable storage medium stores a computer program for electronic data exchange, wherein the above-mentioned computer program enables the computer to execute part or all of the steps described in any method of the first aspect and/or the second aspect and/or the third aspect and/or the fourth aspect of the embodiment of the present application.
  • an embodiment of the present application provides a computer program product, wherein the computer program product includes a non-transitory computer-readable storage medium storing a computer program, and the computer program is operable to cause a computer to execute some or all of the steps described in any of the methods of the first aspect and/or the second aspect and/or the third aspect and/or the fourth aspect of the embodiment of the present application.
  • the computer program product may be a software installation package.
  • In the embodiments of the present application, the first device sends at least one material scene detection request to the at least one second device, wherein each second device corresponds to one material scene detection request, the request includes the multimodal information, and the request is used to determine whether the corresponding second device has second material data of the same material application scene as the first device, the second material data being determined by the corresponding second device by matching the multimodal information against its multimodal database. The second device receives the material scene detection request sent by the first device; if target multimodal information matching the multimodal information exists in the multimodal database, the second device determines the second material data corresponding to the multimodal information and determines that second material data of the same material application scene as the first device exists. The second device then sends a material scene detection result to the first device indicating that it has such second material data, and the first device receives at least one material scene detection result sent by the at least one second device. If any of the detection results indicates a match, the first device sends a material acquisition request to the corresponding second device, and the second device responds by sending a material acquisition result to the first device, wherein the material acquisition result either includes the second material data or indicates that it is not provided. The first device receives at least one material acquisition result, one per second device; if any of the material acquisition results indicates that the corresponding second device has the second material data, the first device performs the preset operation on the first material data and the second material data.
  • FIG. 1 is a schematic diagram of the structure of a material data processing system provided in an embodiment of the present application.
  • FIG. 2 is a schematic diagram of the architecture of an authoring engine layer provided in an embodiment of the present application.
  • FIG. 3 is a schematic diagram of an architecture of a communication network provided in an embodiment of the present application.
  • FIG. 4 is a schematic flow chart of a material data processing method provided in an embodiment of the present application.
  • FIG. 5 is a schematic flow chart of a material data processing method provided in an embodiment of the present application.
  • FIG. 6 is a schematic flow chart of a material data processing method provided in an embodiment of the present application.
  • FIG. 7 is a schematic flow chart of a material data processing method provided in an embodiment of the present application.
  • FIG. 8 is a schematic flow chart of a material data processing method provided in an embodiment of the present application.
  • FIG. 9A is a schematic diagram of a scenario of an intelligent creation method provided in an embodiment of the present application.
  • FIG. 9B is a schematic diagram of an operation of an intelligent creation method provided in an embodiment of the present application.
  • FIG. 9C is a schematic diagram of a scenario of an intelligent creation method provided in an embodiment of the present application.
  • FIG. 9D is a schematic diagram of a scenario of an intelligent creation method provided in an embodiment of the present application.
  • FIG. 9E is a schematic diagram of a scenario of an intelligent creation method provided in an embodiment of the present application.
  • FIG. 9F is a schematic diagram of a scenario of an intelligent creation method provided in an embodiment of the present application.
  • FIG. 9G is a schematic diagram of a scenario of an intelligent creation method provided in an embodiment of the present application.
  • FIG. 10 is a schematic diagram of the structure of an electronic device provided in an embodiment of the present application.
  • FIG. 11 is a block diagram of functional units of a material data processing device provided in an embodiment of the present application.
  • FIG. 12A is a block diagram of functional units of a material data processing device provided in an embodiment of the present application.
  • FIG. 12B is a block diagram of functional units of a material data processing device provided in an embodiment of the present application.
  • FIG. 13 is a block diagram of functional units of a material data processing device provided in an embodiment of the present application.
  • FIG. 14 is a block diagram of functional units of a material data processing device provided in an embodiment of the present application.
  • the electronic device may be a portable electronic device that also includes other functions such as a personal digital assistant and/or a music player function, such as a mobile phone, a tablet computer, a wearable electronic device with a wireless communication function (such as a smart watch, smart glasses), a vehicle-mounted device, etc.
  • portable electronic devices include but are not limited to portable electronic devices equipped with an iOS system, an Android system, a Microsoft system, or another operating system.
  • the above-mentioned portable electronic device may also be another portable electronic device, such as a laptop computer. It should also be understood that in some other embodiments, the above-mentioned electronic device may not be a portable electronic device but a desktop computer.
  • FIG. 1 shows a schematic diagram of the structure of a material data processing system applicable to the present application.
  • the schematic diagram of the structure may include: an application layer, a creative engine layer, and a system service layer.
  • the above-mentioned application layer can be used to support different software applications in the electronic device; for example, it can include a photo album, which may include images or video data from different applications. The application layer can also be used to receive preset instructions initiated by the user, such as intelligent creation operation instructions and storage operation instructions, and the preset instructions are used to instruct the first device to perform corresponding preset operations.
  • the above-mentioned creation engine layer may include a data detection module, a multimodal database, an intelligent creation module and a data optimization module; the data detection module can be used for scene matching, detection of multimodal information, etc., which is not limited here; the multimodal database is mainly used for the management of material data and multimodal data, etc., which is not limited here; the intelligent creation module is mainly used for cropping, special effects beautification, synthesis editing, etc.
  • the data optimization module is used to screen and match material data; the module may include a variety of threshold parameters, such as a preset information difference, a preset matching rate and a threshold for the aesthetic evaluation operation, and can also be used to set indicators for stored-quality evaluation values, etc., which are not limited here. The data optimization module can also filter or compare multimodal data, for example filtering out material data that does not meet the above indicators according to the multimodal information, and comparing intelligent creation scenes, etc., which are not limited here.
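The threshold-based screening attributed to the data optimization module can be sketched as below. The field names (`match_rate`, `aesthetic`) and the specific threshold values are illustrative assumptions; the disclosure only states that such thresholds exist.

```python
def optimize(candidates, min_match_rate=0.8, min_aesthetic=0.6):
    """Keep only candidate material data whose scores clear both the preset
    matching rate and the aesthetic-evaluation threshold."""
    return [
        c for c in candidates
        if c["match_rate"] >= min_match_rate and c["aesthetic"] >= min_aesthetic
    ]

cands = [
    {"id": "a", "match_rate": 0.9, "aesthetic": 0.7},
    {"id": "b", "match_rate": 0.9, "aesthetic": 0.5},  # fails aesthetic threshold
    {"id": "c", "match_rate": 0.7, "aesthetic": 0.9},  # fails matching rate
]
print([c["id"] for c in optimize(cands)])  # ['a']
```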
  • the system service layer is mainly used for detection, identification, connection, and communication between multiple devices in the network.
  • as shown in FIG. 3, the communication network may include a first device and at least one second device, wherein the first device is the master device of the at least one second device, and each second device is a slave device of the first device.
  • the system service module shown in Figure 2 can support the communication connection between the first device and at least one second device, and detect other devices in the same communication network, etc., which is not limited here.
  • first device and the second device may be in the same local area network, that is, they may be devices in the same communication network, or they may be in different local area networks, that is, they may not be devices in the same communication network.
  • the first device sends at least one material scene detection request to the at least one second device, wherein each second device corresponds to a material scene detection request, the material scene detection request includes the multimodal information, and the material scene detection request is used to determine whether the corresponding second device has second material data of the same material application scene as the first device, the second material data being determined by the corresponding second device by matching the multimodal information against its multimodal database; the second device receives the material scene detection request sent by the first device, wherein the material scene detection request includes the multimodal information; if there is target multimodal information matching the multimodal information in the multimodal database, the second device determines the second material data corresponding to the multimodal information, and determines that there is second material data of the same material application scene as the first device; the second device sends a material scene detection result to the first device, wherein the material scene detection result is used to indicate that the second device has second material data of the same material application scene as the first device; the first device receives at least one material scene detection result sent by the at least one second device.
  • the second device can complete the optimization of the material data in the process of determining the target multimodal information that matches the multimodal information, so as to obtain the second material data. In this way, preferred material data can be automatically shared among multiple devices, and users no longer need to manually select materials and carry out the cumbersome process of transferring them between devices, which is conducive to improving user experience.
  • when the first device is a slave device, it can also execute the same intelligent material creation method as the above-mentioned second device, which will not be elaborated here.
  • in the present application, "multiple" refers to two or more, which will not be repeated below.
  • FIG. 4 is a flow chart of a material data processing method provided in an embodiment of the present application, which is applied to a first device, wherein the first device establishes a communication connection with at least one second device, and the at least one second device is a slave device of the first device; as shown in the figure, the material data processing method includes the following operations.
  • the preset instructions can be set by the user or by the system default, which is not limited here.
  • the preset instruction can be triggered or issued by the user and can be used to instruct the first device to perform a corresponding preset operation, which may include an intelligent creation operation, a storage operation, a material data sharing operation, etc., which is not limited here.
  • the intelligent creation operation may include at least one of the following: cropping operation, special effects beautification operation, synthesis editing operation, etc., which are not limited here.
  • the above-mentioned multimodal information can correspond to the first material data, and can be used to indicate the Global Positioning System (GPS) information, face information, scene information, subject information, aesthetic evaluation information, highlight segment information, character relationship information and text information corresponding to the first material data, etc., which are not limited here.
  • the first material data can be acquired by the first device or selected by the user. After acquiring the first material data, multimodal information analysis can be performed on the first material data to obtain multimodal information corresponding to each image data and/or video data in the first material data.
  • the first device may include a multimodal database, which may store multimodal information corresponding to each image data and/or video data.
  • the multimodal information corresponding to the first material data may be retrieved directly from the multimodal database.
  • the multimodal information may include at least one of the following: Global Positioning System (GPS) information, face information, scene information, subject information, aesthetic evaluation information, highlight information, character relationship information, and text information, etc., which are not limited here.
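One plausible in-memory shape for such a multimodal record is sketched below. The field names follow the list above, but the exact schema (types, defaults, the `MultimodalRecord` name) is an assumption for illustration.

```python
from dataclasses import dataclass, field
from typing import Optional

@dataclass
class MultimodalRecord:
    """Illustrative container for the multimodal information of one image
    or video item in the material data."""
    gps: Optional[tuple] = None            # (latitude, longitude)
    faces: list = field(default_factory=list)
    scene: Optional[str] = None
    subject: Optional[str] = None
    aesthetic_score: Optional[float] = None
    highlight: bool = False                # marks a highlight segment
    relationships: list = field(default_factory=list)  # character relations
    text: Optional[str] = None

rec = MultimodalRecord(gps=(31.23, 121.47), scene="beach", aesthetic_score=0.82)
print(rec.scene, rec.highlight)  # beach False
```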
  • the multimodal information is used to represent the detail information corresponding to the image data or video data in the first material data. The highlight information can be used to characterize the most interesting or exciting highlight moments obtained by processing the first material data; for example, it can be video data corresponding to the highlight moment of a user receiving a trophy, or image data corresponding to the highlight moment of a sunset glow filling the sky, etc., which is not limited here.
  • the above-mentioned aesthetic evaluation information is an aesthetic evaluation score obtained by the first device according to aesthetic evaluation indicators and dimensions, etc., which can be used to screen image data and/or video data that conforms to the public aesthetic or user aesthetic.
  • the above-mentioned aesthetic evaluation criteria may include at least one of the following: color, composition, professional photography skills, content semantics, etc., which are not limited here.
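One simple way to combine the listed criteria into a single aesthetic evaluation score is a weighted sum, sketched below. The equal weights and the per-criterion scores are illustrative assumptions; the disclosure does not specify how the score is computed.

```python
def aesthetic_score(color, composition, skill, semantics,
                    weights=(0.25, 0.25, 0.25, 0.25)):
    """Weighted combination of per-criterion scores, each in [0, 1]."""
    parts = (color, composition, skill, semantics)
    return sum(w * p for w, p in zip(weights, parts))

# Example: strong color and composition, weaker skill and semantics.
print(round(aesthetic_score(0.8, 0.6, 0.4, 0.2), 2))  # 0.5
```

Such a score could then be compared against the aesthetic-evaluation threshold held by the data optimization module described earlier.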
  • the electronic device can receive a preset instruction for the first material data triggered by the user from the application layer as shown in Figure 1, and the preset instruction can be used to perform an intelligent creation operation on the first material data specified or selected by the user.
  • the first device may include an intelligent creation operation page.
  • the above-mentioned first material data may also be specified by the first device.
  • the application layer of the first device may identify at least one image operated by the user before entering the intelligent creation operation page.
  • the first device may determine the intelligent creation scene based on the above-mentioned at least one image, then obtain from the application layer at least one other image of the same intelligent creation scene according to that scene, combine the at least one image and the at least one other image to obtain the above-mentioned first material data, and obtain the multimodal information corresponding to each image in the first material data from the multimodal database, thereby obtaining the multimodal information corresponding to the first material data.
  • if multiple intelligent creation scenes are determined, at least one other image of the same intelligent creation scene can be obtained for each scene, and then the at least one other image and the at least one image corresponding to the multiple scenes are combined to obtain the first material data. The details will not be repeated here.
  • S402. Send a material acquisition request to the at least one second device, wherein the material acquisition request includes the multimodal information, and the multimodal information is used by the second device to screen target multimodal information that matches the multimodal information, and the target multimodal information corresponds to second material data, and the second material data is determined by the corresponding second device according to the multimodal information.
  • the material acquisition request may be used by the first device to acquire second material data matching the multimodal information stored in each of at least one second device, and the second material data may also include image data and/or video data.
  • the second material data may be used by the first device to perform a preset operation in combination with the first material data.
  • the multimodal information is used by the second device to filter target multimodal information that matches the multimodal information of the first device to obtain second material data.
  • the multimodal information is also used by the second device to determine whether it holds target multimodal information that matches the multimodal information of the first device, so as to determine whether the second material data on the second device matches the first material data on the first device. For example, it can also be used to determine whether the two devices are in the same material application scenario.
  • S403. Receive at least one material acquisition result sent by the at least one second device, wherein each of the second devices corresponds to one material acquisition result.
  • the above-mentioned material acquisition result may include any one of the following: the existence of second material data matching the first material data, the non-existence of second material data matching the first material data, and the like.
  • the second device has the authority to choose whether to send the second material data to the first device.
  • When the material acquisition result received by the first device contains second material data that matches the first material data or matches the multimodal information, it indicates that the second device allows the first device to obtain the second material data; conversely, when the material acquisition result received by the first device does not contain the second material data, it indicates that the second device does not allow the first device to obtain the second material data.
  • the second device can determine whether it has second material data that matches the multimodal information through multimodal information.
  • If the material acquisition result received by the first device contains second material data that matches the first material data or matches the multimodal information, it indicates that the second device has second material data matching the multimodal information.
  • If the material acquisition result received by the first device does not contain the second material data, it indicates that the second device does not have second material data matching the multimodal information.
  • the electronic device can select the second material data corresponding to each second device whose material acquisition result is "second material data exists", obtain at least one second material data, and perform the preset operation on the first material data and the at least one second material data.
  • an intelligent creation operation can be performed on the first material data and the at least one second material data corresponding to the at least one second device to obtain new target material data, which can include image data and/or video data that presents both the image data and/or video data of the first device and the image data and/or video data of the at least one second device.
  • the first material data includes a first highlight clip for a meteor captured by a first device, and the first highlight clip is a video of the first meteor falling.
  • the second material data includes a second highlight clip for a meteor captured by a second device, and the second highlight clip is a video of the second meteor falling.
  • the first device can perform intelligent creation operations such as cropping and synthesis editing operations based on the first highlight clip and the second highlight clip to fuse the first highlight clip and the second highlight clip to obtain a complete target highlight clip including the first meteor falling video and the second meteor falling video.
  • the target highlight clip can also include special effects, music, filters, etc. that are different from the first highlight clip and/or the second highlight clip, which are not limited here.
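As an illustrative sketch only (the patent does not specify an implementation), the synthesis editing step in the meteor example above can be modeled as concatenating the frame sequences of the two highlight clips and attaching optional effects; the `merge_highlights` helper and the frame/effect representation are assumptions introduced here for illustration.

```python
def merge_highlights(first_clip, second_clip, effects=None):
    """Fuse two highlight clips (modeled here as lists of frames) into one
    target clip, optionally attaching special effects, music, or filters."""
    return {
        "frames": list(first_clip) + list(second_clip),
        "effects": list(effects or []),
    }

# Two meteor clips, each a list of frame identifiers (illustrative data).
first_clip = ["meteor1_f1", "meteor1_f2"]    # first meteor falling
second_clip = ["meteor2_f1", "meteor2_f2"]   # second meteor falling
target = merge_highlights(first_clip, second_clip,
                          effects=["filter:night", "music:ambient"])
```

The fused clip keeps the frames of both devices in order, matching the described target highlight clip that contains both falling-meteor videos.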
  • the material data processing method described in the embodiment of the present application determines the multimodal information corresponding to the first material data in response to a preset instruction for the first material data triggered by the user, wherein the first material data includes at least one of the following: image data and video data, and the preset instruction is used to instruct the first device to perform a corresponding preset operation; sends a material acquisition request to the at least one second device, wherein the material acquisition request includes the multimodal information, and the multimodal information is used by the second device to screen target multimodal information matching the multimodal information, the target multimodal information corresponds to the second material data, and the second material data is determined by the corresponding second device according to the multimodal information; receives at least one material acquisition result sent by the at least one second device, wherein each second device corresponds to one material acquisition result; if any one of the material acquisition results indicates that the corresponding second device has the second material data, the preset operation is performed on the first material data and the second material data.
  • the multimodal information includes at least one of the following: GPS information, face information, scene information, subject information, aesthetic evaluation information and highlight fragment information;
  • the preset operation includes at least one of the following: storage operation, intelligent creation operation, wherein the intelligent creation operation includes at least one of the following: cropping operation, special effects beautification operation, synthesis editing operation.
  • the above-mentioned storage operation can be used to store the first material data and at least one second material data at the same time after the first device receives the second material data, so as to facilitate the next use, so as to realize the sharing of material data between the first device and the second device;
  • the above-mentioned intelligent creation operation is used to perform cropping operations, special effects beautification operations and/or synthesis editing operations on the first material data and at least one second material data, so as to realize secondary creation of the first material data and/or at least one second material data, so as to realize the optimization of the first material data.
  • the material data after secondary creation is conducive to improving user satisfaction and user experience.
  • Figure 5 is a flow chart of a material data processing method provided in an embodiment of the present application, which is applied to a second device, and the second device establishes a communication connection with the first device, and the second device is a slave device of the first device; as shown in the figure, the material data processing method includes the following operations.
  • S501. Receive a material acquisition request sent by the first device, wherein the material acquisition request includes multimodal information, and the multimodal information is determined by the first device according to first material data.
  • the multimodal information carried in the material acquisition request is used by the second device to determine whether there is target multimodal information matching the multimodal information in the multimodal database.
  • S502. Determine whether there is target multimodal information matching the multimodal information in the multimodal database.
  • the second device may include a multimodal database, and the multimodal database may be used to store multimodal data corresponding to all material data corresponding to the second device.
  • the second device can match all multimodal information in the multimodal database with the multimodal information corresponding to the first device. If any one of the multimodal information matches successfully, it can be determined that there is matching target multimodal information. If all the multimodal information matches unsuccessfully, it is determined that there is no matching target multimodal data.
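The matching step above can be sketched as follows; the dictionary representation of multimodal database entries and the per-field predicate table are illustrative assumptions, not the patent's data model. An empty result models "no matching target multimodal information exists".

```python
def find_matching_targets(request_info, multimodal_db, predicates):
    """Return database entries whose fields all match the multimodal
    information carried in the material acquisition request."""
    targets = []
    for entry in multimodal_db:
        if all(predicates[key](request_info[key], entry.get(key))
               for key in request_info if key in predicates):
            targets.append(entry)
    return targets

# Illustrative predicates: exact match on scene, label overlap on subject.
predicates = {
    "scene": lambda a, b: a == b,
    "subject": lambda a, b: bool(set(a) & set(b or [])),
}
db = [
    {"id": 1, "scene": "outdoor", "subject": ["meteor", "sky"]},
    {"id": 2, "scene": "indoor", "subject": ["table"]},
]
request = {"scene": "outdoor", "subject": ["meteor"]}
matches = find_matching_targets(request, db, predicates)
```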
  • S503. If the target multimodal information matching the multimodal information exists in the multimodal database, determine the second material data corresponding to the target multimodal information.
  • S504. Send a material acquisition result to the first device, wherein the material acquisition result includes or does not include the second material data.
  • the above-mentioned material acquisition result may include a first material acquisition result and a second material acquisition result.
  • If the target multimodal information matching the multimodal information exists in the multimodal database, a first material acquisition result is sent, and the first material acquisition result may include the second material data; if the target multimodal information matching the multimodal information does not exist in the multimodal database, a second material acquisition result is sent, and the second material acquisition result does not include the second material data.
  • the second material acquisition result can be sent to the first device, and the second material acquisition result may include a preset identifier, which is used to indicate that there is no target multimodal data in the second device that matches the multimodal data in the first device.
  • the above-mentioned preset identifier may be set by the user himself or by the system default, which is not limited here, and may also be agreed upon with the first device in advance.
  • steps S501 to S504 can refer to the corresponding steps of steps S401 to S404 of the material data processing method described in FIG. 4 , and will not be repeated here.
  • the second device receives the material acquisition request sent by the first device, wherein the material acquisition request includes multimodal information, and the multimodal information is determined by the first device according to the first material data; determines whether there is target multimodal information matching the multimodal information in the multimodal database; if the target multimodal information exists in the multimodal database, determines the second material data corresponding to the multimodal information; and sends the material acquisition result to the first device, wherein the material acquisition result includes or does not include the second material data.
  • the second device can receive the multimodal information sent by the first device in the same network, and obtain the second material data corresponding to the first material data according to the matching of the multimodal information, which is conducive to providing data reference and data support for the preset operation of the first device, and is conducive to improving the satisfaction of the first device in completing the preset operation.
  • the multimodal information includes a plurality of target first submodal information
  • the multimodal database includes a plurality of second submodal information
  • the target multimodal information includes a plurality of target second submodal information
  • the target second submodal information includes any one of the following: GPS information, face information, scene information, subject information, aesthetic evaluation information and highlight fragment information, and each of the target second submodal information has corresponding target first submodal information
  • the determining whether there is target multimodal information matching the multimodal information in the multimodal database may include the following steps: determining the matching logic corresponding to each of the second submodal information according to a preset mapping relationship between the second submodal information and the matching logic; determining the priority corresponding to each of the second submodal information according to a preset mapping relationship between the second submodal information and the priority; and screening each of the second submodal information in order of priority according to its corresponding matching logic, to obtain the target second submodal information matching each of the target first submodal information.
  • the above-mentioned target first submodal information and/or second submodal information and/or target second submodal information can be any one of GPS information, face information, scene information, subject information, aesthetic evaluation information and highlight fragment information, character relationship information and text information, etc.
  • a mapping relationship between the second sub-modal information and the matching logic can be preset in the second device, and the mapping relationship is used to characterize the screening or confirmation method of each second sub-modal information.
  • each second sub-modal information may correspond to a matching logic. After the second sub-modal information and its corresponding target first sub-modal information are screened through the matching logic, if the match is successful, it indicates that there is target multi-modal information matching the multi-modal information in the multi-modal database, and the successfully matched target second sub-modal information can be obtained. Furthermore, the material data corresponding to the target second sub-modal information can be determined, and the data volume of the material data is less than the data volume of the material data corresponding to the second sub-modal information.
  • When the corresponding second submodal information is screened through the matching logic, if none of the second submodal information contains target second submodal information that matches the target first submodal information, it is determined that the match fails, indicating that target multimodal information matching the multimodal information does not exist in the multimodal database.
  • the target first submodal information includes multiple first GPS information, that is, the target first submodal information includes the GPS information corresponding to each first image data
  • the target second submodal information includes multiple second GPS information.
  • the matching logic corresponding to the second GPS information, which is the second submodal information, can be determined based on the preset mapping relationship between the second submodal information and the matching logic.
  • after screening, at least one target second GPS information, that is, at least one target second submodal information, can be obtained.
  • the second device can determine the second material data corresponding to the target first submodal information corresponding to the first material data based on the at least one target second GPS information.
  • the second device can also preset the mapping relationship between the second submodal information and the priority.
  • there may be multiple pieces of second submodal information
  • the multiple pieces of second submodal information correspond to multiple matching logics
  • the second device can set priorities among the multiple matching logics for the multiple pieces of second submodal information, that is, the screening order of the multiple pieces of second submodal information.
  • the above-mentioned priority can be used to confirm the order of the second device to execute the matching and screening between the above-mentioned multiple second submodal information and the multiple target first submodal information, that is, to determine the order in which the second device executes the matching logic corresponding to each second submodal information.
  • the second submodal information includes three types of multimodal information, namely, second scene information, second subject information, and second GPS information of the image data;
  • the second submodal information is data stored in the multimodal database and has not yet been filtered or matched;
  • the target second submodal information includes target second scene information, target second subject information, and target second GPS information of the image data; wherein the priority of the matching logic corresponding to the scene information can be set higher than the priority of the subject information, and the priority of the matching logic of the subject information can be set higher than the priority of the matching logic of the GPS information.
  • the second device may preferentially filter out, according to the matching logic corresponding to the scene information, the target second scene information that matches the first scene information and the material data corresponding to the target second scene information; then, from the material data corresponding to the target second scene information, filter out from the second subject information, according to the matching logic corresponding to the subject information, the target second subject information that matches the first subject information and the material data corresponding to the target second subject information; finally, from the material data corresponding to the target second subject information, filter out from the second GPS information, according to the matching logic of the GPS information, the target second GPS information that matches the first GPS information and the material data corresponding to the target second GPS information, and use them as the above-mentioned target material data. The data amount of the material data corresponding to the target second GPS information is smaller than that corresponding to the target second subject information, and the data amount of the material data corresponding to the target second subject information is smaller than that corresponding to the target second scene information.
  • the gallery of the second device may include a large amount of image data
  • the shooting scene of the image data can be matched with the shooting scene in the first device through the matching logic corresponding to the shooting scene.
  • in this way, the large amount of image data is filtered a first time, and the image data matching the shooting scene in the first device is filtered out.
  • then, the subject is matched with the subject in the first device through the matching logic corresponding to the subject.
  • in this way, the large amount of image data is filtered a second time, and the image data matching both the subject and the shooting scene is obtained from the image data matching the shooting scene.
  • finally, the large amount of image data may be filtered a third time, that is, the shooting time information is matched with the time information corresponding to the first device, and the target time information is filtered out.
  • in this way, the image data matching the shooting scene, subject and time information in the first device is filtered out and used as the second material data.
  • after receiving the multimodal data sent by the first device, the second device can match out, from the multimodal database according to the priority and the matching logic, the second material data that matches the multimodal data.
  • the above-mentioned priority can be used to accurately narrow the data range, that is, to satisfy a high matching degree with the multiple types of first material data in the multimodal data, and the above-mentioned matching logic can be used to accurately screen each piece of second submodal data so as to optimize the processing of the second submodal data in the second device, which is conducive to improving the screening accuracy; moreover, in the process of matching round by round according to priority, data that better meets the user's standards can be preferentially screened out, which is conducive to obtaining more accurate second material data and improving user experience.
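A minimal sketch of the priority-ordered, stage-by-stage screening described above (scene first, then subject, then GPS); the priority table, the concrete matching logics, and the candidate representation are assumptions for illustration, not the patent's implementation.

```python
# Hypothetical matching logics, one per second submodal information type.
MATCH_LOGIC = {
    "scene": lambda first, second: first == second,
    "subject": lambda first, second: first == second,
    "gps": lambda first, second: abs(first - second) <= 0.01,  # degrees
}
# Lower number = screened earlier (scene before subject before GPS).
PRIORITY = {"scene": 0, "subject": 1, "gps": 2}

def staged_filter(first_info, candidates):
    """Screen candidates in priority order; each stage shrinks the set,
    so later matching logics run on progressively less material data."""
    remaining = list(candidates)
    for key in sorted(first_info, key=PRIORITY.get):
        logic = MATCH_LOGIC[key]
        remaining = [c for c in remaining if logic(first_info[key], c[key])]
    return remaining

candidates = [
    {"id": 1, "scene": "outdoor", "subject": "meteor", "gps": 39.904},
    {"id": 2, "scene": "outdoor", "subject": "meteor", "gps": 40.500},
    {"id": 3, "scene": "indoor", "subject": "meteor", "gps": 39.904},
]
first_info = {"scene": "outdoor", "subject": "meteor", "gps": 39.900}
result = staged_filter(first_info, candidates)
```

Because each stage only examines the survivors of the previous stage, the amount of data handled by each successive matching logic shrinks, mirroring the data-amount ordering described above.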
  • the target first submodal information is the first GPS information
  • the target second submodal information is the target second GPS information
  • the second submodal information includes multiple second GPS information
  • the target second GPS information is any one of the multiple second GPS information
  • each of the second submodal information is screened to obtain the target second submodal information
  • the above method may include the following steps: selecting the target second GPS information that is in the same preset interval as the first GPS information from the multiple second GPS information as the target second submodal information; and/or determining the second GPS accuracy corresponding to each second GPS information and the first GPS accuracy of the first GPS information; if the first GPS accuracy and/or any one of the second GPS accuracies is greater than a preset accuracy threshold, determining the information difference between the first GPS information and the second GPS information; if the information difference is less than or equal to the preset information difference, taking the second GPS information corresponding to the second GPS accuracy as the target second submodal information; if the first GPS accuracy and the second GPS accuracy are both less than or equal to the preset accuracy threshold, executing the step of selecting the target second GPS information that is in the same preset interval as the first GPS information from the multiple second GPS information as the target second submodal information.
  • GPS information can be used to confirm the latitude and longitude information, location information and other data of the captured material data; the above-mentioned preset interval can be set by the user and is not limited here.
  • the preset interval may refer to the interval corresponding to the location information indicated by the first GPS information.
  • the above-mentioned preset accuracy threshold and/or preset information difference value may be set by the user or by the system by default, and are not limited here; the preset accuracy threshold may be used to measure the GPS accuracy of the GPS hardware of the two devices.
  • If the first GPS accuracy and/or any one of the second GPS accuracies is greater than the preset accuracy threshold, it may indicate that the accuracy of the GPS hardware of the first device and/or the second device is high; conversely, if the first GPS accuracy and/or any one of the second GPS accuracies is less than or equal to the preset accuracy threshold, it may indicate that the accuracy of the GPS hardware of the first device and/or the second device is low.
  • the above-mentioned information difference value may refer to the difference of the data represented by the GPS information, and the above-mentioned preset information difference value is used to represent the matching degree between the first GPS information and the second GPS information.
  • for example, when the GPS information is used to confirm longitude and latitude information:
  • the first longitude and latitude information corresponding to the first device and the second longitude and latitude information corresponding to the second device can be determined, and the difference between the two pieces of longitude and latitude information can be obtained; the preset information difference may be set to 0.01°, and if the obtained information difference is less than 0.01°, it is confirmed that the first GPS information matches the second GPS information, and then the second material data corresponding to the second GPS information can be determined.
  • the second device may determine the first GPS accuracy corresponding to the first GPS information and the second GPS accuracy corresponding to the second GPS information through the latitude and longitude information and location information of the corresponding material data respectively.
  • the second device when the first GPS information and/or the second GPS information are used to indicate location information, it can be determined that the first GPS accuracy and/or the second GPS accuracy is high, and the second device can execute the step of selecting, according to the location information, target second GPS information in the same preset area as the first GPS information from multiple second GPS information as the target second sub-modal information.
  • the second device can determine the first longitude and latitude information corresponding to the first GPS information and the second longitude and latitude information corresponding to each second GPS information, and compare the information difference between the first longitude and latitude information and each second longitude and latitude information. If any information difference is less than or equal to the preset information difference, the second GPS information corresponding to that second longitude and latitude information is used as the target second submodal information.
  • the first GPS accuracy and/or the second GPS accuracy can be determined to be high based on the longitude and latitude, and the same steps as described above can be performed.
  • the second device executes the step of selecting, based on the location information, target second GPS information in the same preset area as the first GPS information from multiple second GPS information as the target second sub-modal information.
  • If the match is successful, the corresponding second GPS information is determined to be the target second GPS information.
  • the second device can filter each second submodal information according to indicator data such as the preset information difference and the preset accuracy threshold, and obtain the target second submodal information that matches any target first submodal information, thereby realizing the filtering of the second submodal information.
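The GPS screening rules above can be sketched as follows; the field names, the accuracy representation (a score where a higher value means a better fix), and the 5.0 threshold are assumptions introduced for illustration, while the 0.01° information difference follows the example above.

```python
def gps_matches(first, second, accuracy_threshold=5.0, preset_info_diff=0.01):
    """first/second: dicts with 'lat', 'lon', 'accuracy' (higher = better)
    and a coarse 'area' identifier. High-accuracy fixes are compared by
    latitude/longitude information difference; low-accuracy fixes fall
    back to the preset-interval (same area) comparison."""
    if (first["accuracy"] > accuracy_threshold
            or second["accuracy"] > accuracy_threshold):
        lat_diff = abs(first["lat"] - second["lat"])
        lon_diff = abs(first["lon"] - second["lon"])
        return lat_diff <= preset_info_diff and lon_diff <= preset_info_diff
    # Both fixes are coarse: match when they fall in the same preset area.
    return first["area"] == second["area"]

first = {"lat": 39.9042, "lon": 116.4074, "accuracy": 8.0, "area": "Beijing"}
close = {"lat": 39.9050, "lon": 116.4080, "accuracy": 8.0, "area": "Beijing"}
far = {"lat": 31.2304, "lon": 121.4737, "accuracy": 8.0, "area": "Shanghai"}
```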
  • the target first submodal information is first facial information
  • the target second submodal information is target second facial information
  • the second submodal information includes multiple second facial information
  • the target second facial information is any one of the multiple second facial information
  • each of the second submodal information is screened to obtain the target second submodal information
  • the above method may include the following steps: determining a character profile corresponding to the first facial information; and selecting, from the multiple second facial information, the target second facial information that matches the character profile as the target second submodal information.
  • the second device can set character profiles for different facial images; the above-mentioned facial information may include pixel points used to represent facial features, expressions and other information in the face.
  • the second device can determine any target second facial information among multiple second facial information that matches the first facial information based on the set character profile, which is conducive to the screening of multimodal information such as facial information.
  • the second device may match the first facial information and each second facial information one by one to obtain a matching target second facial image, and establish a character profile for the target second facial image to facilitate the next matching of facial information.
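The patent does not specify how facial information or character profiles are represented, so the sketch below makes simplifying assumptions: facial information is modeled as a feature vector, a character profile as the centroid of a person's known vectors, and matching as a Euclidean-distance check against a hypothetical threshold.

```python
import math

def build_profile(face_vectors):
    """Character profile modeled as the centroid of one person's known
    facial feature vectors (a simplification for illustration)."""
    dim = len(face_vectors[0])
    n = len(face_vectors)
    return [sum(v[i] for v in face_vectors) / n for i in range(dim)]

def matches_profile(face_vector, profile, max_distance=0.5):
    """Second facial information matches when its feature vector lies
    within a hypothetical distance threshold of the character profile."""
    return math.dist(face_vector, profile) <= max_distance

# Two known captures of the same person -> profile near [0.2, 0.2, 0.2].
profile = build_profile([[0.1, 0.2, 0.3], [0.3, 0.2, 0.1]])
same_person = [0.25, 0.2, 0.15]
other_person = [0.9, 0.9, 0.9]
```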
  • the scene information includes at least one of the following: time, season, weather, festival, and space.
  • the second device can characterize the scene information through time (year/month/day/morning, noon, evening/...), season (spring, summer, autumn, winter/...), weather (sunny/cloudy/rainy/snowy/...), festivals (Spring Festival/birthday/anniversary/...), space (indoor/outdoor/attractions/...), broad subject types (people/objects/animals/scenery/...), etc.
  • the second device can prioritize the above-mentioned scene information, and then match it with the target first submodal information corresponding to the first device one by one according to the priority, so as to select the matching scene information in the second submodal information as the target second submodal information.
  • the second device can filter multimodal information such as scene information, which is beneficial to improving the accuracy of the subsequently determined second material data.
  • the second submodal information or the target first submodal information is subject information
  • the target first submodal information is first subject information
  • the target second submodal information is target second subject information
  • the second submodal information includes multiple second subject information
  • the target second subject information is any one of the multiple second subject information
  • each of the second submodal information is screened to obtain the target second submodal information
  • the above method may include the following steps: determining multiple first label information corresponding to the first subject information, wherein the first label information is used to characterize the type of the first subject information in the first material data; determining multiple second label information corresponding to any one of the second subject information; matching the multiple first label information with the multiple second label information to obtain multiple matching rates, wherein each matching rate corresponds to one first label information; determining the matching number, that is, the number of matching rates among the multiple matching rates that are greater than a preset matching rate; if the matching number is greater than the preset matching number, determining that the second subject information is the target second subject information, and using the target second subject information as the target second submodal information.
  • the subject information can be used to characterize the subject in the image data or video data, and the subject can include at least one of the following: high-rise buildings, residents, ancient buildings, grasslands, forests, sky, rivers, lakes, hot pots, barbecues, tables, chairs, cats, dogs, computers, user faces, etc., which are not limited here.
  • the first label information is used to characterize the types of multiple subjects in the first material data
  • the second label information is used to characterize the types of multiple subjects in the second material data.
  • the first label information and/or the second label information may include at least one of the following: buildings (high-rise buildings/residential houses/Egyptian buildings/...), sculptures, scenery (grasslands/forests/rivers/lakes/sky/...), food (hot pot/barbecue/western food/desserts/snacks/...), natural objects (flowers/grass/trees/...), animals (cats/dogs/birds/...), daily necessities (computers/mobile phones/tables/chairs/...), etc., which are not limited here; the first label information is used to characterize the type of the first subject information in the first material data; the second label information is used to characterize the type of the second subject information in the second material data.
  • the above-mentioned preset matching number can be set by the user or by system default, and is not limited here; the preset matching number can be set to 2 or 3, etc.
  • for example, the first subject information includes 7 pieces of first label information, such as high-rise buildings, sky, lakes, flowers, grass, kittens, and puppies
  • the second subject information includes 6 pieces of second label information, such as high-rise buildings, sky, lakes, flowers, puppies, and grass. It can be obtained that 6 labels in the first subject information and the second subject information match (are the same), and 6 is greater than 4. Therefore, it can be determined that the second subject information matches the first subject information, and it can be determined that the second material data corresponding to the second subject information matches the first material data.
  • the second device can filter the multimodal information of the subject information type according to the number of matches, which is conducive to improving the accuracy of subsequent determination of the second material data.
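The label-matching step above can be sketched as follows. This is only an illustrative sketch: the function name, the set-based data shapes, and the default threshold are assumptions for illustration, not part of the application.

```python
def subject_labels_match(first_labels, second_labels, preset_matching_number=4):
    """Return True when the number of matched (identical) labels between the
    first subject information and the second subject information reaches the
    preset matching number (user-set or system default)."""
    matched = set(first_labels) & set(second_labels)
    return len(matched) >= preset_matching_number

# Example from the description: 6 labels match, and 6 >= 4, so the data match.
first = {"high-rise buildings", "sky", "lakes", "flowers", "grass", "kittens", "puppies"}
second = {"high-rise buildings", "sky", "lakes", "flowers", "puppies", "grass"}
```

When `subject_labels_match(first, second)` returns True, the second material data corresponding to the second subject information would be treated as matching the first material data.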
  • the target first submodal information and the target second submodal information are both aesthetic evaluation information
  • the target first submodal information is first aesthetic evaluation information
  • the first aesthetic evaluation information includes a first aesthetic evaluation score
  • the target second submodal information is target second aesthetic evaluation information
  • the second submodal information includes multiple second aesthetic evaluation information
  • the target second aesthetic evaluation information is any one of the multiple second aesthetic evaluation information
  • each of the second submodal information is screened to obtain the target second submodal information
  • the above method may include the following steps: if the first material data includes a target image frame, then the aesthetic evaluation score corresponding to the target image frame is used as the first aesthetic evaluation score; if the first material data includes target video data, then the average value of the aesthetic evaluation scores corresponding to the multiple image frames included in the target video data is determined, and the average value is used as the first aesthetic evaluation score; the second aesthetic evaluation scores corresponding to the multiple second aesthetic evaluation information are determined to obtain multiple second aesthetic evaluation scores; a target second aesthetic evaluation score greater than or equal to the first aesthetic evaluation score is selected from the multiple second aesthetic evaluation scores, and the second aesthetic evaluation information corresponding to the target second aesthetic evaluation score is used as the target second submodal information
  • the target image frame may be any one of a plurality of image data in the first material data.
  • the target video data may be any one of at least one video data in the first material data.
  • the second aesthetic evaluation score may be calculated in advance by the second device, and the first aesthetic evaluation score may be calculated by the first device.
  • the second device can evaluate the image data and/or video data in the first material data to determine the first aesthetic evaluation score corresponding to the first aesthetic evaluation information; and, using the first aesthetic evaluation score as the evaluation standard, select a target second aesthetic evaluation score that is greater than or equal to the first aesthetic evaluation score from the multiple second aesthetic evaluation scores, so as to screen better image data and/or video data as second material data according to the aesthetic evaluation score, which is conducive to improving the accuracy of the subsequent determination of the second material data and to obtaining target material data that better conforms to public standards or aesthetics.
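The score computation and selection described above could be sketched as below. The dictionary shapes, keys, and function names are assumptions for illustration; the application only specifies the rule (single-frame score, or mean over video frames, then keep second scores at or above the first score).

```python
from statistics import mean

def first_aesthetic_score(material):
    """First aesthetic evaluation score: the score of the target image frame,
    or the mean of the scores of the frames of the target video data
    (assumed precomputed per-frame scores)."""
    if material["type"] == "image":
        return material["score"]
    return mean(material["frame_scores"])

def select_target_second_scores(first_score, second_scores):
    """Keep only second aesthetic evaluation scores >= the first score."""
    return [s for s in second_scores if s >= first_score]
```

The second aesthetic evaluation information whose score survives `select_target_second_scores` would then serve as the target second submodal information.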
  • the method may further include the following steps: filtering out highlight segment information in the target multimodal information from the second material data; and sending the target second material data corresponding to the highlight segment information to the first device.
  • the above-mentioned highlight segment information can be a video segment composed of the image frames at highlight moments in the video data corresponding to the target multimodal information.
  • the storage space occupied on the second device by a highlight segment is much smaller than that occupied by the second material data, that is, the full video data.
  • the second device can send the target second material data corresponding to the highlight segment information to the first device.
  • the second device can send the second material data corresponding to all the target multimodal information, or send only the highlight segments, which is conducive to improving transmission efficiency, saves the first device (the main device) the time of secondary cropping of the video material data, and directly provides usable highlight segments of the video material, which helps improve user experience.
  • the above method may include the following steps: determining a target aesthetic evaluation score corresponding to each frame of the video data; selecting a video frame whose target aesthetic evaluation score is greater than or equal to a preset score value as a target video frame, and obtaining multiple target video frames; combining the multiple target video frames into a target video, and using the target video as a highlight segment.
  • the above-mentioned preset score value can be set by the user or by the system default, and is not limited here. If 5 points is the highest aesthetic evaluation score, the preset score value can be set to 4 points or 5 points.
  • the second device can select, from the video data corresponding to the second material data, target video frames that meet the highlight-moment evaluation criteria according to the target aesthetic evaluation score of each video frame and the preset score value, and combine them to obtain the highlight segments, which facilitates highlight segment generation.
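The frame-selection rule above can be sketched as follows, under the assumption that per-frame target aesthetic evaluation scores are already available as a list; the function name and the example threshold of 4.0 (out of a highest score of 5) are illustrative only.

```python
def highlight_frames(frame_scores, preset_score=4.0):
    """Indices of target video frames whose target aesthetic evaluation score
    is greater than or equal to the preset score value; these frames are then
    combined into the target video used as the highlight segment."""
    return [i for i, s in enumerate(frame_scores) if s >= preset_score]
```

Combining the frames at the returned indices, in order, yields the target video that serves as the highlight segment.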
  • the method for determining the target aesthetic evaluation score corresponding to each video frame in the video data may include the following steps: obtaining preset quality evaluation indexes for low-quality image frames and the quality evaluation parameters corresponding to each quality evaluation index; determining the target evaluation parameters corresponding to each video frame image based on the quality evaluation indexes; comparing each target evaluation parameter with the quality evaluation parameters; if any target evaluation parameter is consistent with a quality evaluation parameter, deleting the video frame image corresponding to that target evaluation parameter to obtain multiple first video frames other than the deleted video frame images; obtaining preset aesthetic evaluation indicators and the aesthetic evaluation parameters corresponding to each aesthetic evaluation indicator; performing aesthetic evaluation on the multiple first video frames based on the aesthetic evaluation indicators and their corresponding aesthetic evaluation parameters to obtain the target aesthetic evaluation score corresponding to each first video frame.
  • the second device may be configured with a quality evaluation index and an aesthetic evaluation index, and evaluate the aesthetic evaluation score of the video frame or the image frame according to the two evaluation indexes.
  • the second device can set evaluation values for the evaluation parameters in the quality evaluation standard of each category (artifacts, exposure problems, clarity and color). For example, for the color category, if the evaluation parameters in a video frame image include color cast and color overflow, it can be determined that the video frame image is a low-quality image frame, and it can be determined to delete the video frame image, and the remaining video frame images can be confirmed as the first video frame to obtain multiple first video frames.
  • the quality evaluation indexes may include the following categories: artifacts, exposure problems, clarity and color.
  • the aesthetic evaluation indicators may include at least one of the following: content semantics, color, composition and professional photography skills, etc., which are not limited here; each aesthetic evaluation indicator may correspond to 0 or at least one aesthetic evaluation parameter; the aesthetic evaluation parameters can be set according to the public aesthetics, or customized by the user, etc., which are not limited here.
  • the above mapping relationship may also include the score corresponding to each aesthetic evaluation parameter.
  • the aesthetic evaluation score corresponding to the first video frame can be determined to be 1 point; if the aesthetic evaluation parameter corresponding to its content semantics is "unclear content semantics", and the other aesthetic evaluation indicators correspond to 0 or no parameters, then the aesthetic evaluation score corresponding to the first video frame can be determined to be 2 points, and so on, to obtain the aesthetic evaluation score corresponding to each first video frame.
  • the quality of video image frames or video frames or image frames can be evaluated first, and then image frames with poor quality can be screened out; for video frames, each of the multiple screened out first video frames can be aesthetically evaluated to obtain its corresponding target aesthetic evaluation score, which is conducive to realizing the aesthetic evaluation of video frame images, improving the accuracy of subsequent determination of the second material data, and obtaining target material data that is more in line with public standards or aesthetics.
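The two-stage process above (delete low-quality frames, then score the remaining first video frames) could be sketched as below. The frame dictionaries, parameter names, and the additive scoring rule are assumptions for illustration; the application only fixes the structure: quality filtering first, then aesthetic evaluation via a parameter-to-score mapping.

```python
LOW_QUALITY_PARAMS = ["color cast", "color overflow"]  # assumed example parameters

def filter_low_quality(frames, quality_params):
    """Delete video frame images whose target evaluation parameters hit any
    preset low-quality evaluation parameter; the remaining frames are the
    first video frames that go on to aesthetic evaluation."""
    bad = set(quality_params)
    return [f for f in frames if not (set(f["params"]) & bad)]

def aesthetic_score(frame, score_map):
    """Score a first video frame using the preset mapping from aesthetic
    evaluation parameter to score (additive scoring is an assumption)."""
    return sum(score_map.get(p, 0) for p in frame["params"])
```

For example, a frame whose parameters include "color cast" would be deleted in the first stage, while surviving frames receive the sum of the scores of their matched aesthetic evaluation parameters.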
  • the above method may also include the following steps: determining privacy information; analyzing the second material data, deleting the material data related to the privacy information, and obtaining target second material data; sending the target second material data to the first device; determining a privacy tag corresponding to the privacy information, and synchronizing the privacy tag to the multimodal database to perform privacy settings on the multimodal information included therein.
  • the above-mentioned privacy information can be set by the user or by the system default, which is not limited here; the privacy information can be screenshots of the user's chat information, or information including the user's mobile phone number, etc.
  • the above-mentioned privacy labels can be labels such as prohibited sharing and private.
  • the picture/video material on the second device is analyzed, privacy labels (prohibited from sharing, private, etc.) are added to the picture/video material containing the user's personal privacy information, and that material data is removed to obtain the target second material data, which is beneficial to protecting the user's privacy security.
  • the privacy label and privacy information can be added to the multimodal database.
  • the material containing a privacy label can be automatically filtered out, which prevents picture/video material containing the user's personal privacy information from being accidentally transmitted, and is beneficial to improving user information security and data screening efficiency.
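The privacy-label filtering above can be sketched as follows. The label strings and the dictionary shape of the material records are assumptions for illustration; the application only specifies that labeled material is removed before transmission.

```python
PRIVACY_LABELS = {"prohibited sharing", "private"}  # assumed label names

def filter_private(materials, privacy_labels=PRIVACY_LABELS):
    """Remove material data carrying any privacy label, yielding the target
    second material data that is safe to send to the first device."""
    return [m for m in materials
            if not (set(m.get("labels", ())) & set(privacy_labels))]
```

Synchronizing the privacy labels into the multimodal database would then let this filter run automatically on every later request.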
  • the second material data and/or the target second material data are displayed in the second device interface in the form of thumbnails.
  • displaying the second material data and/or target second material data in the form of thumbnails is beneficial to user viewing and helps improve user experience.
  • the method may further include the following steps: displaying the material acquisition request in a pop-up box; in response to a user's selection operation in the pop-up box, executing the step of determining whether there is target multimodal information matching the multimodal information in the multimodal database.
  • the above-mentioned pop-up box method can be set by the user or by the system default, and is not limited here; a material selection channel can be provided for the user, so that the second device does not start detecting the second material data until the user of the second device clicks to agree, thereby keeping material detection between devices under user control and helping to improve transmission security.
  • Figure 6 is a flow chart of a material data processing method provided in an embodiment of the present application, which is applied to a first device, and the first device establishes a communication connection with at least one second device, and the at least one second device is a slave device of the first device; as shown in the figure, the material data processing method includes the following operations.
  • S601 In response to a preset instruction for first material data triggered by a user, determine multimodal information corresponding to the first material data.
  • the first material data includes at least one of the following: image data and video data, and the preset instruction is used to instruct the first device to perform a corresponding preset operation.
  • each second device corresponds to one material scene detection request
  • the material scene detection request includes the multimodal information
  • the material scene detection request is used to determine whether the corresponding second device has second material data of the same material application scene as the first device, and the second material data is determined by the corresponding second device according to matching the multimodal information with the multimodal database.
  • S603 Receive at least one material scene detection result sent by the at least one second device, wherein each of the second devices corresponds to one material scene detection result.
  • S605 Receive at least one material acquisition result sent by the at least one second device, wherein each of the second devices corresponds to one material acquisition result.
  • steps S601 to S606 may refer to the corresponding steps of steps S401 to S404 of the material data processing method described in FIG. 4 .
  • the material application scenario may refer to a scenario in which the first material data is applied, which may be an intelligent creation scenario, a material data storage scenario, a material data sharing scenario, etc., which is not limited here.
  • the material application scenario may also correspond to a preset operation corresponding to a preset instruction. For example, when the preset instruction is used to instruct the first device to perform an intelligent creation operation, the material application scenario is an intelligent creation scenario; when the preset instruction is used to instruct the first device to perform a storage operation, the material application scenario is a material data storage scenario, etc., which is not limited here.
  • the above-mentioned material scene detection request can be used by the first device to determine whether each second device among at least one second device has second material data with the same material application scenario as the current device. For example, when the preset instruction is used to instruct the first device to perform an intelligent creation operation, the material scene detection request is used to determine whether the intelligent creation scene corresponding to the second device is consistent with the intelligent creation scene corresponding to the first device.
  • the above-mentioned material scene detection request may include multimodal information, which can be used by the second device to determine whether its corresponding material application scenario is consistent with the material application scenario of the first device. Specifically, when there is target multimodal information matching the multimodal information of the first device in the multimodal database corresponding to the second device, it is determined that the material application scenario corresponding to the second device is consistent with that of the first device.
  • the material scene detection result may include: the second device has second material data of the same material application scene as the first device, and the second device does not have second material data of the same material application scene as the first device.
  • the second material data is obtained by the second device according to the multimodal information matching the multimodal database. Specifically, when there is target multimodal information matching the multimodal information in the multimodal database, the second device determines that the material data corresponding to the target multimodal information is the second material data.
  • if any material scene detection result indicates that the corresponding second device has second material data with the same material application scenario as the first device, it is determined that the material application scenario corresponding to that second device is consistent with that of the first device, and the first device can continue to send a material acquisition request to the corresponding second device, where the material acquisition request is used to obtain the second material data corresponding to the second device.
  • otherwise, the first device does not send a subsequent material acquisition request, and displays a first prompt message to prompt the user that material data for the same material application scenario does not exist in any of the current second devices.
  • the first device may continue to display a second prompt message, and the second prompt message is used to ask the user whether to perform a preset operation based on the local first material data, or to prompt the user to reselect the first material data to confirm whether material data for the same material application scenario as the reselected first material data exists in at least one second device.
  • the above-mentioned first material data can be applied to different material application scenarios, and the first device can determine whether it is necessary to perform subsequent preset operations based on the material application scenario.
  • the first device can first send a material scene detection request to the second device to determine whether the second device is in the same material application scenario as the first device.
  • if the second device has the second material data required by the first device, it is determined that the second device is in the same material application scenario as the first device before the second material data is acquired.
  • otherwise, the sending of the above-mentioned material acquisition request can be terminated.
  • the second device may not allow the first device to obtain the second material data; therefore, the first device can send a material acquisition request to ask whether the second device agrees to send the second material data, which helps the first device further obtain the second material data, and is conducive to protecting the user's privacy and improving the user experience.
  • the material application scenario is an intelligent creation scenario; after responding to the preset instruction for the first material data triggered by the user, the first device may also determine the intelligent creation scenario corresponding to the first material data.
  • the intelligent creation scenario corresponds to the first material data and also corresponds to the multimodal information corresponding to the first material data, that is, the first material data may be image data and/or video data of the same intelligent creation scenario.
  • the multimodal information corresponding to the first material data is obtained in the multimodal database.
  • the first device may also select, according to a mapping relationship between preset multimodal information and the intelligent creation scene, the multimodal information corresponding to the first device.
  • the multimodal information corresponding to the intelligent creation scene is matched in the multimodal database.
  • the above-mentioned intelligent creation scene may include at least one of the following: scenery during the journey, trivial scenes in life, a sky full of stars, etc., which are not limited here; the intelligent creation scene can be used to indicate the scene information corresponding to the image or video that the user wants to create, and the intelligent creation scene may correspond to theme information.
  • for "scenery during the journey", the corresponding theme information is the travel theme
  • for "trivial scenes in life", the corresponding theme information is the life theme
  • the life theme can be determined based on the first material data.
  • the scene information included in the multimodal information is different from the intelligent creation scene.
  • the intelligent creation scene is a scene with the subjective creative ideas of the user corresponding to the first device.
  • the scene information refers to the single scene information corresponding to the material data, and does not include the behavior that the user wants to create in the scene.
  • the intelligent creation scene is determined by the first device; in a specific implementation, if the first material data includes multiple images, feature recognition can be performed on each of the multiple images to determine the feature set corresponding to each image, and multiple feature sets can be obtained.
  • Each feature set may include multiple features of the corresponding image; the first device can classify the multiple feature sets according to the multiple features included in each feature set to determine the category of each feature set, and obtain multiple categories corresponding to each feature set.
  • the above categories may include at least one of the following: places, people, objects, animals, landscapes, states corresponding to people or animals or objects or landscapes, etc., which are not limited here; places may include scenic spots, parks, offices, office buildings, communities, etc.
  • the first device may integrate information of multiple categories corresponding to each of the above-mentioned feature sets according to a preset combination logic to determine the combination category corresponding to the feature set, and then obtain one or more combination categories corresponding to the feature set, and select the combination category with the most complete hierarchical structure among the multiple combination categories as the target combination category corresponding to the feature set, and then obtain a target combination category set corresponding to the feature set, which may include at least one target combination category, and then obtain each target combination category set corresponding to each feature set.
  • the first device may integrate at least one target combination category in the target combination category set corresponding to each feature set into the same set, and determine the target combination category with the largest number of occurrences, determine the theme information corresponding to the first material data according to the target combination category with the largest number of occurrences, and determine the intelligent creation scene according to the theme information.
  • the first device can pre-set the combination logic, which can be understood as a random combination of one or more of the three levels, such as scenes, people or objects or animals or landscape environments, and the states corresponding to people or animals or objects or landscape environments.
  • people or objects or animals or landscape environments belong to one level
  • the states corresponding to people or animals or objects or landscape environments belong to one level
  • the states of people correspond to people, the states of objects correspond to objects, and so on.
  • the above-mentioned hierarchical structure can be scene + person, scene + person + person state, person + person state, object, etc., which can be one layer, two layers or three layers.
  • the most complete hierarchical structure can be understood as the one with the most layers, for example, a three-layer structure of scene + person or object or animal or landscape environment + the state corresponding to person or animal or object or landscape environment.
  • the above-mentioned theme information may include at least one of the following: travel theme, life theme, work theme, sports theme, etc., which are not limited here.
  • the above-mentioned different theme information can be set by the user or by the system default, which are not limited here.
  • Each theme information may correspond to its theme range, for example, the setting of office supplies and business wear is set as a work theme, the setting of sports items and sports wear is set as a sports theme, etc., the setting of tourist attractions is set as a travel theme, etc., which are not limited here.
  • the first device may integrate information of multiple categories corresponding to a feature set A of a certain image A according to the preset combination logic, and the obtained combined categories may be a kitten basking in the sun in a park, a child swinging on a swing in a park, a bird flying in the sky, a fish swimming in the water, etc., which are not limited here.
  • a kitten basking in the sun in a park and a child swinging on a swing in a park may be used as a target combined category A corresponding to the feature set A.
  • the first device may integrate information of multiple categories corresponding to a feature set B of a certain image B according to the preset combination logic, and the obtained combined categories may be butterflies collecting nectar in a park, children swinging on a swing in a park, a mother talking to a child, birds flying in the sky, fish swimming in the water, etc., which are not limited here. In this way, butterflies collecting nectar in a park and children swinging on a swing in a park may be used as target combined categories B corresponding to the feature set B.
  • the first device may select the target combination category that appears most frequently as children swinging on swings in the park, and determine that the theme information corresponding to the target combination category is a life theme, and then determine that the "trivial scenes in life" corresponding to the life theme are used as the intelligent creation scenes in this intelligent creation operation.
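The aggregation step above (merging the target combination category sets of all feature sets and picking the most frequent one) can be sketched as follows; the function name and the use of sets of category strings are illustrative assumptions.

```python
from collections import Counter

def most_frequent_combination(category_sets):
    """Merge the target combination category sets of all feature sets and
    return the target combination category with the most occurrences; the
    theme information (and then the intelligent creation scene) is derived
    from this category."""
    counts = Counter(c for s in category_sets for c in s)
    return counts.most_common(1)[0][0]
```

In the example from the description, "children swinging on swings in the park" appears in both target combination category sets, so it would be selected and mapped to the life theme.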
  • the manner in which the first device determines the intelligent creation scene according to the at least one image for the first material data in the above step S401 is the same as the above method and will not be repeated here.
  • the first device and the second device may have deviations in their respective internal definitions of the intelligent creation scene
  • the second device directly matches the target multimodal data through the intelligent creation scene and then obtains the second material data
  • the second material data may not be what the first device wants, which is not conducive to subsequent intelligent creation and is not conducive to improving user experience.
  • the above-mentioned intelligent creation scenario can also be sent to the second device through a material scene detection request, and the multimodal information in the multimodal database corresponding to the second device can be classified according to the material application scenario. Then, when the second device receives a material scene detection request including an intelligent creation scenario, the second device can determine from the corresponding multimodal database whether there is target multimodal information that matches the current intelligent creation scene. If there is target multimodal information, the second material data corresponding to the target multimodal information can be directly determined; the second material data and the first material data are material data in the same intelligent creation scene. In this way, it is beneficial to improve the accuracy of the target material data obtained by intelligent creation, and there is no need to perform the matching of multimodal information in subsequent steps, which is conducive to improving the matching efficiency.
  • the first device detects that the material data selected by the user includes multiple images, but there may be a situation where the material application scenarios corresponding to the multiple material data are different, then the target type of the material application scenario corresponding to each image in the multiple images can be determined, and the probability of each target type can be calculated.
  • the material application scenario corresponding to the maximum probability is selected as the target material application scenario, and the material data corresponding to the target material application scenario is determined to be the first material data. In this way, when the second material data is matched, data of the same material application scenario as the first material data is obtained.
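The probability computation above can be sketched as a frequency count over the per-image scenario types; the function name and the interpretation of "probability" as relative frequency are assumptions for illustration.

```python
from collections import Counter

def target_application_scenario(image_scenarios):
    """Given the material application scenario type determined for each image,
    return the scenario with the maximum probability (relative frequency)
    together with that probability."""
    counts = Counter(image_scenarios)
    scenario, n = counts.most_common(1)[0]
    return scenario, n / len(image_scenarios)
```

The images whose scenario equals the returned target scenario would then be used as the first material data for matching.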
  • the first device detects that the material data selected by the user includes multiple images, and the multiple images correspond to different material application scenarios, that is, the first material data may correspond to multiple material application scenarios
  • the user may want to perform intelligent creation for multiple material application scenarios, that is, the first device needs to perform intelligent creation for the material data of multiple material application scenarios.
  • the above method is also applicable to the matching method of the material data for each material application scenario, and will not be repeated here.
  • the second device may also include material data corresponding to multiple material application scenarios.
  • the above-mentioned first device can directly select the multimodal information corresponding to the multiple images from the multimodal database, and send the multimodal information to multiple second devices through material acquisition requests as in steps S401-S404, or through material scene detection requests as in steps S601-S606, and each second device obtains the target multimodal information by matching against the multimodal information; at this point, it no longer matters how many identical material application scenarios there are.
  • if the second device can match target multimodal information that matches the multimodal information, it indicates that the second device has data of the same material application scenario.
  • the second material data with the same material application scenario as the first device can be matched from multiple second devices to help the first device complete the intelligent creation for multiple material application scenarios, without having to pay attention to whether the material application scenarios are the same, which is conducive to improving the efficiency of intelligent creation.
  • Figure 7 is an interactive schematic diagram of a material data processing method provided in an embodiment of the present application, which is applied to a second device, and the second device establishes a communication connection with the first device, and the second device is a slave device of the first device; as shown in the figure, the material data processing method includes the following operations.
  • S701 Receive a material scene detection request sent by the first device, wherein the material scene detection request includes multimodal information.
  • S702 Determine whether there is target multimodal information matching the multimodal information in the multimodal database.
  • S703 If target multimodal information matching the multimodal information exists in the multimodal database, determine the second material data corresponding to the target multimodal information, and determine that second material data having the same material application scenario as that of the first device exists.
  • S704 Send a material scene detection result to the first device, wherein the material scene detection result is used to indicate that the second device has second material data of the same material application scene as that of the first device.
  • S705 Receive a material acquisition request sent by the first device, where the material acquisition request is used by the first device to acquire the second material data.
  • S706 Display prompt information, wherein the prompt information is used to instruct the user to choose to send or not send the second material data.
  • S707 In response to the user's determination and sending operation on the second material data, send a material acquisition result to the first device, wherein the material acquisition result includes or does not include the second material data.
  • steps S701 to S707 can refer to the corresponding steps of steps S401 to S404 of the material data processing method described in Figure 4, and the corresponding steps of steps S501 to S504 of the material processing method described in Figure 5, which will not be repeated here.
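  • The second-device flow S701 to S707 above can be sketched, purely for illustration, as a small request handler. All function names, message fields, and the dictionary-based database below are assumptions of this sketch, not part of the embodiment.

```python
# Illustrative sketch of the second device's S701-S707 handling.
# Names (handle_scene_detection, material_id, ...) are hypothetical.

def match_target(multimodal_info, multimodal_db):
    """S702: return database entries matching every field of the request."""
    return [entry for entry in multimodal_db
            if all(entry.get(k) == v for k, v in multimodal_info.items())]

def handle_scene_detection(request, multimodal_db):
    """S701-S704: report whether second material data of the same scene exists."""
    matches = match_target(request["multimodal_info"], multimodal_db)
    return {"has_same_scene": bool(matches),
            "candidates": [m["material_id"] for m in matches]}

def handle_material_acquisition(request, matched_material, user_agrees):
    """S705-S707: only ship the second material data if the user confirms."""
    if user_agrees:
        return {"second_material_data": matched_material}
    return {"second_material_data": None}
```

  • A usage sketch: the second device answers the detection request first, and only after the user's confirmation (S706/S707) does the acquisition result actually carry the data.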
  • the material scene detection request may include multimodal information, and the multimodal information is used by the second device to determine whether there is target multimodal information matching the multimodal information in the multimodal database.
  • the second device can also determine that there is second material data with the same material application scenario as the first device, that is, it is determined that the material application scenario corresponding to the second device is consistent with that of the first device.
  • the material scene detection result includes any one of the following: the second device has second material data of the same material application scene as the first device, and the second device does not have second material data of the same material application scene as the first device.
  • The second device may send a material scene detection result, which is used to indicate that the second device has second material data for the same material application scenario as the first device; further, the second device may receive a material acquisition request sent by the first device, which is used to acquire the second material data corresponding to the second device.
  • a material scene detection result may also be sent to the first device.
  • The material scene detection result is used to indicate that the second device does not have second material data for the same material application scenario as the first device, and the subsequent process is terminated.
  • The above prompt information can be set by the user or by system default, which is not limited here; the prompt information can be used by the second device to remind the user to confirm whether they agree to send the second material data to the first device.
  • When the second device detects the user's confirmation-to-send instruction for the second material data, the target second material data can be sent to the first device.
  • The above step of determining whether there is target multimodal information matching the multimodal information in the multimodal database is similar to step S502 and its corresponding embodiments, and will not be described again.
  • The material data processing method described in the embodiment of the present application receives a material scene detection request sent by the first device, wherein the material scene detection request includes multimodal information; determines whether there is target multimodal information matching the multimodal information in the multimodal database; if target multimodal information matching the multimodal information exists in the multimodal database, determines the second material data corresponding to the multimodal information, and determines whether there is second material data with the same material application scene as the first device; sends a material scene detection result to the first device, wherein the material scene detection result is used to indicate that the second device has second material data with the same material application scene as the first device; receives a material acquisition request sent by the first device, wherein the material acquisition request is used by the first device to acquire the second material data; displays prompt information, wherein the prompt information is used to instruct the user to choose to send or not send the second material data; and, in response to the user's confirmation and sending operation for the second material data, sends a material acquisition result to the first device, wherein the material acquisition result includes or does not include the second material data.
  • The above multimodal information is used to confirm whether the material application scene corresponding to the second device is consistent with the material application scene in the first device, and the material scene detection result sent to the first device informs it whether the corresponding second device has second material data of the same material application scene, so that the first device can decide whether it needs to continue to obtain the second material data; at this stage, the second material data itself does not need to be sent. Further, receiving a material acquisition request means that the first device needs the second material data.
  • Requiring the material acquisition request also avoids the wasted work of the second device sending the second material data to the first device when the first device does not need to obtain it, which would occupy bandwidth and would not be conducive to protecting the user privacy of the second device.
  • FIG 8 is an interactive schematic diagram of a material data processing method provided in an embodiment of the present application.
  • the first device establishes a communication connection with at least one second device.
  • the first device and the at least one second device are devices of the same communication network.
  • the at least one second device is a slave device of the first device.
  • the second device in the embodiment of the present application is any one of the at least one second device.
  • the material data processing method includes the following operations.
  • the first device sends at least one material scene detection request to the at least one second device, wherein each second device corresponds to one material scene detection request, the material scene detection request includes the multimodal information, and the material scene detection request is used to determine whether the corresponding second device has second material data of the same material application scene as the first device, and the second material data is determined by the corresponding second device according to matching the multimodal information with the multimodal database.
  • The second device receives a material scene detection request sent by the first device, wherein the material scene detection request includes the multimodal information.
  • If target multimodal information matching the multimodal information exists in the multimodal database, the second device determines second material data corresponding to the multimodal information, and determines that second material data having the same material application scenario as that of the first device exists.
  • the second device sends a material scene detection result to the first device, wherein the material scene detection result is used to indicate that the second device has second material data of the same material application scene as that of the first device.
  • the first device receives at least one material scene detection result sent by the at least one second device, where each of the second devices corresponds to one material scene detection result.
  • the second device receives the material acquisition request sent by the first device, where the material acquisition request is used by the first device to acquire the second material data.
  • S808 The second device displays prompt information, wherein the prompt information is used to instruct the user to choose to send or not send the second material data.
  • In response to the user's confirmation and sending operation for the second material data, the second device executes the step of sending a material acquisition result to the first device, wherein the material acquisition result includes or does not include the second material data.
  • the first device receives at least one material acquisition result sent by the at least one second device, wherein each of the second devices corresponds to one material acquisition result.
  • steps S801-S811 can refer to the corresponding steps of steps S601-S606 of the material data processing method described in Figure 6, and the corresponding steps of steps S701-S707 of the material data processing method described in Figure 7, which are not repeated here.
  • In the material data processing method described in the embodiment of the present application, the first device sends at least one material scene detection request to the at least one second device, wherein each second device corresponds to one material scene detection request, the material scene detection request includes the multimodal information and is used to determine whether the corresponding second device has second material data of the same material application scene as the first device, and the second material data is determined by the corresponding second device by matching the multimodal information with the multimodal database; the second device receives the material scene detection request sent by the first device, wherein the material scene detection request includes the multimodal information; if target multimodal information matching the multimodal information exists in the multimodal database, the second device determines the second material data corresponding to the multimodal information and determines that second material data of the same material application scene as the first device exists; the second device sends a material scene detection result to the first device, wherein the material scene detection result is used to indicate that the second device has second material data of the same material application scene as the first device; the first device receives at least one material scene detection result sent by the at least one second device; if any material scene detection result indicates that the corresponding second device has such second material data, the first device sends a material acquisition request to the corresponding second device, wherein the material acquisition request is used by the first device to acquire the second material data; the second device receives the material acquisition request sent by the first device; the second device displays prompt information, wherein the prompt information is used to instruct the user to choose to send or not send the second material data; in response to the user's confirmation and sending operation for the second material data, the second device sends a material acquisition result to the first device, wherein the material acquisition result includes or does not include the second material data; the first device receives at least one material acquisition result sent by the at least one second device, wherein each second device corresponds to one material acquisition result; if any material acquisition result indicates that the corresponding second device has the second material data, the first device performs the preset operation on the first material data and the second material data.
  • In this way, the second material data of other second devices in the same network as the first device can be automatically adapted, and the preset operation can be performed on the first material data and the second material data, which is conducive to improving selection efficiency, helps ensure user satisfaction with the data obtained by processing the first material data and the second material data, and is conducive to improving the user experience.
  • the second device can complete the optimization of the material data in the process of determining the target multimodal information matching the multimodal information to obtain the second material data. In this way, the preferred material data can be automatically shared among multiple devices, and users do not need to manually select the tedious process of transferring materials between devices, which is conducive to improving the user experience.
  • Figures 9A and 9B are, respectively, scene schematic diagrams of an intelligent creation method.
  • the first device is a main device, and the user can trigger the intelligent creation instruction by clicking the "Intelligent Creation" module in the UI interface or display desktop of the first device. Then, the first device can respond to the intelligent creation instruction, determine the first material data selected by the user, and determine the multimodal information corresponding to the first material data, and send a material scene detection request and the multimodal information corresponding to the first material data to at least one second device in the network.
  • The first device may receive a material scene detection result sent by the second device, and after the material scene detection result indicates that the second device has second material data of the same intelligent creation scene as the first device, a dialog box pops up displaying the words [Detecting the existence of similar scene materials on other networking devices, whether to obtain]; in response to the user's confirmation operation on the display desktop, a material acquisition request is sent to the corresponding second device to obtain the second material data in the second device.
  • the first device may also receive the material scene detection result sent by the second device, and the material scene detection result indicates that the second device does not have the second material data corresponding to the intelligent creation scene.
  • the first device pops up a dialog box [Detecting the existence of similar scene materials on other networking devices].
  • Figures 9C to 9E are, respectively, scene schematic diagrams of an intelligent creation method, corresponding to Figure 9B.
  • the second device receives the material scene detection request from the first device (as shown in Figure 9C), if the second device in the same network has target multimodal information matching the multimodal information in the multimodal database of the second device, that is, the second device has second material data of the same intelligent creation scene as the first device, a material scene detection result is sent to the first device, and the material scene detection result is used to indicate that the second device has second material data of the same material application scene as the first device.
  • the second device may pop up a dialog box including the words [Other networking devices request to obtain materials].
  • a dialog box pops up and displays [Do you agree to the request of other networking devices to obtain materials?]
  • the second device sends the material acquisition result to the first device.
  • the material acquisition result can be used to indicate that the second device has second material data corresponding to the same smart creation scene as the first device, and the material acquisition result includes the second material data.
  • the dialog box including [Other networking devices request to obtain material] as shown in Figure 9C will not pop up, and the material scene detection result is directly fed back to the first device.
  • the material scene detection result can be used to indicate that the second device does not have the second material data corresponding to the intelligent creation scene of the first device.
  • the material scene detection result is sent to the first device.
  • the second device may pop up a dialog box including the words [Other networking devices request to obtain materials].
  • a dialog box pops up and displays [Do you agree to the request of other networking devices to obtain materials?]
  • If the second device receives the cancellation selected by the user, it sends the material acquisition result, which is used to indicate that the second device does not have the second material data corresponding to the first material data.
  • Figures 9F and 9G are schematic diagrams of scenarios of the first device.
  • The first device can receive the second material data sent by the at least one second device, perform the intelligent creation operation on one or more pieces of second material data together with the first material data, and display a dialog box in the display interface including [Creating]. If the second device does not receive the user's cancellation selection, the target material data shown in Figure 9G can be obtained; for example, the target material data may be a highlight clip.
  • Figure 10 is a structural diagram of an electronic device provided in an embodiment of the present application.
  • the electronic device includes a processor, a memory, a communication interface and one or more programs, which are applied to the electronic device, and the electronic device includes a first device and/or a second device.
  • the electronic device is a first device
  • the first device establishes a communication connection with at least one second device
  • the at least one second device is a slave device of the first device
  • the one or more programs are stored in the memory, and the one or more programs are configured to execute the following steps by the processor:
  • in response to a preset instruction for first material data triggered by a user, determining multimodal information corresponding to the first material data, wherein the first material data includes at least one of the following: image data and video data, and the preset instruction is used to instruct the first device to perform a corresponding preset operation;
  • the material acquisition request includes the multimodal information
  • the multimodal information is used by the second device to screen target multimodal information matching the multimodal information
  • the target multimodal information corresponds to second material data
  • the second material data is determined by the corresponding second device according to the multimodal information
  • the preset operation is performed on the first material data and the second material data.
  • the electronic device described in the embodiment of the present application determines the multimodal information corresponding to the first material data in response to a preset instruction for the first material data triggered by the user, wherein the first material data includes at least one of the following: image data and video data, and the preset instruction is used to instruct the first device to perform a corresponding preset operation; sends a material acquisition request to the at least one second device, wherein the material acquisition request includes the multimodal information, and the multimodal information is used by the second device to screen target multimodal information matching the multimodal information, the target multimodal information corresponds to the second material data, and the second material data is determined by the corresponding second device according to the multimodal information; receives at least one material acquisition result sent by the at least one second device, wherein each second device corresponds to one material acquisition result; if any one of the material acquisition results indicates that the corresponding second device has the second material data, the preset operation is performed on the first material data and the second material data.
  • the multimodal information includes at least one of the following: GPS information, face information, scene information, subject information, aesthetic evaluation information and highlight fragment information;
  • the preset operation includes at least one of the following: storage operation, intelligent creation operation, wherein the intelligent creation operation includes at least one of the following: cropping operation, special effects beautification operation, synthesis editing operation.
  • the electronic device is a second device
  • the second device establishes a communication connection with the first device
  • the second device is a slave device of the first device
  • the one or more programs are stored in the memory, and the one or more programs are configured to execute the following steps by the processor:
  • the material acquisition request includes multimodal information
  • the multimodal information is determined by the first device according to the first material data
  • the electronic device described in the embodiment of the present application receives a material acquisition request sent by the first device, wherein the material acquisition request includes multimodal information, and the multimodal information is determined by the first device based on the first material data; determines whether there is target multimodal information matching the multimodal information in the multimodal database; if the target multimodal information exists in the multimodal database, determines the second material data corresponding to the multimodal information; and sends a material acquisition result to the first device, wherein the material acquisition result includes or does not include the second material data.
  • the second device can receive the multimodal information sent by the first device in the same network, and obtain the second material data corresponding to the first material data according to the matching of the multimodal information, which is conducive to providing data reference and data support for the preset operation of the first device, and is conducive to improving the satisfaction of the first device in completing the preset operation.
  • Before sending the material acquisition result to the first device, wherein the material acquisition result includes or does not include the second material data, the program further includes instructions for performing the following steps:
  • target multimodal information matching the multimodal information exists in the multimodal database, determining second material data corresponding to the multimodal information, and determining that second material data having the same material application scenario as that of the first device exists;
  • the step of sending the material acquisition result to the first device is performed, wherein the material acquisition result includes or does not include the second material data.
  • the multimodal information includes multiple target first submodal information
  • the multimodal database includes multiple second submodal information
  • the target multimodal information includes multiple target second submodal information
  • the second submodal information or the target first submodal information or the target second submodal information includes any one of the following: GPS information, face information, scene information, subject information, aesthetic evaluation information and highlight fragment information, and any one of the target second submodal information has the target first submodal information corresponding thereto;
  • the program includes instructions for performing the following steps:
  • if any of the second sub-modal information is filtered to obtain the corresponding target second sub-modal information, it is determined that target multimodal information matching the multimodal information exists in the multimodal database;
  • if any of the second sub-modal information is not filtered to obtain the corresponding target second sub-modal information, it is determined that target multimodal information matching the multimodal information does not exist in the multimodal database.
  • the target first submodal information and the target second submodal information are GPS information
  • the target first submodal information is the first GPS information
  • the target second submodal information is the target second GPS information
  • the second submodal information includes a plurality of second GPS information
  • the target second GPS information is any one of the plurality of second GPS information
  • the program includes instructions for executing the following steps:
  • if the first GPS accuracy and/or any one of the second GPS accuracies is greater than a preset accuracy threshold, determining an information difference between the first GPS accuracy and the second GPS accuracy;
  • the target second GPS information corresponding to the second GPS accuracy is used as the target second submodal information
  • the step of selecting, from the multiple second GPS information, the target second GPS information in the same preset interval as the first GPS information as the target second submodal information is executed.
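  • One reading of the GPS matching step above is to bucket the first and second GPS coordinates into preset intervals (grid cells) and keep the second GPS entries falling into the same interval as the first. The sketch below assumes that reading; the interval size and the (lat, lon) tuple representation are hypothetical choices, not taken from the embodiment.

```python
def same_interval(coord, interval=0.01):
    """Map a (lat, lon) pair to its preset interval (grid cell)."""
    lat, lon = coord
    return (int(lat // interval), int(lon // interval))

def select_target_gps(first_gps, second_gps_list, interval=0.01):
    """Keep the second GPS information lying in the same preset
    interval as the first GPS information."""
    target_cell = same_interval(first_gps, interval)
    return [g for g in second_gps_list
            if same_interval(g, interval) == target_cell]
```

  • Coordinates a few hundred meters apart land in the same cell under the assumed 0.01-degree interval, while distant coordinates are filtered out.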
  • the target first submodal information is first facial information
  • the target second submodal information is target second facial information
  • the second submodal information includes multiple second facial information
  • the target second facial information is any one of the multiple second facial information
  • the program includes instructions for executing the following steps:
  • Target second facial information matching the character profile is selected from the multiple second facial information as the target second sub-modal information.
  • the scene information includes at least one of the following: time, season, weather, festival, and space.
  • the target first submodal information is first subject information
  • the target second submodal information is target second subject information
  • the second submodal information includes multiple second subject information
  • the target second subject information is any one of the multiple second subject information
  • the program includes instructions for executing the following steps:
  • the second subject information is determined to be the target second subject information, and the target second subject information is used as the target second sub-modal information.
  • the target first submodal information is first aesthetic evaluation information
  • the first aesthetic evaluation information includes a first aesthetic evaluation score
  • the target second submodal information is target second aesthetic evaluation information
  • the second submodal information includes multiple second aesthetic evaluation information
  • the target second aesthetic evaluation information is any one of the multiple second aesthetic evaluation information
  • the program includes instructions for executing the following steps:
  • if the first material data includes a target image frame, taking the aesthetic evaluation score corresponding to the target image frame as the first aesthetic evaluation score;
  • if the first material data includes target video data, determining an average value of the aesthetic evaluation scores corresponding to a plurality of image frames included in the target video data, and using the average value as the first aesthetic evaluation score;
  • a target second aesthetic evaluation score greater than or equal to the first aesthetic evaluation score is selected from the plurality of second aesthetic evaluation scores, and the target second aesthetic evaluation score is used as the target second submodality information.
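  • The aesthetic-score selection above reduces to: compute the first aesthetic evaluation score (the frame's own score for an image, or the mean over frames for a video), then keep the second aesthetic evaluation scores that are greater than or equal to it. A minimal sketch; the data-structure shape ("type", "frame_scores") is an illustrative assumption.

```python
def first_aesthetic_score(material):
    """Image frame: its own score; video: mean of its frames' scores."""
    if material["type"] == "image":
        return material["score"]
    frame_scores = material["frame_scores"]
    return sum(frame_scores) / len(frame_scores)

def select_second_scores(first_score, second_scores):
    """Keep target second aesthetic scores >= the first aesthetic score."""
    return [s for s in second_scores if s >= first_score]
```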
  • the program further includes instructions for executing the following steps:
  • the target second material data corresponding to the highlight segment information is sent to the first device.
  • the highlight segment information includes a highlight segment
  • the program includes instructions for performing the following steps:
  • a plurality of the target video frames are combined into a target video, and the target video is used as the highlight segment.
  • the program includes instructions for performing the following steps:
  • the video frame image corresponding to the target evaluation parameter is deleted to obtain a plurality of first video frames other than the video frame image;
  • an aesthetic evaluation is performed on the multiple first video frames to obtain a target aesthetic evaluation score corresponding to each of the first video frames.
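  • The two steps above (delete frames matching the target evaluation parameter, then aesthetically score the remaining first video frames) can be sketched as one filtering-and-scoring pass. The predicate and scoring function below are placeholders for whatever evaluation the device actually applies.

```python
def filter_and_score(frames, is_bad, score_fn):
    """Delete video frame images matching the target evaluation parameter
    (is_bad), then compute a target aesthetic evaluation score for each
    remaining first video frame."""
    first_frames = [f for f in frames if not is_bad(f)]
    return [(f, score_fn(f)) for f in first_frames]
```

  • For example, with a hypothetical blur measure, blurred frames are dropped before scoring, so only usable frames contribute to the highlight selection.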
  • the program further includes instructions for executing the following steps:
  • Analyze the second material data, delete the material data related to the privacy information, and obtain the target second material data;
  • a privacy tag corresponding to the private information is determined, and the privacy tag is synchronized to the multimodal database to perform privacy settings on the multimodal information included therein.
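  • The privacy handling above amounts to dropping material items carrying privacy-related information and collecting the corresponding privacy tags for synchronization to the multimodal database. A sketch under that assumption; the tag-list representation is hypothetical.

```python
def strip_private(second_material, privacy_keywords):
    """Drop items tagged with privacy-related information and collect
    the privacy tags so they can be synchronized to the multimodal
    database for privacy settings."""
    kept = [m for m in second_material
            if not any(tag in privacy_keywords for tag in m["tags"])]
    privacy_tags = sorted({tag for m in second_material
                           for tag in m["tags"] if tag in privacy_keywords})
    return kept, privacy_tags
```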
  • the second material data and/or the target second material data are displayed in the second device interface in the form of thumbnails.
  • After receiving the material acquisition request sent by the first device, the program further includes instructions for executing the following steps:
  • the step of determining whether there is target multimodal information matching the multimodal information in the multimodal database is performed.
  • the electronic device is a first device
  • the first device establishes a communication connection with at least one second device
  • the at least one second device is a slave device of the first device; wherein the one or more programs are stored in the memory, and the one or more programs are configured to execute the following steps by the processor:
  • in response to a preset instruction for first material data triggered by a user, determining multimodal information corresponding to the first material data, wherein the first material data includes at least one of the following: image data and video data, and the preset instruction is used to instruct the first device to perform a corresponding preset operation;
  • each second device corresponds to one material scene detection request
  • the material scene detection request includes the multimodal information
  • the material scene detection request is used to determine whether the corresponding second device has second material data of the same material application scene as that of the first device, and the second material data is determined by the corresponding second device according to matching the multimodal information with the multimodal database;
  • each of the second devices corresponds to one material scene detection result
  • any of the material scene detection results indicates that the corresponding second device has second material data of the same material application scene as that of the first device, sending the material acquisition request to the corresponding second device;
  • the preset operation is performed on the first material data and the second material data.
  • The electronic device described in the embodiment of the present application determines the multimodal information corresponding to the first material data in response to a preset instruction for the first material data triggered by the user, wherein the first material data includes at least one of the following: image data and video data, and the preset instruction is used to instruct the first device to perform the corresponding preset operation; sends at least one material scene detection request to the at least one second device, wherein each second device corresponds to one material scene detection request, the material scene detection request includes the multimodal information, and the material scene detection request is used to determine whether the corresponding second device has second material data with the same material application scene as the first device, and the second material data is determined by the corresponding second device by matching the multimodal information with the multimodal database; receives at least one material scene detection result sent by the at least one second device, wherein each second device corresponds to one material scene detection result; if any one of the material scene detection results indicates that the corresponding second device has second material data with the same material application scene as the first device, sends the material acquisition request to the corresponding second device; and if any material acquisition result indicates that the corresponding second device has the second material data, performs the preset operation on the first material data and the second material data.
  • the above-mentioned first material data can be applied to different material application scenarios, and the first device can determine whether it is necessary to perform subsequent preset operations according to the material application scenario.
  • the first device can first send a material scene detection request to the second device to determine whether the second device is in the same material application scenario as the first device.
  • when the second device has the second material data required by the first device, it is first determined that the second device is in the same material application scenario as the first device, and only then is the second material data acquired.
  • when the second device does not have the second material data required by the first device, the above-mentioned material acquisition request can be terminated.
  • the second device may not allow the first device to acquire the second material data. Therefore, the first device can send a material acquisition request to the second device to inquire whether the second device agrees to send the second material data to help the first device further acquire the second material data. This is conducive to protecting the privacy of the user and improving the user experience.
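The two-phase exchange described above (scene detection first, material acquisition only on a match) can be sketched as follows. This is an illustrative sketch, not code from the patent; the function names `detect_scene` and `fetch_material` are hypothetical placeholders for the material scene detection request and the material acquisition request.

```python
def gather_second_material(multimodal_info, second_devices,
                           detect_scene, fetch_material):
    """Collect second material data from every second device whose
    material application scene matches that of the first device.

    detect_scene(device, info) -> bool      # scene detection request
    fetch_material(device) -> object | None # acquisition request; the
                                            # second device may refuse
    """
    collected = []
    for device in second_devices:
        # Phase 1: send the scene detection request carrying the
        # multimodal information; on a mismatch, terminate early and
        # never request the material itself.
        if not detect_scene(device, multimodal_info):
            continue
        # Phase 2: send the material acquisition request; the second
        # device (or its user) may still decline to send the data.
        material = fetch_material(device)
        if material is not None:
            collected.append(material)
    return collected
```

A sketch like this makes explicit why the early termination saves bandwidth: non-matching devices are never asked to transmit material data at all.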
  • the electronic device is a second device
  • the second device establishes a communication connection with the first device
  • the second device is a slave device of the first device
  • the one or more programs are stored in the memory, and the one or more programs are configured to execute the following steps by the processor:
  • if target multimodal information matching the multimodal information exists in the multimodal database, determining second material data corresponding to the multimodal information, and determining that second material data having the same material application scenario as that of the first device exists;
  • a material acquisition result is sent to the first device, wherein the material acquisition result includes or does not include the second material data.
  • the electronic device described in the embodiment of the present application receives a material scene detection request sent by the first device, wherein the material scene detection request includes multimodal information; determines whether there is target multimodal information matching the multimodal information in the multimodal database; if there is target multimodal information matching the multimodal information in the multimodal database, determines the second material data corresponding to the multimodal information, and determines whether there is second material data with the same material application scene as the first device; sends a material scene detection result to the first device, wherein the material scene detection result is used to indicate that the second device has second material data with the same material application scene as the first device; receives a material acquisition request sent by the first device, wherein the material acquisition request is used for the first device to acquire the second material data; displays prompt information, wherein the prompt information is used to prompt the user to choose to send or not to send the second material data; and, in response to the user's determination to send the second material data, sends a material acquisition result to the first device, wherein the material acquisition result includes or does not include the second material data.
  • the above multimodal information is used to confirm whether the material application scene corresponding to the second device is consistent with the material application scene of the first device, and the material scene detection result is sent to the first device to inform the first device whether the second device has second material data of the same material application scene, so that the first device can determine whether it needs to continue acquiring the second material data; at this point the second material data itself does not need to be sent. Further, receipt of the material acquisition request indicates that the first device needs the second material data.
  • choosing to send or not send the second material data is conducive to providing more options for the second device, for example, to confirm with the user corresponding to the second device whether to allow the second material data to be sent, which is conducive to protecting the user's privacy and improving the user experience.
  • the material acquisition request also prevents the second device from sending the second material data to the first device in vain when the first device does not need it, which would occupy bandwidth and be detrimental to ensuring the user privacy of the second device.
  • the electronic device includes a hardware structure and/or software module corresponding to the execution of each function.
  • the present application can be implemented in the form of hardware or a combination of hardware and computer software. Whether a function is executed by hardware or by computer software driving hardware depends on the specific application and design constraints of the technical solution. Those skilled in the art may use different methods to implement the described functions for each specific application, but such implementation should not be considered to exceed the scope of the present application.
  • the embodiment of the present application can divide the electronic device into functional units according to the above method example.
  • each functional unit can be divided according to each function, or two or more functions can be integrated into one processing unit.
  • the above integrated unit can be implemented in the form of hardware or in the form of software functional units. It should be noted that the division of units in the embodiment of the present application is schematic and is only a logical function division. There may be other division methods in actual implementation.
  • FIG. 11 shows a schematic diagram of a material data processing device.
  • the device is applied to a first device, the first device establishes a communication connection with at least one second device, and the at least one second device is a slave device of the first device; the device includes: a determining unit, a sending unit, a receiving unit and an executing unit.
  • the material data processing device 1100 may include: a determining unit 1101, a sending unit 1102, a receiving unit 1103 and an executing unit 1104, wherein,
  • the determining unit 1101 is configured to determine, in response to a preset instruction for first material data triggered by a user, multimodal information corresponding to the first material data, wherein the first material data includes at least one of the following: image data and video data, and the preset instruction is used to instruct the first device to perform a corresponding preset operation;
  • the sending unit 1102 is configured to send a material acquisition request to the at least one second device, wherein the material acquisition request includes the multimodal information, the multimodal information is used by the second device to screen target multimodal information matching the multimodal information, the target multimodal information corresponds to second material data, and the second material data is determined by the corresponding second device according to the multimodal information;
  • the receiving unit 1103 is configured to receive at least one material acquisition result sent by the at least one second device, wherein each of the second devices corresponds to one material acquisition result;
  • the execution unit 1104 is configured to execute the preset operation on the first material data and the second material data if any one of the material acquisition results indicates that the corresponding second device has the second material data.
  • the material data processing device described in the embodiment of the present application determines the multimodal information corresponding to the first material data in response to a preset instruction for the first material data triggered by the user, wherein the first material data includes at least one of the following: image data and video data, and the preset instruction is used to instruct the first device to perform a corresponding preset operation; sends a material acquisition request to the at least one second device, wherein the material acquisition request includes the multimodal information, and the multimodal information is used by the second device to screen target multimodal information matching the multimodal information, the target multimodal information corresponds to the second material data, and the second material data is determined by the corresponding second device according to the multimodal information; receives at least one material acquisition result sent by the at least one second device, wherein each second device corresponds to one material acquisition result; if any one of the material acquisition results indicates that the corresponding second device has the second material data, the preset operation is performed on the first material data and the second material data.
  • before sending the material acquisition request to the at least one second device, the execution unit 1104 is further configured to:
  • each second device corresponds to a material scene detection request
  • the material scene detection request includes the multimodal information
  • the material scene detection request is used to determine whether the corresponding second device has second material data with the same material application scenario as that of the first device, the second material data being determined by the corresponding second device according to matching the multimodal information with a multimodal database;
  • each of the second devices corresponds to one material scene detection result
  • the step of sending the material acquisition request to the corresponding second device is executed, wherein the material acquisition request is used to acquire the second material data.
  • FIG. 12A shows a schematic diagram of a material data processing device.
  • the device is applied to a second device, the second device establishes a communication connection with a first device, and the second device is a slave device of the first device;
  • the material data processing device 1200 may include: a receiving unit 1201, a determining unit 1202, and a sending unit 1203, wherein:
  • the receiving unit 1201 is configured to receive a material acquisition request sent by the first device, wherein the material acquisition request includes multimodal information, and the multimodal information is determined by the first device according to the first material data;
  • the determining unit 1202 is used to determine whether there is target multimodal information matching the multimodal information in the multimodal database;
  • the determining unit 1202 is further configured to determine second material data corresponding to the multimodal information if the target multimodal information exists in the multimodal database;
  • the sending unit 1203 is further used to send a material acquisition result to the first device, wherein the material acquisition result includes or does not include the second material data, and wherein the second material data is used by the first device to intelligently create the first material data to obtain target material data.
  • the material data processing device described in the embodiment of the present application receives a material acquisition request sent by the first device, wherein the material acquisition request includes multimodal information, and the multimodal information is determined by the first device based on the first material data; determines whether there is target multimodal information matching the multimodal information in the multimodal database; if the target multimodal information exists in the multimodal database, determines the second material data corresponding to the multimodal information; and sends a material acquisition result to the first device, wherein the material acquisition result includes or does not include the second material data.
  • the second device can receive the multimodal information sent by the first device in the same network, and obtain the second material data corresponding to the first material data according to the matching of the multimodal information, which is conducive to providing data reference and data support for the preset operation of the first device, and is conducive to improving the satisfaction of the first device in completing the preset operation.
  • before sending the material acquisition result to the first device, wherein the material acquisition result includes or does not include the second material data, the sending unit is further configured to:
  • second material data corresponding to the multimodal information is determined, and second material data having the same material application scenario as that of the first device is determined;
  • the step of sending the material acquisition result to the first device is performed, wherein the material acquisition result includes or does not include the second material data.
  • the multimodal information includes multiple target first submodal information
  • the multimodal database includes multiple second submodal information
  • the target multimodal information includes multiple target second submodal information
  • the second submodal information or the target first submodal information or the target second submodal information includes any one of the following: GPS information, face information, scene information, subject information, aesthetic evaluation information and highlight segment information, and any one of the target second submodal information has corresponding target first submodal information; in terms of determining whether there is target multimodal information matching the multimodal information in the multimodal database, the above-mentioned determining unit 1202 is specifically used to:
  • if any of the second sub-modal information is filtered to obtain corresponding target second sub-modal information, it is determined that target multi-modal information matching the multi-modal information exists in the multi-modal database;
  • if any of the second sub-modal information is not filtered to obtain the corresponding target second sub-modal information, it is determined that no target multimodal information matching the multimodal information exists in the multimodal database.
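The matching rule above (a database entry counts as target multimodal information only when every submodality is filtered through) can be sketched as a simple dictionary match. This is an illustrative sketch rather than the patent's actual implementation; representing each submodality as a dict key and using equality as the per-submodality filter are assumptions for clarity.

```python
def match_multimodal(first_info, database):
    """Return the multimodal database entries matching first_info.

    first_info: dict of target first submodal information,
                e.g. {"gps": ..., "face": ..., "scene": ...}
    database:   list of dicts of second submodal information.

    An entry matches only if EVERY submodality present in first_info
    exists in the entry and passes the filter (here: equality); if any
    single submodality fails, the whole entry is rejected.
    """
    return [entry for entry in database
            if all(entry.get(key) == value
                   for key, value in first_info.items())]
```

In practice each submodality would use its own filter (GPS interval, face profile, score threshold, and so on), but the all-or-nothing structure of the match is the same.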
  • the target first submodal information and the target second submodal information are GPS information
  • the target first submodal information is the first GPS information
  • the target second submodal information is the target second GPS information
  • the second submodal information includes a plurality of second GPS information
  • the target second GPS information is any one of the plurality of second GPS information
  • the determining unit 1202 is specifically used for:
  • if the first GPS accuracy and/or any one of the second GPS accuracies is greater than a preset accuracy threshold, an information difference between the first GPS accuracy and the second GPS accuracy is determined;
  • the target second GPS information corresponding to the second GPS accuracy is used as the target second submodal information
  • the step of selecting, from the multiple second GPS information, the target second GPS information in the same preset interval as the first GPS information is executed, and the selected information is used as the target second submodal information.
  • the target first submodal information is first facial information
  • the target second submodal information is target second facial information
  • the second submodal information includes multiple second facial information
  • the target second facial information is any one of the multiple second facial information
  • the determining unit 1202 is specifically used for:
  • target second facial information matching the character profile is selected from the multiple second facial information as the target second sub-modal information.
  • the scene information includes at least one of the following: time, season, weather, festival, and space.
  • the target first submodal information is first subject information
  • the target second submodal information is target second subject information
  • the second submodal information includes multiple second subject information
  • the target second subject information is any one of the multiple second subject information
  • the determination unit 1202 is specifically used to:
  • the second subject information is determined to be the target second subject information, and the target second subject information is used as the target second sub-modal information.
  • the target first submodal information is first aesthetic evaluation information
  • the first aesthetic evaluation information includes a first aesthetic evaluation score
  • the target second submodal information is target second aesthetic evaluation information
  • the second submodal information includes multiple second aesthetic evaluation information
  • the target second aesthetic evaluation information is any one of the multiple second aesthetic evaluation information
  • the determining unit 1202 is specifically used for:
  • if the first material data includes a target image frame, the aesthetic evaluation score corresponding to the target image frame is taken as the first aesthetic evaluation score;
  • if the first material data includes target video data, an average value of the aesthetic evaluation scores corresponding to the multiple image frames included in the target video data is determined, and the average value is used as the first aesthetic evaluation score;
  • a target second aesthetic evaluation score greater than or equal to the first aesthetic evaluation score is selected from the plurality of second aesthetic evaluation scores, and the target second aesthetic evaluation score is used as the target second submodality information.
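The aesthetic-score branch above (a single frame's score for an image, the mean frame score for a video, then selecting second scores at least as high) can be sketched directly. This is an illustrative sketch, not the patent's implementation; `score_of_frame` is a hypothetical stand-in for whatever aesthetic evaluation model produces per-frame scores.

```python
def first_aesthetic_score(material, score_of_frame):
    """First aesthetic evaluation score of the first material data:
    the frame's own score for an image (one frame), or the average
    of the per-frame scores for video data (multiple frames)."""
    frames = material["frames"]
    if len(frames) == 1:                 # target image frame
        return score_of_frame(frames[0])
    scores = [score_of_frame(f) for f in frames]  # target video data
    return sum(scores) / len(scores)

def select_second_scores(first_score, second_scores):
    """Target second aesthetic evaluation scores: those greater than
    or equal to the first aesthetic evaluation score."""
    return [s for s in second_scores if s >= first_score]
```

The >= comparison means only second material at least as aesthetically rated as the first material is offered back, which is consistent with the selection rule stated above.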
  • the sending unit 1203 is specifically configured to:
  • the target second material data corresponding to the highlight segment information is sent to the first device.
  • the highlight segment information includes highlight segments; consistent with the above-mentioned Figure 12A, as shown in Figure 12B, which is a schematic diagram of a material data processing device, the material data processing device 1200 may also include: a combination unit 1204, where the combination unit 1204 is used to: determine the target aesthetic evaluation score corresponding to each frame of the video data; select the video frames whose target aesthetic evaluation score is greater than or equal to the preset score value as target video frames, obtaining multiple target video frames; and combine the multiple target video frames into a target video, using the target video as the highlight segment.
  • the determining unit 1202 is specifically configured to:
  • the video frame image corresponding to the target evaluation parameter is deleted to obtain a plurality of first video frames other than the video frame image;
  • an aesthetic evaluation is performed on the multiple first video frames to obtain a target aesthetic evaluation score corresponding to each of the first video frames.
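Putting the last few steps together (drop frames flagged by the target evaluation parameter, score the remaining first video frames, and keep those at or above the preset score value as the highlight segment) gives the following sketch. It is illustrative only; `is_defective` is a hypothetical predicate standing in for the patent's "target evaluation parameter" check, and `score_of_frame` for the aesthetic evaluation.

```python
def extract_highlight(frames, score_of_frame, is_defective, threshold):
    """Return the target video frames forming the highlight segment.

    1. Delete video frame images matching the target evaluation
       parameter (e.g. blurred or defective frames).
    2. Aesthetically evaluate the remaining first video frames.
    3. Keep frames whose target aesthetic evaluation score is
       greater than or equal to the preset score value.
    """
    usable = [f for f in frames if not is_defective(f)]
    return [f for f in usable if score_of_frame(f) >= threshold]
```

The kept frames would then be combined in order into the target video that serves as the highlight segment.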
  • the determining unit 1202 is further configured to:
  • analyze the second material data, delete the material data related to the privacy information, and obtain target second material data;
  • a privacy tag corresponding to the private information is determined, and the privacy tag is synchronized to the multimodal database to perform privacy settings on the multimodal information included therein.
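The privacy step above, removing privacy-related items before sending and collecting the corresponding privacy tags to synchronize back to the multimodal database, can be sketched as follows. This is an illustrative sketch; the item shape (a dict with `"id"` and `"tags"`) is an assumption made for the example.

```python
def filter_private(second_material, privacy_tags):
    """Split second material data into the target second material data
    safe to send and the set of privacy tags that were hit.

    second_material: list of items like {"id": ..., "tags": [...]}
    privacy_tags:    set of tags considered private.
    """
    kept, found_tags = [], set()
    for item in second_material:
        hit = privacy_tags & set(item["tags"])
        if hit:
            # Item relates to privacy information: drop it, and record
            # its privacy tags for synchronization to the database.
            found_tags |= hit
        else:
            kept.append(item)
    return kept, found_tags
```

Synchronizing `found_tags` to the multimodal database lets future scene-detection matches skip privacy-tagged entries without re-analyzing the material.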
  • the second material data and/or the target second material data are displayed in the second device interface in the form of thumbnails.
  • the determining unit 1202 is further configured to: display the material acquisition request in a pop-up window;
  • the step of determining whether there is target multimodal information matching the multimodal information in the multimodal database is performed.
  • FIG. 13 shows a schematic diagram of a material data processing device.
  • the device is applied to a first device, the first device establishes a communication connection with at least one second device, and the at least one second device is a slave device of the first device;
  • the material data processing device 1300 may include: a determining unit 1301, a sending unit 1302, a receiving unit 1303 and an executing unit 1304, wherein:
  • the determining unit 1301 is configured to determine, in response to a preset instruction for first material data triggered by a user, multimodal information corresponding to the first material data, wherein the first material data includes at least one of the following: image data and video data, and the preset instruction is used to instruct the first device to perform a corresponding preset operation;
  • the sending unit 1302 is configured to send at least one material scene detection request to the at least one second device, wherein each second device corresponds to one material scene detection request, the material scene detection request includes the multimodal information, and the material scene detection request is used to determine whether the corresponding second device has second material data of the same material application scene as the first device, and the second material data is determined by the corresponding second device according to matching the multimodal information with the multimodal database;
  • the receiving unit 1303 is configured to receive at least one material scene detection result sent by the at least one second device, wherein each of the second devices corresponds to one material scene detection result;
  • the sending unit 1302 is further configured to send the material acquisition request to the corresponding second device if any of the material scene detection results indicates that the corresponding second device has second material data of the same material application scene as the first device;
  • the receiving unit 1303 is further configured to receive at least one material acquisition result sent by the at least one second device, wherein each of the second devices corresponds to one material acquisition result;
  • the execution unit 1304 is configured to execute the preset operation on the first material data and the second material data if any one of the material acquisition results indicates that the corresponding second device has the second material data.
  • the material data processing device described in the embodiment of the present application determines the multimodal information corresponding to the first material data in response to a preset instruction for the first material data triggered by the user, wherein the first material data includes at least one of the following: image data and video data, and the preset instruction is used to instruct the first device to perform a corresponding preset operation; sends at least one material scene detection request to the at least one second device, wherein each second device corresponds to one material scene detection request, the material scene detection request includes the multimodal information and is used to determine whether the corresponding second device has second material data with the same material application scene as the first device, and the second material data is determined by the corresponding second device according to matching the multimodal information with the multimodal database; receives at least one material scene detection result sent by the at least one second device, wherein each second device corresponds to one material scene detection result; if any of the material scene detection results indicates that the corresponding second device has second material data of the same material application scene as the first device, sends the material acquisition request to the corresponding second device; receives at least one material acquisition result sent by the at least one second device; and, if any one of the material acquisition results indicates that the corresponding second device has the second material data, performs the preset operation on the first material data and the second material data.
  • the above-mentioned first material data can be applied to different material application scenes, and the first device can determine whether to perform subsequent preset operations according to the material application scene. Considering that the material data required by the first device may not exist in the second device, the first device can first send a material scene detection request to the second device to determine whether the second device is in the same material application scene as the first device. When the second device has the second material data required by the first device, it is determined that the second device is in the same material application scene as the first device before the second material data is acquired. When the second device does not have the second material data required by the first device, the above-mentioned material acquisition request can be terminated.
  • the second device may not allow the first device to obtain the second material data. Therefore, the first device can send a material acquisition request to the second device to ask whether the second device agrees to send the second material data to help the first device further obtain the second material data. This is conducive to protecting the user's privacy and improving the user experience.
  • FIG. 14 shows a schematic diagram of a material data processing device.
  • the device is applied to a second device, the second device establishes a communication connection with a first device, and the second device is a slave device of the first device;
  • the material data processing device 1400 may include: a receiving unit 1401, a determining unit 1402, a sending unit 1403 and a display unit 1404, wherein:
  • the receiving unit 1401 is configured to receive a material scene detection request sent by the first device, wherein the material scene request includes multimodal information;
  • the determining unit 1402 is configured to determine whether there is target multimodal information matching the multimodal information in the multimodal database;
  • the determining unit 1402 is further configured to determine, if target multimodal information matching the multimodal information exists in the multimodal database, second material data corresponding to the multimodal information, and determine whether second material data having the same material application scenario as that of the first device exists;
  • the sending unit 1403 is used to send the material scene detection result to the first device, wherein the material scene detection result is used to indicate that the second device has second material data of the same material application scene as the first device;
  • the receiving unit 1401 is further configured to receive a material acquisition request sent by the first device, wherein the material acquisition request is used by the first device to acquire the second material data;
  • the display unit 1404 is used to display prompt information, wherein the prompt information is used to instruct the user to choose to send or not to send the second material data;
  • the sending unit 1403 is configured to send a material acquisition result to the first device in response to the user's determination to send the second material data, wherein the material acquisition result includes or does not include the second material data.
  • the material data processing device described in the embodiment of the present application receives a material scene detection request sent by the first device, wherein the material scene detection request includes multimodal information; determines whether there is target multimodal information matching the multimodal information in the multimodal database; if there is target multimodal information matching the multimodal information in the multimodal database, determines the second material data corresponding to the multimodal information, and determines whether there is second material data with the same material application scene as the first device; sends a material scene detection result to the first device, wherein the material scene detection result is used to indicate that the second device has second material data with the same material application scene as the first device; receives a material acquisition request sent by the first device, wherein the material acquisition request is used for the first device to acquire the second material data; displays prompt information, wherein the prompt information is used to prompt the user to choose to send or not to send the second material data; and, in response to the user's determination to send the second material data, sends a material acquisition result to the first device, wherein the material acquisition result includes or does not include the second material data.
  • the above multimodal information is used to confirm whether the material application scene corresponding to the second device is consistent with the material application scene of the first device, and the material scene detection result is sent to the first device to inform the first device whether the second device has second material data of the same material application scene, so that the first device can determine whether it needs to continue acquiring the second material data; at this point there is no need to send the second material data. Further, receiving the material acquisition request means that the first device needs the second material data.
  • the material acquisition request also avoids the useless work of the second device sending the second material data to the first device when the first device does not need it, which would occupy bandwidth and be detrimental to ensuring the user privacy of the second device.
  • the electronic device provided in this embodiment is used to execute the above-mentioned material data processing method, and thus can achieve the same effect as the above-mentioned implementation method.
  • the electronic device may include a processing module, a storage module and a communication module.
  • the processing module may be used to control and manage the actions of the electronic device, for example, it may be used to support the electronic device in executing the steps performed by the above-mentioned determining unit 1101, the sending unit 1102, and the like.
  • the storage module can be used to support the electronic device in storing program code, data, and the like.
  • the communication module can be used to support the electronic device to communicate with other devices.
  • the processing module can be a processor or a controller. It can implement or execute various exemplary logic boxes, modules and circuits described in conjunction with the disclosure of this application.
  • the processor can also be a combination that implements computing functions, for example, a combination of one or more microprocessors, or a combination of a digital signal processor (DSP) and a microprocessor, etc.
  • the storage module can be a memory.
  • the communication module can specifically be a radio frequency circuit, a Bluetooth chip, a Wi-Fi chip, or other devices that interact with other electronic devices.
  • An embodiment of the present application also provides a computer storage medium, wherein the computer storage medium stores a computer program for electronic data exchange, wherein the computer program enables a computer to execute part or all of the steps of any method recorded in the above method embodiments, and the above computer includes an electronic device.
  • the present application also provides a computer program product, which includes a non-transitory computer-readable storage medium storing a computer program, and the computer program is operable to cause a computer to execute some or all of the steps of any method described in the method embodiment.
  • the computer program product may be a software installation package, and the computer includes an electronic device.
  • the disclosed device can be implemented in other ways.
  • the device embodiments described above are only schematic, such as the division of the above-mentioned units, which is only a logical function division. There may be other division methods in actual implementation, such as multiple units or components can be combined or integrated into another system, or some features can be ignored or not executed.
  • Another point is that the mutual coupling or direct coupling or communication connection shown or discussed can be through some interfaces, and the indirect coupling or communication connection of devices or units can be electrical or other forms.
  • the units described above as separate components may or may not be physically separated, and the components shown as units may or may not be physical units, that is, they may be located in one place or distributed on multiple network units. Some or all of the units may be selected according to actual needs to achieve the purpose of the solution of this embodiment.
  • each functional unit in each embodiment of the present application may be integrated into one processing unit, or each unit may exist physically separately, or two or more units may be integrated into one unit.
  • the above-mentioned integrated unit may be implemented in the form of hardware or in the form of software functional units.
  • if the above-mentioned integrated unit is implemented in the form of a software functional unit and sold or used as an independent product, it can be stored in a computer-readable memory.
  • based on this understanding, the computer software product is stored in a memory and includes several instructions that cause a computer device (which can be a personal computer, a server, a network device, etc.) to execute all or part of the steps of the above-mentioned methods of each embodiment of the present application.
  • the aforementioned memory includes media that can store program code, such as a USB flash drive, read-only memory (ROM), random access memory (RAM), a removable hard disk, a magnetic disk or an optical disk.
  • a person skilled in the art may understand that all or part of the steps in the various methods of the above embodiments may be completed by instructing the relevant hardware through a program, and the program may be stored in a computer-readable memory, which may include: a flash drive, a read-only memory, a random access memory, a magnetic disk or an optical disk, etc.


Abstract

An embodiment of the present application discloses a material data processing method and related products. The method includes: in response to a user-triggered preset instruction for first material data, determining multimodal information corresponding to the first material data, the preset instruction being used to instruct a first device to perform a corresponding preset operation; sending a material acquisition request to at least one second device; receiving at least one material acquisition result sent by the at least one second device, each second device corresponding to one material acquisition result; and, if any material acquisition result indicates that the corresponding second device has second material data, performing the preset operation on the first material data and the second material data.

Description

Material data processing method and related products

The present application claims priority to Chinese patent application No. 2022114175364, titled "Material data processing method and related products", filed with the China National Intellectual Property Administration on November 11, 2022, the entire contents of which are incorporated herein by reference.

Technical field

The present application relates to the technical field of electronic devices, and in particular to a material data processing method and related products.

Background

With the development of electronic device technology, electronic devices are used in an ever-wider range of applications and offer increasingly rich functions. For example, a user can share material data such as video data and/or image data stored on the user's own electronic device, or share it with other users after secondary editing.

In practice, a user typically browses all the material data, such as video data and/or image data, in the photo album of the user's own device or of another user's device (such as a family member's or friend's), manually selects the preferred images and/or videos on the other user's device, and obtains the material data through the devices' built-in file transfer functions; the user can then perform secondary editing on the material data. Selecting material data in this way wastes a great deal of time and effort, the user may fail to find satisfactory images and/or videos at all, and the user experience is poor.
Summary

Embodiments of the present application provide a material data processing method and related products.

In a first aspect, an embodiment of the present application provides a material data processing method applied to a first device, the first device establishing a communication connection with at least one second device, the at least one second device being a slave device of the first device; the method includes:

in response to a user-triggered preset instruction for first material data, determining multimodal information corresponding to the first material data, wherein the first material data includes at least one of image data and video data, and the preset instruction is used to instruct the first device to perform a corresponding preset operation;

sending a material acquisition request to the at least one second device, wherein the material acquisition request includes the multimodal information, the multimodal information is used by the second device to screen out target multimodal information matching the multimodal information, the target multimodal information corresponds to second material data, and the second material data is determined by the corresponding second device according to the multimodal information;

receiving at least one material acquisition result sent by the at least one second device, wherein each second device corresponds to one material acquisition result;

if any material acquisition result indicates that the corresponding second device has the second material data, performing the preset operation on the first material data and the second material data.
In a second aspect, an embodiment of the present application provides a material data processing method applied to a second device, the second device establishing a communication connection with a first device, the second device being a slave device of the first device; the method includes:

receiving a material acquisition request sent by the first device, wherein the material acquisition request includes multimodal information determined by the first device according to first material data;

determining whether target multimodal information matching the multimodal information exists in a multimodal database;

if the target multimodal information exists in the multimodal database, determining second material data corresponding to the multimodal information;

sending a material acquisition result to the first device, wherein the material acquisition result includes or does not include the second material data.
In a third aspect, an embodiment of the present application provides a material data processing method applied to a first device, the first device establishing a communication connection with at least one second device, the at least one second device being a slave device of the first device; the method includes:

in response to a user-triggered preset instruction for first material data, determining multimodal information corresponding to the first material data, wherein the first material data includes at least one of image data and video data, and the preset instruction is used to instruct the first device to perform a corresponding preset operation;

sending at least one material scene detection request to the at least one second device, wherein each second device corresponds to one material scene detection request, the material scene detection request includes the multimodal information, the material scene detection request is used to determine whether the corresponding second device has second material data of the same material application scene as the first device, and the second material data is determined by the corresponding second device by matching the multimodal information against a multimodal database;

receiving at least one material scene detection result sent by the at least one second device, wherein each second device corresponds to one material scene detection result;

if any material scene detection result indicates that the corresponding second device has second material data of the same material application scene as the first device, sending the material acquisition request to the corresponding second device;

receiving at least one material acquisition result sent by the at least one second device, wherein each second device corresponds to one material acquisition result;

if any material acquisition result indicates that the corresponding second device has the second material data, performing the preset operation on the first material data and the second material data.
In a fourth aspect, an embodiment of the present application provides a material data processing method applied to a second device, the second device establishing a communication connection with a first device, the second device being a slave device of the first device; the method includes:

receiving a material scene detection request sent by the first device, wherein the material scene detection request includes multimodal information;

determining whether target multimodal information matching the multimodal information exists in the multimodal database;

if target multimodal information matching the multimodal information exists in the multimodal database, determining second material data corresponding to the multimodal information, and determining that second material data of the same material application scene as the first device exists;

sending a material scene detection result to the first device, wherein the material scene detection result is used to indicate that the second device has second material data of the same material application scene as the first device;

receiving a material acquisition request sent by the first device, wherein the material acquisition request is used by the first device to acquire the second material data;

displaying prompt information, wherein the prompt information is used to instruct the user to choose whether or not to send the second material data;

in response to the user's confirmed sending operation for the second material data, sending a material acquisition result to the first device, wherein the material acquisition result includes or does not include the second material data.
In a fifth aspect, an embodiment of the present application provides a material data processing apparatus applied to a first device, the first device establishing a communication connection with at least one second device, the at least one second device being a slave device of the first device; the apparatus includes a determining unit, a sending unit, a receiving unit and an executing unit, wherein:

the determining unit is configured to, in response to a user-triggered preset instruction for first material data, determine multimodal information corresponding to the first material data, wherein the first material data includes at least one of image data and video data, and the preset instruction is used to instruct the first device to perform a corresponding preset operation;

the sending unit is configured to send a material acquisition request to the at least one second device, wherein the material acquisition request includes the multimodal information, the multimodal information is used by the second device to screen out target multimodal information matching the multimodal information, the target multimodal information corresponds to second material data, and the second material data is determined by the corresponding second device according to the multimodal information;

the receiving unit is configured to receive at least one material acquisition result sent by the at least one second device, wherein each second device corresponds to one material acquisition result;

the executing unit is configured to, if any material acquisition result indicates that the corresponding second device has the second material data, perform the preset operation on the first material data and the second material data.
In a sixth aspect, an embodiment of the present application provides a material data processing apparatus applied to a second device, the second device establishing a communication connection with a first device, the second device being a slave device of the first device; the apparatus includes a receiving unit, a determining unit and a sending unit, wherein:

the receiving unit is configured to receive a material acquisition request sent by the first device, wherein the material acquisition request includes multimodal information determined by the first device according to first material data;

the determining unit is configured to determine whether target multimodal information matching the multimodal information exists in a multimodal database;

the determining unit is further configured to, if the target multimodal information exists in the multimodal database, determine second material data corresponding to the multimodal information;

the sending unit is configured to send a material acquisition result to the first device, wherein the material acquisition result includes or does not include the second material data.
In a seventh aspect, an embodiment of the present application provides a material data processing apparatus applied to a first device, the first device establishing a communication connection with at least one second device, the at least one second device being a slave device of the first device; the apparatus includes a determining unit, a sending unit, a receiving unit and an executing unit, wherein:

the determining unit is configured to, in response to a user-triggered preset instruction for first material data, determine multimodal information corresponding to the first material data, wherein the first material data includes at least one of image data and video data, and the preset instruction is used to instruct the first device to perform a corresponding preset operation;

the sending unit is configured to send at least one material scene detection request to the at least one second device, wherein each second device corresponds to one material scene detection request, the material scene detection request includes the multimodal information, the material scene detection request is used to determine whether the corresponding second device has second material data of the same material application scene as the first device, and the second material data is determined by the corresponding second device by matching the multimodal information against a multimodal database;

the receiving unit is configured to receive at least one material scene detection result sent by the at least one second device, wherein each second device corresponds to one material scene detection result;

the sending unit is further configured to, if any material scene detection result indicates that the corresponding second device has second material data of the same material application scene as the first device, send the material acquisition request to the corresponding second device;

the receiving unit is further configured to receive at least one material acquisition result sent by the at least one second device, wherein each second device corresponds to one material acquisition result;

the executing unit is configured to, if any material acquisition result indicates that the corresponding second device has the second material data, perform the preset operation on the first material data and the second material data.
In an eighth aspect, an embodiment of the present application provides a material data processing apparatus applied to a second device, the second device establishing a communication connection with a first device, the second device being a slave device of the first device; the apparatus includes a receiving unit, a determining unit, a sending unit and a display unit, wherein:

the receiving unit is configured to receive a material scene detection request sent by the first device, wherein the material scene detection request includes multimodal information;

the determining unit is configured to determine whether target multimodal information matching the multimodal information exists in the multimodal database;

the determining unit is further configured to, if target multimodal information matching the multimodal information exists in the multimodal database, determine second material data corresponding to the multimodal information and determine that second material data of the same material application scene as the first device exists;

the sending unit is configured to send a material scene detection result to the first device, wherein the material scene detection result is used to indicate that the second device has second material data of the same material application scene as the first device;

the receiving unit is further configured to receive a material acquisition request sent by the first device, wherein the material acquisition request is used by the first device to acquire the second material data;

the display unit is configured to display prompt information, wherein the prompt information is used to instruct the user to choose whether or not to send the second material data;

the sending unit is configured to, in response to the user's confirmed sending operation for the second material data, send a material acquisition result to the first device, wherein the material acquisition result includes or does not include the second material data.
In a ninth aspect, an embodiment of the present application provides an electronic device including a processor, a memory, a communication interface, and one or more programs, wherein the one or more programs are stored in the memory and configured to be executed by the processor, and the programs include instructions for executing the steps in any method of the first and/or second and/or third and/or fourth aspect of the embodiments of the present application.

In a tenth aspect, an embodiment of the present application provides a computer-readable storage medium, wherein the computer-readable storage medium stores a computer program for electronic data exchange, and the computer program causes a computer to execute some or all of the steps described in any method of the first and/or second and/or third and/or fourth aspect of the embodiments of the present application.

In an eleventh aspect, an embodiment of the present application provides a computer program product, wherein the computer program product includes a non-transitory computer-readable storage medium storing a computer program, and the computer program is operable to cause a computer to execute some or all of the steps described in any method of the first and/or second and/or third and/or fourth aspect of the embodiments of the present application. The computer program product may be a software installation package.

It can be seen that, in the embodiments of the present application, the first device sends at least one material scene detection request to the at least one second device, wherein each second device corresponds to one material scene detection request, the material scene detection request includes the multimodal information, the material scene detection request is used to determine whether the corresponding second device has second material data of the same material application scene as the first device, and the second material data is determined by the corresponding second device by matching the multimodal information against a multimodal database; the second device receives the material scene detection request sent by the first device, wherein the material scene detection request includes the multimodal information; if target multimodal information matching the multimodal information exists in the multimodal database, the second device determines the second material data corresponding to the multimodal information and determines that second material data of the same material application scene as the first device exists; the second device sends a material scene detection result to the first device, wherein the material scene detection result is used to indicate that the second device has second material data of the same material application scene as the first device; the first device receives at least one material scene detection result sent by the at least one second device, wherein each second device corresponds to one material scene detection result; if any material scene detection result indicates that the corresponding second device has second material data of the same material application scene as the first device, the first device performs the step of sending the material acquisition request to the corresponding second device, wherein the material acquisition request is used to acquire the second material data; the second device receives the material acquisition request sent by the first device, wherein the material acquisition request is used by the first device to acquire the second material data; the second device displays prompt information, wherein the prompt information is used to instruct the user to choose whether or not to send the second material data; in response to the user's confirmed sending operation for the second material data, the second device performs the step of sending a material acquisition result to the first device, wherein the material acquisition result includes or does not include the second material data; the first device receives at least one material acquisition result sent by the at least one second device, wherein each second device corresponds to one material acquisition result; if any material acquisition result indicates that the corresponding second device has the second material data, the first device performs the preset operation on the first material data and the second material data.
Brief description of the drawings

To describe the technical solutions in the embodiments of the present application more clearly, the following briefly introduces the drawings required for describing the embodiments or the prior art. Obviously, the drawings in the following description are only some embodiments of the present application, and a person of ordinary skill in the art may derive other drawings from these drawings without creative effort.

FIG. 1 is a schematic structural diagram of a material data processing system provided by an embodiment of the present application;

FIG. 2 is a schematic architectural diagram of a creation engine layer provided by an embodiment of the present application;

FIG. 3 is a schematic architectural diagram of a communication network provided by an embodiment of the present application;

FIG. 4 is a schematic flowchart of a material data processing method provided by an embodiment of the present application;

FIG. 5 is a schematic flowchart of a material data processing method provided by an embodiment of the present application;

FIG. 6 is a schematic flowchart of a material data processing method provided by an embodiment of the present application;

FIG. 7 is a schematic flowchart of a material data processing method provided by an embodiment of the present application;

FIG. 8 is a schematic flowchart of a material data processing method provided by an embodiment of the present application;

FIG. 9A is a schematic scene diagram of an intelligent creation method provided by an embodiment of the present application;

FIG. 9B is a schematic operation diagram of an intelligent creation method provided by an embodiment of the present application;

FIG. 9C is a schematic scene diagram of an intelligent creation method provided by an embodiment of the present application;

FIG. 9D is a schematic scene diagram of an intelligent creation method provided by an embodiment of the present application;

FIG. 9E is a schematic scene diagram of an intelligent creation method provided by an embodiment of the present application;

FIG. 9F is a schematic scene diagram of an intelligent creation method provided by an embodiment of the present application;

FIG. 9G is a schematic scene diagram of an intelligent creation method provided by an embodiment of the present application;

FIG. 10 is a schematic structural diagram of an electronic device provided by an embodiment of the present application;

FIG. 11 is a block diagram of the functional units of a material data processing apparatus provided by an embodiment of the present application;

FIG. 12A is a block diagram of the functional units of a material data processing apparatus provided by an embodiment of the present application;

FIG. 12B is a block diagram of the functional units of a material data processing apparatus provided by an embodiment of the present application;

FIG. 13 is a block diagram of the functional units of a material data processing apparatus provided by an embodiment of the present application;

FIG. 14 is a block diagram of the functional units of a material data processing apparatus provided by an embodiment of the present application.
Detailed description

To enable those skilled in the art to better understand the solutions of the present application, the technical solutions in the embodiments of the present application are described below clearly and completely with reference to the drawings in the embodiments of the present application. Obviously, the described embodiments are only some rather than all of the embodiments of the present application. All other embodiments obtained by a person of ordinary skill in the art based on the embodiments of the present application without creative effort shall fall within the protection scope of the present application.

The terms "first", "second" and the like in the specification, claims and drawings of the present application are used to distinguish different objects rather than to describe a particular order. In addition, the terms "include" and "have" and any variants thereof are intended to cover non-exclusive inclusion. For example, a process, method, system, product or device that includes a series of steps or units is not limited to the listed steps or units, but optionally further includes steps or units that are not listed, or optionally further includes other steps or units inherent to the process, method, product or device.

Reference herein to "an embodiment" means that a particular feature, structure or characteristic described in connection with the embodiment may be included in at least one embodiment of the present application. The appearances of the phrase in various places in the specification do not necessarily all refer to the same embodiment, nor are they separate or alternative embodiments mutually exclusive with other embodiments. Those skilled in the art understand, explicitly and implicitly, that the embodiments described herein may be combined with other embodiments.

The electronic device may be a portable electronic device that also contains other functions such as a personal digital assistant and/or a music player, such as a mobile phone, a tablet computer, a wearable electronic device with wireless communication capability (e.g., a smart watch, smart glasses) or an in-vehicle device. Exemplary embodiments of portable electronic devices include, but are not limited to, portable electronic devices running iOS, Android, Microsoft or other operating systems. The portable electronic device may also be another portable electronic device, such as a laptop computer. It should also be understood that, in some other embodiments, the electronic device may not be a portable electronic device but a desktop computer.
Example application scenarios disclosed in the embodiments of the present application are introduced below.

Referring to FIG. 1, FIG. 1 shows a schematic structural diagram of a material data processing system to which the present application is applicable; the diagram may include an application layer, a creation engine layer and a system service layer.

The application layer can be used to support different software applications in the electronic device; for example, it may include a photo album, which may include image or video data from different applications. The application layer can be used to receive user-initiated preset instructions for preset operations such as intelligent creation operation instructions and storage operation instructions; the preset instruction is used to instruct the first device to perform the corresponding preset operation.

In a possible example, if the preset instruction includes an intelligent creation operation instruction, FIG. 2 shows a schematic architecture of a creation engine layer. The creation engine layer may include a data detection module, a multimodal database, an intelligent creation module and a data preference module. The data detection module can be used for scene matching, detection of multimodal information and so on, without limitation here; the multimodal database is mainly used for management of material data and multimodal data, without limitation here; the intelligent creation module is mainly used for cropping, special-effect beautification, composite editing and so on of material data, without limitation here; the data preference module is used for screening and matching material data, and may contain various threshold parameters, such as a preset information difference, a preset match rate, thresholds for aesthetic evaluation operations, and settings of quality evaluation indicators, without limitation here. The data preference module can also perform information filtering or data comparison on multimodal data, for example filtering out, according to the multimodal information, material data that does not meet the above indicators, as well as comparing intelligent creation scenes, without limitation here.

The system service layer is mainly used for detection, identification, connection and communication among multiple devices in the network.

In a possible example, FIG. 3 shows a schematic architecture of a communication network. The network may include a first device and at least one second device, wherein the first device is the master device of the at least one second device, and the second device is a slave device of the first device. The system service module shown in FIG. 2 can support the communication connection between the first device and the at least one second device, and can detect the other devices in the same communication network, without limitation here.

It should be noted that the first device and the second device may be in the same local area network, i.e., devices in the same communication network, or in different local area networks, i.e., not devices of the same communication network.

In a possible example, the first device sends at least one material scene detection request to the at least one second device, wherein each second device corresponds to one material scene detection request, the material scene detection request includes the multimodal information, the material scene detection request is used to determine whether the corresponding second device has second material data of the same material application scene as the first device, and the second material data is determined by the corresponding second device by matching the multimodal information against a multimodal database; the second device receives the material scene detection request sent by the first device, wherein the material scene detection request includes the multimodal information; if target multimodal information matching the multimodal information exists in the multimodal database, the second device determines the second material data corresponding to the multimodal information and determines that second material data of the same material application scene as the first device exists; the second device sends a material scene detection result to the first device, wherein the material scene detection result is used to indicate that the second device has second material data of the same material application scene as the first device; the first device receives at least one material scene detection result sent by the at least one second device, wherein each second device corresponds to one material scene detection result; if any material scene detection result indicates that the corresponding second device has second material data of the same material application scene as the first device, the first device performs the step of sending the material acquisition request to the corresponding second device, wherein the material acquisition request is used to acquire the second material data; the second device receives the material acquisition request sent by the first device, wherein the material acquisition request is used by the first device to acquire the second material data; the second device displays prompt information, wherein the prompt information is used to instruct the user to choose whether or not to send the second material data; in response to the user's confirmed sending operation for the second material data, the second device performs the step of sending a material acquisition result to the first device, wherein the material acquisition result includes or does not include the second material data; the first device receives at least one material acquisition result sent by the at least one second device, wherein each second device corresponds to one material acquisition result; if any material acquisition result indicates that the corresponding second device has the second material data, the first device performs the preset operation on the first material data and the second material data. In this way, the second material data of the other second devices in the same network as the first device can be adapted automatically, and the preset operation is performed on the first material data and the second material data, which helps improve selection efficiency, ensures user satisfaction with the data obtained by processing the first and second material data, and requires no secondary manual editing or operation by the user, thereby improving the user experience. Moreover, the second device can complete the preference-based selection of material data in the course of determining the target multimodal information matching the multimodal information, so as to obtain the second material data; preferred material data can thus be shared automatically among multiple devices, sparing the user the tedious process of manually picking materials to transfer between devices, which helps improve the user experience.

It should be noted that, when the first device is a slave device, it may also perform the same intelligent material creation method as the above second device, which is not repeated here.

It should be noted that, in the present application, "multiple" may refer to two or more, which is not repeated below.

The claimed solutions disclosed in the embodiments of the present application are introduced below.
Referring to FIG. 4, FIG. 4 is a schematic flowchart of a material data processing method provided by an embodiment of the present application, applied to a first device, the first device establishing a communication connection with at least one second device, the at least one second device being a slave device of the first device. As shown in the figure, the material data processing method includes the following operations.

S401: In response to a user-triggered preset instruction for first material data, determine multimodal information corresponding to the first material data, wherein the first material data includes at least one of image data and video data, and the preset instruction is used to instruct the first device to perform a corresponding preset operation.

The preset instruction may be set by the user or by system default, without limitation here. The preset instruction can be triggered or issued by the user and is used to instruct the first device to perform the corresponding preset operation, which may include an intelligent creation operation, a storage operation, a material data sharing operation and so on, without limitation here.

The intelligent creation operation may include at least one of a cropping operation, a special-effect beautification operation, a composite editing operation and so on, without limitation here.

The multimodal information may correspond to the first material data and may be used to indicate, for the first material data, Global Positioning System (GPS) information, face information, scene information, subject information, aesthetic evaluation information, highlight segment information, person relationship information, text information and so on, without limitation here.

The first material data may be obtained by the first device or selected by the user. After the first material data is obtained, multimodal information parsing may be performed on it to obtain the multimodal information corresponding to each piece of image data and/or video data in the first material data.

Of course, the first device may include a multimodal database, which may store the multimodal information corresponding to each piece of image data and/or video data. Once the user or the first device has determined the first material data, the multimodal information corresponding to the first material data can be fetched directly from the multimodal database.

The multimodal information may include at least one of: Global Positioning System (GPS) information, face information, scene information, subject information, aesthetic evaluation information, highlight segment information, person relationship information and text information, without limitation here. The multimodal information is used to characterize the detail information of the image data or video data in the first material data. Highlight segment information can be used to characterize the most watchable or most exciting highlight moment obtained from processing the first material data; for example, it may be the video data of the highlight moment when a user receives a trophy, or the image data of a sky full of sunset clouds, without limitation here. The aesthetic evaluation information is an aesthetic evaluation score obtained by the first device by scoring according to aesthetic evaluation indicators and dimensions, and can be used to screen image data and/or video data that conforms to popular aesthetics or to the user's aesthetics, wherein the aesthetic evaluation criteria may include at least one of: color, composition, professional photography technique, content semantics and so on, without limitation here.

For example, if the preset operation is an intelligent creation operation, the electronic device may receive, from the application layer shown in FIG. 1, a user-triggered preset instruction for the first material data, which may be used to perform the intelligent creation operation on the first material data designated or selected by the user.

For example, if the preset instruction is used to instruct the first device to perform an intelligent creation operation, the first device may include an intelligent creation operation page; besides being designated or selected by the user, the first material data may also be designated by the first device. For example, the application layer of the first device may identify at least one image the user operated on before entering the intelligent creation operation page. If the first device detects that the user chooses to enter the intelligent creation operation page, the user in this situation generally may have a creation idea for the at least one image; the first device may then determine an intelligent creation scene according to the at least one image, adapt from the application layer at least one other image of the same intelligent creation scene, combine the at least one image and the at least one other image to obtain the first material data, and obtain from the multimodal database the multimodal information corresponding to each image in the first material data, thereby obtaining the multimodal information corresponding to the first material data.

It should be noted that, when there are multiple intelligent creation scenes, at least one other image of the same intelligent creation scene can be adapted for each intelligent creation scene, and the at least one other image corresponding to the multiple intelligent creation scenes and the at least one image are then combined to obtain the first material data, which is not described in detail here.
S402: Send a material acquisition request to the at least one second device, wherein the material acquisition request includes the multimodal information, the multimodal information is used by the second device to screen out target multimodal information matching the multimodal information, the target multimodal information corresponds to second material data, and the second material data is determined by the corresponding second device according to the multimodal information.

The material acquisition request can be used by the first device to acquire, from each of the at least one second device, the second material data stored thereon that matches the multimodal information; the second material data may also include image data and/or video data, and can be used by the first device to perform the preset operation in combination with the first material data.

The multimodal information is used by the second device to screen out target multimodal information matching the multimodal information of the first device, so as to obtain the second material data.

The multimodal information is also used by the second device to determine whether target multimodal information matching the multimodal information in the first device exists, so as to determine that the second material data in the second device matches the first material data in the first device. For example, it can also be used to determine whether the two devices are in the same material application scene.

S403: Receive at least one material acquisition result sent by the at least one second device, wherein each second device corresponds to one material acquisition result.

The material acquisition result may include any of the following: second material data matching the first material data exists; second material data matching the first material data does not exist; and so on.

In a possible example, the second device has the right to choose whether to send the second material data to the first device. When the material acquisition result received by the first device contains second material data matching the first material data, or matching the multimodal information, it indicates that the second device allows the first device to acquire the second material data; conversely, when the material acquisition result received by the first device does not contain the second material data, it indicates that the second device does not allow the first device to acquire the second material data.

In a possible example, the second device can determine, through the multimodal information, whether it has second material data matching the multimodal information. When the material acquisition result received by the first device contains second material data matching the first material data, or matching the multimodal information, it indicates that the second device has second material data matching the multimodal information; conversely, when the material acquisition result received by the first device does not contain the second material data, it indicates that the second device has no second material data matching the multimodal information.

S404: If any material acquisition result indicates that the corresponding second device has the second material data, perform the preset operation on the first material data and the second material data.

After receiving the at least one material acquisition result, the electronic device may select the second material data of the second devices whose material acquisition results indicate that second material data exists, obtain at least one piece of second material data, and perform the preset operation on the first material data and the at least one piece of second material data.

For example, an intelligent creation operation can be performed on the first material data and the at least one piece of second material data corresponding to the at least one second device, so as to obtain brand-new target material data, which may include image data and/or video data, to present the image data and/or video data of the first device and of the at least one second device.

For example, the first material data includes a first highlight segment of meteors shot by the first device, the first highlight segment being a video of the fall of the first meteor; the second material data includes a second highlight segment of meteors on the second device, the second highlight segment being a video of the fall of the second meteor. The first device can perform intelligent creation operations such as cropping and composite editing on the first and second highlight segments to merge them and obtain a complete target highlight segment including the videos of both the first and the second meteor falling; the target highlight segment may also include special effects, music, filters and so on differing from the first highlight segment and/or the second highlight segment, without limitation here.

It can be seen that, in the material data processing method described in the embodiments of the present application, in response to a user-triggered preset instruction for first material data, multimodal information corresponding to the first material data is determined, wherein the first material data includes at least one of image data and video data, and the preset instruction is used to instruct the first device to perform a corresponding preset operation; a material acquisition request is sent to the at least one second device, wherein the material acquisition request includes the multimodal information, the multimodal information is used by the second device to screen out target multimodal information matching the multimodal information, the target multimodal information corresponds to second material data, and the second material data is determined by the corresponding second device according to the multimodal information; at least one material acquisition result sent by the at least one second device is received, wherein each second device corresponds to one material acquisition result; if any material acquisition result indicates that the corresponding second device has the second material data, the preset operation is performed on the first material data and the second material data. In this way, automatic adaptation of the second material data of the slave devices (second devices) can be achieved through the matching of multimodal information; the obtained second material data corresponds to the first material data, and the preset operation is performed on the first material data and the second material data without requiring secondary manual editing by the user, which helps improve selection efficiency, ensures high satisfaction with the obtained material data, and helps improve the user experience.
In a possible example, the multimodal information includes at least one of: GPS information, face information, scene information, subject information, aesthetic evaluation information and highlight segment information; the preset operation includes at least one of: a storage operation and an intelligent creation operation, wherein the intelligent creation operation includes at least one of: a cropping operation, a special-effect beautification operation, a composite editing operation.

The storage operation can be used by the first device, after receiving the second material data, to store the first material data and the at least one piece of second material data together for later use, so as to realize sharing of material data between the first device and the second device. The intelligent creation operation is used to perform cropping, special-effect beautification and/or composite editing on the first material data and the at least one piece of second material data, so as to realize secondary creation of the first material data and/or the at least one piece of second material data and preference-based selection from the first material data; the material data after secondary creation helps improve user satisfaction and the user experience.
Referring to FIG. 5, FIG. 5 is a schematic flowchart of a material data processing method provided by an embodiment of the present application, applied to a second device, the second device establishing a communication connection with a first device, the second device being a slave device of the first device. As shown in the figure, the material data processing method includes the following operations.

S501: Receive a material acquisition request sent by the first device, wherein the material acquisition request includes multimodal information determined by the first device according to first material data.

The multimodal information carried in the material acquisition request is used by the second device to determine whether target multimodal information matching the multimodal information exists in the multimodal database.

S502: Determine whether target multimodal information matching the multimodal information exists in a multimodal database.

The second device may include a multimodal database, which can be used to store the multimodal data corresponding to all the material data of the second device.

The second device can match all the multimodal information in the multimodal database against the multimodal information corresponding to the first device; if any item of multimodal information matches successfully, matching target multimodal information can be determined to exist; if none of the multimodal information matches successfully, it is determined that no matching target multimodal data exists.

S503: If the target multimodal information exists in the multimodal database, determine second material data corresponding to the multimodal information.

S504: Send a material acquisition result to the first device, wherein the material acquisition result includes or does not include the second material data.

The material acquisition result may include a first material acquisition result and a second material acquisition result.

For example, if the target multimodal information exists in the multimodal database, the first material acquisition result is sent, which may include the second material data; if no target multimodal information matching the multimodal information exists in the multimodal database, the second material acquisition result is sent, which does not include the second material data.

If the second device has no target multimodal information matching the multimodal information, it may send the above second material acquisition result to the first device; the second material acquisition result may include a preset identifier used to indicate that no target multimodal data matching the multimodal data in the first device exists in the second device.

The preset identifier may be set by the user or by system default, without limitation here, and may also be agreed upon with the first device in advance.

It should be noted that, for the specific description of the above steps S501 to S504, reference may be made to the corresponding steps S401 to S404 of the material data processing method described in FIG. 4, which is not repeated here.

It can be seen that, in the material data processing method described in the present application, the second device receives a material acquisition request sent by the first device, wherein the material acquisition request includes multimodal information determined by the first device according to first material data; determines whether target multimodal information matching the multimodal information exists in a multimodal database; if the target multimodal information exists in the multimodal database, determines second material data corresponding to the multimodal information; and sends a material acquisition result to the first device, wherein the material acquisition result includes or does not include the second material data. In this way, the second device can receive the multimodal information sent by a first device in the same network and match it to obtain the second material data corresponding to the first material data, which provides data reference and data support for the preset operation of the first device and helps improve satisfaction with the first device's completion of the preset operation.
In a possible example, the multimodal information includes multiple items of target first sub-modal information, the multimodal database includes multiple items of second sub-modal information, the target multimodal information includes multiple items of target second sub-modal information, the target first sub-modal information and/or the target second sub-modal information includes any of the following: GPS information, face information, scene information, subject information, aesthetic evaluation information and highlight segment information, and any item of target second sub-modal information has its corresponding target first sub-modal information. Determining whether target multimodal information matching the multimodal information exists in the multimodal database may include the following steps: determining the matching logic corresponding to each item of second sub-modal information according to a preset mapping relationship between second sub-modal information and matching logic; determining the priority corresponding to each item of second sub-modal information according to a preset mapping relationship between second sub-modal information and priority; determining the screening order of the multiple items of second sub-modal information according to the target priority corresponding to each item of second sub-modal information; screening each item of second sub-modal information according to the screening order and the matching logic corresponding to each item of second sub-modal information, to obtain the target second sub-modal information matching any item of target first sub-modal information, thereby obtaining multiple items of target second sub-modal information; if corresponding target second sub-modal information is obtained by screening for any item of second sub-modal information, determining that target multimodal information matching the multimodal information exists in the multimodal database; if no corresponding target second sub-modal information is obtained by screening for any item of second sub-modal information, determining that no target multimodal information matching the multimodal information exists in the multimodal database.

The target first sub-modal information and/or the second sub-modal information and/or the target second sub-modal information may be any of GPS information, face information, scene information, subject information, aesthetic evaluation information, highlight segment information, person relationship information, text information and so on.

A mapping relationship between second sub-modal information and matching logic may be preset in the second device; this mapping relationship is used to characterize the screening or confirmation manner of each item of second sub-modal information.

In specific implementation, each item of second sub-modal information may correspond to one matching logic. After the second sub-modal information is screened against its corresponding target first sub-modal information through the matching logic, if matching succeeds, it indicates that target multimodal information matching the multimodal information exists in the multimodal database; the successfully matched target second sub-modal information can be obtained, and the material data corresponding to the target second sub-modal information can then be determined, the data volume of this material data being less than that of the material data corresponding to the second sub-modal information.

Further, after each item of second sub-modal information is screened through its corresponding matching logic, if no item of second sub-modal information yields target second sub-modal information matching the target first sub-modal information, matching is determined to have failed, indicating that no target multimodal information matching the multimodal information exists in the multimodal database.

For example, when the first material data is multiple pieces of first image data, if the multimodal information is GPS information, the target first sub-modal information includes multiple items of first GPS information, i.e., the GPS information corresponding to each piece of first image data, and the target second sub-modal information includes multiple items of second GPS information. The matching logic corresponding to the second GPS information, as second sub-modal information, can be determined according to the preset mapping relationship between second sub-modal information and matching logic; at least one item of target second GPS information, i.e., at least one item of target second sub-modal information, can then be screened out from the multiple items of second GPS information according to that matching logic; the second device can then determine, according to the at least one item of target second GPS information, the second material data corresponding to the target first sub-modal information corresponding to the first material data.

A mapping relationship between second sub-modal information and priority may also be preset in the second device. When there are multiple items of second sub-modal information, i.e., multiple kinds of second sub-modal information corresponding to multiple matching logics, the second device can set priorities among the multiple matching logics for the multiple items of second sub-modal information, i.e., the screening order of the multiple items of second sub-modal information. The priority can be used to confirm the order in which the second device performs matching and screening between the multiple items of second sub-modal information and the multiple items of target first sub-modal information, i.e., to determine the order in which the second device executes the matching logic corresponding to each item of second sub-modal information.

For example, if the second material data is image data, the second sub-modal information includes three kinds of multimodal information of the image data: second scene information, second subject information and second GPS information; the second sub-modal information is the data stored in the multimodal database that has not yet been screened or matched; the target second sub-modal information includes target second scene information, target second subject information and target second GPS information of the image data. The priority of the matching logic corresponding to scene information can be set higher than that of subject information, and the priority of the matching logic of subject information higher than that of the matching logic of GPS information.

In specific implementation, the second device can first screen out, from the second scene information according to the matching logic corresponding to scene information, the target second scene information matching the first scene information and the material data corresponding to the target second scene information; then, from the material data corresponding to the target second scene information, screen out from the second subject information, according to the matching logic corresponding to subject information, the target second subject information matching the first subject information and the material data corresponding to the target second subject information; finally, from the material data corresponding to the target second subject information, screen out from the second GPS information, according to the matching logic of GPS information, the target second GPS information matching the first GPS information and the material data corresponding to the target second GPS information, which serves as the above target material data. The data volume of the material data corresponding to the target second GPS information is less than that corresponding to the target second subject information, and the data volume corresponding to the target second subject information is less than that corresponding to the second scene information.

For example, the gallery of the second device may include a large amount of image data. The shooting scene can first be matched against the shooting scene in the first device through the matching logic corresponding to the shooting scene; while the target shooting scene is obtained, a first screening of the large amount of image data is also performed, yielding the image data matching the shooting scene in the first device. Then, through the matching logic corresponding to the subject, the subject is matched against the subject in the first device; while the target subject is obtained, a second screening is performed, yielding, from the image data matching the shooting scene, the image data matching both the subject and the shooting scene. Finally, a third screening of the large amount of image data can be performed: the shooting time information is matched against the time information corresponding to the first device, and while the target time information is obtained, the image data simultaneously matching the shooting scene, subject and time information in the first device is also obtained and taken as the second material data.

It can be seen that, in this example, after receiving the multimodal data sent by the first device, the second device can match, from the multimodal database according to the priorities and matching logics, the second material data matching that multimodal data. The priorities can be used to precisely narrow the data range, satisfying a high degree of matching with the multiple types of first material data in the multimodal data; the matching logics can be used to precisely screen each kind of second sub-modal data so as to optimize the processing of the second sub-modal data in the second device, which helps improve screening accuracy. Moreover, in the successive rounds of priority-based matching, data better meeting the user's criteria can be screened out first, which helps determine more accurate second material data and improve the user experience.
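The priority-ordered screening described above can be sketched, for illustration only, as follows; the modality names, matching functions and data layout here are assumptions made for the sake of a runnable example, not part of the claimed method:

```python
# Minimal sketch of priority-ordered sub-modal screening: each stage keeps
# only the candidate materials whose sub-modal information matches the
# request, so each lower-priority stage runs on an ever-smaller candidate set.
def filter_by_priority(candidates, request, stages):
    """candidates: list of dicts of sub-modal info per material item;
    request: the first device's sub-modal info; stages: (key, match_fn)
    pairs ordered by descending priority."""
    for key, match_fn in stages:
        candidates = [c for c in candidates if match_fn(c[key], request[key])]
        if not candidates:        # no match at any stage -> overall failure
            return []
    return candidates

# Hypothetical matching logic per modality (assumptions for illustration).
stages = [
    ("scene",   lambda a, b: a == b),                      # highest priority
    ("subject", lambda a, b: len(set(a) & set(b)) >= 2),   # tag overlap
    ("gps",     lambda a, b: abs(a - b) <= 0.01),          # degree difference
]

candidates = [
    {"scene": "park",  "subject": ["child", "swing"],        "gps": 22.54},
    {"scene": "park",  "subject": ["child", "swing", "cat"], "gps": 22.545},
    {"scene": "beach", "subject": ["child"],                 "gps": 22.54},
]
request = {"scene": "park", "subject": ["child", "swing"], "gps": 22.54}
matched = filter_by_priority(candidates, request, stages)
```

Here the scene stage drops the third candidate, and the remaining two survive the subject and GPS stages, mirroring how each round of matching narrows the data range.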
In a possible example, if the second sub-modal information and the target first sub-modal information are both GPS information, the target first sub-modal information is first GPS information, the target second sub-modal information is target second GPS information, the second sub-modal information includes multiple items of second GPS information, and the target second GPS information is any one of the multiple items of second GPS information. Screening each item of second sub-modal information to obtain the target second sub-modal information may include the following steps: selecting, from the multiple items of second GPS information, the target second GPS information in the same preset region as the first GPS information as the target second sub-modal information; and/or, determining the second GPS accuracy corresponding to each item of second GPS information and the first GPS accuracy of the first GPS information; if the first GPS accuracy and/or any second GPS accuracy is greater than a preset accuracy threshold, determining the information difference between the first GPS accuracy and the second GPS accuracy; if the information difference is less than or equal to a preset information difference, taking the target second GPS information corresponding to the second GPS accuracy as the target second sub-modal information; if the first GPS accuracy and/or any second GPS accuracy is less than or equal to the preset accuracy threshold, performing the step of selecting, from the multiple items of second GPS information, the target second GPS information in the same preset region as the first GPS information as the target second sub-modal information.

GPS information can be used to confirm data such as the latitude/longitude information and location information of the shot material data. The preset region may be set by the user, without limitation here, and may refer to the region corresponding to the location information indicated by the first GPS information.

The preset accuracy threshold and/or the preset information difference may be set by the user or by system default, without limitation here; the preset accuracy threshold can be used to characterize how high the GPS accuracy of the two GPS hardware units is.

If the first GPS accuracy and/or any second GPS accuracy is greater than the preset accuracy threshold, it can indicate that the GPS hardware of the first device and/or the second device is of high accuracy; conversely, if the first GPS accuracy and/or any second GPS accuracy is less than or equal to the preset accuracy threshold, it can indicate that the GPS hardware of the first device and/or the second device is of low accuracy.

The information difference may refer to the difference in the data characterized by the GPS information, and the preset information difference is used to characterize the degree of matching between the first GPS information and the second GPS information. For example, when the GPS information is used to confirm latitude/longitude information, the first latitude/longitude information corresponding to the first device and the second latitude/longitude information corresponding to the second device can be determined, and the difference between the two obtained. The preset information difference may be 0.01°; if the obtained information difference is less than 0.01°, the first GPS information is confirmed to match the second GPS information, and the second material data corresponding to the second GPS information can then be determined.

Optionally, in an example, when the GPS accuracy cannot be read, the second device can determine the first GPS accuracy corresponding to the first GPS information and the second GPS accuracy corresponding to the second GPS information through the latitude/longitude information and location information of the material data to which they respectively correspond.

In specific implementation, when the first GPS information and/or the second GPS information is used to indicate location information, the first GPS accuracy and/or the second GPS accuracy can be determined to be high, and the second device can perform, according to the location information, the step of selecting from the multiple items of second GPS information the target second GPS information in the same preset region as the first GPS information as the target second sub-modal information.

Further, when the first GPS information and/or the second GPS information is used to indicate latitude/longitude information, the first GPS accuracy and/or the second GPS accuracy can be determined to be low; the second device can then determine the first latitude/longitude information corresponding to the first GPS information and the second latitude/longitude information corresponding to each item of second GPS information, compare the information difference between the first latitude/longitude information and each item of second latitude/longitude information, and, if any information difference is less than or equal to the preset information difference, take the target second GPS information corresponding to that second latitude/longitude information as the target second sub-modal information.

Optionally, when the first GPS information and/or the second GPS information is used to indicate both latitude/longitude information and location information, the first GPS accuracy and/or second GPS accuracy can first be determined to be high according to the latitude/longitude, the same steps as described above performed, and, when any information difference is less than or equal to the preset information difference, the second device performs, according to the location information, the step of selecting from the multiple items of second GPS information the target second GPS information in the same preset region as the first GPS information as the target second sub-modal information.

For example, when the location information indicated by the two items of GPS information belongs to the same regional range, e.g., the first GPS information indicates shooting in Shenzhen and the second GPS information indicates shooting in Guangzhou, both being in Guangdong Province, the second GPS information is determined to be the target second GPS information.

It can be seen that, in this example, when the multimodal information is GPS information, the second device can screen each item of second sub-modal information according to indicator data such as the preset information difference and the preset accuracy threshold to obtain the target second sub-modal information matching any item of target first sub-modal information, thereby realizing the screening of the second sub-modal information.
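The GPS matching step can be sketched as follows; the 0.01-degree threshold comes from the example above, while the field names, the `high_precision` flag, and the region-lookup function are assumptions introduced for illustration:

```python
# Sketch of GPS matching: with high-precision GPS on both sides, compare
# the coordinate difference against a preset information difference
# (0.01 degrees in the example above); otherwise fall back to a coarser
# same-region check (e.g. Shenzhen and Guangzhou both fall in Guangdong).
PRESET_DIFF = 0.01  # preset information difference, in degrees

def gps_match(first, second, region_of):
    """first/second: dicts with 'lat', 'lon' and a 'high_precision' flag;
    region_of: fallback mapping from a coordinate dict to a region name."""
    if first["high_precision"] and second["high_precision"]:
        return (abs(first["lat"] - second["lat"]) <= PRESET_DIFF
                and abs(first["lon"] - second["lon"]) <= PRESET_DIFF)
    return region_of(first) == region_of(second)

# Toy region lookup (an assumption standing in for a real geocoder).
region = lambda p: "Guangdong" if 20 <= p["lat"] <= 25 else "other"

shenzhen  = {"lat": 22.54, "lon": 114.05, "high_precision": False}
guangzhou = {"lat": 23.13, "lon": 113.26, "high_precision": False}
precise_a = {"lat": 22.54, "lon": 114.05, "high_precision": True}
precise_b = {"lat": 22.545, "lon": 114.055, "high_precision": True}
```

With low-precision readings, Shenzhen and Guangzhou match because both resolve to the same region; with high-precision readings, the two nearby coordinates match because both differences stay within 0.01°.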
In a possible example, if the second sub-modal information and the target first sub-modal information are both face information, the target first sub-modal information is first face information, the target second sub-modal information is target second face information, the second sub-modal information includes multiple items of second face information, and the target second face information is any one of the multiple items of second face information. Screening each item of second sub-modal information to obtain the target second sub-modal information may include the following steps: determining the person profile corresponding to the first face information; selecting, from the multiple items of second face information, the target second face information matching the person profile as the target second sub-modal information.

The second device can set up person profiles for different face images; the face information may include pixels characterizing information such as the facial features and expression of the face.

It can be seen that, in this example, the second device can determine, according to the pre-established person profiles, any target second face information among the multiple items of second face information matching the first face information, which facilitates the screening of face-type multimodal information.

Optionally, if the second device does not retrieve a person profile matching the first face information, it can match the first face information against each item of second face information one by one to obtain a matching target second face image, and establish a person profile for that target second face image to facilitate the next round of face information matching.
In a possible example, if the second sub-modal information and the target first sub-modal information are both scene information, the scene information includes at least one of: time, season, weather, festival and space.

The second device can characterize the scene information by time (year/month/day/morning-noon-evening/...), season (spring/summer/autumn/winter/...), weather (sunny/cloudy/rain/snow/...), festival (Spring Festival/birthday/anniversary/...), space (indoor/outdoor/scenic spot/...), broad subject type (person/object/animal/scenery/...) and so on.

The second device can divide the scene information by priority and then, according to the priority, match it one by one against the target first sub-modal information corresponding to the first device, so as to select the scene information in the matching second sub-modal information as the target second sub-modal information.

It can be seen that, in this example, the second device can screen scene-type multimodal information according to the scene information, which helps improve the accuracy of the subsequent second material data.
In a possible example, if the second sub-modal information or the target first sub-modal information is subject information, the target first sub-modal information is first subject information, the target second sub-modal information is target second subject information, the second sub-modal information includes multiple items of second subject information, and the target second subject information is any one of the multiple items of second subject information. Screening each item of second sub-modal information to obtain the target second sub-modal information may include the following steps: determining multiple items of first label information corresponding to the first subject information, wherein the first label information is used to characterize the category of the first subject information in the first material data; determining multiple items of second label information corresponding to any item of second subject information; matching the multiple items of first label information against the multiple items of second label information to obtain multiple match rates, wherein each match rate corresponds to one item of first label information; determining the match quantity of match rates, among the multiple match rates, that are greater than a preset match rate; if the match quantity is greater than a preset match quantity, determining that the second subject information is the target second subject information and taking the target second subject information as the target second sub-modal information.

The subject information can be used to characterize the subject in the image data or video data, which may include at least one of: high-rise buildings, residents, ancient buildings, grassland, forest, sky, river, lake, hotpot, barbecue, table, chair, cat, dog, computer, the user's face and so on, without limitation here. The first label information is used to characterize the categories of the multiple subjects in the first material data, and the second label information is used to characterize the categories of the multiple subjects in the second material data. The first label information and/or the second label information may include at least one of: buildings (high-rises/residences/ancient buildings/...), sculpture, scenery (grassland/forest/river/lake/sky/...), food (hotpot/barbecue/Western food/dessert/snacks/...), natural objects (flowers/grass/trees/...), animals (cat/dog/bird/...), daily-life objects (computer/mobile phone/table/chair/...) and so on, without limitation here. The first label information is used to characterize the category of the first subject information in the first material data; the second label information is used to characterize the category of the second subject information in the second material data.

The preset match quantity may be set by the user or by system default, without limitation here; it may be set to 2 or 3, for example.

For example, if the preset match quantity is set to 4, the first subject information includes 7 items of first label information: high-rise, sky, lake, flower, grass, kitten and puppy, and the second subject information includes 6 items of second label information: high-rise, sky, lake, flower, puppy and grass. It follows that 6 labels of the first subject information and the second subject information match (are the same), and 6 is greater than 4; therefore, this second subject information can be determined to match the first subject information, and the second material data corresponding to this second subject information can be determined to match the first material data.

It can be seen that, in this example, the second device can screen subject-type multimodal information according to the match quantity, which helps improve the accuracy of subsequently determining the second material data.
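The label-count check in the example above can be sketched as follows; it simplifies the per-label match rates to exact label equality, and the label strings and function names are illustrative assumptions:

```python
# Sketch of subject-label matching: count how many first-device labels
# also appear among the second device's labels, and accept the candidate
# when that count exceeds the preset match quantity (4 in the example above).
PRESET_MATCH_COUNT = 4

def subject_match(first_labels, second_labels):
    """Return True when more than PRESET_MATCH_COUNT labels coincide."""
    matched = set(first_labels) & set(second_labels)
    return len(matched) > PRESET_MATCH_COUNT

first  = ["high-rise", "sky", "lake", "flower", "grass", "kitten", "puppy"]
second = ["high-rise", "sky", "lake", "flower", "puppy", "grass"]
```

With these labels six categories coincide, which exceeds the preset quantity of four, so the candidate subject information is accepted.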
In a possible example, if the second sub-modal information and the target first sub-modal information are both aesthetic evaluation information, the target first sub-modal information is first aesthetic evaluation information including a first aesthetic evaluation score, the target second sub-modal information is target second aesthetic evaluation information, the second sub-modal information includes multiple items of second aesthetic evaluation information, and the target second aesthetic evaluation information is any one of the multiple items of second aesthetic evaluation information. Screening each item of second sub-modal information to obtain the target second sub-modal information may include the following steps: if the first material data includes a target image frame, taking the aesthetic evaluation score corresponding to the target image frame as the first aesthetic evaluation score; if the first material data includes target video data, determining the average of the aesthetic evaluation scores corresponding to the multiple image frames included in the target video data, and taking the average as the first aesthetic evaluation score; determining the second aesthetic evaluation scores respectively corresponding to the multiple items of second aesthetic evaluation information, to obtain multiple second aesthetic evaluation scores; selecting, from the multiple second aesthetic evaluation scores, a target second aesthetic evaluation score greater than or equal to the first aesthetic evaluation score, and taking the target second aesthetic evaluation score as the target second sub-modal information.

The target image frame may be any one of the multiple pieces of image data in the first material data; the target video data may be any one of the at least one piece of video data in the first material data.

The second aesthetic evaluation score may be calculated in advance by the second device, and the first aesthetic evaluation score may be calculated by the first device.

It can be seen that, in this example, the second device can compare the image data and/or video data in the first material data to determine the first aesthetic evaluation score corresponding to the first aesthetic evaluation information, and use that score as the evaluation baseline to select, from the multiple second aesthetic evaluation scores, target second aesthetic evaluation scores greater than or equal to the first aesthetic evaluation score. In this way, better or superior image data and/or video data can be screened out as the second material data according to the aesthetic evaluation scores, which helps improve the accuracy of subsequently determining the second material data and helps obtain target material data that better conforms to popular standards or aesthetics.
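The score-comparison step can be sketched as follows; the data representation is an assumption made for illustration:

```python
# Sketch of the aesthetic-score comparison: the first aesthetic score is
# the frame's own score for an image, or the average frame score for a
# video; candidate scores that are >= this baseline are kept.
def first_aesthetic_score(material):
    """material: {'type': 'image'|'video', 'scores': [per-frame scores]}"""
    scores = material["scores"]
    if material["type"] == "image":
        return scores[0]
    return sum(scores) / len(scores)      # average over the video's frames

def select_candidates(baseline, candidate_scores):
    return [s for s in candidate_scores if s >= baseline]

video = {"type": "video", "scores": [3.0, 4.0, 5.0]}
baseline = first_aesthetic_score(video)   # average of the three frame scores
kept = select_candidates(baseline, [3.5, 4.0, 4.8])
```

For this video the baseline is the average 4.0, so only the candidates scoring 4.0 and 4.8 survive the screening.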
In a possible example, the method may further include the following steps: screening out the highlight segment information in the target multimodal information from the second material data; and sending the target second material data corresponding to the highlight segment information to the first device.

The highlight segment information may be a video segment composed of the image frames with highlight moments in the video data corresponding to the target multimodal information; the memory this highlight segment occupies on the second device is far less than that of the second material data, i.e., of the full video data, and the second device can send the target second material data corresponding to this highlight segment information to the first device.

It can be seen that, in this example, the second device can send all the second material data corresponding to the target multimodal information, or send only the highlight segment, which helps improve transmission efficiency, saves the first device (master device) the time of secondary cropping of the video material data by directly providing usable highlight segments of the video material, and helps improve the user experience.

In a possible example, if the second material data includes video data, and the highlight segment information screened out of the second material data from the target multimodal information includes a highlight segment, the method may include the following steps: determining the target aesthetic evaluation score corresponding to each video frame in the video data; selecting the video frames whose target aesthetic evaluation score is greater than or equal to a preset score value as target video frames, to obtain multiple target video frames; combining the multiple target video frames into a target video, and taking the target video as the highlight segment.

The preset score value may be set by the user or by system default, without limitation here; if 5 is the highest aesthetic evaluation score, the preset score value may be set to 4 or 5.

It can be seen that, in this example, after obtaining the second material data, the second device can, from the video data corresponding to the second material data, select the target video frames meeting the highlight-moment evaluation criterion according to the target aesthetic evaluation score of each video frame and the preset score value, and combine them into the highlight segment, which facilitates generation of the highlight segment.
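The frame-selection step can be sketched as follows; the frame representation and the preset value of 4 (on the 5-point scale mentioned above) are illustrative assumptions:

```python
# Sketch of highlight-segment extraction: keep the frames whose target
# aesthetic evaluation score meets the preset score value, and splice
# them, in playback order, into the highlight segment.
PRESET_SCORE = 4.0

def extract_highlight(frames):
    """frames: list of (frame_id, aesthetic_score) in playback order;
    returns the frame ids making up the highlight segment."""
    return [fid for fid, score in frames if score >= PRESET_SCORE]

frames = [(0, 2.5), (1, 4.2), (2, 4.8), (3, 3.0), (4, 5.0)]
highlight = extract_highlight(frames)
```

Only frames 1, 2 and 4 score at or above the preset value, so they form the resulting highlight segment, which is far smaller than the full video.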
In a possible example, determining the target aesthetic evaluation score corresponding to each video frame in the video data may include the following steps: obtaining preset quality evaluation indicators for low-quality image frames and the quality evaluation parameter corresponding to each quality evaluation indicator; determining, according to the quality evaluation indicators, the target evaluation parameters corresponding to each video frame image; comparing each target evaluation parameter with the quality evaluation parameters; if any target evaluation parameter is consistent with a quality evaluation parameter, deleting the video frame image corresponding to that target evaluation parameter, to obtain multiple first video frames other than that video frame image; obtaining preset aesthetic evaluation indicators and the aesthetic evaluation parameter corresponding to each aesthetic evaluation indicator; performing aesthetic evaluation on the multiple first video frames according to the aesthetic evaluation indicators and the aesthetic evaluation parameter corresponding to each aesthetic evaluation indicator, to obtain the target aesthetic evaluation score corresponding to each first video frame.

The second device can set quality evaluation indicators and aesthetic evaluation indicators, and evaluate the aesthetic evaluation score of a video frame or image frame according to the two kinds of indicators.

Table 1 below shows a mapping relationship between quality evaluation criteria and quality evaluation parameters for low-quality image frames. The second device can set an evaluation value for the evaluation parameters in each category of quality evaluation criteria (artifacts, exposure problems, sharpness and color). For example, for the color category, if the evaluation parameters of a video frame image include color cast and color overflow, the video frame image can be determined to be a low-quality image frame and deleted, and the remaining video frame images confirmed as first video frames, obtaining multiple first video frames.

Table 1. Mapping relationship between quality evaluation criteria and quality evaluation parameters for low-quality image frames

Table 2 below shows a mapping relationship between aesthetic evaluation indicators and aesthetic evaluation parameters. The aesthetic evaluation indicators may include at least one of: content semantics, color, composition and professional photography technique, without limitation here; each aesthetic evaluation indicator may correspond to zero or at least one aesthetic evaluation parameter; the aesthetic evaluation parameters may be set according to popular aesthetics or customized by the user, without limitation here. The mapping relationship may also include the score corresponding to each aesthetic evaluation parameter. For example, for a certain first video frame, if the aesthetic evaluation parameters corresponding to content semantics, color, composition and professional photography technique are all 0 or absent, the aesthetic evaluation score corresponding to that first video frame can be determined to be 1; if the aesthetic evaluation parameter corresponding to its content semantics is "unclear content semantics" and the other aesthetic evaluation indicators are 0 or absent, its aesthetic evaluation score can be determined to be 2; and so on, to obtain the aesthetic evaluation score corresponding to each first video frame.

Table 2. Mapping relationship between aesthetic evaluation indicators and aesthetic evaluation parameters

It can be seen that, in this example, quality evaluation can first be performed on video image frames or image frames to screen out frames of poor quality; for video frames, aesthetic evaluation can finally be performed on each of the screened multiple first video frames to obtain its target aesthetic evaluation score. This facilitates aesthetic evaluation of video frame images, helps improve the accuracy of subsequently determining the second material data, and helps obtain target material data that better conforms to popular standards or aesthetics.
In a possible example, the method may further include the following steps: determining privacy information; analyzing the second material data, and deleting the material data related to the privacy information, to obtain target second material data; sending the target second material data to the first device; determining the privacy label corresponding to the privacy information, and synchronizing the privacy label to the multimodal database so as to apply privacy settings to the multimodal information included therein.

The privacy information may be set by the user or by system default, without limitation here; it may be screenshot information containing the user's chat messages, information containing the user's mobile phone number, and so on, without limitation here.

The privacy label may be a label such as "no sharing" or "private".

It can be seen that, in this example, after the second material data is determined, the image/video materials on the second device are analyzed, and the material data consisting of images/videos containing the user's personal privacy information, to which privacy labels (no sharing, private, etc.) have been attached, is removed to obtain the target second material data, which helps protect the user's privacy. Further, the privacy labels and privacy information can be added to the multimodal database so that, in the next round of filtering or screening, when image data and/or video data materials are automatically pulled by the second device, the materials carrying privacy labels (no sharing, private) can be filtered out automatically, preventing image/video materials containing the user's personal privacy information from being sent out unexpectedly, which helps improve the security of the user's information and the efficiency of data screening.
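The privacy filtering can be sketched as follows; the label names and the flat-dict database stand-in are assumptions introduced for illustration:

```python
# Sketch of privacy filtering before sending material to the master
# device: items carrying a privacy label ("no-share", "private", ...)
# are dropped from the outgoing set, and their labels are synced to the
# multimodal database so later automatic pulls skip them directly.
PRIVACY_LABELS = {"no-share", "private"}

def strip_private(materials, database):
    """materials: list of {'id': ..., 'labels': set(...)};
    database: dict standing in for the multimodal database."""
    sendable = []
    for item in materials:
        if item["labels"] & PRIVACY_LABELS:
            database[item["id"]] = item["labels"]   # sync privacy label
        else:
            sendable.append(item)
    return sendable

db = {}
materials = [
    {"id": "chat_screenshot", "labels": {"private"}},
    {"id": "meteor_clip",     "labels": set()},
]
target = strip_private(materials, db)
```

Only the unlabeled clip remains in the target second material data, while the privacy-labeled screenshot is recorded in the database for future filtering.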
In a possible example, the second material data and/or the target second material data is displayed in the form of thumbnails in the interface of the second device.

It can be seen that, in this example, displaying the second material data and/or the target second material data as thumbnails facilitates viewing by the user and helps improve the user experience.

In a possible example, after the material acquisition request sent by the first device is received, the method may further include the following steps: displaying the material acquisition request in the form of a pop-up box; and, in response to the user's selection operation in the pop-up box, performing the step of determining whether target multimodal information matching the multimodal information exists in the multimodal database.

The form of the pop-up box may be set by the user or by system default, without limitation here. It can provide the user with a material selection channel, so that the second device starts detecting the second material data only after the user of the second device taps to agree, which ensures consent for material detection between devices and helps improve the security of transmission.
Referring to FIG. 6, FIG. 6 is a schematic flowchart of a material data processing method provided by an embodiment of the present application, applied to a first device, the first device establishing a communication connection with at least one second device, the at least one second device being a slave device of the first device. As shown in the figure, the material data processing method includes the following operations.

S601: In response to a user-triggered preset instruction for first material data, determine multimodal information corresponding to the first material data, wherein the first material data includes at least one of image data and video data, and the preset instruction is used to instruct the first device to perform a corresponding preset operation.

S602: Send at least one material scene detection request to the at least one second device, wherein each second device corresponds to one material scene detection request, the material scene detection request includes the multimodal information, the material scene detection request is used to determine whether the corresponding second device has second material data of the same material application scene as the first device, and the second material data is determined by the corresponding second device by matching the multimodal information against a multimodal database.

S603: Receive at least one material scene detection result sent by the at least one second device, wherein each second device corresponds to one material scene detection result.

S604: If any material scene detection result indicates that the corresponding second device has second material data of the same material application scene as the first device, send the material acquisition request to the corresponding second device.

S605: Receive at least one material acquisition result sent by the at least one second device, wherein each second device corresponds to one material acquisition result.

S606: If any material acquisition result indicates that the corresponding second device has the second material data, perform the preset operation on the first material data and the second material data.

For the specific description of the above steps S601 to S606, reference may be made to the corresponding steps S401 to S404 of the material data processing method described in FIG. 4.
The material application scene may refer to the scene in which the first material data is applied, which may be an intelligent creation scene, a material data storage scene, a material data sharing scene and so on, without limitation here. The material application scene may also correspond to the preset operation corresponding to the preset instruction; for example, when the preset instruction is used to instruct the first device to perform an intelligent creation operation, the material application scene is an intelligent creation scene; when the preset instruction is used to instruct the first device to perform a storage operation, the material application scene is a material data storage scene, and so on, without limitation here.

The material scene detection request can be used by the first device to determine whether each of the at least one second device has second material data of the same material application scene as this device; for example, when the preset instruction is used to instruct the first device to perform an intelligent creation operation, the material scene detection request is used to determine whether the intelligent creation scene corresponding to the second device is consistent with that corresponding to the first device.

The material scene detection request may include multimodal information, which can be used by the second device to determine whether its corresponding material application scene is consistent with that of the first device. Specifically, when target multimodal information matching the multimodal information of the first device exists in the multimodal database corresponding to the second device, the material application scenes of the second device and the first device are determined to be consistent.

The material scene detection result may include: the second device has second material data of the same material application scene as the first device; or the second device has no second material data of the same material application scene as the first device.

The second material data is obtained by the second device by matching the multimodal information against the multimodal database; specifically, when target multimodal information matching the multimodal information exists in the multimodal database, the second device determines the material data corresponding to the target multimodal information to be the second material data.

If any material scene detection result indicates that the corresponding second device has second material data of the same material application scene as the first device, the material application scenes of the second device and the first device are determined to be consistent, and the first device can go on to send a material acquisition request to the corresponding second device, the material acquisition request being used to acquire the second material data corresponding to that second device.

Optionally, if any material scene detection result indicates that the corresponding second device has no second material data of the same material application scene as the first device, the first device does not send the subsequent material acquisition request and displays first prompt information to prompt the user that no material data of the same material application scene exists in the current at least one second device. The first device can further display second prompt information, which is used to ask the user whether to perform the preset operation based on the local first material data, or to prompt the user to reselect the first material data so as to confirm whether material data of the same material application scene as the reselected first material data exists in the at least one second device.

It can be seen that, in the material data processing method described in the embodiments of the present application, the first material data can be applied to different material application scenes, and the first device can determine, according to the material application scene, whether the subsequent preset operation needs to be performed. Considering that the second device may not have the material data the first device needs, the first device can first send a material scene detection request to the second device to determine whether the second device is in the same material application scene as the first device; when the second device has the second material data the first device needs, acquisition of the second material data is carried out only after it is determined that the two devices are in the same material application scene; when the second device has no second material data the first device needs, the material acquisition request can be terminated.

Further, considering the privacy of the user of the second device, the second device may not allow the first device to acquire the second material data; therefore, the first device can send a material acquisition request to the second device to ask whether the second device agrees to send the second material data, so as to help the first device further acquire the second material data, which helps protect the user's privacy and improve the user experience.
可选地,若上述预设操作用于指示第一设备执行智能创作操作,上述素材应用场景为智能创作场景;在响应于用户触发的针对第一素材数据的预设指令之后,上述第一设备还可以确定第一素材数据对应的智能创作场景。该智能创作场景和第一素材数据对应,也和第一素材数据对应的多模态信息对应,即上述第一素材数据可以为同一智能创作场景的图像数据和/或视频数据。并根据智能创作场景对应的第一素材数据,在多模态数据库中获取与该第一素材数据对应的多模态信息。
可选地,第一设备还可根据预设的多模态信息和智能创作场景之间的映射关系,从第一设备对应的多 模态数据库中匹配得到与该智能创作场景对应的多模态信息。
其中,上述智能创作场景可包括以下至少一种:旅途中的风景、生活中的琐碎场景、天空中的满天繁星等等,在此不作限定;该智能创作场景可用于指示用户想要创作图像或者视频所对应的场景信息,该智能创作场景可对应有主题信息,例如,针对旅途中的风景,其对应的主题信息为旅途主题,针对生活中的琐碎场景,其对应的主题信息为生活主题;该生活主题可根据第一素材数据确定。
需要说明的是,该多模态信息中包括的场景信息与智能创作场景不同,该智能创作场景是带有第一设备对应的用户主观创作想法的场景,场景信息是指的素材数据对应的单一的场景信息,不包括用户想要在该场景中创作的行为。
示例地,若智能创作场景由第一设备确定;具体实现中,如果第一素材数据包括多个图像,则可对多个图像中每一图像进行特征识别,确定每一图像对应的特征集合,得到多个特征集合,每一特征集合可包括对应图像的多个特征;第一设备可每一特征集合中包括的多个特征对上述多个特征集合进行分类,以确定每一特征集合的类别,得到每一特征集合对应的多个类别,上述类别可以包括以下至少一种:场所、人物、物品、动物、风景环境、人物或动物或物品或风景环境对应的状态等等,在此不作限定;其中,场所可以包括景点、公园、办公室、写字楼、小区等等。
进一步地,第一设备可根据预设的组合逻辑,对上述每一特征集合对应的多个类别进行信息整合,以确定该特征集合对应的组合式类别,进而可得到该特征集合对应的一个或多个组合式类别,选取多个组合式类别中层级结构最完整的组合式类别作为该特征集合对应的目标组合式类别,可得到该特征集合对应的目标组合式类别集合,该目标组合式类别集合中可包括至少一个目标组合式类别,进而可得到每一特征集合对应的每一目标组合式类别集合。
再进一步地,第一设备可将每一特征集合对应的目标组合式类别集合中的至少一个目标组合式类别整合到同一集合中,并确定出现次数最多的目标组合式类别,根据出现次数最多的目标组合式类别确定第一素材数据对应的主题信息,并根据该主题信息确定智能创作场景。
其中,第一设备可预先设定组合逻辑,可以理解为场景、人物或物品或动物或风景环境、人物或动物或物品或风景环境对应的状态等三个层级中的一个或多个层级的随机组合。其中,人物或物品或动物或风景环境属于一个层级,人物或动物或物品或风景环境对应的状态属于一个层级,且人物与人物的状态对应,物品与物品的状态对应,...,依次类推。上述层级结构可以是场景+人物,场景+人物+人物状态,人物+人物状态、物品等等,可以是一层、两层或三层。层级结构最完整的可以理解为层数最多的,例如,场景+人物或物品或动物或风景环境+人物或动物或物品或风景环境对应的状态这种三层结构。
其中,上述主题信息可以包括以下至少一种:旅途主题、生活主题、工作主题、运动主题等等,在此不作限定。上述不同的主题信息可由用户自行设定或者系统默认,在此不作限定。每一主题信息可对应有其主题范围,例如,和办公用品相、商务穿着相关的设定为工作主题,和运动类项目、运动类穿着相关的设定为运动主题等等,和旅游景点相关的设定为旅途主题等等,在此不作限定。
示例地,第一设备可根据该预设的组合逻辑对某一图像A的特征集合A对应的多个类别进行信息整合,得到的组合式类别可以是公园里的小猫在晒太阳、公园里的小孩在荡秋千、鸟儿在天空中飞、鱼儿在水中游等等,在此不作限定。如此,可将公园里的小猫在晒太阳、公园里的小孩在荡秋千作为该特征集合A对应的目标组合式类别A。
第一设备可根据该预设的组合逻辑对某一图像B的特征集合B对应的多个类别进行信息整合,得到的组合式类别可以是公园里的蝴蝶在采蜜、公园里的小孩在荡秋千、妈妈在对孩子讲话、鸟儿在天空中飞、鱼儿在水中游等等,在此不作限定。如此,可将公园里的蝴蝶在采蜜、公园里的小孩在荡秋千作为该特征集合B对应的目标组合式类别B。
进而,第一设备可选取出现次数最多的目标组合式类别为公园里的小孩儿在荡秋千,则确定该目标组合式类别对应的主题信息为生活主题,则可确定该生活主题对应的“生活中的琐碎场景”作为本次智能创作操作中的智能创作场景。
需要说明的是,针对上述步骤S401中的第一素材数据由第一设备根据上述至少一张图像确定智能创作场景的方式与上述方法相同,在此不再赘述。
需要说明的是,考虑到不同的设备(第一设备和第二设备)对于本设备内智能创作场景的定义可能有偏差,如果第二设备直接通过智能创作场景匹配到目标多模态数据,进而得到第二素材数据,该第二素材数据可能不是第一设备想要的,不利于后续的智能创作,不利于提高用户体验。
当然，如果同一组网内设备对于素材应用场景的定义相同，则上述智能创作场景也可通过素材场景检测请求发送至第二设备，第二设备对应的多模态数据库中的多模态信息可以按照素材应用场景分类，进而，当第二设备接收到包括智能创作场景的素材场景检测请求时，第二设备可根据该智能创作场景从对应的多模态数据库中确定是否存在和当前的智能创作场景匹配的目标多模态信息，若存在目标多模态信息，则可直接确定该目标多模态信息对应的第二素材数据，该第二素材数据和第一素材数据为同一智能创作场景中的素材数据。如此，有利于提高智能创作得到的目标素材数据的准确率，并不需要执行后续步骤中的多模态信息的匹配，有利于提高匹配效率。
可选地，若第一设备检测到用户选择的素材数据包括多个图像，但是可能存在多个素材数据对应的素材应用场景不同的情况，则可确定多个图像中每一图像对应的素材应用场景的目标类型，并计算每一目标类型的概率，选取最大概率对应的素材应用场景为目标素材应用场景，并确定目标素材应用场景对应的素材数据为第一素材数据，如此，在匹配得到第二素材数据时，得到的是与第一素材数据相同素材应用场景的数据。
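上述按目标类型概率选取目标素材应用场景的步骤，可用如下简化的 Python 片段示意（函数名 pick_target_scene 及场景名称均为说明用的假设）：

```python
from collections import Counter

def pick_target_scene(scene_per_image):
    """统计多个图像各自对应的素材应用场景的目标类型，
    计算每一目标类型的概率，选取最大概率对应的场景为目标素材应用场景。"""
    counter = Counter(scene_per_image)
    total = len(scene_per_image)
    scene, count = counter.most_common(1)[0]
    return scene, count / total

# 假设三张图像中有两张对应智能创作场景、一张对应存储场景
scene, prob = pick_target_scene(["智能创作场景", "智能创作场景", "存储场景"])
```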
需要说明的是,在本申请实施例中,仅针对一个素材应用场景进行说明,当然,当第一设备检测到用户选择的素材数据包括多个图像,且多个图像对应有不同的素材应用场景,即第一素材数据可能对应有多个素材应用场景,可能用户想针对多个素材应用场景进行智能创作,即第一设备需要对多个素材应用场景的素材数据进行智能创作,对于每一素材应用场景的素材数据的匹配方式上述方法也适用,在此不作赘述。
示例地，若第一设备的从设备包括多个，即包括多个第二设备时，考虑到每一第二设备中的素材应用场景可能也是不同的，也可能第二设备中也包括多个素材应用场景对应的素材数据。上述第一设备可直接从多模态数据库中选择多个图像对应的多模态信息，并将该多模态信息通过如步骤S401-S404中的素材获取请求发送至多个第二设备，或者通过步骤S601-S606中的素材场景检测请求发送至多个第二设备，由每一第二设备根据多模态信息匹配得到目标多模态信息，此时不再需要关注有几个相同的素材应用场景，若第二设备能匹配到与多模态信息匹配的目标多模态信息，则表明该第二设备中具备相同素材应用场景的数据。如此，可以从多个第二设备中分别匹配得到与第一设备的素材应用场景相同的第二素材数据，以帮助第一设备完成对于多个素材应用场景的智能创作，不需要关注素材应用场景是否相同，有利于提高智能创作效率。
请参阅图7,图7是本申请实施例提供的一种素材数据处理方法的交互示意图,应用于第二设备,所述第二设备与第一设备建立通信连接,所述第二设备为所述第一设备的从设备;如图所示,本素材数据处理方法包括以下操作。
S701、接收所述第一设备发送的素材场景检测请求,其中,所述素材场景请求中包括多模态信息。
S702、确定所述多模态数据库中是否存在与所述多模态信息匹配的目标多模态信息。
S703、若所述多模态数据库中存在与所述多模态信息匹配的目标多模态信息,则确定所述多模态信息对应的第二素材数据,并确定存在与所述第一设备相同素材应用场景的第二素材数据。
S704、向所述第一设备发送素材场景检测结果,其中,所述素材场景检测结果用于指示所述第二设备存在与所述第一设备相同素材应用场景的第二素材数据。
S705、接收所述第一设备发送的素材获取请求,其中,所述素材获取请求用于所述第一设备获取所述第二素材数据。
S706、显示提示信息,其中,所述提示信息用于指示所述用户选择发送或不发送所述第二素材数据。
S707、响应于所述用户针对所述第二素材数据的确定发送操作,向所述第一设备发送素材获取结果,其中,所述素材获取结果中包括或不包括所述第二素材数据。
需要说明的是,上述步骤S701~S707的具体描述可参照图4所描述的素材数据处理方法的步骤S401-步骤S404的对应步骤,以及图5所描述的素材处理方法的步骤S501-步骤S504的对应步骤,在此不再赘述。
其中,上述素材场景检测请求中可包括多模态信息,该多模态信息用于第二设备确定多模态数据库中是否存在与多模态信息匹配的目标多模态信息。
具体地,当多模态数据库中存在与多模态信息匹配的目标多模态信息,则可确定该目标多模态信息对应的素材数据为第二素材数据,且第二设备还可确定存在与第一设备相同素材应用场景的第二素材数据,即确定第二设备与第一设备对应的素材应用场景一致。
其中,上述素材场景检测结果包括以下任意一种:第二设备存在与第一设备相同素材应用场景的第二素材数据、第二设备不存在与第一设备相同素材应用场景的第二素材数据。
可选地，当第二设备存在与第一设备相同素材应用场景的第二素材数据，第二设备可发送素材场景检测结果，该素材场景检测结果用于指示所述第二设备存在与所述第一设备相同素材应用场景的第二素材数据，进而，第二设备可接收第一设备发送的素材获取请求，该素材获取请求用于获取第二设备对应的第二素材数据。
进一步地,当第二设备不存在与第一设备相同素材应用场景的第二素材数据,则也可向第一设备发送素材场景检测结果,该素材场景检测结果用于指示第二设备不存在与第一设备相同素材应用场景的第二素材数据,并终止后续流程。
其中,上述提示信息可为用户自行设置或者系统默认,在此不作限定;该提示信息可用于第二设备提醒用户以确定该用户是否同意发送第二素材数据到第一设备。当第二设备检测到用户针对第二素材数据的确认发送指令以后,可将目标第二素材数据发送至第一设备。
需要说明的是，上述确定多模态数据库中是否存在与多模态信息匹配的目标多模态信息的相关步骤与上述步骤S502及其对应的实施例相同，在此不再赘述。
可以看出，本申请实施例所描述的素材数据处理方法，接收所述第一设备发送的素材场景检测请求，其中，所述素材场景请求中包括多模态信息；确定所述多模态数据库中是否存在与所述多模态信息匹配的目标多模态信息；若所述多模态数据库中存在与所述多模态信息匹配的目标多模态信息，则确定所述多模态信息对应的第二素材数据，并确定存在与所述第一设备相同素材应用场景的第二素材数据；向所述第一设备发送素材场景检测结果，其中，所述素材场景检测结果用于指示所述第二设备存在与所述第一设备相同素材应用场景的第二素材数据；接收所述第一设备发送的素材获取请求，其中，所述素材获取请求用于所述第一设备获取所述第二素材数据；显示提示信息，其中，所述提示信息用于指示所述用户选择发送或不发送所述第二素材数据；响应于所述用户针对所述第二素材数据的确定发送操作，向所述第一设备发送素材获取结果，其中，所述素材获取结果中包括或不包括所述第二素材数据。通过设定两个请求（素材场景检测请求和素材获取请求）有利于优化第一设备和第二设备的交互流程，上述多模态信息用于确认第二设备对应的素材应用场景是否与第一设备中素材应用场景一致，并向第一设备发送素材场景检测结果，以提示第一设备对应的第二设备中存在或不存在相同素材应用场景的第二素材数据，以供第一设备确定后续是否需要继续获取第二素材数据，此时不需要发送第二素材数据。进一步地，当接收到素材获取请求以后，说明第一设备需要第二素材数据，此时再选择发送或者不发送第二素材数据，有利于为第二设备提供更多的选择，例如，以用于向第二设备对应的用户确认是否允许发送第二素材数据，有利于保护用户的隐私，并有利于提高用户体验。再进一步地，第二设备在确定存在相同素材应用场景的第二素材数据以后，通过素材获取请求还可以避免当第一设备不需要获取第二素材数据时，第二设备一股脑将第二素材数据发送到第一设备的无用功，既占用了带宽，也不利于保证第二设备的用户隐私。
请参阅图8,图8是本申请实施例提供的一种素材数据处理方法的交互示意图,所述第一设备与至少一个第二设备建立通信连接,所述第一设备和所述至少一个第二设备为同一个通信组网的设备,所述至少一个第二设备为所述第一设备的从设备,本申请实施例中的第二设备为所述至少一个第二设备中任意一个,如图所示,本素材数据处理方法包括以下操作。
S801、第一设备向所述至少一个第二设备发送至少一个素材场景检测请求,其中,每一第二设备对应一个素材场景检测请求,所述素材场景检测请求包括所述多模态信息,所述素材场景检测请求用于确定对应的第二设备是否存在与所述第一设备相同素材应用场景的第二素材数据,所述第二素材数据为对应的所述第二设备根据所述多模态信息匹配多模态数据库确定。
S802、第二设备接收所述第一设备发送的素材场景检测请求,其中,所述素材场景请求中包括所述多模态信息。
S803、若所述多模态数据库中存在与所述多模态信息匹配的目标多模态信息,则第二设备确定所述多模态信息对应的第二素材数据,并确定存在与所述第一设备相同素材应用场景的第二素材数据。
S804、第二设备向所述第一设备发送素材场景检测结果,其中,所述素材场景检测结果用于指示所述第二设备存在与所述第一设备相同素材应用场景的第二素材数据。
S805、第一设备接收所述至少一个第二设备发送的至少一个素材场景检测结果,其中,每一所述第二设备对应一个素材场景检测结果。
S806、若任意一个所述素材场景检测结果指示对应的第二设备存在与所述第一设备相同素材应用场景的第二素材数据,则第一设备执行向所述对应的第二设备发送所述素材获取请求的步骤,其中,所述素材获取请求用于获取所述第二素材数据。
S807、第二设备接收所述第一设备发送的素材获取请求,其中,所述素材获取请求用于所述第一设备获取所述第二素材数据。
S808、第二设备显示提示信息,其中,所述提示信息用于指示所述用户选择发送或不发送所述第二素材数据。
S809、第二设备响应于所述用户针对所述第二素材数据的确定发送操作,执行所述向所述第一设备发送素材获取结果,其中,所述素材获取结果中包括或不包括所述第二素材数据的步骤。
S810、第一设备接收所述至少一个第二设备发送的至少一个素材获取结果,其中,每一所述第二设备对应一个所述素材获取结果。
S811、若任意一个所述素材获取结果指示对应的所述第二设备存在所述第二素材数据,则第一设备对所述第一素材数据和所述第二素材数据执行所述预设操作。
可选地，上述步骤S801-步骤S811的具体描述可参照图6所描述的素材数据处理方法的步骤S601-步骤S606的对应步骤，以及图7所描述的素材数据处理方法的步骤S701-S707的对应步骤，在此不再赘述。
可以看出，本申请实施例中所描述的素材数据处理方法，第一设备向所述至少一个第二设备发送至少一个素材场景检测请求，其中，每一第二设备对应一个素材场景检测请求，所述素材场景检测请求包括所述多模态信息，所述素材场景检测请求用于确定对应的第二设备是否存在与所述第一设备相同素材应用场景的第二素材数据，所述第二素材数据为对应的所述第二设备根据所述多模态信息匹配多模态数据库确定；第二设备接收所述第一设备发送的素材场景检测请求，其中，所述素材场景请求中包括所述多模态信息；若所述多模态数据库中存在与所述多模态信息匹配的目标多模态信息，则第二设备确定所述多模态信息对应的第二素材数据，并确定存在与所述第一设备相同素材应用场景的第二素材数据；第二设备向所述第一设备发送素材场景检测结果，其中，所述素材场景检测结果用于指示所述第二设备存在与所述第一设备相同素材应用场景的第二素材数据；第一设备接收所述至少一个第二设备发送的至少一个素材场景检测结果，其中，每一所述第二设备对应一个素材场景检测结果；若任意一个所述素材场景检测结果指示对应的第二设备存在与所述第一设备相同素材应用场景的第二素材数据，则第一设备执行向所述对应的第二设备发送所述素材获取请求的步骤，其中，所述素材获取请求用于获取所述第二素材数据；第二设备接收所述第一设备发送的素材获取请求，其中，所述素材获取请求用于所述第一设备获取所述第二素材数据；第二设备显示提示信息，其中，所述提示信息用于指示所述用户选择发送或不发送所述第二素材数据；第二设备响应于所述用户针对所述第二素材数据的确定发送操作，执行所述向所述第一设备发送素材获取结果，其中，所述素材获取结果中包括或不包括所述第二素材数据的步骤；第一设备接收所述至少一个第二设备发送的至少一个素材获取结果，其中，每一所述第二设备对应一个所述素材获取结果；若任意一个所述素材获取结果指示对应的所述第二设备存在所述第二素材数据，则第一设备对所述第一素材数据和所述第二素材数据执行所述预设操作。可自动适配与该第一设备在同一组网的其他的第二设备的第二素材数据，并将第一素材数据和第二素材数据进行预设操作，有利于提高选择效率，并能够保证由第一素材数据和第二素材数据处理以后的数据的用户满意度，有利于提高用户体验。且第二设备可在确定与多模态信息匹配的目标多模态信息的过程中完成对于素材数据的优选，以得到第二素材数据，如此，可实现多设备之间自动共享优选的素材数据，用户不需要手动挑选，免去设备间素材互传的繁琐流程，有利于提高用户体验。
在一种可能的示例中，如图9A、图9B所示，分别为一种智能创作方法的场景示意图，如图9A所示，第一设备为主设备，用户可以在第一设备的UI界面或者显示桌面中通过点击“智能创作”模块触发智能创作指令，进而，第一设备可响应于智能创作指令，确定用户选择的第一素材数据，并确定第一素材数据对应的多模态信息，并向组网内的至少一个第二设备分别发送素材场景检测请求，该素材场景检测请求中包括第一素材数据对应的多模态信息。
如图9B所示，第一设备可接收到第二设备发送的素材场景检测结果，且在素材场景检测结果指示第二设备存在与所述第一设备相同智能创作场景的第二素材数据之后，弹出对话框，显示【检测其他组网设备上存在相似场景素材，是否需要获取】等字样，并响应于用户在该显示桌面的确认操作，向对应的第二设备发送素材获取请求，以获取第二设备中的第二素材数据。当然，如果同一组网内的第二设备不存在与该第一素材数据相同智能创作场景的第二素材数据，第一设备也可接收到第二设备发送的素材场景检测结果，且该素材场景检测结果指示第二设备不存在与智能创作场景对应的第二素材数据，同时，第一设备弹出对话框【检测其他组网设备上不存在相似场景素材】的字样。
如图9C~图9E所示，分别为一种智能创作方法的场景示意图，与图9B相对应的，如图9C所示，在第一设备发送素材场景检测请求以后，第二设备收到第一设备发送的素材场景检测请求，若同一组网内的第二设备的多模态数据库中存在与多模态信息匹配的目标多模态信息，即第二设备存在与第一设备相同智能创作场景的第二素材数据，则向第一设备发送素材场景检测结果，该素材场景检测结果用于指示第二设备存在与第一设备相同素材应用场景的第二素材数据。
进一步地,第二设备可在接收到第一设备发送的素材获取请求以后,第二设备可弹出对话框,包括【其他组网设备请求获取素材】字样,响应于用户在图9C中的点击操作,如图9D所示,弹出对话框,并显示【是否同意其他组网设备获取素材的请求?】,在第二设备接收到用户选择的确认选择以后,向第一设备发送素材获取结果,该素材获取结果可用于指示第二设备存在与第一设备相同智能创作场景对应的第二素材数据,该素材获取结果中包括第二素材数据。
当然,当第二设备检测到不存在第二素材数据时,不弹出如图9C所示的包括【其他组网设备请求获取素材】的对话框,直接向第一设备反馈素材场景检测结果,该素材场景检测结果可用于指示第二设备不存在与第一设备智能创作场景对应的第二素材数据。
进一步地,若同一组网内的第二设备多模态数据库中存在与多模态信息匹配的目标多模态信息,即第二设备存在与第一设备相同智能创作场景的第二素材数据,向第一设备发送素材场景检测结果。第二设备可在接收到第一设备发送的素材获取请求以后,第二设备可弹出对话框,包括【其他组网设备请求获取素材】字样,响应于用户在图9C中的点击操作,如图9E所示,弹出对话框,并显示【是否同意其他组网设备获取素材的请求?】,在第二设备接收到用户选择的取消选择以后,发送素材获取结果,该素材获取结果用于指示第二设备不存在与第一素材数据对应的第二素材数据。
如图9F-图9G所示，为第一设备的场景示意图，第一设备可接收至少一个第二设备发送的第二素材数据，并对一个或多个第二素材数据和第一素材数据执行智能创作操作，并在显示界面中显示对话框，包括【正在创作中】；若第二设备未接收到用户的取消选择，则可得到如图9G所示的目标素材数据，例如，可以是高光片段。
请参阅图10，图10是本申请实施例提供的一种电子设备的结构示意图，如图所示，该电子设备包括处理器、存储器、通信接口以及一个或多个程序，该电子设备可为第一设备和/或第二设备。
可选地,若电子设备为第一设备,所述第一设备与至少一个第二设备建立通信连接,所述至少一个第二设备为所述第一设备的从设备;其中,上述一个或多个程序被存储在上述存储器中,上述一个或多个程序被配置由上述处理器执行以下步骤的指令:
响应于用户触发的针对第一素材数据的预设指令,确定所述第一素材数据对应的多模态信息,其中,所述第一素材数据包括以下至少一种:图像数据和视频数据,所述预设指令用于指示所述第一设备执行对应的预设操作;
向所述至少一个第二设备发送素材获取请求,其中,所述素材获取请求包括所述多模态信息,所述多模态信息用于所述第二设备筛选与所述多模态信息匹配的目标多模态信息,所述目标多模态信息与第二素材数据对应,所述第二素材数据为对应的所述第二设备根据所述多模态信息确定;
接收所述至少一个第二设备发送的至少一个素材获取结果,其中,每一所述第二设备对应一个所述素材获取结果;
若任意一个所述素材获取结果指示对应的所述第二设备存在所述第二素材数据,则对所述第一素材数据和所述第二素材数据执行所述预设操作。
可以看出,本申请实施例中所描述的电子设备,响应于用户触发的针对第一素材数据的预设指令,确定所述第一素材数据对应的多模态信息,其中,所述第一素材数据包括以下至少一种:图像数据和视频数据,所述预设指令用于指示所述第一设备执行对应的预设操作;向所述至少一个第二设备发送素材获取请求,其中,所述素材获取请求包括所述多模态信息,所述多模态信息用于所述第二设备筛选与所述多模态信息匹配的目标多模态信息,所述目标多模态信息与第二素材数据对应,所述第二素材数据为对应的所述第二设备根据所述多模态信息确定;接收所述至少一个第二设备发送的至少一个素材获取结果,其中,每一所述第二设备对应一个所述素材获取结果;若任意一个所述素材获取结果指示对应的所述第二设备存在所述第二素材数据,则对所述第一素材数据和所述第二素材数据执行所述预设操作。如此,可通过多模态信息的匹配,实现从设备(第二设备)的第二素材数据的自动适配;且得到的第二素材数据和第一素材数据对应,并将第一素材数据和第二素材数据进行预设操作,不需要用户手动二次编辑,有利于提高选择效率,并能够保证得到的素材数据的高满意度,有利于提高用户体验。
在一个可能的示例中,所述多模态信息包括以下至少一种:GPS信息、人脸信息、场景信息、主体信息、美学评价信息和高光片段信息;所述预设操作包括以下至少一种:存储操作、智能创作操作,其中,所述智能创作操作包括以下至少一种:裁剪操作、特效美化操作、合成编辑操作。
可选地,若电子设备为第二设备,所述第二设备与第一设备建立通信连接,所述第二设备为所述第一设备的从设备;其中,上述一个或多个程序被存储在上述存储器中,上述一个或多个程序被配置由上述处理器执行以下步骤的指令:
接收所述第一设备发送的素材获取请求,其中,所述素材获取请求包括多模态信息,所述多模态信息为所述第一设备根据第一素材数据确定;
确定多模态数据库中是否存在与所述多模态信息匹配的目标多模态信息;
若所述多模态数据库中存在所述目标多模态信息,则确定所述多模态信息对应的第二素材数据;
向所述第一设备发送素材获取结果,其中,所述素材获取结果中包括或不包括所述第二素材数据。
可以看出,本申请实施例中所描述的电子设备,接收所述第一设备发送的素材获取请求,其中,所述素材获取请求包括多模态信息,所述多模态信息为所述第一设备根据第一素材数据确定;确定多模态数据库中是否存在与所述多模态信息匹配的目标多模态信息;若所述多模态数据库中存在所述目标多模态信息,则确定所述多模态信息对应的第二素材数据;向所述第一设备发送素材获取结果,其中,所述素材获取结果中包括或不包括所述第二素材数据。如此,第二设备可接收同一组网中第一设备发送的多模态信息,并根据该多模态信息匹配得到与第一素材数据对应的第二素材数据,有利于为第一设备的预设操作提供数据参考和数据支持,并有利于提高第一设备完成预设操作的满意度。
在一个可能的示例中,在所述向所述第一设备发送素材获取结果,其中,所述素材获取结果中包括或不包括所述第二素材数据之前,上述程序包括用于执行以下步骤的指令:
接收所述第一设备发送的素材场景检测请求,其中,所述素材场景请求中包括所述多模态信息;
确定所述多模态数据库中是否存在与所述多模态信息匹配的目标多模态信息;
若所述多模态数据库中存在与所述多模态信息匹配的目标多模态信息,则确定所述多模态信息对应的第二素材数据,并确定存在与所述第一设备相同素材应用场景的第二素材数据;
向所述第一设备发送素材场景检测结果,其中,所述素材场景检测结果用于指示所述第二设备存在与所述第一设备相同素材应用场景的第二素材数据;
接收所述第一设备发送的素材获取请求,其中,所述素材获取请求用于所述第一设备获取所述第二素材数据;
显示提示信息,其中,所述提示信息用于指示所述用户选择发送或不发送所述第二素材数据;
响应于所述用户针对所述第二素材数据的确定发送操作,执行所述向所述第一设备发送素材获取结果,其中,所述素材获取结果中包括或不包括所述第二素材数据的步骤。
在一个可能的示例中,所述多模态信息包括多个目标第一子模态信息,所述多模态数据库包括多个第二子模态信息,所述目标多模态信息包括多个目标第二子模态信息,所述第二子模态信息或者目标第一子模态信息或者所述目标第二子模态信息包括以下任意一种:GPS信息、人脸信息、场景信息、主体信息、美学评价信息和高光片段信息,任意一个所述目标第二子模态信息存在与其对应的所述目标第一子模态信息;
在所述确定多模态数据库中是否存在与所述多模态信息匹配的目标多模态信息方面,上述程序包括用于执行以下步骤的指令:
根据预设的所述第二子模态信息和匹配逻辑之间的映射关系,确定每一所述第二子模态信息对应的匹配逻辑;
根据预设的所述第二子模态信息和优先级之间的映射关系,确定每一所述第二子模态信息对应的优先级;
根据所述每一所述第二子模态信息对应的优先级,确定所述多个第二子模态信息的筛选顺序;
根据所述筛选顺序和所述每一第二子模态信息对应的匹配逻辑,对所述每一第二子模态信息进行筛选,得到所述目标第二子模态信息;
若任意一个所述第二子模态信息筛选得到对应的目标第二子模态信息,则确定所述多模态数据库中存在与所述多模态信息匹配的目标多模态信息;
若任意一个所述第二子模态信息未筛选得到对应的目标第二子模态信息,则确定所述多模态数据库中不存在与所述多模态信息匹配的所述目标多模态信息。
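上述“按优先级确定筛选顺序、再按各自的匹配逻辑逐一筛选各第二子模态信息”的过程，可用如下 Python 片段示意（其中 PRIORITY 映射与各匹配逻辑均为说明用的假设，实际的匹配逻辑以各子模态信息对应的实施例为准）：

```python
# 假设的“子模态信息种类 -> 优先级”映射（数值越小优先级越高，仅作说明）
PRIORITY = {"GPS信息": 0, "人脸信息": 1, "场景信息": 2}

def screen_submodal(second_submodal, matchers):
    """按优先级排序得到筛选顺序，依次应用每一种类对应的匹配逻辑；
    任一子模态信息筛选得到目标第二子模态信息，即认为存在目标多模态信息。"""
    order = sorted(second_submodal, key=lambda kind: PRIORITY.get(kind, 99))
    results = {}
    for kind in order:
        matched = matchers[kind](second_submodal[kind])
        if matched:
            results[kind] = matched
    return results, bool(results)

# 假设的匹配逻辑：GPS按区间匹配、人脸按人物档案匹配（仅作说明）
matchers = {
    "GPS信息": lambda xs: [x for x in xs if abs(x - 116.0) < 0.5],
    "人脸信息": lambda xs: [x for x in xs if x == "档案A"],
}
res, found = screen_submodal(
    {"人脸信息": ["档案A", "档案B"], "GPS信息": [116.2, 120.0]}, matchers
)
```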
在一个可能的示例中,若所述第二子模态信息和所述目标第一子模态信息为GPS信息,所述目标第一子模态信息为第一GPS信息,所述目标第二子模态信息为目标第二GPS信息,所述第二子模态信息包括多个第二GPS信息,所述目标第二GPS信息为所述多个第二GPS信息中任意一个;
在对所述每一第二子模态信息进行筛选,得到所述目标第二子模态信息方面,上述程序包括用于执行以下步骤的指令:
从所述多个第二GPS信息中选取与所述第一GPS信息处于同一预设区间的目标第二GPS信息为所述目标第二子模态信息;或者,
确定每一所述第二GPS信息对应的第二GPS精度和所述第一GPS信息的第一GPS精度;
若所述第一GPS精度和/或任意一个所述第二GPS精度大于预设精度阈值,则确定所述第一GPS精度和所述第二GPS精度之间的信息差值;
若所述信息差值小于或等于预设信息差值,则将所述第二GPS精度对应的目标第二GPS信息作为所述目标第二子模态信息;
若所述第一GPS精度和/或任意一个所述第二GPS精度小于或等于所述预设精度阈值,则执行所述从所述多个第二GPS信息中选取与所述第一GPS信息处于同一预设区间的目标第二GPS信息为所述目标第二子模态信息的步骤。
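上述GPS信息的两种匹配方式（按同一预设区间匹配，或在精度大于预设精度阈值时按精度差值匹配）可用如下 Python 片段示意（函数名及 acc_threshold、interval、diff_threshold 等数值均为说明用的假设）：

```python
def match_gps(first_gps, second_gps_list, first_acc, second_acc_list,
              acc_threshold=5.0, interval=1.0, diff_threshold=2.0):
    """若第一GPS精度和/或某一第二GPS精度大于预设精度阈值，
    则按精度差值匹配；否则按是否处于同一预设区间匹配。"""
    targets = []
    for gps, acc in zip(second_gps_list, second_acc_list):
        if first_acc > acc_threshold or acc > acc_threshold:
            # 精度差值小于或等于预设信息差值时，视为目标第二GPS信息
            if abs(first_acc - acc) <= diff_threshold:
                targets.append(gps)
        elif abs(gps - first_gps) <= interval:
            # 与第一GPS信息处于同一预设区间
            targets.append(gps)
    return targets

interval_match = match_gps(116.0, [116.5, 120.0], 3.0, [3.5, 3.2])
precision_match = match_gps(116.0, [130.0], 8.0, [7.0])
```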
在一个可能的示例中,若所述第二子模态信息和所述目标第一子模态信息均为人脸信息,所述目标第一子模态信息为第一人脸信息,所述目标第二子模态信息为目标第二人脸信息,所述第二子模态信息包括多个第二人脸信息,所述目标第二人脸信息为所述多个第二人脸信息中任意一个;
在对所述每一第二子模态信息进行筛选,得到所述目标第二子模态信息方面,上述程序包括用于执行以下步骤的指令:
确定所述第一人脸信息对应的人物档案;
从所述多个第二人脸信息中选取与所述人物档案匹配的目标第二人脸信息作为所述目标第二子模态信息。
在一个可能的示例中,若所述第二子模态信息和所述目标第一子模态信息均为场景信息,所述场景信息包括以下至少一种:时间、季节、天气、节日、空间。
在一个可能的示例中,若所述第二子模态信息或者所述目标第一子模态信息为主体信息,所述目标第一子模态信息为第一主体信息,所述目标第二子模态信息为目标第二主体信息,所述第二子模态信息包括多个第二主体信息,所述目标第二主体信息为所述多个第二主体信息中任意一个;
在对所述每一第二子模态信息进行筛选,得到所述目标第二子模态信息方面,上述程序包括用于执行以下步骤的指令:
确定所述第一主体信息对应的多个第一标签信息，其中，所述第一标签信息用于表征所述第一素材数据中所述第一主体信息的种类；
确定任意一个所述第二主体信息对应的多个第二标签信息,其中,所述第二标签信息用于表征对应的第二主体信息的种类;
将所述多个第一标签信息与所述多个第二标签信息进行匹配,得到多个匹配率,其中,每一匹配率对应一个第一标签信息;
确定所述多个匹配率中大于预设匹配率的匹配数量;
若所述匹配数量大于预设匹配数量,则确定所述第二主体信息为所述目标第二主体信息,将所述目标第二主体信息作为所述目标第二子模态信息。
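上述标签匹配率的计算与匹配数量的判定可用如下 Python 片段示意（其中基于字符重合度的匹配率计算、rate_threshold 与 count_threshold 均为说明用的假设，实际可采用语义相似度等方式计算匹配率）：

```python
def match_subject(first_tags, second_tags, rate_threshold=0.5, count_threshold=1):
    """将每一第一标签信息与多个第二标签信息匹配得到匹配率，
    统计大于预设匹配率的匹配数量，数量大于预设匹配数量时判定主体匹配。"""
    def rate(tag):
        # 示意性的匹配率：取与第二标签集合中字符重合度（Jaccard）最高者
        return max((len(set(tag) & set(t)) / max(len(set(tag) | set(t)), 1)
                    for t in second_tags), default=0.0)
    rates = [rate(t) for t in first_tags]
    matched = sum(1 for r in rates if r > rate_threshold)
    return matched > count_threshold, rates

ok, rates = match_subject(["小猫", "小狗"], ["小猫", "小狗", "小鸟"])
```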
在一个可能的示例中,若所述第二子模态信息和所述目标第一子模态信息均为美学评价信息,所述目标第一子模态信息为第一美学评价信息,所述第一美学评价信息包括第一美学评价分数,所述目标第二子模态信息为目标第二美学评价信息,所述第二子模态信息包括多个第二美学评价信息,所述目标第二美学评价信息为所述多个第二美学评价信息中任意一个;
在所述对所述每一第二子模态信息进行筛选,得到所述目标第二子模态信息方面,上述程序包括用于执行以下步骤的指令:
若所述第一素材数据包括目标图像帧,则将所述目标图像帧对应的美学评价分数作为所述第一美学评价分数;
若所述第一素材数据包括目标视频数据,则确定所述目标视频数据中包括的多个图像帧对应的美学评价分数的平均值,将所述平均值作为所述第一美学评价分数;
确定所述多个第二美学评价信息分别对应的第二美学评价分数,得到多个第二美学评价分数;
从所述多个第二美学评价分数中选择大于或等于所述第一美学评价分数的目标第二美学评价分数,并将所述目标第二美学评价分数作为所述目标第二子模态信息。
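上述第一美学评价分数的确定（图像帧直接取其分数、视频取多个图像帧分数的平均值），以及目标第二美学评价分数的选取，可用如下 Python 片段示意（此处以分数列表代表视频数据、以单分数列表代表图像帧，函数名均为说明用的假设）：

```python
def first_score(material):
    """第一素材数据为目标图像帧时直接取其美学评价分数；
    为目标视频数据时取其中多个图像帧分数的平均值。"""
    frames = material if isinstance(material, list) else [material]
    return sum(frames) / len(frames)

def select_second_scores(material, second_scores):
    """从多个第二美学评价分数中选择大于或等于第一美学评价分数者。"""
    base = first_score(material)
    return [s for s in second_scores if s >= base]

base = first_score([80, 90, 100])            # 视频数据：取平均值
selected = select_second_scores([80, 90, 100], [85, 95, 90])
```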
在一个可能的示例中,上述程序还包括用于执行以下步骤的指令:
从所述第二素材数据中筛选出所述目标多模态信息中的高光片段信息;
将所述高光片段信息对应的目标第二素材数据发送于所述第一设备。
在一个可能的示例中,若所述第二素材数据包括视频数据,所述高光片段信息包括高光片段;
在所述从所述第二素材数据中筛选出所述目标多模态信息中的高光片段信息方面,上述程序包括用于执行以下步骤的指令:
确定所述视频数据中每一帧视频帧对应的目标美学评价分数;
选取所述目标美学评价分数大于或等于预设分数值的视频帧作为目标视频帧,得到多个目标视频帧;
将多个所述目标视频帧组合成目标视频,并将所述目标视频作为所述高光片段。
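上述高光片段的组合逻辑可用如下 Python 片段示意（此处以帧序号列表表示组合后的目标视频，预设分数值 85 为说明用的假设）：

```python
def build_highlight(frame_scores, threshold=85):
    """选取目标美学评价分数大于或等于预设分数值的视频帧，
    按原顺序组合为目标视频（以帧序号列表示意），作为高光片段。"""
    return [i for i, score in enumerate(frame_scores) if score >= threshold]

clip = build_highlight([60, 90, 88, 70, 95])
```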
在一个可能的示例中,在所述确定所述视频数据中每一帧视频帧对应的目标美学评价分数方面,上述程序包括用于执行以下步骤的指令:
获取预设的低质量图像帧的质量评价指标和每一所述质量评价指标对应的质量评价参数;
根据所述质量评价指标,确定所述每一视频帧图像对应的目标评价参数;
比较每一所述目标评价参数和所述质量评价参数;
若存在任意一个目标评价参数和质量评价参数一致,则删除所述目标评价参数对应的视频帧图像,得到除所述视频帧图像以外的多个第一视频帧;
获取预设的美学评价指标和每一所述美学评价指标对应的美学评价参数;
根据所述美学评价指标和每一所述美学评价指标对应的美学评价参数,对所述多个第一视频帧进行美学评价,得到每一第一视频帧对应的目标美学评价分数。
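上述“先按质量评价指标剔除低质量视频帧、再对其余第一视频帧进行美学评价”的流程可用如下 Python 片段示意（帧的字典表示方式与美学评价函数均为说明用的假设）：

```python
def score_frames(frames, low_quality_params, aesthetic_fn):
    """若某一视频帧的目标评价参数与低质量的质量评价参数一致则删除该帧，
    对剩余的第一视频帧逐一进行美学评价，得到目标美学评价分数。"""
    first_frames = [f for f in frames if f["quality"] not in low_quality_params]
    return [(f["id"], aesthetic_fn(f)) for f in first_frames]

scores = score_frames(
    [{"id": 0, "quality": "模糊"}, {"id": 1, "quality": "清晰"}],
    low_quality_params={"模糊", "过曝"},
    aesthetic_fn=lambda f: 90,  # 示意性的美学评价函数（假设）
)
```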
在一个可能的示例中,上述程序还包括用于执行以下步骤的指令:
确定隐私信息;
对所述第二素材数据进行分析,删除与所述隐私信息相关的素材数据,得到目标第二素材数据;
将所述目标第二素材数据发送于所述第一设备;
确定所述隐私信息对应的隐私标签,并将所述隐私标签同步到所述多模态数据库以对其中包括的多模态信息进行隐私设置。
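上述删除与隐私信息相关的素材数据、得到目标第二素材数据并同步隐私标签的步骤，可用如下 Python 片段示意（素材数据以带标签的字典表示，仅为说明用的假设）：

```python
def filter_privacy(second_materials, privacy_tags):
    """删除与隐私信息相关的素材数据得到目标第二素材数据，
    并返回需同步到多模态数据库的隐私标签。"""
    target = [m for m in second_materials if not (set(m["tags"]) & privacy_tags)]
    return target, privacy_tags

target, tags = filter_privacy(
    [{"id": "a", "tags": ["人脸"]}, {"id": "b", "tags": ["风景"]}],
    privacy_tags={"人脸"},
)
```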
在一个可能的示例中,所述第二素材数据和/或所述目标第二素材数据以缩略图的形式展示于所述第二设备界面中。
在一个可能的示例中,在所述接收所述第一设备发送的素材获取请求之后,上述程序还包括用于执行以下步骤的指令:
以弹框的方式显示所述素材获取请求;
响应于用户在所述弹框中的选择操作,执行所述确定多模态数据库中是否存在与所述多模态信息匹配的目标多模态信息的步骤。
可选地，若电子设备为第一设备，所述第一设备与至少一个第二设备建立通信连接，所述至少一个第二设备为所述第一设备的从设备；其中，上述一个或多个程序被存储在上述存储器中，上述一个或多个程序被配置由上述处理器执行以下步骤的指令：
响应于用户触发的针对第一素材数据的预设指令,确定所述第一素材数据对应的多模态信息,其中,所述第一素材数据包括以下至少一种:图像数据和视频数据,所述预设指令用于指示所述第一设备执行对应的预设操作;
向所述至少一个第二设备发送至少一个素材场景检测请求,其中,每一第二设备对应一个素材场景检测请求,所述素材场景检测请求包括所述多模态信息,所述素材场景检测请求用于确定对应的第二设备是否存在与所述第一设备相同素材应用场景的第二素材数据,所述第二素材数据为对应的所述第二设备根据所述多模态信息匹配多模态数据库确定;
接收所述至少一个第二设备发送的至少一个素材场景检测结果,其中,每一所述第二设备对应一个素材场景检测结果;
若任意一个所述素材场景检测结果指示对应的第二设备存在与所述第一设备相同素材应用场景的第二素材数据,则向所述对应的第二设备发送所述素材获取请求;
接收所述至少一个第二设备发送的至少一个素材获取结果,其中,每一所述第二设备对应一个所述素材获取结果;
若任意一个所述素材获取结果指示对应的所述第二设备存在所述第二素材数据,则对所述第一素材数据和所述第二素材数据执行所述预设操作。
可以看出,本申请实施例中所描述的电子设备,响应于用户触发的针对第一素材数据的预设指令,确定所述第一素材数据对应的多模态信息,其中,所述第一素材数据包括以下至少一种:图像数据和视频数据,所述预设指令用于指示所述第一设备执行对应的预设操作;向所述至少一个第二设备发送至少一个素材场景检测请求,其中,每一第二设备对应一个素材场景检测请求,所述素材场景检测请求包括所述多模态信息,所述素材场景检测请求用于确定对应的第二设备是否存在与所述第一设备相同素材应用场景的第二素材数据,所述第二素材数据为对应的所述第二设备根据所述多模态信息匹配多模态数据库确定;接收所述至少一个第二设备发送的至少一个素材场景检测结果,其中,每一所述第二设备对应一个素材场景检测结果;若任意一个所述素材场景检测结果指示对应的第二设备存在与所述第一设备相同素材应用场景的第二素材数据,则向所述对应的第二设备发送所述素材获取请求;接收所述至少一个第二设备发送的至少一个素材获取结果,其中,每一所述第二设备对应一个所述素材获取结果;若任意一个所述素材获取结果指示对应的所述第二设备存在所述第二素材数据,则对所述第一素材数据和所述第二素材数据执行所述预设操作。上述第一素材数据可应用于不同的素材应用场景,第一设备可根据该素材应用场景确定是否需要执行后续的预设操作。考虑到第二设备中可能不存在第一设备需要的素材数据,因此,第一设备可首先向第二设备发送素材场景检测请求,以确定第二设备是否与第一设备处于同一素材应用场景,在第二设备存在第一设备需要的第二素材数据时,确定第二设备与第一设备处于同一素材应用场景以后再实现第二素材数据的获取,当第二设备不存在第一设备需要的第二素材数据以后,可终止上述素材获取请求。进一步地,考虑到第二设备对应的用户的隐私性,第二设备可能不允许第一设备获取第二素材数据,因此,第一设备可以向第二设备发送素材获取请求,以询问第二设备是否同意发送该第二素材数据,以帮助第一设备进一步获取第二素材数据,如此,有利于保护用户的隐私,并有利于提高用户体验。
可选地,若电子设备为第二设备,所述第二设备与第一设备建立通信连接,所述第二设备为所述第一设备的从设备;其中,上述一个或多个程序被存储在上述存储器中,上述一个或多个程序被配置由上述处理器执行以下步骤的指令:
接收所述第一设备发送的素材场景检测请求,其中,所述素材场景请求中包括多模态信息;
确定所述多模态数据库中是否存在与所述多模态信息匹配的目标多模态信息;
若所述多模态数据库中存在与所述多模态信息匹配的目标多模态信息,则确定所述多模态信息对应的第二素材数据,并确定存在与所述第一设备相同素材应用场景的第二素材数据;
向所述第一设备发送素材场景检测结果,其中,所述素材场景检测结果用于指示所述第二设备存在与所述第一设备相同素材应用场景的第二素材数据;
接收所述第一设备发送的素材获取请求,其中,所述素材获取请求用于所述第一设备获取所述第二素材数据;
显示提示信息,其中,所述提示信息用于指示所述用户选择发送或不发送所述第二素材数据;
响应于所述用户针对所述第二素材数据的确定发送操作,向所述第一设备发送素材获取结果,其中,所述素材获取结果中包括或不包括所述第二素材数据。
可以看出，本申请实施例中所描述的电子设备，接收所述第一设备发送的素材场景检测请求，其中，所述素材场景请求中包括多模态信息；确定所述多模态数据库中是否存在与所述多模态信息匹配的目标多模态信息；若所述多模态数据库中存在与所述多模态信息匹配的目标多模态信息，则确定所述多模态信息对应的第二素材数据，并确定存在与所述第一设备相同素材应用场景的第二素材数据；向所述第一设备发送素材场景检测结果，其中，所述素材场景检测结果用于指示所述第二设备存在与所述第一设备相同素材应用场景的第二素材数据；接收所述第一设备发送的素材获取请求，其中，所述素材获取请求用于所述第一设备获取所述第二素材数据；显示提示信息，其中，所述提示信息用于指示所述用户选择发送或不发送所述第二素材数据；响应于所述用户针对所述第二素材数据的确定发送操作，向所述第一设备发送素材获取结果，其中，所述素材获取结果中包括或不包括所述第二素材数据。通过设定两个请求（素材场景检测请求和素材获取请求）有利于优化第一设备和第二设备的交互流程，上述多模态信息用于确认第二设备对应的素材应用场景是否与第一设备中素材应用场景一致，并向第一设备发送素材场景检测结果，以提示第一设备对应的第二设备中存在或不存在相同素材应用场景的第二素材数据，以供第一设备确定后续是否需要继续获取第二素材数据，此时不需要发送第二素材数据。进一步地，当接收到素材获取请求以后，说明第一设备需要第二素材数据，此时再选择发送或者不发送第二素材数据，有利于为第二设备提供更多的选择，例如，以用于向第二设备对应的用户确认是否允许发送第二素材数据，有利于保护用户的隐私，并有利于提高用户体验。再进一步地，第二设备在确定存在相同素材应用场景的第二素材数据以后，通过素材获取请求还可以避免当第一设备不需要获取第二素材数据时，第二设备一股脑将第二素材数据发送到第一设备的无用功，既占用了带宽，也不利于保证第二设备的用户隐私。
上述主要从方法侧执行过程的角度对本申请实施例的方案进行了介绍。可以理解的是，电子设备为了实现上述功能，其包含了执行各个功能相应的硬件结构和/或软件模块。本领域技术人员应该很容易意识到，结合本文中所提供的实施例描述的各示例的单元及算法步骤，本申请能够以硬件或硬件和计算机软件的结合形式来实现。某个功能究竟以硬件还是计算机软件驱动硬件的方式来执行，取决于技术方案的特定应用和设计约束条件。专业技术人员可以对每个特定的应用使用不同方法来实现所描述的功能，但是这种实现不应认为超出本申请的范围。
本申请实施例可以根据上述方法示例对电子设备进行功能单元的划分,例如,可以对应各个功能划分各个功能单元,也可以将两个或两个以上的功能集成在一个处理单元中。上述集成的单元既可以采用硬件的形式实现,也可以采用软件功能单元的形式实现。需要说明的是,本申请实施例中对单元的划分是示意性的,仅仅为一种逻辑功能划分,实际实现时可以有另外的划分方式。
在采用对应各个功能划分各个功能模块的情况下，图11示出了素材数据处理装置的示意图，如图11所示，所述装置应用于第一设备，所述第一设备与至少一个第二设备建立通信连接，所述至少一个第二设备为所述第一设备的从设备；该素材数据处理装置1100可以包括：确定单元1101、发送单元1102、接收单元1103和执行单元1104，其中，
所述确定单元1101,用于响应于用户触发的针对第一素材数据的预设指令,确定所述第一素材数据对应的多模态信息,其中,所述第一素材数据包括以下至少一种:图像数据和视频数据,所述预设指令用于指示所述第一设备执行对应的预设操作;
所述发送单元1102,用于向所述至少一个第二设备发送素材获取请求,其中,所述素材获取请求包括所述多模态信息,所述多模态信息用于所述第二设备筛选与所述多模态信息匹配的目标多模态信息,所述目标多模态信息与第二素材数据对应,所述第二素材数据为对应的所述第二设备根据所述多模态信息确定;
所述接收单元1103,用于接收所述至少一个第二设备发送的至少一个素材获取结果,其中,每一所述第二设备对应一个所述素材获取结果;
所述执行单元1104,用于若任意一个所述素材获取结果指示对应的所述第二设备存在所述第二素材数据,则对所述第一素材数据和所述第二素材数据执行所述预设操作。
可以看出,本申请实施例中所描述的素材数据处理装置,响应于用户触发的针对第一素材数据的预设指令,确定所述第一素材数据对应的多模态信息,其中,所述第一素材数据包括以下至少一种:图像数据和视频数据,所述预设指令用于指示所述第一设备执行对应的预设操作;向所述至少一个第二设备发送素材获取请求,其中,所述素材获取请求包括所述多模态信息,所述多模态信息用于所述第二设备筛选与所述多模态信息匹配的目标多模态信息,所述目标多模态信息与第二素材数据对应,所述第二素材数据为对应的所述第二设备根据所述多模态信息确定;接收所述至少一个第二设备发送的至少一个素材获取结果,其中,每一所述第二设备对应一个所述素材获取结果;若任意一个所述素材获取结果指示对应的所述第二设备存在所述第二素材数据,则对所述第一素材数据和所述第二素材数据执行所述预设操作。如此,可通过多模态信息的匹配,实现从设备(第二设备)的第二素材数据的自动适配;且得到的第二素材数据和第一素材数据对应,并将第一素材数据和第二素材数据进行预设操作,不需要用户手动二次编辑,有利于提高选择效率,并能够保证得到的素材数据的高满意度,有利于提高用户体验。
在一个可能的示例中,在所述向所述至少一个第二设备发送素材获取请求之前,上述执行单元1104,还用于:
向所述至少一个第二设备发送至少一个素材场景检测请求，其中，每一第二设备对应一个素材场景检测请求，所述素材场景检测请求包括所述多模态信息，所述素材场景检测请求用于确定对应的第二设备是否存在与所述第一设备相同素材应用场景的第二素材数据，所述第二素材数据为对应的所述第二设备根据所述多模态信息匹配多模态数据库确定；
接收所述至少一个第二设备发送的至少一个素材场景检测结果,其中,每一所述第二设备对应一个素材场景检测结果;
若任意一个所述素材场景检测结果指示对应的第二设备存在与所述第一设备相同素材应用场景的第二素材数据,则执行向所述对应的第二设备发送所述素材获取请求的步骤,其中,所述素材获取请求用于获取所述第二素材数据。
请参阅图12A,图12A示出了素材数据处理装置的示意图,如图12A所示,所述装置应用于第二设备,所述第二设备与第一设备建立通信连接,所述第二设备为所述第一设备的从设备;该素材数据处理装置1200可以包括:接收单元1201、确定单元1202和发送单元1203,其中,
所述接收单元1201,用于接收所述第一设备发送的素材获取请求,其中,所述素材获取请求包括多模态信息,所述多模态信息为所述第一设备根据第一素材数据确定;
所述确定单元1202,用于确定多模态数据库中是否存在与所述多模态信息匹配的目标多模态信息;
所述确定单元1202,还用于若所述多模态数据库中存在所述目标多模态信息,则确定所述多模态信息对应的第二素材数据;
所述发送单元1203，用于向所述第一设备发送素材获取结果，其中，所述素材获取结果中包括或不包括所述第二素材数据，所述第二素材数据用于所述第一设备智能创作所述第一素材数据得到目标素材数据。
可以看出,本申请实施例中所描述的素材数据处理装置,接收所述第一设备发送的素材获取请求,其中,所述素材获取请求包括多模态信息,所述多模态信息为所述第一设备根据第一素材数据确定;确定多模态数据库中是否存在与所述多模态信息匹配的目标多模态信息;若所述多模态数据库中存在所述目标多模态信息,则确定所述多模态信息对应的第二素材数据;向所述第一设备发送素材获取结果,其中,所述素材获取结果中包括或不包括所述第二素材数据。如此,第二设备可接收同一组网中第一设备发送的多模态信息,并根据该多模态信息匹配得到与第一素材数据对应的第二素材数据,有利于为第一设备的预设操作提供数据参考和数据支持,并有利于提高第一设备完成预设操作的满意度。
在一个可能的示例中,在所述向所述第一设备发送素材获取结果,其中,所述素材获取结果中包括或不包括所述第二素材数据之前,上述发送单元还用于:
接收所述第一设备发送的素材场景检测请求,其中,所述素材场景请求中包括所述多模态信息;
确定所述多模态数据库中是否存在与所述多模态信息匹配的目标多模态信息;
若所述多模态数据库中存在与所述多模态信息匹配的目标多模态信息,则确定所述多模态信息对应的第二素材数据,并确定存在与所述第一设备相同素材应用场景的第二素材数据;
向所述第一设备发送素材场景检测结果,其中,所述素材场景检测结果用于指示所述第二设备存在与所述第一设备相同素材应用场景的第二素材数据;
接收所述第一设备发送的素材获取请求,其中,所述素材获取请求用于所述第一设备获取所述第二素材数据;
显示提示信息,其中,所述提示信息用于指示所述用户选择发送或不发送所述第二素材数据;
响应于所述用户针对所述第二素材数据的确定发送操作,执行所述向所述第一设备发送素材获取结果,其中,所述素材获取结果中包括或不包括所述第二素材数据的步骤。
在一个可能的示例中,所述多模态信息包括多个目标第一子模态信息,所述多模态数据库包括多个第二子模态信息,所述目标多模态信息包括多个目标第二子模态信息,所述第二子模态信息或者目标第一子模态信息或者所述目标第二子模态信息包括以下任意一种:GPS信息、人脸信息、场景信息、主体信息、美学评价信息和高光片段信息,任意一个所述目标第二子模态信息存在与其对应的所述目标第一子模态信息;在所述确定多模态数据库中是否存在与所述多模态信息匹配的目标多模态信息方面,上述确定单元1202具体用于:
根据预设的所述第二子模态信息和匹配逻辑之间的映射关系,确定每一所述第二子模态信息对应的匹配逻辑;
根据预设的所述第二子模态信息和优先级之间的映射关系,确定每一所述第二子模态信息对应的优先级;
根据所述每一所述第二子模态信息对应的优先级,确定所述多个第二子模态信息的筛选顺序;
根据所述筛选顺序和所述每一第二子模态信息对应的匹配逻辑,对所述每一第二子模态信息进行筛选,得到所述目标第二子模态信息;
若任意一个所述第二子模态信息筛选得到对应的目标第二子模态信息,则确定所述多模态数据库中存在与所述多模态信息匹配的目标多模态信息;
若任意一个所述第二子模态信息未筛选得到对应的目标第二子模态信息，则确定所述多模态数据库中不存在与所述多模态信息匹配的所述目标多模态信息。
在一个可能的示例中,若所述第二子模态信息和所述目标第一子模态信息为GPS信息,所述目标第一子模态信息为第一GPS信息,所述目标第二子模态信息为目标第二GPS信息,所述第二子模态信息包括多个第二GPS信息,所述目标第二GPS信息为所述多个第二GPS信息中任意一个;
在所述对所述每一第二子模态信息进行筛选,得到所述目标第二子模态信息方面,上述确定单元1202具体用于:
从所述多个第二GPS信息中选取与所述第一GPS信息处于同一预设区间的目标第二GPS信息为所述目标第二子模态信息;或者,
确定每一所述第二GPS信息对应的第二GPS精度和所述第一GPS信息的第一GPS精度;
若所述第一GPS精度和/或任意一个所述第二GPS精度大于预设精度阈值,则确定所述第一GPS精度和所述第二GPS精度之间的信息差值;
若所述信息差值小于或等于预设信息差值,则将所述第二GPS精度对应的目标第二GPS信息作为所述目标第二子模态信息;
若所述第一GPS精度和/或任意一个所述第二GPS精度小于或等于所述预设精度阈值,则执行所述从所述多个第二GPS信息中选取与所述第一GPS信息处于同一预设区间的目标第二GPS信息为所述目标第二子模态信息的步骤。
在一个可能的示例中,若所述第二子模态信息和所述目标第一子模态信息均为人脸信息,所述目标第一子模态信息为第一人脸信息,所述目标第二子模态信息为目标第二人脸信息,所述第二子模态信息包括多个第二人脸信息,所述目标第二人脸信息为所述多个第二人脸信息中任意一个;
在所述对所述每一第二子模态信息进行筛选,得到所述目标第二子模态信息方面,上述确定单元1202具体用于:
确定所述第一人脸信息对应的人物档案;
从所述多个第二人脸信息中选取与所述人物档案匹配的目标第二人脸信息作为所述目标第二子模态信息。
在一个可能的示例中,若所述第二子模态信息和所述目标第一子模态信息均为场景信息,所述场景信息包括以下至少一种:时间、季节、天气、节日、空间。
在一个可能的示例中,若所述第二子模态信息或者所述目标第一子模态信息为主体信息,所述目标第一子模态信息为第一主体信息,所述目标第二子模态信息为目标第二主体信息,所述第二子模态信息包括多个第二主体信息,所述目标第二主体信息为所述多个第二主体信息中任意一个;
在对所述每一第二子模态信息进行筛选,得到所述目标第二子模态信息方面,上述确定单元1202具体用于:
确定所述第一主体信息对应的多个第一标签信息,其中,所述第一标签信息用于表征所述第一素材数据中所述第一主体信息的种类;
确定任意一个所述第二主体信息对应的多个第二标签信息,其中,所述第二标签信息用于表征对应的第二主体信息的种类;
将所述多个第一标签信息与所述多个第二标签信息进行匹配,得到多个匹配率,其中,每一匹配率对应一个第一标签信息;
确定所述多个匹配率中大于预设匹配率的匹配数量;
若所述匹配数量大于预设匹配数量,则确定所述第二主体信息为所述目标第二主体信息,将所述目标第二主体信息作为所述目标第二子模态信息。
在一个可能的示例中,若所述第二子模态信息和所述目标第一子模态信息均为美学评价信息,所述目标第一子模态信息为第一美学评价信息,所述第一美学评价信息包括第一美学评价分数,所述目标第二子模态信息为目标第二美学评价信息,所述第二子模态信息包括多个第二美学评价信息,所述目标第二美学评价信息为所述多个第二美学评价信息中任意一个;
在所述对所述每一第二子模态信息进行筛选,得到所述目标第二子模态信息方面,上述确定单元1202具体用于:
若所述第一素材数据包括目标图像帧,则将所述目标图像帧对应的美学评价分数作为所述第一美学评价分数;
若所述第一素材数据包括目标视频数据,则确定所述目标视频数据中包括的多个图像帧对应的美学评价分数的平均值,将所述平均值作为所述第一美学评价分数;
确定所述多个第二美学评价信息分别对应的第二美学评价分数,得到多个第二美学评价分数;
从所述多个第二美学评价分数中选择大于或等于所述第一美学评价分数的目标第二美学评价分数,并将所述目标第二美学评价分数作为所述目标第二子模态信息。
在一个可能的示例中,上述发送单元1203具体用于:
从所述第二素材数据中筛选出所述目标多模态信息中的高光片段信息;
将所述高光片段信息对应的目标第二素材数据发送于所述第一设备。
在一个可能的示例中,若所述第二素材数据包括视频数据,所述高光片段信息包括高光片段;与上述图12A一致的,如图12B所示,为一种素材数据处理装置的示意图,该素材数据处理装置1200还可以包括:组合单元1204,该组合单元1204,用于:确定所述视频数据中每一帧视频帧对应的目标美学评价分数;选取所述目标美学评价分数大于或等于预设分数值的视频帧作为目标视频帧,得到多个目标视频帧;将多个所述目标视频帧组合成目标视频,并将所述目标视频作为高光片段。
在一个可能的示例中,在所述确定所述视频数据中每一帧视频帧对应的目标美学评价分数方面,上述确定单元1202具体用于:
获取预设的低质量图像帧的质量评价指标和每一所述质量评价指标对应的质量评价参数;
根据所述质量评价指标,确定所述每一视频帧图像对应的目标评价参数;
比较每一所述目标评价参数和所述质量评价参数;
若存在任意一个目标评价参数和质量评价参数一致,则删除所述目标评价参数对应的视频帧图像,得到除所述视频帧图像以外的多个第一视频帧;
获取预设的美学评价指标和每一所述美学评价指标对应的美学评价参数;
根据所述美学评价指标和每一所述美学评价指标对应的美学评价参数,对所述多个第一视频帧进行美学评价,得到每一第一视频帧对应的目标美学评价分数。
在一个可能的示例中,上述确定单元1202具体还用于:
确定隐私信息;
对所述第二素材数据进行分析,删除与所述隐私信息相关的素材数据,得到目标第二素材数据;
将所述目标第二素材数据发送于所述第一设备;
确定所述隐私信息对应的隐私标签,并将所述隐私标签同步到所述多模态数据库以对其中包括的多模态信息进行隐私设置。
在一个可能的示例中,所述第二素材数据和/或所述目标第二素材数据以缩略图的形式展示于所述第二设备界面中。
在一个可能的示例中,在所述接收所述第一设备发送的素材获取请求之后,上述确定单元1202具体还用于:以弹框的方式显示所述素材获取请求;
响应于用户在所述弹框中的选择操作,执行所述确定多模态数据库中是否存在与所述多模态信息匹配的目标多模态信息的步骤。
请参阅图13,图13示出了素材数据处理装置的示意图,如图13所示,所述装置应用于第一设备,所述第一设备与至少一个第二设备建立通信连接,所述至少一个第二设备为所述第一设备的从设备;该素材数据处理装置1300可以包括:确定单元1301、发送单元1302、接收单元1303和执行单元1304,其中,
所述确定单元1301,用于响应于用户触发的针对第一素材数据的预设指令,确定所述第一素材数据对应的多模态信息,其中,所述第一素材数据包括以下至少一种:图像数据和视频数据,所述预设指令用于指示所述第一设备执行对应的预设操作;
所述发送单元1302,用于向所述至少一个第二设备发送至少一个素材场景检测请求,其中,每一第二设备对应一个素材场景检测请求,所述素材场景检测请求包括所述多模态信息,所述素材场景检测请求用于确定对应的第二设备是否存在与所述第一设备相同素材应用场景的第二素材数据,所述第二素材数据为对应的所述第二设备根据所述多模态信息匹配多模态数据库确定;
所述接收单元1303,用于接收所述至少一个第二设备发送的至少一个素材场景检测结果,其中,每一所述第二设备对应一个素材场景检测结果;
所述发送单元1302,还用于若任意一个所述素材场景检测结果指示对应的第二设备存在与所述第一设备相同素材应用场景的第二素材数据,则向所述对应的第二设备发送所述素材获取请求;
所述接收单元1303,还用于接收所述至少一个第二设备发送的至少一个素材获取结果,其中,每一所述第二设备对应一个所述素材获取结果;
所述执行单元1304,用于若任意一个所述素材获取结果指示对应的所述第二设备存在所述第二素材数据,则对所述第一素材数据和所述第二素材数据执行所述预设操作。
可以看出，本申请实施例所描述的素材数据处理装置，响应于用户触发的针对第一素材数据的预设指令，确定所述第一素材数据对应的多模态信息，其中，所述第一素材数据包括以下至少一种：图像数据和视频数据，所述预设指令用于指示所述第一设备执行对应的预设操作；向所述至少一个第二设备发送至少一个素材场景检测请求，其中，每一第二设备对应一个素材场景检测请求，所述素材场景检测请求包括所述多模态信息，所述素材场景检测请求用于确定对应的第二设备是否存在与所述第一设备相同素材应用场景的第二素材数据，所述第二素材数据为对应的所述第二设备根据所述多模态信息匹配多模态数据库确定；接收所述至少一个第二设备发送的至少一个素材场景检测结果，其中，每一所述第二设备对应一个素材场景检测结果；若任意一个所述素材场景检测结果指示对应的第二设备存在与所述第一设备相同素材应用场景的第二素材数据，则向所述对应的第二设备发送所述素材获取请求；接收所述至少一个第二设备发送的至少一个素材获取结果，其中，每一所述第二设备对应一个所述素材获取结果；若任意一个所述素材获取结果指示对应的所述第二设备存在所述第二素材数据，则对所述第一素材数据和所述第二素材数据执行所述预设操作。上述第一素材数据可应用于不同的素材应用场景，第一设备可根据该素材应用场景确定是否需要执行后续的预设操作。考虑到第二设备中可能不存在第一设备需要的素材数据，因此，第一设备可首先向第二设备发送素材场景检测请求，以确定第二设备是否与第一设备处于同一素材应用场景，在第二设备存在第一设备需要的第二素材数据时，确定第二设备与第一设备处于同一素材应用场景以后再实现第二素材数据的获取，当第二设备不存在第一设备需要的第二素材数据以后，可终止上述素材获取请求。进一步地，考虑到第二设备对应的用户的隐私性，第二设备可能不允许第一设备获取第二素材数据，因此，第一设备可以向第二设备发送素材获取请求，以询问第二设备是否同意发送该第二素材数据，以帮助第一设备进一步获取第二素材数据，如此，有利于保护用户的隐私，并有利于提高用户体验。
请参阅图14,图14示出了素材数据处理装置的示意图,如图14所示,所述装置应用于第二设备,所述第二设备与第一设备建立通信连接,所述第二设备为所述第一设备的从设备;该素材数据处理装置1400可以包括:接收单元1401、确定单元1402、发送单元1403和显示单元1404,其中,
所述接收单元1401,用于接收所述第一设备发送的素材场景检测请求,其中,所述素材场景请求中包括多模态信息;
所述确定单元1402,用于确定所述多模态数据库中是否存在与所述多模态信息匹配的目标多模态信息;
所述确定单元1402,还用于若所述多模态数据库中存在与所述多模态信息匹配的目标多模态信息,则确定所述多模态信息对应的第二素材数据,并确定存在与所述第一设备相同素材应用场景的第二素材数据;
所述发送单元1403,用于向所述第一设备发送素材场景检测结果,其中,所述素材场景检测结果用于指示所述第二设备存在与所述第一设备相同素材应用场景的第二素材数据;
所述接收单元1401,还用于接收所述第一设备发送的素材获取请求,其中,所述素材获取请求用于所述第一设备获取所述第二素材数据;
所述显示单元1404,用于显示提示信息,其中,所述提示信息用于指示所述用户选择发送或不发送所述第二素材数据;
所述发送单元1403,用于响应于所述用户针对所述第二素材数据的确定发送操作,向所述第一设备发送素材获取结果,其中,所述素材获取结果中包括或不包括所述第二素材数据。
可以看出，本申请实施例所描述的素材数据处理装置，接收所述第一设备发送的素材场景检测请求，其中，所述素材场景请求中包括多模态信息；确定所述多模态数据库中是否存在与所述多模态信息匹配的目标多模态信息；若所述多模态数据库中存在与所述多模态信息匹配的目标多模态信息，则确定所述多模态信息对应的第二素材数据，并确定存在与所述第一设备相同素材应用场景的第二素材数据；向所述第一设备发送素材场景检测结果，其中，所述素材场景检测结果用于指示所述第二设备存在与所述第一设备相同素材应用场景的第二素材数据；接收所述第一设备发送的素材获取请求，其中，所述素材获取请求用于所述第一设备获取所述第二素材数据；显示提示信息，其中，所述提示信息用于指示所述用户选择发送或不发送所述第二素材数据；响应于所述用户针对所述第二素材数据的确定发送操作，向所述第一设备发送素材获取结果，其中，所述素材获取结果中包括或不包括所述第二素材数据。通过设定两个请求（素材场景检测请求和素材获取请求）有利于优化第一设备和第二设备的交互流程，上述多模态信息用于确认第二设备对应的素材应用场景是否与第一设备中素材应用场景一致，并向第一设备发送素材场景检测结果，以提示第一设备对应的第二设备中存在或不存在相同素材应用场景的第二素材数据，以供第一设备确定后续是否需要继续获取第二素材数据，此时不需要发送第二素材数据。进一步地，当接收到素材获取请求以后，说明第一设备需要第二素材数据，此时再选择发送或者不发送第二素材数据，有利于为第二设备提供更多的选择，例如，以用于向第二设备对应的用户确认是否允许发送第二素材数据，有利于保护用户的隐私，并有利于提高用户体验。再进一步地，第二设备在确定存在相同素材应用场景的第二素材数据以后，通过素材获取请求还可以避免当第一设备不需要获取第二素材数据时，第二设备一股脑将第二素材数据发送到第一设备的无用功，既占用了带宽，也不利于保证第二设备的用户隐私。
需要说明的是,上述方法实施例涉及的各步骤的所有相关内容均可以援引到对应功能模块的功能描述,在此不再赘述。
本实施例提供的电子设备,用于执行上述素材数据处理方法,因此可以达到与上述实现方法相同的效果。
在采用集成的单元的情况下，电子设备可以包括处理模块、存储模块和通信模块。其中，处理模块可以用于对电子设备的动作进行控制管理，例如，可以用于支持电子设备执行上述确定单元1101、发送单元1102、接收单元1103和执行单元1104，或者接收单元1201、确定单元1202、发送单元1203和组合单元1204，或者确定单元1301、发送单元1302、接收单元1303和执行单元1304，或者接收单元1401、确定单元1402、发送单元1403和显示单元1404所执行的步骤。存储模块可以用于支持电子设备执行存储程序代码和数据等。通信模块，可以用于支持电子设备与其他设备的通信。
其中,处理模块可以是处理器或控制器。其可以实现或执行结合本申请公开内容所描述的各种示例性的逻辑方框,模块和电路。处理器也可以是实现计算功能的组合,例如包含一个或多个微处理器组合,数字信号处理(digital signal processing,DSP)和微处理器的组合等等。存储模块可以是存储器。通信模块具体可以为射频电路、蓝牙芯片、Wi-Fi芯片等与其他电子设备交互的设备。
本申请实施例还提供一种计算机存储介质,其中,该计算机存储介质存储用于电子数据交换的计算机程序,该计算机程序使得计算机执行如上述方法实施例中记载的任一方法的部分或全部步骤,上述计算机包括电子设备。
本申请实施例还提供一种计算机程序产品,上述计算机程序产品包括存储了计算机程序的非瞬时性计算机可读存储介质,上述计算机程序可操作来使计算机执行如上述方法实施例中记载的任一方法的部分或全部步骤。该计算机程序产品可以为一个软件安装包,上述计算机包括电子设备。
需要说明的是,对于前述的各方法实施例,为了简单描述,故将其都表述为一系列的动作组合,但是本领域技术人员应该知悉,本申请并不受所描述的动作顺序的限制,因为依据本申请,某些步骤可以采用其他顺序或者同时进行。其次,本领域技术人员也应该知悉,说明书中所描述的实施例均属于优选实施例,所涉及的动作和模块并不一定是本申请所必须的。
在上述实施例中,对各个实施例的描述都各有侧重,某个实施例中没有详述的部分,可以参见其他实施例的相关描述。
在本申请所提供的几个实施例中,应该理解到,所揭露的装置,可通过其它的方式实现。例如,以上所描述的装置实施例仅仅是示意性的,例如上述单元的划分,仅仅为一种逻辑功能划分,实际实现时可以有另外的划分方式,例如多个单元或组件可以结合或者可以集成到另一个系统,或一些特征可以忽略,或不执行。另一点,所显示或讨论的相互之间的耦合或直接耦合或通信连接可以是通过一些接口,装置或单元的间接耦合或通信连接,可以是电性或其它的形式。
上述作为分离部件说明的单元可以是或者也可以不是物理上分开的,作为单元显示的部件可以是或者也可以不是物理单元,即可以位于一个地方,或者也可以分布到多个网络单元上。可以根据实际的需要选择其中的部分或者全部单元来实现本实施例方案的目的。
另外,在本申请各个实施例中的各功能单元可以集成在一个处理单元中,也可以是各个单元单独物理存在,也可以两个或两个以上单元集成在一个单元中。上述集成的单元既可以采用硬件的形式实现,也可以采用软件功能单元的形式实现。
上述集成的单元如果以软件功能单元的形式实现并作为独立的产品销售或使用时,可以存储在一个计算机可读取存储器中。基于这样的理解,本申请的技术方案本质上或者说对现有技术做出贡献的部分或者该技术方案的全部或部分可以以软件产品的形式体现出来,该计算机软件产品存储在一个存储器中,包括若干指令用以使得一台计算机设备(可为个人计算机、服务器或者网络设备等)执行本申请各个实施例上述方法的全部或部分步骤。而前述的存储器包括:U盘、只读存储器(ROM,Read-Only Memory)、随机存取存储器(RAM,Random Access Memory)、移动硬盘、磁碟或者光盘等各种可以存储程序代码的介质。
本领域普通技术人员可以理解上述实施例的各种方法中的全部或部分步骤是可以通过程序来指令相关的硬件来完成,该程序可以存储于一计算机可读存储器中,存储器可以包括:闪存盘、只读存储器、随机存取器、磁盘或光盘等。
以上对本申请实施例进行了详细介绍，本文中应用了具体个例对本申请的原理及实施方式进行了阐述，以上实施例的说明只是用于帮助理解本申请的方法及其核心思想；同时，对于本领域的一般技术人员，依据本申请的思想，在具体实施方式及应用范围上均会有改变之处，综上所述，本说明书内容不应理解为对本申请的限制。

Claims (24)

  1. 一种素材数据处理方法,应用于第一设备,其特征在于,所述第一设备与至少一个第二设备建立通信连接,所述至少一个第二设备为所述第一设备的从设备;所述方法包括:
    响应于用户触发的针对第一素材数据的预设指令,确定所述第一素材数据对应的多模态信息,其中,所述第一素材数据包括以下至少一种:图像数据和视频数据,所述预设指令用于指示所述第一设备执行对应的预设操作;
    向所述至少一个第二设备发送素材获取请求,其中,所述素材获取请求包括所述多模态信息,所述多模态信息用于所述第二设备筛选与所述多模态信息匹配的目标多模态信息,所述目标多模态信息与第二素材数据对应,所述第二素材数据为对应的所述第二设备根据所述多模态信息确定;
    接收所述至少一个第二设备发送的至少一个素材获取结果,其中,每一所述第二设备对应一个所述素材获取结果;
    若任意一个所述素材获取结果指示对应的所述第二设备存在所述第二素材数据,则对所述第一素材数据和所述第二素材数据执行所述预设操作。
  2. 根据权利要求1所述的方法,其特征在于,所述多模态信息包括以下至少一种:GPS信息、人脸信息、场景信息、主体信息、美学评价信息和高光片段信息;所述预设操作包括以下至少一种:存储操作、智能创作操作,其中,所述智能创作操作包括以下至少一种:裁剪操作、特效美化操作、合成编辑操作。
  3. 一种素材数据处理方法,应用于第二设备,其特征在于,所述第二设备与第一设备建立通信连接,所述第二设备为所述第一设备的从设备;所述方法包括:
    接收所述第一设备发送的素材获取请求,其中,所述素材获取请求包括多模态信息,所述多模态信息为所述第一设备根据第一素材数据确定;
    确定多模态数据库中是否存在与所述多模态信息匹配的目标多模态信息;
    若所述多模态数据库中存在所述目标多模态信息,则确定所述多模态信息对应的第二素材数据;
    向所述第一设备发送素材获取结果。
  4. 根据权利要求3所述的方法,其特征在于,所述多模态信息包括多个目标第一子模态信息,所述多模态数据库包括多个第二子模态信息,所述目标多模态信息包括多个目标第二子模态信息,所述第二子模态信息或者目标第一子模态信息或者所述目标第二子模态信息包括以下任意一种:GPS信息、人脸信息、场景信息、主体信息、美学评价信息和高光片段信息,任意一个所述目标第二子模态信息存在与其对应的所述目标第一子模态信息;
    所述确定多模态数据库中是否存在与所述多模态信息匹配的目标多模态信息,包括:
    根据预设的所述第二子模态信息和匹配逻辑之间的映射关系,确定每一所述第二子模态信息对应的匹配逻辑;
    根据预设的所述第二子模态信息和优先级之间的映射关系,确定每一所述第二子模态信息对应的优先级;
    根据所述每一所述第二子模态信息对应的优先级,确定所述多个第二子模态信息的筛选顺序;
    根据所述筛选顺序和所述每一第二子模态信息对应的匹配逻辑,对所述每一第二子模态信息进行筛选,得到所述目标第二子模态信息;
    若任意一个所述第二子模态信息筛选得到对应的目标第二子模态信息,则确定所述多模态数据库中存在与所述多模态信息匹配的目标多模态信息;
    若任意一个所述第二子模态信息未筛选得到对应的目标第二子模态信息,则确定所述多模态数据库中不存在与所述多模态信息匹配的所述目标多模态信息。
  5. 根据权利要求4所述的方法,其特征在于,若所述第二子模态信息和所述目标第一子模态信息为GPS信息,所述目标第一子模态信息为第一GPS信息,所述目标第二子模态信息为目标第二GPS信息,所述第二子模态信息包括多个第二GPS信息,所述目标第二GPS信息为所述多个第二GPS信息中任意一个;
    所述对所述每一第二子模态信息进行筛选,得到所述目标第二子模态信息,包括:
    从所述多个第二GPS信息中选取与所述第一GPS信息处于同一预设区间的目标第二GPS信息为所述目标第二子模态信息;或者,
    确定每一所述第二GPS信息对应的第二GPS精度和所述第一GPS信息的第一GPS精度;
    若所述第一GPS精度和/或任意一个所述第二GPS精度大于预设精度阈值,则确定所述第一GPS精度和所述第二GPS精度之间的信息差值;
    若所述信息差值小于或等于预设信息差值,则将所述第二GPS精度对应的目标第二GPS信息作为所述目标第二子模态信息;
    若所述第一GPS精度和/或任意一个所述第二GPS精度小于或等于所述预设精度阈值，则执行所述从所述多个第二GPS信息中选取与所述第一GPS信息处于同一预设区间的目标第二GPS信息为所述目标第二子模态信息的步骤。
  6. 根据权利要求4所述的方法,其特征在于,若所述第二子模态信息和所述目标第一子模态信息均为人脸信息,所述目标第一子模态信息为第一人脸信息,所述目标第二子模态信息为目标第二人脸信息,所述第二子模态信息包括多个第二人脸信息,所述目标第二人脸信息为所述多个第二人脸信息中任意一个;
    所述对所述每一第二子模态信息进行筛选,得到所述目标第二子模态信息,包括:
    确定所述第一人脸信息对应的人物档案;
    从所述多个第二人脸信息中选取与所述人物档案匹配的目标第二人脸信息作为所述目标第二子模态信息。
  7. 根据权利要求4所述的方法,其特征在于,若所述第二子模态信息和所述目标第一子模态信息均为场景信息,所述场景信息包括以下至少一种:时间、季节、天气、节日、空间。
  8. 根据权利要求4所述的方法,其特征在于,若所述第二子模态信息或者所述目标第一子模态信息为主体信息,所述目标第一子模态信息为第一主体信息,所述目标第二子模态信息为目标第二主体信息,所述第二子模态信息包括多个第二主体信息,所述目标第二主体信息为所述多个第二主体信息中任意一个;
    所述对所述每一第二子模态信息进行筛选,得到所述目标第二子模态信息,包括:
    确定所述第一主体信息对应的多个第一标签信息,其中,所述第一标签信息用于表征所述第一素材数据中所述第一主体信息的种类;
    确定任意一个所述第二主体信息对应的多个第二标签信息,其中,所述第二标签信息用于表征对应的第二主体信息的种类;
    将所述多个第一标签信息与所述多个第二标签信息进行匹配,得到多个匹配率,其中,每一匹配率对应一个第一标签信息;
    确定所述多个匹配率中大于预设匹配率的匹配数量;
    若所述匹配数量大于预设匹配数量,则确定所述第二主体信息为所述目标第二主体信息,将所述目标第二主体信息作为所述目标第二子模态信息。
  9. 根据权利要求4所述的方法,其特征在于,若所述第二子模态信息和所述目标第一子模态信息均为美学评价信息,所述目标第一子模态信息为第一美学评价信息,所述第一美学评价信息包括第一美学评价分数,所述目标第二子模态信息为目标第二美学评价信息,所述第二子模态信息包括多个第二美学评价信息,所述目标第二美学评价信息为所述多个第二美学评价信息中任意一个;
    所述对所述每一第二子模态信息进行筛选,得到所述目标第二子模态信息,包括:
    若所述第一素材数据包括目标图像帧,则将所述目标图像帧对应的美学评价分数作为所述第一美学评价分数;
    若所述第一素材数据包括目标视频数据,则确定所述目标视频数据中包括的多个图像帧对应的美学评价分数的平均值,将所述平均值作为所述第一美学评价分数;
    确定所述多个第二美学评价信息分别对应的第二美学评价分数,得到多个第二美学评价分数;
    从所述多个第二美学评价分数中选择大于或等于所述第一美学评价分数的目标第二美学评价分数,并将所述目标第二美学评价分数作为所述目标第二子模态信息。
  10. 根据权利要求4所述的方法,其特征在于,所述方法还包括:
    从所述第二素材数据中筛选出所述目标多模态信息中的高光片段信息;
    将所述高光片段信息对应的目标第二素材数据发送于所述第一设备。
  11. 根据权利要求10所述的方法,其特征在于,若所述第二素材数据包括视频数据,所述高光片段信息包括高光片段;
    所述从所述第二素材数据中筛选出所述目标多模态信息中的高光片段信息,包括:
    确定所述视频数据中每一帧视频帧对应的目标美学评价分数;
    选取所述目标美学评价分数大于或等于预设分数值的视频帧作为目标视频帧,得到多个目标视频帧;
    将多个所述目标视频帧组合成目标视频,并将所述目标视频作为所述高光片段。
  12. 根据权利要求11所述的方法,其特征在于,所述确定所述视频数据中每一帧视频帧对应的目标美学评价分数,包括:
    获取预设的低质量图像帧的质量评价指标和每一所述质量评价指标对应的质量评价参数;
    根据所述质量评价指标,确定所述每一视频帧图像对应的目标评价参数;
    比较每一所述目标评价参数和所述质量评价参数;
    若存在任意一个目标评价参数和质量评价参数一致,则删除所述目标评价参数对应的视频帧图像,得到除所述视频帧图像以外的多个第一视频帧;
    获取预设的美学评价指标和每一所述美学评价指标对应的美学评价参数;
    根据所述美学评价指标和每一所述美学评价指标对应的美学评价参数,对所述多个第一视频帧进行美学评价,得到每一第一视频帧对应的目标美学评价分数。
  13. 根据权利要求3所述的方法,其特征在于,所述方法还包括:
    确定隐私信息;
    对所述第二素材数据进行分析,删除与所述隐私信息相关的素材数据,得到目标第二素材数据;
    将所述目标第二素材数据发送于所述第一设备;
    确定所述隐私信息对应的隐私标签,并将所述隐私标签同步到所述多模态数据库以对其中包括的多模态信息进行隐私设置。
  14. 根据权利要求10或11所述的方法,其特征在于,所述第二素材数据和/或所述目标第二素材数据以缩略图的形式展示于所述第二设备界面中。
  15. 根据权利要求14所述的方法,其特征在于,在所述接收所述第一设备发送的素材获取请求之后,所述方法还包括:
    以弹框的方式显示所述素材获取请求;
    响应于用户在所述弹框中的选择操作,执行所述确定多模态数据库中是否存在与所述多模态信息匹配的目标多模态信息的步骤。
  16. 一种素材数据处理方法,应用于第一设备,其特征在于,所述第一设备与至少一个第二设备建立通信连接,所述至少一个第二设备为所述第一设备的从设备;所述方法包括:
    响应于用户触发的针对第一素材数据的预设指令,确定所述第一素材数据对应的多模态信息,其中,所述第一素材数据包括以下至少一种:图像数据和视频数据,所述预设指令用于指示所述第一设备执行对应的预设操作;
    向所述至少一个第二设备发送至少一个素材场景检测请求,其中,每一第二设备对应一个素材场景检测请求,所述素材场景检测请求包括所述多模态信息,所述素材场景检测请求用于确定对应的第二设备是否存在与所述第一设备相同素材应用场景的第二素材数据,所述第二素材数据为对应的所述第二设备根据所述多模态信息匹配多模态数据库确定;
    接收所述至少一个第二设备发送的至少一个素材场景检测结果,其中,每一所述第二设备对应一个素材场景检测结果;
    若任意一个所述素材场景检测结果指示对应的第二设备存在与所述第一设备相同素材应用场景的第二素材数据,则向所述对应的第二设备发送所述素材获取请求;
    接收所述至少一个第二设备发送的至少一个素材获取结果,其中,每一所述第二设备对应一个所述素材获取结果;
    若任意一个所述素材获取结果指示对应的所述第二设备存在所述第二素材数据,则对所述第一素材数据和所述第二素材数据执行所述预设操作。
  17. 一种素材数据处理方法,应用于第二设备,其特征在于,所述第二设备与第一设备建立通信连接,所述第二设备为所述第一设备的从设备;所述方法还包括:
    接收所述第一设备发送的素材场景检测请求,其中,所述素材场景请求中包括多模态信息;
    确定所述多模态数据库中是否存在与所述多模态信息匹配的目标多模态信息;
    若所述多模态数据库中存在与所述多模态信息匹配的目标多模态信息,则确定所述多模态信息对应的第二素材数据,并确定存在与所述第一设备相同素材应用场景的第二素材数据;
    向所述第一设备发送素材场景检测结果,其中,所述素材场景检测结果用于指示所述第二设备存在与所述第一设备相同素材应用场景的第二素材数据;
    接收所述第一设备发送的素材获取请求,其中,所述素材获取请求用于所述第一设备获取所述第二素材数据;
    显示提示信息,其中,所述提示信息用于指示所述用户选择发送或不发送所述第二素材数据;
    响应于所述用户针对所述第二素材数据的确定发送操作,向所述第一设备发送素材获取结果,其中,所述素材获取结果中包括或不包括所述第二素材数据。
  18. A material data processing apparatus, applied to a first device, wherein the first device establishes a communication connection with at least one second device, and the at least one second device is a slave device of the first device; the apparatus comprises: a determining unit, a sending unit, a receiving unit, and an executing unit, wherein
    the determining unit is configured to, in response to a preset instruction triggered by a user for first material data, determine multimodal information corresponding to the first material data, wherein the first material data comprises at least one of the following: image data and video data, and the preset instruction is used to instruct the first device to perform a corresponding preset operation;
    the sending unit is configured to send a material acquisition request to the at least one second device, wherein the material acquisition request comprises the multimodal information, the multimodal information is used by the second device to screen target multimodal information matching the multimodal information, the target multimodal information corresponds to second material data, and the second material data is determined by the corresponding second device according to the multimodal information;
    the receiving unit is configured to receive at least one material acquisition result sent by the at least one second device, wherein each of the second devices corresponds to one material acquisition result; and
    the executing unit is configured to, if any of the material acquisition results indicates that the corresponding second device has the second material data, perform the preset operation on the first material data and the second material data.
  19. A material data processing apparatus, applied to a second device, wherein the second device establishes a communication connection with a first device, and the second device is a slave device of the first device; the apparatus comprises: a receiving unit, a determining unit, and a sending unit, wherein
    the receiving unit is configured to receive a material acquisition request sent by the first device, wherein the material acquisition request comprises multimodal information, and the multimodal information is determined by the first device according to first material data;
    the determining unit is configured to determine whether target multimodal information matching the multimodal information exists in a multimodal database;
    the determining unit is further configured to, if the target multimodal information exists in the multimodal database, determine second material data corresponding to the multimodal information; and
    the sending unit is configured to send a material acquisition result to the first device, wherein the material acquisition result includes or does not include the second material data.
  20. A material data processing apparatus, applied to a first device, wherein the first device establishes a communication connection with at least one second device, and the at least one second device is a slave device of the first device; the apparatus comprises: a determining unit, a sending unit, a receiving unit, and an executing unit, wherein
    the determining unit is configured to, in response to a preset instruction triggered by a user for first material data, determine multimodal information corresponding to the first material data, wherein the first material data comprises at least one of the following: image data and video data, and the preset instruction is used to instruct the first device to perform a corresponding preset operation;
    the sending unit is configured to send at least one material scene detection request to the at least one second device, wherein each second device corresponds to one material scene detection request, the material scene detection request comprises the multimodal information, the material scene detection request is used to determine whether a corresponding second device has second material data of the same material application scenario as the first device, and the second material data is determined by the corresponding second device by matching a multimodal database according to the multimodal information;
    the receiving unit is configured to receive at least one material scene detection result sent by the at least one second device, wherein each of the second devices corresponds to one material scene detection result;
    the sending unit is further configured to, if any of the material scene detection results indicates that a corresponding second device has second material data of the same material application scenario as the first device, send a material acquisition request to the corresponding second device;
    the receiving unit is further configured to receive at least one material acquisition result sent by the at least one second device, wherein each of the second devices corresponds to one material acquisition result; and
    the executing unit is configured to, if any of the material acquisition results indicates that the corresponding second device has the second material data, perform the preset operation on the first material data and the second material data.
  21. A material data processing apparatus, applied to a second device, wherein the second device establishes a communication connection with a first device, and the second device is a slave device of the first device; the apparatus comprises: a receiving unit, a determining unit, a sending unit, and a display unit, wherein
    the receiving unit is configured to receive a material scene detection request sent by the first device, wherein the material scene detection request comprises multimodal information;
    the determining unit is configured to determine whether target multimodal information matching the multimodal information exists in a multimodal database;
    the determining unit is further configured to, if target multimodal information matching the multimodal information exists in the multimodal database, determine second material data corresponding to the multimodal information, and determine that second material data of the same material application scenario as the first device exists;
    the sending unit is configured to send a material scene detection result to the first device, wherein the material scene detection result is used to indicate that the second device has second material data of the same material application scenario as the first device;
    the receiving unit is further configured to receive a material acquisition request sent by the first device, wherein the material acquisition request is used by the first device to acquire the second material data;
    the display unit is configured to display prompt information, wherein the prompt information is used to prompt a user to choose to send or not to send the second material data; and
    the sending unit is further configured to, in response to a confirmed sending operation of the user for the second material data, send a material acquisition result to the first device, wherein the material acquisition result includes or does not include the second material data.
  22. An electronic device, comprising a processor, a memory, a communication interface, and one or more programs, wherein the one or more programs are stored in the memory and configured to be executed by the processor, and the programs comprise instructions for performing the steps in the method according to any one of claims 1-2, 3-16, 17, or 18.
  23. A computer-readable storage medium, storing a computer program for electronic data exchange, wherein the computer program causes a computer to perform the method according to any one of claims 1-2, 3-16, 17, or 18.
  24. A computer program product, wherein the computer program product comprises a non-transitory computer-readable storage medium storing a computer program, and the computer program is operable to cause a computer to perform the method according to any one of claims 1-2, 3-16, 17, or 18.
PCT/CN2023/127013 2022-11-11 2023-10-27 Material data processing method and related products WO2024099101A1 (zh)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN202211417536.4 2022-11-11
CN202211417536.4A CN118035508A (zh) 2022-11-11 2022-11-11 Material data processing method and related products

Publications (1)

Publication Number Publication Date
WO2024099101A1 true WO2024099101A1 (zh) 2024-05-16

Family

ID=90993948

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2023/127013 WO2024099101A1 (zh) 2022-11-11 2023-10-27 Material data processing method and related products

Country Status (2)

Country Link
CN (1) CN118035508A (zh)
WO (1) WO2024099101A1 (zh)

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20170185675A1 (en) * 2014-05-27 2017-06-29 Telefonaktiebolaget Lm Ericsson (Publ) Fingerprinting and matching of content of a multi-media file
CN110298283A (zh) * 2019-06-21 2019-10-01 北京百度网讯科技有限公司 图像素材的匹配方法、装置、设备以及存储介质
CN110858914A (zh) * 2018-08-23 2020-03-03 北京优酷科技有限公司 视频素材推荐方法及装置
CN111263241A (zh) * 2020-02-11 2020-06-09 腾讯音乐娱乐科技(深圳)有限公司 媒体数据的生成方法、装置、设备及存储介质
CN111918094A (zh) * 2020-06-29 2020-11-10 北京百度网讯科技有限公司 视频处理方法、装置、电子设备和存储介质
US20210103615A1 (en) * 2019-10-03 2021-04-08 Adobe Inc. Adaptive search results for multimedia search queries


Also Published As

Publication number Publication date
CN118035508A (zh) 2024-05-14

Similar Documents

Publication Publication Date Title
JP5214825B1 (ja) Presentation content generation device, presentation content generation method, presentation content generation program, and integrated circuit
EP3706015A1 (en) Method and device for displaying story album
KR101879619B1 (ko) 콘텐츠 항목의 저장
CN105190480B (zh) 信息处理设备和信息处理方法
US11249620B2 (en) Electronic device for playing-playing contents and method thereof
EP2402867B1 (en) A computer-implemented method, a computer program product and a computer system for image processing
US20160189414A1 (en) Autocaptioning of images
CN111489264A (zh) Map-based graphical user interface indicating geospatial activity metrics
US9460057B2 (en) Theme-based media content generation system and method
CN108431802A (zh) Organizing images associated with a user
WO2017107672A1 (zh) Information processing method and apparatus, and apparatus for information processing
US20040174434A1 (en) Systems and methods for suggesting meta-information to a camera user
US20090300109A1 (en) System and method for mobile multimedia management
CN103412951A (zh) System and method for analyzing and managing interpersonal relationships based on personal photos
WO2018152822A1 (zh) Method and apparatus for generating photo album, and mobile terminal
RU2677613C1 (ru) Способ обработки изображения и устройство
US9973649B2 (en) Photographing apparatus, photographing system, photographing method, and recording medium recording photographing control program
US20130083049A1 (en) Image display system, image display apparatus, server, image display method and storage medium storing a program
CN103403765A (zh) Content processing device, and integrated circuit, method, and program thereof
CN110633377A (zh) Picture cleaning method and device
CN104486548A (zh) Information processing method and electronic device
WO2024099101A1 (zh) Material data processing method and related products
US20110304779A1 (en) Electronic Apparatus and Image Processing Method
CN108701135A (zh) Media file sharing method, media file sharing device, and terminal
CN111163170A (zh) Photo sharing method, system, and server

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 23887791

Country of ref document: EP

Kind code of ref document: A1