CN113870133A - Multimedia display and matching method, device, equipment and medium

Info

Publication number: CN113870133A (granted publication: CN113870133B)
Application number: CN202111136435.5A (China, CN)
Original language: Chinese (zh)
Inventors: 黄造军, 徐之俊, 冯宇飞, 邓子建, 吴铭泽
Applicant and current assignee: Beijing ByteDance Network Technology Co Ltd
Related application: PCT/CN2022/115521 (published as WO2023045710A1)
Legal status: Granted; active

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 5/00 Image enhancement or restoration
    • G06T 5/77 Retouching; inpainting; scratch removal
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 16/00 Information retrieval; database structures therefor; file system structures therefor
    • G06F 16/40 Information retrieval of multimedia data, e.g. slideshows comprising image and additional audio data
    • G06F 16/44 Browsing; visualisation therefor
    • G06F 16/48 Retrieval characterised by using metadata, e.g. metadata not derived from the content or metadata generated manually
    • G06F 16/483 Retrieval characterised by using metadata automatically derived from the content


Abstract

The present disclosure relates to a multimedia display and matching method, apparatus, device, and medium. The multimedia display method comprises the following steps: receiving original multimedia data; performing special effect editing on the original multimedia data to obtain multimedia data to be matched; generating synthesized multimedia data based on the multimedia data to be matched and target multimedia data, wherein the target multimedia data is obtained by matching based on a first multimedia feature of the multimedia data to be matched; and displaying the synthesized multimedia data. According to the embodiments of the disclosure, the beautification effects of multimedia data are enriched, the display of multimedia data becomes more engaging, and users can interact with one another through multimedia data, realizing diverse interaction between users and improving the user experience.

Description

Multimedia display and matching method, device, equipment and medium
Technical Field
The present disclosure relates to the field of multimedia processing technologies, and in particular, to a method, an apparatus, a device, and a medium for displaying and matching multimedia.
Background
With the rapid development of computer technology and mobile communication technology, various network platforms based on electronic devices are widely used and have greatly enriched people's daily lives. More and more users are willing to beautify multimedia data, such as images or videos, on a network platform to obtain a satisfactory photo or video.
At present, although users can beautify multimedia data with a preset special effect template, the mode of interaction between users is limited, lacks interest, and degrades the user experience.
Disclosure of Invention
To solve the above technical problem or at least partially solve the above technical problem, the present disclosure provides a multimedia display and matching method, apparatus, device, and medium.
In a first aspect, the present disclosure provides a multimedia display method, including:
receiving original multimedia data;
performing special effect editing on the original multimedia data to obtain multimedia data to be matched;
generating synthetic multimedia data based on the multimedia data to be matched and the target multimedia data, wherein the target multimedia data is obtained according to the first multimedia characteristic matching of the multimedia data to be matched;
and displaying the synthesized multimedia data.
In a second aspect, the present disclosure provides a multimedia matching method, including:
receiving multimedia data to be matched, wherein the multimedia data to be matched is obtained by carrying out special effect processing on original multimedia data;
extracting a first multimedia characteristic from multimedia data to be matched;
acquiring a plurality of candidate multimedia data corresponding to the multimedia data to be matched;
and inquiring target multimedia data matched with the first multimedia characteristics in the candidate multimedia data, wherein the target multimedia data is used for generating combined multimedia data with the multimedia data to be matched.
In a third aspect, the present disclosure provides a multimedia display apparatus comprising:
a data receiving unit configured to receive original multimedia data;
the special effect editing unit is configured to carry out special effect editing on the original multimedia data to obtain multimedia data to be matched;
the data synthesis unit is configured to generate synthesized multimedia data based on the multimedia data to be matched and the target multimedia data, and the target multimedia data is obtained according to the first multimedia feature matching of the multimedia data to be matched;
a data display unit configured to display the synthesized multimedia data.
In a fourth aspect, the present disclosure provides a multimedia matching apparatus, comprising:
the data receiving unit is configured to receive multimedia data to be matched, and the multimedia data to be matched is obtained by performing special effect processing on original multimedia data;
the characteristic extraction unit is configured to extract first multimedia characteristics from the multimedia data to be matched;
the data acquisition unit is configured to acquire a plurality of candidate multimedia data corresponding to the multimedia data to be matched;
and the data query unit is configured to query target multimedia data matched with the first multimedia characteristics in the candidate multimedia data, wherein the target multimedia data is used for generating combined multimedia data with the multimedia data to be matched.
In a fifth aspect, the present disclosure provides a computing device comprising:
a processor;
a memory for storing executable instructions;
wherein the processor is configured to read the executable instructions from the memory and execute the executable instructions to implement the multimedia display method of the first aspect or to implement the multimedia matching method of the second aspect.
In a sixth aspect, the present disclosure provides a computer-readable storage medium storing a computer program which, when executed by a processor, causes the processor to implement the multimedia display method of the first aspect or the multimedia matching method of the second aspect.
Compared with the prior art, the technical scheme provided by the embodiment of the disclosure has the following advantages:
according to the multimedia display and matching method, device, equipment and medium of the embodiment of the disclosure, after the received original multimedia data is subjected to special effect editing, the synthesized multimedia data is generated and displayed based on the multimedia data to be matched and the target multimedia data obtained through editing. The target pair media data are obtained based on the first multimedia characteristic matching of the multimedia data to be matched, the synthesized multimedia data obtained based on the original multimedia data can comprise the content of the target multimedia data matched with the multimedia data to be matched besides the special effect in the multimedia data to be matched, so that the multimedia data image has multiple elements, the beautifying effect of the multimedia data is enriched, the interestingness of multimedia data display is improved, the user can interact through the multimedia data, the diversity interaction between the users is realized, and the use experience of the user is improved.
Drawings
The above and other features, advantages and aspects of various embodiments of the present disclosure will become more apparent by referring to the following detailed description when taken in conjunction with the accompanying drawings. Throughout the drawings, the same or similar reference numbers refer to the same or similar elements. It should be understood that the drawings are schematic and that elements and features are not necessarily drawn to scale.
FIG. 1 illustrates an architecture diagram of a multimedia display system provided by an embodiment of the present disclosure;
FIG. 2 illustrates an architecture diagram of another multimedia display system provided by an embodiment of the present disclosure;
FIG. 3 is a flow chart illustrating a multimedia display method according to an embodiment of the disclosure;
FIG. 4 is a schematic diagram illustrating a shooting preview interface provided by an embodiment of the present disclosure;
FIG. 5 is a diagram illustrating a special effects editing interface provided by an embodiment of the present disclosure;
fig. 6 is a schematic diagram illustrating a display interface of multimedia data to be matched according to an embodiment of the present disclosure;
FIG. 7 is a schematic diagram illustrating matching logic for multimedia data provided by an embodiment of the present disclosure;
FIG. 8 is a schematic diagram of a display interface for synthesizing multimedia data according to an embodiment of the disclosure;
FIG. 9 is a flow chart illustrating another multimedia display method provided by the embodiment of the disclosure;
FIG. 10 is a flow chart illustrating a further multimedia display method provided by an embodiment of the disclosure;
fig. 11 is a flow chart illustrating a multimedia matching method provided by an embodiment of the present disclosure;
FIG. 12 is a schematic diagram illustrating a multimedia display apparatus according to an embodiment of the present disclosure;
fig. 13 is a schematic structural diagram of a multimedia matching apparatus provided in an embodiment of the present disclosure;
fig. 14 shows a schematic structural diagram of a computing device provided by an embodiment of the present disclosure.
Detailed Description
Embodiments of the present disclosure will be described in more detail below with reference to the accompanying drawings. While certain embodiments of the present disclosure are shown in the drawings, it is to be understood that the present disclosure may be embodied in various forms and should not be construed as limited to the embodiments set forth herein, but rather are provided for a more thorough and complete understanding of the present disclosure. It should be understood that the drawings and embodiments of the disclosure are for illustration purposes only and are not intended to limit the scope of the disclosure.
It should be understood that the various steps recited in the method embodiments of the present disclosure may be performed in a different order, and/or performed in parallel. Moreover, method embodiments may include additional steps and/or omit performing the illustrated steps. The scope of the present disclosure is not limited in this respect.
The term "include" and variations thereof as used herein are open-ended, i.e., "including but not limited to". The term "based on" is "based, at least in part, on". The term "one embodiment" means "at least one embodiment"; the term "another embodiment" means "at least one additional embodiment"; the term "some embodiments" means "at least some embodiments". Relevant definitions for other terms will be given in the following description.
It should be noted that the terms "first", "second", and the like in the present disclosure are only used for distinguishing different devices, modules or units, and are not used for limiting the order or interdependence relationship of the functions performed by the devices, modules or units.
It is noted that references to "a", "an", and "the" modifications in this disclosure are intended to be illustrative rather than limiting, and that those skilled in the art will recognize that "one or more" may be used unless the context clearly dictates otherwise.
The names of messages or information exchanged between devices in the embodiments of the present disclosure are for illustrative purposes only, and are not intended to limit the scope of the messages or information.
With the rapid development of computer technology and mobile communication technology, various network platforms based on electronic devices are widely used and have greatly enriched people's daily lives. More and more users are willing to beautify multimedia data, such as images or videos, on a network platform to obtain a satisfactory picture or video.
At present, a user can beautify multimedia data by using a preset special effect template. For example, a preset sticker may be added to a shot picture, or a preset special effect may be added to the shot picture.
However, with this approach, a user can only select a special effect template from preset special effect tools; the beautification effect is monotonous, lacks interest, and degrades the user experience.
In order to solve the above problem, embodiments of the present disclosure provide a multimedia display and matching method, apparatus, device, and medium capable of displaying synthesized multimedia data generated from multimedia data to be matched and target multimedia data.
The multimedia display method provided by the present disclosure can be applied to the architectures shown in fig. 1 and fig. 2, and is specifically described in detail with reference to fig. 1 and fig. 2.
Fig. 1 shows an architecture diagram of a multimedia display system provided by an embodiment of the present disclosure.
As shown in fig. 1, the multimedia display system may include at least one electronic device 101 on the client side and at least one server 102 on the server side. The electronic device 101 may establish a connection with the server 102 and perform information interaction through a network protocol such as Hypertext Transfer Protocol Secure (HTTPS). The electronic device 101 may be a device with a communication function, such as a mobile phone, a tablet computer, a desktop computer, a notebook computer, a vehicle-mounted terminal, a wearable device, an all-in-one machine, or a smart home device, and may also be a device simulated by a virtual machine or a simulator. The server 102 may be a device with storage and computing functions, such as a cloud server or a server cluster.
Based on the above architecture, a user can perform special effect editing on the original multimedia data through a specific service platform on the electronic device 101, and generate and display the synthesized multimedia data. The specific service platform may be a specific application program or a specific website, such as a social platform or a video playing platform with social function.
In some embodiments, after a user logs in to a specific service platform through the electronic device 101, the electronic device 101 may acquire original multimedia data such as an image or a video and perform special effect editing on it to obtain multimedia data to be matched. After acquiring a plurality of candidate multimedia data including the target multimedia data P11 from the server 102, the electronic device 101 may query the target multimedia data P11 from the candidate multimedia data based on the first multimedia feature of the multimedia data to be matched. The electronic device 101 may then generate synthesized multimedia data P12 from the multimedia data to be matched and the target multimedia data obtained by the feature matching. Optionally, with continued reference to fig. 1, the electronic device 101 may upload the generated synthesized multimedia data P12 to the server 102.
In other embodiments, the electronic device 101 may upload the multimedia data to be matched to the server 102. After receiving the multimedia data to be matched, the server 102 may match the target multimedia data P11 from the candidate multimedia data based on the first multimedia feature of the multimedia data to be matched and send the target multimedia data P11 to the electronic device 101. The electronic device 101 may then generate the synthesized multimedia data P12 based on the multimedia data to be matched and the target multimedia data.
In addition, the multimedia display method provided by the present disclosure may be applied to a specific scenario in which users of multiple electronic devices interact with each other through multimedia data, and is described with reference to the architecture shown in fig. 2.
Fig. 2 shows an architecture diagram of another multimedia display system provided by an embodiment of the present disclosure.
As shown in fig. 2, the multimedia display system may include at least one first electronic device 201 and at least one second electronic device 202 on the client side, and at least one server 203 on the server side. The first electronic device 201, the second electronic device 202, and the server 203 may respectively establish connections and perform information interaction through a network protocol such as HTTPS. The first electronic device 201 and the second electronic device 202 may each be a device with a communication function, such as a mobile phone, a tablet computer, a desktop computer, a notebook computer, a vehicle-mounted terminal, a wearable device, an all-in-one machine, or a smart home device, and may also be a device simulated by a virtual machine or a simulator. The server 203 may be a device with storage and computing functions, such as a cloud server or a server cluster.
Based on the above architecture, a first user may log in to a specific service platform on the first electronic device 201, and a second user may log in to the same specific service platform on the second electronic device 202. During the interaction between the first user and the second user through the specific service platform, the second user may use the second electronic device 202 to send, through the server 203 of the platform, the target multimedia data P22 to be synthesized with the first user's data to the first user within the platform. The specific social platform may be a specific application program with a social function or a specific website.
In one embodiment, after the second user sends the special-effect-edited target multimedia data P22 to the server 203 through the second electronic device 202, the server 203 may send candidate multimedia data including the target multimedia data P22 to the first electronic device 201. If the first electronic device 201 determines that the target multimedia data P22 matches the special-effect-processed multimedia data to be matched, it may generate the synthesized multimedia data P23 and transmit the synthesized multimedia data P23 to the second electronic device 202 through the server 203.
In another embodiment, after the server 203 receives the special-effect-edited multimedia data to be matched P21 sent by the first user through the first electronic device 201 and the special-effect-edited target multimedia data P22 sent by the second user through the second electronic device 202, if the target multimedia data P22 matches the multimedia data to be matched P21, the server 203 sends the target multimedia data P22 to the first electronic device 201. After generating the synthesized multimedia data P23, the first electronic device 201 transmits the synthesized multimedia data P23 to the second electronic device 202 through the server 203.
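For ease of understanding, the following minimal Python sketch illustrates the server-side flow described above: the server keeps a pool of candidate multimedia data and either returns a match for newly received data to be matched or retains that data as a future candidate. All names (MultimediaData, CandidatePool, handle_upload, is_match) and the placeholder matching rule are illustrative assumptions, not part of the disclosure.

from dataclasses import dataclass, field


@dataclass
class MultimediaData:
    user_id: str
    features: dict  # first multimedia features, e.g. {"glasses": True}


@dataclass
class CandidatePool:
    items: list = field(default_factory=list)


def is_match(a: MultimediaData, b: MultimediaData) -> bool:
    # Placeholder for the tag-based matching of steps A1 to A4 described later;
    # here, an assumed rule: both sides used the same special effect template.
    return a.features.get("effect_template") == b.features.get("effect_template")


def handle_upload(pool: CandidatePool, to_match: MultimediaData):
    """Server 203: receive data to be matched (P21); return target data (P22) if found."""
    for candidate in pool.items:
        if candidate.user_id != to_match.user_id and is_match(candidate, to_match):
            return candidate  # sent back to the first electronic device 201
    pool.items.append(to_match)  # no match yet; keep P21 as a candidate for later uploads
    return None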
After the architecture of the multimedia display system according to the embodiment of the present disclosure is introduced with fig. 1 and fig. 2, a multimedia display method according to the embodiment of the present disclosure will be described with reference to fig. 3 to fig. 8.
Fig. 3 shows a flow chart of a multimedia display method provided by an embodiment of the present disclosure.
In the disclosed embodiment, the multimedia display method may be performed by an electronic device. Among them, the electronic devices may include, but are not limited to, mobile terminals such as mobile phones, notebook computers, digital broadcast receivers, PDAs (personal digital assistants), PADs (tablet computers), PMPs (portable multimedia players), in-vehicle terminals (e.g., car navigation terminals), wearable devices, and the like, and fixed terminals such as digital TVs, desktop computers, smart home devices, and the like.
As shown in fig. 3, the multimedia display method may include the following steps.
S310, receiving original multimedia data.
In the embodiment of the disclosure, when a user wants to perform special effect editing on an image or a video, the user may trigger a related operation in a target application program, and the electronic device may receive original multimedia data in response to the related trigger operation. The target application program may be a social platform or a video publishing platform. Specifically, the original multimedia data may be multimedia data containing visual information, such as video data or image data.
In some embodiments, the raw multimedia data may be collected by the user in real-time. Accordingly, the related operation may be an opening operation of the shooting page by the user. Alternatively, the related operation may be a photographing operation of the user on a photographing page. Or, the related operations may be triggered by the user for the multimedia combining function on the live page or the shooting page.
In other embodiments, the raw multimedia data may be stored locally by the electronic device. Accordingly, the related operation may be a selection operation of the image or video in the electronic album by the user.
In still other embodiments, the raw multimedia data may be user downloaded. Accordingly, the related operation may be a downloading operation of the user for the image or video within a browser, a target application, or a download page of a third party application.
In still other embodiments, the raw multimedia data may be transmitted to the electronic device by other devices. Accordingly, the electronic device may take the multimedia data transmitted by the other device as original multimedia data after receiving the multimedia data.
In some embodiments, the original multimedia data may be multimedia data including a target object such as a person, an animal, a plant, or an object. For example, the original multimedia data may be a selfie photo or a self-portrait video of the user. The original multimedia data may include a partial or whole image of the target object; for example, it may include only an image of a person's face, or an image of the face together with other body parts.
And S320, performing special effect editing on the original multimedia data to obtain the multimedia data to be matched.
In the embodiment of the disclosure, after receiving the original multimedia data, the electronic device may respond to a trigger operation of a special effect editing function or a multimedia photographing function of a user to perform special effect editing on the original multimedia data to obtain the multimedia data to be matched.
In some embodiments, the special effect editing may change the characteristics of the target object itself in the original multimedia data or change the characteristics of the accessory components of the target object by adding or replacing the original characteristics. Specifically, the special effect editing may be special effect editing of the original multimedia image by at least one of special effect editing tools such as a beauty tool, an image modification tool, a special effect prop, a filter, an image style migration tool, and a chartlet. The special effect editing tool may be provided by a target application program, a third-party application program, a webpage, and the like. It should be noted that, for convenience of description in subsequent sections, the following sections of the embodiments of the present disclosure refer to a target object after being subjected to special effect processing in the multimedia data to be matched as a first special effect object.
In one example, the electronic device may change the facial features of the target object by way of beauty, image modification, special effects props, and the like. For example, the features of the target object face contour, eyes, skin, nose, mouth, etc. may be adjusted.
In another example, features of a target object such as height, overall or local obesity may be altered by functions such as beauty, image modification, special effects props, and the like.
In still other embodiments, features of accessory components such as target object apparel, headwear, glasses, makeup, masks, facial effects that do not alter the original component features of the face, and the like may be added or altered by charting, special effects props, and the like. The special face effect which does not change the original feature of the face part can comprise animal beard and the like.
In still other embodiments, the entire original multimedia data image style, or the entire or partial image style of the target object, may be style migrated via an image style migration tool or filter. For example, the image style of the original multimedia data may be converted into an animation style, and accordingly, the target object in the original multimedia data becomes a cartoon character.
In some embodiments, a target special effect template may be selected from a plurality of selectable special effect templates of a special effect editing tool for special effect editing of original multimedia data. Specifically, if the original multimedia data is an image, static or dynamic special effect editing may be performed on a local image or an entire image of the original image by using the target special effect template, so as to generate multimedia data to be matched in an image or video format. Or, if the original multimedia data is a video, one or more key video frames may be extracted from the original video, and a target special effect template is used to perform static or dynamic special effect editing on a local image or an overall image of the key video frames to generate multimedia data to be matched in an image or video format. Alternatively, the key video frame may be a video frame of the original video containing the target object.
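As a concrete illustration of applying a target special effect template to an image or a key video frame, the following Python sketch overlays a sticker-style RGBA template onto a region of a frame using Pillow. The library choice, the region coordinates, and the file names are assumptions made for illustration; the disclosure does not prescribe any particular implementation.

from PIL import Image


def apply_effect_template(frame: Image.Image, template_path: str, box: tuple) -> Image.Image:
    """Overlay an RGBA effect template (e.g. a mask sticker) onto the region box."""
    x0, y0, x1, y1 = box
    template = Image.open(template_path).convert("RGBA").resize((x1 - x0, y1 - y0))
    edited = frame.convert("RGBA")
    edited.alpha_composite(template, dest=(x0, y0))
    return edited


# Usage: turn one key frame into multimedia data to be matched.
# key_frame = Image.open("key_frame.png")
# to_be_matched = apply_effect_template(key_frame, "mask_template.png", (120, 80, 320, 280))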
Further, S320 may include at least the following two embodiments, depending on the manner of special effect editing.
In some embodiments, S320 may specifically include: responding to a template selection operation on a target special effect template, and performing special effect editing on the original multimedia data based on the target special effect template to obtain the multimedia data to be matched.
Specifically, after the original multimedia data is received, if the user wants to perform special effect editing on it, the electronic device may respond to a trigger operation in which the user selects a target special effect template from a plurality of selectable special effect templates of the special effect editing tool, and perform special effect editing on the original multimedia data with the selected target special effect template to obtain the multimedia data to be matched.
In other embodiments, S320 may specifically include: performing special effect editing on the original multimedia data based on a target special effect template corresponding to the original multimedia data to obtain the multimedia data to be matched.
Specifically, if the user has selected the target special effect template in advance, the target special effect template can be used directly to perform special effect editing on the original multimedia data to obtain the multimedia data to be matched. Alternatively, an appropriate special effect template may be matched to the original multimedia data as the target special effect template based on the multimedia features of the original multimedia data. Or, if the user has selected the target special effect template, the multimedia data can be captured with the template applied, and accordingly the multimedia data to be matched is displayed directly on the shooting interface.
And S330, generating synthetic multimedia data based on the multimedia data to be matched and the target multimedia data.
In the embodiment of the disclosure, after obtaining the multimedia data to be matched and the target multimedia data, the electronic device may add the target image portion of the multimedia data to be matched and the target image portion of the target multimedia data, directly or after a certain conversion, to a target image area in a target multimedia template by means of image splicing or image fusion, so as to obtain the synthesized multimedia data. In this way, the synthesized multimedia data may simultaneously have, within the target multimedia template, at least part of the features of a first special effect character in the multimedia data to be matched and of a second special effect character in the target multimedia data.
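The image-splicing variant described above can be pictured with the following Python sketch, which pastes the two special effect objects into two predefined regions of a target multimedia template image. The region coordinates and the use of Pillow are assumptions made for illustration only.

from PIL import Image


def synthesize(template_path: str, first_obj: Image.Image,
               second_obj: Image.Image) -> Image.Image:
    """Splice the first and second special effect objects into a template image."""
    template = Image.open(template_path).convert("RGBA")
    regions = [(40, 120, 260, 420), (300, 120, 520, 420)]  # assumed target image areas
    for obj, (x0, y0, x1, y1) in zip((first_obj, second_obj), regions):
        patch = obj.convert("RGBA").resize((x1 - x0, y1 - y0))
        template.alpha_composite(patch, dest=(x0, y0))
    return template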
For convenience of explanation of the composite multimedia data, the following sections of the embodiments of the present disclosure will make specific descriptions of the first multimedia feature of the multimedia data to be matched and the target multimedia data before introducing the composite multimedia data.
The first multimedia feature of the multimedia data to be matched may be a feature of the first special-effect character itself or a feature of an accessory.
In some embodiments, the first special effect character's own features may include facial features of the first special effect character or physical features such as height, obesity, etc.
Alternatively, facial features may include facial features such as head aspect ratio, face shape, chin to head width ratio, forehead length to head length ratio, and the like. Alternatively, ocular features such as eye size, inter-ocular distance, pupil color, pupil size, eye shape, etc. may be included. Still alternatively, nasal features such as nose length, wing width, bridge height, bridge width, etc. may be included. Still alternatively, hair characteristics such as hair length, hair color, hair shape (curly, straight) and the like may be included. Still alternatively, skin information such as skin color, skin roughness, etc. may be included.
In some embodiments, the accessory features may include features such as whether glasses are worn, whether a mask is worn, whether jewelry is worn, whether headwear is worn, whether makeup is applied, and whether there are special facial effects that do not alter the original part features of the face. If the first special effect object wears an accessory, specific features of the accessory may also be included; for example, if the first special effect object wears a mask, the first multimedia feature may also include the model, name, etc. of the mask.
Through the first multimedia characteristics shown above, target multimedia data having the same or matched characteristics as the multimedia data to be matched can be obtained through matching.
For the target multimedia data, optionally, the target multimedia data may include an object subjected to special effect editing. Illustratively, the object in the target multimedia data may be a different object than the target object in the original multimedia data. For example, the object in the target multimedia data may be an image of the second user after special effect editing, and the target object in the multimedia data to be matched may be an image of the first user after special effect editing. For convenience of explanation, the edited object in the target multimedia data may be referred to as a second special effect object.
In some embodiments, the target multimedia data may be pre-stored data in a multimedia database of a target application, a third party application, or a web page.
In other embodiments, the target multimedia data may be special effect edited multimedia data uploaded by other users.
In addition, the manner of generating the target multimedia data by other users is similar to the manner of generating the multimedia data to be matched, and is not described herein again.
In some embodiments, the multimedia data to be matched and the target multimedia data may be the same type of multimedia data, for example, both images or both videos. Alternatively, they may be different types of multimedia data, where one of the two is an image and the other is a video.
After the multimedia data to be matched and the target multimedia data are introduced in detail, the synthesized multimedia data will be described in the following embodiments of the present disclosure.
In some embodiments, if the target multimedia template is a scene template, the synthesized multimedia data may be used to present an interaction behavior or an interaction action of a first special-effect character in the multimedia data to be matched and a second special-effect character in the target multimedia data in a scene corresponding to the target multimedia template. The target multimedia template may be an image scene template or a video scene template, and the specific type thereof is not limited.
In one example, the electronic device may generate the composite multimedia data based on a user operation to select a target multimedia template from a plurality of selectable scene templates. The optional scene template can be obtained from a target application program, a third-party application program or a scene template library of a webpage.
In another example, the electronic device may determine a matching target scene template according to a feature of the first special effect object in the multimedia data to be matched and a feature of the second special effect object in the target multimedia data. The features of the first special effect object and the second special effect object may be their actions.
For example, if the action of the first special effect object is raising a cup and the action of the second special effect object is also raising a cup, the partial or whole images of the first and second special effect objects may be added to a scene template such as a party or a bar to generate synthesized multimedia data showing, for example, a toast.
For another example, if the action of the first special effect object is carrying someone in a princess carry and the action of the second special effect object is being carried, the partial or whole images of the first and second special effect objects may be added to a romantic scene template such as a wedding or a beautiful sky to generate synthesized multimedia data showing, for example, a spinning princess carry.
For another example, if the action of the first special effect object is kicking a ball toward a goal and the action of the second special effect object is defending, the partial or whole images of the first and second special effect objects may be added to a scene template such as a sports field to generate synthesized multimedia data showing, for example, a soccer match.
In yet another example, the electronic device can generate the composite multimedia data based on a target multimedia template uploaded by the user.
In some embodiments, in order to improve the interest, characters, music, special effects, and the like can be added to the synthesized multimedia data.
And S340, displaying the synthesized multimedia data.
In the embodiment of the present disclosure, the electronic device may display the synthesized multimedia data in response to the user's synthesis operation on the multimedia data or the user's trigger operation for displaying the synthesized multimedia data. Alternatively, without responding to any trigger operation, the electronic device may directly display the synthesized multimedia data on the relevant interface after it is generated.
According to the multimedia display method, apparatus, device, and medium described above, after special effect editing is performed on the received original multimedia data, synthesized multimedia data is generated and displayed based on the multimedia data to be matched obtained by the editing and the target multimedia data. Since the target multimedia data is obtained by matching based on the first multimedia feature of the multimedia data to be matched, the synthesized multimedia data obtained from the original multimedia data can include both the special effect in the multimedia data to be matched and the content of the target multimedia data matched with it. The multimedia data image thus has multiple elements, which enriches the beautification effects of multimedia data, makes the display of multimedia data more engaging, and allows users to interact through multimedia data, realizing diverse interaction between users and improving the user experience.
For convenience of understanding, the embodiments of the present disclosure will be described in detail with reference to fig. 4 to 8.
Fig. 4 shows a schematic diagram of a shooting preview interface provided by an embodiment of the present disclosure.
As shown in fig. 4, the electronic device may display a target object 41, and various special effect editing tools such as a filter tool 401, a beauty tool 402, a special effect tool 403, and the like, and may also display a multimedia composition tool 404 in the photographing preview interface 40. The filter tool 401, the beauty tool 402, and the special effect tool 403 may each include one or more special effect templates.
When the user clicks on the special effects tool 403, the displayed interface may be as shown in FIG. 5. Fig. 5 is a schematic diagram illustrating a special effect editing interface provided by an embodiment of the present disclosure.
As shown in fig. 5, a plurality of special effect templates 4031 to 4034 of the special effect tool 403 may be displayed on the special effect editing interface 50. After the user selects the mask special effect template 4033, the generated multimedia data to be matched is as shown in fig. 6. Fig. 6 is a schematic diagram illustrating a display interface of multimedia data to be matched according to an embodiment of the present disclosure.
As shown in fig. 6, the display interface 60 of the multimedia data to be matched may include a first special effects character 61 subjected to special effects processing and a multimedia composition tool 404. After the user clicks the multimedia composition tool 404, the electronic device or the server performs a matching procedure of the multimedia data. Fig. 7 is a schematic diagram illustrating matching logic of multimedia data provided by an embodiment of the present disclosure.
As shown in fig. 7, if the target multimedia data P72 including the second special effect character 73 is obtained by matching according to the to-be-matched multimedia data P71 including the first special effect character 61, the generated composite multimedia data is as shown in fig. 8.
Fig. 8 is a schematic diagram illustrating a display interface of synthesized multimedia data according to an embodiment of the disclosure. As shown in fig. 8, the synthesized multimedia data P81 may present, in the form of an image or a video, a scene in which the first special effect character 61 and the second special effect character 73 make a toast in a virtual dance scene.
In some embodiments provided by the present disclosure, fig. 9 shows a flowchart of another multimedia display method.
In the disclosed embodiment, the multimedia display method may be performed by an electronic device. Among them, the electronic devices may include, but are not limited to, mobile terminals such as mobile phones, notebook computers, digital broadcast receivers, PDAs (personal digital assistants), PADs (tablet computers), PMPs (portable multimedia players), in-vehicle terminals (e.g., car navigation terminals), wearable devices, and the like, and fixed terminals such as digital TVs, desktop computers, smart home devices, and the like.
As shown in fig. 9, the multimedia display method may include the following steps.
S910, receiving the original multimedia data. The specific content of S910 is similar to that of S310, and is not described again.
S920, performing special effect editing on the original multimedia data to obtain the multimedia data to be matched. The specific content of S920 is similar to that of S320, and is not described again.
S930, extracting the first multimedia features from the multimedia data to be matched.
In some embodiments, the first multimedia feature may be extracted from the multimedia data to be matched using an image feature extraction technique or a video frame feature extraction technique.
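As an illustration of this step, the following Python sketch maps the output of an attribute detector run on a key frame to a first multimedia feature dictionary. The detector detect_attributes is a stub standing in for any face or body analysis model, and the feature schema is an assumption; the disclosure only names image and video frame feature extraction techniques in general.

from PIL import Image


def detect_attributes(frame):
    """Stub: replace with a real face/body attribute detector."""
    return {"glasses": True, "gender": "female",
            "hair_color": "black", "action": "raise_cup"}


def extract_first_features(media_path: str) -> dict:
    frame = Image.open(media_path)  # for a video, use an extracted key frame
    attributes = detect_attributes(frame)
    return {
        "glasses": attributes.get("glasses", False),
        "gender": attributes.get("gender"),
        "hair_color": attributes.get("hair_color"),
        "action": attributes.get("action"),  # e.g. "raise_cup" for later scene matching
    }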
For specific content of the first multimedia feature, reference may be made to the related description of S330 in the above-mentioned part of the embodiment of the present disclosure, and details are not repeated again.
And S940, a plurality of candidate multimedia data corresponding to the multimedia data to be matched are obtained.
In some embodiments, the candidate multimedia data may be obtained from a multimedia database of the target application, a third party application, or a web page.
In one example, the candidate multimedia data may be pre-stored data in a multimedia database.
In another example, the candidate multimedia data may be special effect edited multimedia data uploaded by other users.
S950, querying target multimedia data matching the first multimedia feature from the plurality of candidate multimedia data.
In some embodiments, the target multimedia data may be determined from a plurality of candidate multimedia data by means of feature matching.
In one embodiment, S950 may specifically include the following steps.
Step A1, determining at least one feature tag corresponding to the first multimedia feature.
Optionally, a feature tag may be a tag obtained by classifying the first special effect object from one or more dimensions based on one feature or one class of the first multimedia features. For example, feature tags may classify the first special effect object along the dimensions of the first special effect object itself or of its accessory components.
For example, the feature tags of the first special effect object itself may include a tag characterizing the nose, a tag for the eyes, a gender tag, an action tag, a tag for the skin condition, and the like, which classify the first special effect object in terms of the character itself.
For another example, the accessory tags of the first special effect object may include a tag indicating whether glasses are worn, a tag indicating whether a mask is worn, a tag indicating whether makeup is applied, and the like.
Step A2, determining, for each candidate multimedia data, the common tags identical to the at least one feature tag.
That is, if the same tag exists both among the tags of the multimedia data to be matched and among the tags of the candidate multimedia data, that tag may be used as a common tag of the two.
For example, if user A's tags include wearing glasses, high nose bridge, yellow skin, tall, and female, and user B's tags include no glasses, small mouth, thin, and male, the common tags of the two may include a glasses tag (with or without glasses) and a gender tag (male or female).
Step A3, calculating a tag matching score between the multimedia data to be matched and each candidate multimedia data according to the weight values corresponding to the common tags.
In one example, the weight value of a common tag may be preset.
In yet another example, the weight value of a common tag may be set according to the user's selection. For tags the user does not care about, a low weight value a is set. For tags the user values or cares about (such as whether the user likes or dislikes others wearing glasses, or whether the other person has a ponytail), a high weight value b may be set for the glasses tag and the hairstyle tag, where weight value b is greater than weight value a. Alternatively, a high weight value c may be set for tags the user is interested in, and a low weight value d for tags the user dislikes, where weight value c is greater than weight value a, and weight value a is greater than weight value d.
The tag matching score reflects the degree of matching between each candidate multimedia data and the multimedia data to be matched with respect to the feature or class of features corresponding to the tag.
In some embodiments, for each feature tag, a tag score of the multimedia data corresponding to that tag may be generated from the feature. For example, for the glasses tag, if the first special effect object in the multimedia data to be matched wears glasses, the tag score of the glasses tag may be 100, and if it does not, the tag score may be 0.
It should be noted that the tag scores of the candidate multimedia data are calculated in the same way as those of the multimedia data to be matched, and details are not repeated here.
Accordingly, after the tag scores of the candidate multimedia data and of the multimedia data to be matched are obtained, the similarity score between each candidate multimedia data and the multimedia data to be matched may be calculated from the tag scores of their common tags, and the tag matching score between the two may then be calculated from the similarity scores and the weight values.
Optionally, for some feature tags, the closeness of the tag scores of a candidate multimedia data and the multimedia data to be matched is positively correlated with the similarity score between the two. That is, the closer their tag scores, the higher the similarity score; for example, if both wear glasses, the score is high. For such tags, the similarity score may be equal to a preset value minus the target tag score difference, where the target tag score difference is the difference between the tag score of the candidate multimedia data for that tag and the tag score of the multimedia data to be matched for that tag.
For other feature tags, the closeness of the tag scores is negatively correlated with the similarity score between the two. That is, the greater the difference between the tag scores, the higher the similarity score; for example, if the genders are the same, the similarity score is low, and if they are opposite, the similarity score is high. For such tags, the similarity score may be equal to the target tag score difference.
In one example, the tag scores of the plurality of candidate multimedia data may be recorded in a matching table. Accordingly, after obtaining the feature tags of the multimedia data to be matched and their tag scores, the electronic device calculates the tag matching score between the multimedia data to be matched and each candidate multimedia data based on the above method, so as to find the target multimedia data from the matching table.
In other embodiments, the tag matching score of each tag may be obtained according to the weight value of the tag and the feature matching degree score between each candidate multimedia data and the multimedia data to be matched. For example, the tag matching score of each tag may be equal to the product of the weight value of the tag and the feature matching degree score between each candidate multimedia data and the multimedia data to be matched.
Optionally, for some feature tags, the similarity between each candidate multimedia data and the multimedia data to be matched positively correlates with the feature matching score between the candidate multimedia data and the multimedia data to be matched. That is, the higher the similarity between each candidate multimedia data and the multimedia data to be matched, the higher the feature matching score between the candidate multimedia data and the multimedia data to be matched. For example, if both are wearing glasses, the feature matching degree score is high.
For other feature tags, the similarity between each candidate multimedia data and the multimedia data to be matched is negatively correlated with the feature matching degree score between the two. That is, the lower the similarity between each candidate multimedia data and the multimedia data to be matched, the higher the feature matching degree score between the two. For example, if the genders are the same, the feature matching degree score is low; if the genders are opposite, the feature matching degree score is high.
It should be noted that the correlation between the similarity of each feature label and the feature matching score may be set according to an actual scene and specific requirements, which is not specifically limited.
Step A4, sorting the tag matching scores of the candidate multimedia data to determine the target multimedia data.
In one example, the candidate multimedia data may be sorted from high to low by tag matching score, and the candidate with the highest score may be used as the target multimedia data. The tag matching scores between the multimedia data to be matched and the candidate multimedia data may be recorded in the matching table in descending or ascending order.
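Putting steps A1 to A4 together, the following Python sketch computes weighted tag matching scores over the common tags and ranks the candidates. The weight values, the preset value of 100, and the choice of which tags are negatively correlated are illustrative assumptions, not values fixed by the disclosure.

PRESET = 100  # preset value used for positively correlated tags
WEIGHTS = {"glasses": 3.0, "hairstyle": 3.0, "gender": 1.0}  # high weight b > low weight a
NEGATIVELY_CORRELATED = {"gender"}  # larger tag score difference means a better match


def tag_matching_score(query: dict, candidate: dict) -> float:
    score = 0.0
    common = query.keys() & candidate.keys()  # step A2: common tags
    for tag in common:  # step A3: weighted similarity per common tag
        diff = abs(query[tag] - candidate[tag])  # target tag score difference
        similarity = diff if tag in NEGATIVELY_CORRELATED else PRESET - diff
        score += WEIGHTS.get(tag, 1.0) * similarity
    return score


def query_target(query: dict, candidates: list) -> dict:
    # Step A4: rank by tag matching score and take the highest-scoring candidate.
    return max(candidates, key=lambda c: tag_matching_score(query, c))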
In some embodiments, to improve the matching accuracy, the target multimedia data may further satisfy one or more of the following conditions.
Condition C1: the special effect editing mode of the target multimedia data is the same as that of the multimedia data to be matched. Illustratively, if both the target multimedia data and the multimedia data to be matched are subjected to special effect editing that adds a mask, their special effect editing modes are the same. For another example, if a certain special effect template corresponds to a first special effect and a second special effect, and the target multimedia data adopts the first special effect while the multimedia data to be matched adopts the second special effect, the two are also considered to have the same special effect editing mode.
Condition C2: the user to whom the target multimedia data belongs is an online user. That is, if the user has opened the interface of the target application program on the electronic device, or the target application program is running in the background on the electronic device, the user is considered an online user.
Condition C3: the location distance between the publication location of the target multimedia data and the publication location of the original multimedia data is less than or equal to a preset distance threshold.
For example, the distance threshold may be a system default value, or may be a target distance threshold selected by the user from a plurality of selectable distance thresholds. Alternatively, if the user to whom the target multimedia data belongs and the user to whom the original multimedia data belongs are in the same area, such as the same district, the same city, or the same province, the location distance between the two users is considered to be less than or equal to the preset distance threshold. The distance threshold may be set according to the actual situation or specific scenario, which is not limited here.
Condition C4: the historical matching count of the target multimedia data satisfies a preset count screening condition. The count screening condition may be that the historical matching count falls within a preset count range. The preset count range may be a system default value, or may be a target count range selected by the user from a plurality of selectable count ranges.
In one example, to improve matching flexibility, if the target multimedia data cannot be filtered out through steps A1 to A4, the target multimedia data may be selected from the candidate multimedia data using at least one of conditions C1 to C4.
In another example, to improve matching accuracy, if a plurality of target multimedia data are obtained through steps A1 to A4, at least one of conditions C1 to C4 may be used to further screen them to obtain the target multimedia data.
In still other examples, after the multimedia data to be matched is obtained, at least one of conditions C1 to C4 may be used directly to screen the target multimedia data from the plurality of candidate multimedia data.
In yet another example, to improve the matching rate, the candidate multimedia data may be pre-filtered using at least one of conditions C1 to C4.
In some embodiments, if the target multimedia data is screened according to at least two of conditions C1 to C4, the candidate multimedia data may be screened sequentially in a preset order of use of the conditions until the target multimedia data is obtained after the last condition is applied. Alternatively, the screening may stop once the number of target multimedia data obtained after applying a further condition falls within a preset number range.
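The sequential screening by conditions C1 to C4 can be sketched in Python as below. The candidate record fields (effect_mode, online, location, match_count), the Euclidean distance computation, and the stop rule are assumptions made for illustration.

from math import dist


def screen_candidates(candidates, query, max_distance=50.0, count_range=(0, 100)):
    checks = [
        lambda c: c["effect_mode"] == query["effect_mode"],                # C1
        lambda c: c["online"],                                             # C2
        lambda c: dist(c["location"], query["location"]) <= max_distance,  # C3
        lambda c: count_range[0] <= c["match_count"] <= count_range[1],    # C4
    ]
    remaining = list(candidates)
    for check in checks:  # apply the conditions in a preset order
        narrowed = [c for c in remaining if check(c)]
        if narrowed:
            remaining = narrowed
        if len(remaining) == 1:  # sufficiently narrowed down; stop early
            break
    return remaining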
In some embodiments, the target multimedia data is further obtained by matching based on a second multimedia feature of the original multimedia data.
In one example, in order to improve the flexibility of matching, if the target multimedia data cannot be screened out through steps A1 to A4, the target multimedia data may be matched from the candidate multimedia data through the second multimedia feature of the original multimedia data.
In another example, in order to improve the matching accuracy, if a plurality of target multimedia data are screened out through steps A1 to A4, the plurality of target multimedia data may be further screened through the second multimedia feature of the original multimedia data to obtain the target multimedia data.
In some examples, the second multimedia feature may be a multimedia feature of a target object in the original multimedia data. The second multimedia feature is similar to the first multimedia feature, and the method for querying the target multimedia data using the second multimedia feature is similar to the method for querying the target multimedia data using the first multimedia feature, which is not repeated herein.
S960, generating synthetic multimedia data based on the multimedia data to be matched and the target multimedia data, wherein the target multimedia data is obtained according to the first multimedia feature matching of the multimedia data to be matched. The specific content of S960 is similar to that of S330, and is not described again.
And S970, displaying the synthesized multimedia data. The specific content of S970 is similar to that of S340, and is not described again.
According to the multimedia display method, the first multimedia feature of the multimedia data to be matched can be used to accurately match target multimedia data with the same characteristics from the candidate multimedia data, so that the generated synthetic multimedia data includes a first special-effect object and a second special-effect object with a high degree of feature matching, which improves the interestingness of the multimedia display method.
In some embodiments of the present disclosure, fig. 10 is a flowchart illustrating a further multimedia display method provided by an embodiment of the present disclosure.
In the disclosed embodiment, the multimedia display method may be performed by an electronic device. Among them, the electronic devices may include, but are not limited to, mobile terminals such as mobile phones, notebook computers, digital broadcast receivers, PDAs (personal digital assistants), PADs (tablet computers), PMPs (portable multimedia players), in-vehicle terminals (e.g., car navigation terminals), wearable devices, and the like, and fixed terminals such as digital TVs, desktop computers, smart home devices, and the like.
As shown in fig. 10, the multimedia display method may include the following steps.
S1010, receiving the original multimedia data. The specific content of S1010 is similar to the specific content of S310, and is not described again.
And S1020, performing special effect editing on the original multimedia data to obtain the multimedia data to be matched. The specific content of S1020 is similar to the specific content of S320, and is not described again.
And S1030, generating synthetic multimedia data based on the multimedia data to be matched and the target multimedia data, wherein the target multimedia data is obtained according to the first multimedia feature matching of the multimedia data to be matched. The specific content of S1030 is similar to that of S330, and is not described again.
And S1040, displaying the synthesized multimedia data. The specific content of S1040 is similar to that of S340, and is not described again.
And S1050, when the triggering operation on the synthesized multimedia data is detected, the synthesized multimedia data is issued to the user to which the original multimedia data belongs and the user to which the target multimedia data belongs.
In some embodiments, when a user wants to interact with the user to which the target multimedia data belongs, the user performs a trigger operation on the synthesized multimedia data. The trigger operation may be performed when the synthesized multimedia data is generated or after the synthesized multimedia data is previewed; the trigger timing is not limited.
In one embodiment, the electronic device may publish the composite multimedia data to a user to which the original multimedia data belongs and a user to which the target multimedia data belongs through the server.
In one embodiment, the electronic device may display the synthesized multimedia data in the image/video favorites or the presentation bar of the target application of the user to which the original multimedia data belongs and of the user to which the target multimedia data belongs, and add a logo to the corresponding icon to prompt the users to view the synthesized multimedia data.
In another embodiment, S1050 may include the following steps.
And D1, sending first prompt information to the user to which the original multimedia data belongs, wherein the first prompt information is used for triggering the display of the synthesized multimedia data and the display of the social homepage of the user to which the target multimedia data belongs.
Optionally, the first prompt information may be issued in the form of text, a picture, voice, or the like, through a chat box, a display window on the interface, or a broadcast bar on the interface. Illustratively, the first prompt information may take the specific form of: "You just participated in a false-face pairing (the scene corresponding to the composite multimedia video) with XXX; go to TA's home page to have a look / chat with TA."
Illustratively, the first prompt information may include a link, such as text or a two-dimensional code, to the composite multimedia data display interface, or the user may jump to the composite multimedia data display interface by triggering the message bar of the first prompt information. Optionally, for convenience of interaction, the first prompt information may further include a link, such as text or a two-dimensional code, to the user to which the target multimedia data belongs. The composite multimedia data display interface may include a control for accessing the social homepage of the user to which the target multimedia data belongs, a control for adding that user as a friend, or a control for establishing a chat with that user.
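As a sketch only, the first prompt information and its jump targets might be represented as a simple payload like the following; every field name here is an assumption for illustration, not a format defined by the present disclosure.

from dataclasses import dataclass, field
from typing import List

@dataclass
class PromptInfo:
    text: str                      # e.g. "You just participated in ... with XXX"
    composite_link: str            # text link / two-dimensional code target for
                                   # the composite multimedia data display page
    peer_homepage_link: str        # social homepage of the matched user
    controls: List[str] = field(   # controls exposed on the display interface
        default_factory=lambda: ["visit_homepage", "add_friend", "start_chat"])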
And D2, sending second prompt information to the user to which the target multimedia data belongs, wherein the second prompt information is used for triggering the playing of the synthesized multimedia data and the display of the social homepage of the user to which the original multimedia data belongs.
The second prompt message is similar to the first prompt message, and is not described again.
Through the embodiment of the disclosure, the synthesized multimedia data can be published to the user to which the original multimedia data belongs and to the user to which the target multimedia data belongs, so that interaction between the two users can be realized through the synthesized multimedia data, which improves the interestingness of multimedia display and the use experience.
Fig. 11 shows a flow chart of a multimedia matching method provided by the embodiment of the present disclosure.
In the disclosed embodiment, the multimedia matching method may be performed by a server. The server may be a cloud server or a server cluster or other devices with storage and computing functions.
As shown in fig. 11, the multimedia matching method may include the following steps.
S1110, receiving multimedia data to be matched, wherein the multimedia data to be matched is obtained by performing special effect processing on original multimedia data.
S1120, extracting a first multimedia feature from the multimedia data to be matched.
S1130, obtaining a plurality of candidate multimedia data corresponding to the multimedia data to be matched.
S1140, querying, among the plurality of candidate multimedia data, target multimedia data that matches the first multimedia feature, wherein the target multimedia data is used for generating combined multimedia data with the multimedia data to be matched.
In some embodiments of the present disclosure, S1140 may comprise the following steps, illustrated by the sketch after the list:
determining at least one feature tag corresponding to the first multimedia feature;
determining, for each candidate multimedia data, a common label that is identical to the at least one feature tag;
calculating a label matching score between the multimedia data to be matched and each candidate multimedia data according to the weight value corresponding to the common label;
and sequencing the label matching scores of the candidate multimedia data to determine the target multimedia data.
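The label-matching query above can be illustrated with the following sketch. It assumes each item of multimedia data already carries a set of feature tags and that each tag has a preset weight; the tag names, weights, and function names are invented for the example.

# Illustrative tag weights; a real system would load these from configuration.
TAG_WEIGHTS = {"smile": 3.0, "glasses": 1.5, "outdoor": 1.0}

def query_target(feature_tags, candidates):
    """feature_tags: set of tags of the data to be matched;
    candidates: dict mapping candidate id -> set of its feature tags."""
    scores = {}
    for cid, cand_tags in candidates.items():
        common = feature_tags & cand_tags          # common labels
        # label matching score = sum of the weights of the common labels
        scores[cid] = sum(TAG_WEIGHTS.get(tag, 0.0) for tag in common)
    # sort by score, highest first; the best-scoring candidate is the target
    ranked = sorted(scores.items(), key=lambda item: item[1], reverse=True)
    return ranked[0][0] if ranked else None

# Example: query_target({"smile", "outdoor"},
#                       {"a": {"smile", "glasses"}, "b": {"outdoor"}}) -> "a"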
It should be noted that the multimedia matching method shown in S1110 to S1140 is similar to the multimedia display method described above in connection with S910 to S970, and is not described herein again.
In some embodiments of the present disclosure, after S1140, the multimedia matching method may further include: receiving an issuing instruction for the synthesized multimedia data, and issuing the synthesized multimedia data to the user to which the original multimedia data belongs and the user to which the target multimedia data belongs. The issuing instruction is generated by the electronic device after detecting a triggering operation on the synthesized multimedia data. These steps are similar to S1050 and are not described herein again.
In some embodiments of the present disclosure, after S1140, the multimedia matching method may further include:
and sending first prompt information to the user to which the original multimedia data belongs, wherein the first prompt information is used for triggering and displaying the synthetic multimedia data and displaying the social homepage of the user to which the target multimedia data belongs. The step is similar to the step D1, and is not described herein again.
And sending second prompt information to the user to which the target multimedia data belongs, wherein the second prompt information is used for triggering and playing the synthesized multimedia data and displaying the social homepage of the user to which the original multimedia data belongs. The step is similar to the step D2, and is not described herein again.
According to the multimedia matching method, after special effect editing is performed on the received original multimedia data, the synthesized multimedia data is generated and displayed based on the multimedia data to be matched obtained by the editing and the target multimedia data. Because the target multimedia data is obtained by matching based on the first multimedia feature of the multimedia data to be matched, the synthesized multimedia data obtained from the original multimedia data can include both the special effect in the multimedia data to be matched and the content of the target multimedia data matched with it. The multimedia data thus carries multiple elements, which enriches the beautifying effect of the multimedia data, improves the interestingness of multimedia data display, enables users to interact through the multimedia data, realizes diversified interaction between users, and improves the use experience of the users.
The embodiment of the present disclosure further provides a multimedia display apparatus for implementing the above multimedia display method, which is described below with reference to fig. 12.
In the embodiment of the present disclosure, the multimedia display apparatus may be an electronic device, for example, the multimedia display apparatus may be the first electronic device 101 in the client shown in fig. 1. The electronic device may include a mobile phone, a tablet computer, a desktop computer, a notebook computer, a vehicle-mounted terminal, a wearable electronic device, an all-in-one machine, an intelligent home device, and other devices having a communication function, and may also be a virtual machine or a simulator-simulated device.
Fig. 12 is a schematic structural diagram of a multimedia display device provided in an embodiment of the present disclosure.
As shown in fig. 12, the multimedia display apparatus 1200 may include a data receiving unit 1210, an effect editing unit 1220, a data synthesizing unit 1230, and a data display unit 1240.
A data receiving unit 1210 configured to receive original multimedia data;
the special effect editing unit 1220 is configured to perform special effect editing on the original multimedia data to obtain multimedia data to be matched;
the data synthesis unit 1230 is configured to generate synthesized multimedia data based on the multimedia data to be matched and the target multimedia data, wherein the target multimedia data is obtained by matching the first multimedia features of the multimedia data to be matched;
and a data display unit 1240 configured to display the synthesized multimedia data.
The multimedia display device of this embodiment, after performing special effect editing on the received original multimedia data, generates and displays the synthesized multimedia data based on the multimedia data to be matched obtained by the editing and the target multimedia data. Because the target multimedia data is obtained by matching based on the first multimedia feature of the multimedia data to be matched, the synthesized multimedia data obtained from the original multimedia data can include both the special effect in the multimedia data to be matched and the content of the target multimedia data matched with it. The multimedia data thus carries multiple elements, which enriches the beautifying effect of the multimedia data, improves the interestingness of multimedia data display, enables users to interact through the multimedia data, realizes diversified interaction between users, and improves the use experience of the users.
In some embodiments of the present disclosure, the special effect editing unit 1220 may be further configured to: responding to the template selection operation of the target special effect template, and performing special effect editing on the original multimedia data based on the target special effect template to obtain multimedia data to be matched;
in other embodiments of the present disclosure, the special effect editing unit 1220 may be further configured to: and performing special effect editing on the original multimedia data based on the target special effect template corresponding to the original multimedia data to obtain the multimedia data to be matched.
In some embodiments of the present disclosure, the multimedia display apparatus 1200 may further include a feature extraction unit, a data acquisition unit, and a data query unit.
A feature extraction unit configured to extract the first multimedia feature from the multimedia data to be matched;
a data acquisition unit configured to acquire a plurality of candidate multimedia data corresponding to the multimedia data to be matched;
and a data query unit configured to query, among the plurality of candidate multimedia data, the target multimedia data matching the first multimedia feature.
In some embodiments of the present disclosure, the data querying unit may be further configured to:
determining at least one feature tag corresponding to the first multimedia feature;
determining, for each candidate multimedia data, a common label that is identical to the at least one feature tag;
calculating a label matching score between the multimedia data to be matched and each candidate multimedia data according to the weight value corresponding to the common label;
and sequencing the label matching scores of the candidate multimedia data to determine the target multimedia data.
In some embodiments of the present disclosure, the target multimedia data satisfies at least one of the following:
the special effect editing mode of the target multimedia data is the same as that of the multimedia data to be matched;
the user to which the target multimedia data belongs is an online user;
the position distance between the release position of the target multimedia data and the release position of the original multimedia data is smaller than or equal to a preset distance threshold value;
the historical matching times of the target multimedia data meet a preset times screening condition.
In some embodiments of the present disclosure, the target multimedia data is further obtained according to a second multimedia feature matching of the original multimedia data.
In some embodiments of the present disclosure, the multimedia display apparatus 1200 may further include a data publishing unit.
The data publishing unit is configured to publish, when a triggering operation on the synthesized multimedia data is detected, the synthesized multimedia data to the user to which the original multimedia data belongs and the user to which the target multimedia data belongs.
In some embodiments of the present disclosure, the data publishing unit may be further configured to:
sending first prompt information to a user to which the original multimedia data belongs, wherein the first prompt information is used for triggering and displaying the synthetic multimedia data and displaying a social homepage of the user to which the target multimedia data belongs;
and sending second prompt information to the user to which the target multimedia data belongs, wherein the second prompt information is used for triggering and playing the synthesized multimedia data and displaying the social homepage of the user to which the original multimedia data belongs.
It should be noted that the multimedia display apparatus 1200 shown in fig. 12 may perform each step in the method embodiments shown in fig. 3 to 10, and implement each process and effect in the method embodiments shown in fig. 3 to 10, which are not described herein again.
The embodiment of the present disclosure further provides a multimedia matching apparatus for implementing the above multimedia matching method, which is described below with reference to fig. 13. In the embodiment of the present disclosure, the multimedia matching apparatus may be a server, for example, the server 102 shown in fig. 1. The server may be a cloud server or a server cluster or other devices with storage and computing functions.
Fig. 13 is a schematic structural diagram illustrating a multimedia matching apparatus provided in an embodiment of the present disclosure.
As shown in fig. 13, the multimedia matching apparatus 1300 may include a data receiving unit 1310, a feature extracting unit 1320, a data acquiring unit 1330, and a data querying unit 1340.
A data receiving unit 1310 configured to receive multimedia data to be matched, where the multimedia data to be matched is obtained by performing special effect processing on original multimedia data;
a feature extraction unit 1320 configured to extract a first multimedia feature from the multimedia data to be matched;
a data obtaining unit 1330 configured to obtain a plurality of candidate multimedia data corresponding to the multimedia data to be matched;
the data query unit 1340 is configured to query, among the plurality of candidate multimedia data, target multimedia data that matches the first multimedia feature, where the target multimedia data is used to generate merged multimedia data with the multimedia data to be matched.
Through the multimedia matching device of this embodiment, after special effect editing is performed on the received original multimedia data, the synthesized multimedia data is generated and displayed based on the multimedia data to be matched obtained by the editing and the target multimedia data. Because the target multimedia data is obtained by matching based on the first multimedia feature of the multimedia data to be matched, the synthesized multimedia data obtained from the original multimedia data can include both the special effect in the multimedia data to be matched and the content of the target multimedia data matched with it. The multimedia data thus carries multiple elements, which enriches the beautifying effect of the multimedia data, improves the interestingness of multimedia data display, enables users to interact through the multimedia data, realizes diversified interaction between users, and improves the use experience of the users.
In some embodiments of the present disclosure, the data query unit 1340 may be further configured to:
determining at least one feature tag corresponding to the first multimedia feature;
determining, for each candidate multimedia data, a common label that is identical to the at least one feature tag;
calculating a label matching score between the multimedia data to be matched and each candidate multimedia data according to the weight value corresponding to the common label;
and sequencing the label matching scores of the candidate multimedia data to determine the target multimedia data.
In some embodiments of the present disclosure, the target multimedia data satisfies at least one of the following:
the special effect editing mode of the target multimedia data is the same as that of the multimedia data to be matched;
the user to which the target multimedia data belongs is an online user;
the position distance between the release position of the target multimedia data and the release position of the original multimedia data is smaller than or equal to a preset distance threshold value;
the historical matching times of the target multimedia data meet a preset times screening condition.
In some embodiments of the present disclosure, the target multimedia data is further obtained from a second multimedia feature matching of the original multimedia data.
In some embodiments of the present disclosure, the multimedia matching apparatus 1300 may further include a data publishing unit.
The data publishing unit is configured to publish, in response to an issuing instruction for the synthesized multimedia data, the synthesized multimedia data to the user to which the original multimedia data belongs and the user to which the target multimedia data belongs. The issuing instruction is generated by the electronic device after detecting a triggering operation on the synthesized multimedia data.
In some embodiments of the present disclosure, the data publishing unit may be further configured to:
sending first prompt information to a user to which the original multimedia data belongs, wherein the first prompt information is used for triggering and displaying the synthetic multimedia data and displaying a social homepage of the user to which the target multimedia data belongs;
and sending second prompt information to the user to which the target multimedia data belongs, wherein the second prompt information is used for triggering and playing the synthesized multimedia data and displaying the social homepage of the user to which the original multimedia data belongs.
It should be noted that the multimedia matching apparatus 1300 shown in fig. 13 may perform each step in the method embodiment shown in fig. 11, and implement each process and effect in the method embodiment shown in fig. 11, which are not described herein again.
Embodiments of the present disclosure also provide a computing device that may include a processor and a memory that may be used to store executable instructions. The processor may be configured to read the executable instructions from the memory and execute the executable instructions to implement the multimedia display method and/or the multimedia matching method in the above embodiments.
Fig. 14 shows a schematic structural diagram of a computing device provided by an embodiment of the present disclosure. Specific reference is now made to fig. 14, which illustrates an architecture suitable for implementing the computing device 1400 in an embodiment of the present disclosure.
In some embodiments, when implementing the multimedia display method in the above embodiments, the computing device 1400 in the embodiments of the present disclosure may include, but is not limited to, mobile terminals such as a mobile phone, a notebook computer, a digital broadcast receiver, a PDA (personal digital assistant), a PAD (tablet computer), a PMP (portable multimedia player), a vehicle-mounted terminal (e.g., a car navigation terminal), a wearable device, and the like, and fixed terminals such as a digital TV, a desktop computer, a smart home device, and the like.
In other embodiments, when the multimedia matching method in the foregoing embodiments is implemented, the computing device 1400 in the embodiments of the present disclosure may include, but is not limited to, a device with storage and computing functions, such as a cloud server or a server cluster.
It should be noted that the computing device 1400 shown in fig. 14 is only one example and should not bring any limitations to the function and scope of the disclosed embodiments.
As shown in fig. 14, the computing device 1400 may include a processing device (e.g., a central processing unit, a graphics processor, etc.) 1401, which can perform various appropriate actions and processes according to a program stored in a read-only memory (ROM) 1402 or a program loaded from a storage device 1408 into a random access memory (RAM) 1403. In the RAM 1403, various programs and data necessary for the operation of the computing device 1400 are also stored. The processing device 1401, the ROM 1402, and the RAM 1403 are connected to each other by a bus 1404. An input/output (I/O) interface 1405 is also connected to the bus 1404.
Generally, the following devices may be connected to the I/O interface 1405: input devices 1406 including, for example, a touch screen, touch pad, keyboard, mouse, camera, microphone, accelerometer, gyroscope, etc.; an output device 1407 including, for example, a Liquid Crystal Display (LCD), a speaker, a vibrator, or the like; storage devices 1408 including, for example, magnetic tape, hard disk, etc.; and a communication device 1409. The communication means 1409 may allow the computing device 1400 to communicate wirelessly or by wire with other devices to exchange data. While fig. 14 illustrates a computing device 1400 having various means, it is to be understood that not all illustrated means are required to be implemented or provided. More or fewer devices may alternatively be implemented or provided.
The embodiments of the present disclosure also provide a computer-readable storage medium, which stores a computer program, and when the computer program is executed by a processor, the processor is enabled to implement the multimedia display method or the multimedia matching method in the above embodiments.
In particular, according to an embodiment of the present disclosure, the processes described above with reference to the flowcharts may be implemented as computer software programs.
The embodiments of the present disclosure also provide a computer program product, which may include a computer program, and when the computer program is executed by a processor, the processor is enabled to implement the multimedia display method or the multimedia matching method in the above embodiments.
For example, embodiments of the present disclosure include a computer program product comprising a computer program carried on a non-transitory computer readable medium, the computer program containing program code for performing the method illustrated by the flow chart. In such an embodiment, the computer program may be downloaded and installed from a network via the communication device 1409, or installed from the storage device 1408, or installed from the ROM 1402. The computer program, when executed by the processing apparatus 1401, performs the above-described functions defined in the multimedia display method or the multimedia matching method of the embodiment of the present disclosure.
It should be noted that the computer readable medium in the present disclosure can be a computer readable signal medium or a computer readable storage medium or any combination of the two. A computer readable storage medium may be, for example, but not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any combination of the foregoing. More specific examples of the computer readable storage medium may include, but are not limited to: an electrical connection having one or more wires, a portable computer diskette, a hard disk, a Random Access Memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing. In the present disclosure, a computer readable storage medium may be any tangible medium that can contain, or store a program for use by or in connection with an instruction execution system, apparatus, or device. In contrast, in the present disclosure, a computer readable signal medium may comprise a propagated data signal with computer readable program code embodied therein, either in baseband or as part of a carrier wave. Such a propagated data signal may take many forms, including, but not limited to, electro-magnetic, optical, or any suitable combination thereof. A computer readable signal medium may also be any computer readable medium that is not a computer readable storage medium and that can communicate, propagate, or transport a program for use by or in connection with an instruction execution system, apparatus, or device. Program code embodied on a computer readable medium may be transmitted using any appropriate medium, including but not limited to: electrical wires, optical cables, RF (radio frequency), etc., or any suitable combination of the foregoing.
In some embodiments, the clients and servers may communicate using any currently known or future developed network protocol, such as HTTP, and may be interconnected with any form or medium of digital data communication (e.g., a communication network). Examples of communication networks include a local area network ("LAN"), a wide area network ("WAN"), an internetwork (e.g., the Internet), and peer-to-peer networks (e.g., ad hoc peer-to-peer networks), as well as any currently known or future developed network.
The computer readable medium may be embodied in the computing device; or may exist separately and not be assembled into the computing device.
The computer readable medium carries one or more programs which, when executed by the computing device, cause the computing device to perform:
receiving original multimedia data; performing special effect editing on the original multimedia data to obtain multimedia data to be matched; generating synthetic multimedia data based on the multimedia data to be matched and the target multimedia data, wherein the target multimedia data is obtained according to the first multimedia characteristic matching of the multimedia data to be matched; and displaying the synthesized multimedia data.
Or receiving multimedia data to be matched, wherein the multimedia data to be matched is obtained by performing special effect processing on the original multimedia data; extracting a first multimedia characteristic from multimedia data to be matched; acquiring a plurality of candidate multimedia data corresponding to the multimedia data to be matched; and inquiring target multimedia data matched with the first multimedia characteristics in the candidate multimedia data, wherein the target multimedia data is used for generating combined multimedia data with the multimedia data to be matched.
In embodiments of the present disclosure, computer program code for carrying out operations of the present disclosure may be written in any combination of one or more programming languages, including but not limited to object oriented programming languages such as Java, Smalltalk, and C++, as well as conventional procedural programming languages such as the "C" programming language or similar programming languages. The program code may execute entirely on the user's computer, partly on the user's computer, as a stand-alone software package, partly on the user's computer and partly on a remote computer, or entirely on the remote computer or server. In the case of a remote computer, the remote computer may be connected to the user's computer through any type of network, including a local area network (LAN) or a wide area network (WAN), or the connection may be made to an external computer (for example, through the Internet using an Internet service provider).
The flowchart and block diagrams in the figures illustrate the architecture, functionality, and operation of possible implementations of systems, methods and computer program products according to various embodiments of the present disclosure. In this regard, each block in the flowchart or block diagrams may represent a module, segment, or portion of code, which comprises one or more executable instructions for implementing the specified logical function(s). It should also be noted that, in some alternative implementations, the functions noted in the block may occur out of the order noted in the figures. For example, two blocks shown in succession may, in fact, be executed substantially concurrently, or the blocks may sometimes be executed in the reverse order, depending upon the functionality involved. It will also be noted that each block of the block diagrams and/or flowchart illustration, and combinations of blocks in the block diagrams and/or flowchart illustration, can be implemented by special purpose hardware-based systems which perform the specified functions or acts, or combinations of special purpose hardware and computer instructions.
The units described in the embodiments of the present disclosure may be implemented by software or hardware. In some cases, the name of a unit does not constitute a limitation on the unit itself.
The functions described herein above may be performed, at least in part, by one or more hardware logic components. For example, without limitation, exemplary types of hardware logic components that may be used include: field Programmable Gate Arrays (FPGAs), Application Specific Integrated Circuits (ASICs), Application Specific Standard Products (ASSPs), systems on a chip (SOCs), Complex Programmable Logic Devices (CPLDs), and the like.
In the context of this disclosure, a machine-readable medium may be a tangible medium that can contain, or store a program for use by or in connection with an instruction execution system, apparatus, or device. The machine-readable medium may be a machine-readable signal medium or a machine-readable storage medium. A machine-readable medium may include, but is not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any suitable combination of the foregoing. More specific examples of a machine-readable storage medium would include an electrical connection based on one or more wires, a portable computer diskette, a hard disk, a Random Access Memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing.
The foregoing description is merely a description of the preferred embodiments of the present disclosure and the technical principles employed. It will be appreciated by those skilled in the art that the scope of the disclosure is not limited to technical solutions formed by the particular combination of the above features, but also covers other technical solutions formed by any combination of the above features or their equivalents without departing from the concept of the disclosure, for example, technical solutions formed by replacing the above features with (but not limited to) features having similar functions disclosed in the present disclosure.
Further, while operations are depicted in a particular order, this should not be understood as requiring that such operations be performed in the particular order shown or in sequential order. Under certain circumstances, multitasking and parallel processing may be advantageous. Likewise, while several specific implementation details are included in the above discussion, these should not be construed as limitations on the scope of the disclosure. Certain features that are described in the context of separate embodiments can also be implemented in combination in a single embodiment. Conversely, various features that are described in the context of a single embodiment can also be implemented in multiple embodiments separately or in any suitable subcombination.
Although the subject matter has been described in language specific to structural features and/or methodological acts, it is to be understood that the subject matter defined in the appended claims is not necessarily limited to the specific features or acts described above. Rather, the specific features and acts described above are disclosed as example forms of implementing the claims.

Claims (14)

1. A multimedia display method, comprising:
receiving original multimedia data;
performing special effect editing on the original multimedia data to obtain multimedia data to be matched;
generating synthetic multimedia data based on the multimedia data to be matched and target multimedia data, wherein the target multimedia data is obtained by matching based on a first multimedia feature of the multimedia data to be matched;
and displaying the synthesized multimedia data.
2. The method according to claim 1, wherein the performing effect editing on the original multimedia data to obtain multimedia data to be matched comprises:
responding to the template selection operation of a target special effect template, and carrying out special effect editing on the original multimedia data based on the target special effect template to obtain the multimedia data to be matched;
or performing special effect editing on the original multimedia data based on a target special effect template corresponding to the original multimedia data to obtain the multimedia data to be matched.
3. The method of claim 1, wherein before the generating composite multimedia data based on the multimedia data to be matched and the target multimedia data, the method further comprises:
extracting the first multimedia feature from the multimedia data to be matched;
obtaining a plurality of candidate multimedia data corresponding to the multimedia data to be matched;
and inquiring the target multimedia data matched with the first multimedia characteristic in the candidate multimedia data.
4. The method of claim 3, wherein said querying the target multimedia data matching the first multimedia feature from the plurality of candidate multimedia data comprises:
determining at least one feature tag corresponding to the first multimedia feature;
for each of the candidate multimedia data, determining a common label that is identical to the at least one feature tag;
calculating a label matching score between the multimedia data to be matched and each candidate multimedia data according to the weight value corresponding to the common label;
and sequencing the label matching scores of the candidate multimedia data to determine target multimedia data.
5. The method of claim 1, wherein the target multimedia data satisfies at least one of:
the special effect editing mode of the target multimedia data is the same as that of the multimedia data to be matched;
the user to which the target multimedia data belongs is an online user;
the position distance between the release position of the target multimedia data and the release position of the original multimedia data is smaller than or equal to a preset distance threshold value;
and the historical matching times of the target multimedia data meet a preset times screening condition.
6. The method of claim 1, wherein the target multimedia data is further obtained by matching based on a second multimedia feature of the original multimedia data.
7. The method of claim 1, wherein after said displaying said composite multimedia data, said method further comprises:
and when the triggering operation on the synthesized multimedia data is detected, the synthesized multimedia data is issued to the user to which the original multimedia data belongs and the user to which the target multimedia data belongs.
8. The method of claim 7, wherein said distributing the synthesized multimedia data to the user to which the original multimedia data belongs and the user to which the target multimedia data belongs comprises:
sending first prompt information to a user to which the original multimedia data belongs, wherein the first prompt information is used for triggering and displaying the synthesized multimedia data and displaying a social homepage of the user to which the target multimedia data belongs;
and sending second prompt information to the user to which the target multimedia data belongs, wherein the second prompt information is used for triggering the playing of the synthesized multimedia data and displaying the social homepage of the user to which the original multimedia data belongs.
9. A multimedia matching method, comprising:
receiving multimedia data to be matched, wherein the multimedia data to be matched is obtained by carrying out special effect processing on original multimedia data;
extracting a first multimedia feature from the multimedia data to be matched;
obtaining a plurality of candidate multimedia data corresponding to the multimedia data to be matched;
and inquiring the target multimedia data matched with the first multimedia characteristics in the candidate multimedia data, wherein the target multimedia data is used for generating combined multimedia data with the multimedia data to be matched.
10. The method of claim 9, wherein querying the target multimedia data matching the first multimedia feature from the plurality of candidate multimedia data comprises:
determining at least one feature tag corresponding to the first multimedia feature;
for each of the candidate multimedia data, determining a common label that is identical to the at least one feature tag;
calculating a label matching score between the multimedia data to be matched and each candidate multimedia data according to the weight value corresponding to the common label;
and sequencing the label matching scores of the candidate multimedia data to determine target multimedia data.
11. A multimedia display apparatus, comprising:
a data receiving unit configured to receive original multimedia data;
the special effect editing unit is configured to carry out special effect editing on the original multimedia data to obtain multimedia data to be matched;
the data synthesis unit is configured to generate synthesized multimedia data based on the multimedia data to be matched and target multimedia data, wherein the target multimedia data is obtained by matching based on a first multimedia feature of the multimedia data to be matched;
a data display unit configured to display the synthesized multimedia data.
12. A multimedia matching apparatus, comprising:
the data receiving unit is configured to receive multimedia data to be matched, wherein the multimedia data to be matched is obtained by performing special effect processing on original multimedia data;
the feature extraction unit is configured to extract a first multimedia feature from the multimedia data to be matched;
the data acquisition unit is configured to acquire a plurality of candidate multimedia data corresponding to the multimedia data to be matched;
and the data query unit is configured to query the target multimedia data matched with the first multimedia feature in the candidate multimedia data, wherein the target multimedia data is used for generating combined multimedia data with the multimedia data to be matched.
13. A computing device, comprising:
a processor;
a memory for storing executable instructions;
wherein the processor is configured to read the executable instructions from the memory and execute the executable instructions to implement the multimedia display method of any one of the above claims 1-8 or the multimedia matching method of any one of the above claims 9-10.
14. A computer-readable storage medium, characterized in that the storage medium stores a computer program which, when executed by a processor, causes the processor to implement the multimedia presentation method of any of the above claims 1-8 or the multimedia matching method of any of the above claims 9-10.
CN202111136435.5A 2021-09-27 2021-09-27 Multimedia display and matching method, device, equipment and medium Active CN113870133B (en)

Priority Applications (2)

Application Number Priority Date Filing Date Title
CN202111136435.5A CN113870133B (en) 2021-09-27 2021-09-27 Multimedia display and matching method, device, equipment and medium
PCT/CN2022/115521 WO2023045710A1 (en) 2021-09-27 2022-08-29 Multimedia display and matching methods and apparatuses, device and medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202111136435.5A CN113870133B (en) 2021-09-27 2021-09-27 Multimedia display and matching method, device, equipment and medium

Publications (2)

Publication Number Publication Date
CN113870133A true CN113870133A (en) 2021-12-31
CN113870133B CN113870133B (en) 2024-03-12

Family

ID=78991263

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202111136435.5A Active CN113870133B (en) 2021-09-27 2021-09-27 Multimedia display and matching method, device, equipment and medium

Country Status (2)

Country Link
CN (1) CN113870133B (en)
WO (1) WO2023045710A1 (en)

Cited By (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2023011318A1 (en) * 2021-08-04 2023-02-09 北京字跳网络技术有限公司 Media file processing method and apparatus, device, readable storage medium, and product
WO2023045710A1 (en) * 2021-09-27 2023-03-30 北京字节跳动网络技术有限公司 Multimedia display and matching methods and apparatuses, device and medium
CN115941841A (en) * 2022-12-06 2023-04-07 北京字跳网络技术有限公司 Associated information display method, device, equipment, storage medium and program product
WO2024140239A1 (en) * 2022-12-30 2024-07-04 北京字跳网络技术有限公司 Page display method and apparatus, device, computer readable storage medium and product

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20200349385A1 (en) * 2018-04-13 2020-11-05 Tencent Technology (Shenzhen) Company Limited Multimedia resource matching method and apparatus, storage medium, and electronic apparatus
CN112351327A (en) * 2019-08-06 2021-02-09 北京字节跳动网络技术有限公司 Face image processing method and device, terminal and storage medium
CN112528049A (en) * 2020-12-17 2021-03-19 北京达佳互联信息技术有限公司 Video synthesis method and device, electronic equipment and computer-readable storage medium
CN112597320A (en) * 2020-12-09 2021-04-02 上海掌门科技有限公司 Social information generation method, device and computer readable medium
CN112988671A (en) * 2019-12-13 2021-06-18 北京字节跳动网络技术有限公司 Media file processing method and device, readable medium and electronic equipment
CN113099129A (en) * 2021-01-27 2021-07-09 北京字跳网络技术有限公司 Video generation method and device, electronic equipment and storage medium

Family Cites Families (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2010183317A (en) * 2009-02-05 2010-08-19 Olympus Imaging Corp Imaging device, image composition and display device, image composition and display method, and program
CN105338242A (en) * 2015-10-29 2016-02-17 努比亚技术有限公司 Image synthesis method and device
CN106528588A (en) * 2016-09-14 2017-03-22 厦门幻世网络科技有限公司 Method and apparatus for matching resources for text information
CN110866086A (en) * 2018-12-29 2020-03-06 北京安妮全版权科技发展有限公司 Article matching system
CN113870133B (en) * 2021-09-27 2024-03-12 抖音视界有限公司 Multimedia display and matching method, device, equipment and medium

Patent Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20200349385A1 (en) * 2018-04-13 2020-11-05 Tencent Technology (Shenzhen) Company Limited Multimedia resource matching method and apparatus, storage medium, and electronic apparatus
CN112351327A (en) * 2019-08-06 2021-02-09 北京字节跳动网络技术有限公司 Face image processing method and device, terminal and storage medium
CN112988671A (en) * 2019-12-13 2021-06-18 北京字节跳动网络技术有限公司 Media file processing method and device, readable medium and electronic equipment
CN112597320A (en) * 2020-12-09 2021-04-02 上海掌门科技有限公司 Social information generation method, device and computer readable medium
CN112528049A (en) * 2020-12-17 2021-03-19 北京达佳互联信息技术有限公司 Video synthesis method and device, electronic equipment and computer-readable storage medium
CN113099129A (en) * 2021-01-27 2021-07-09 北京字跳网络技术有限公司 Video generation method and device, electronic equipment and storage medium

Cited By (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2023011318A1 (en) * 2021-08-04 2023-02-09 北京字跳网络技术有限公司 Media file processing method and apparatus, device, readable storage medium, and product
US12019669B2 (en) 2021-08-04 2024-06-25 Beijing Zitiao Network Technology Co., Ltd. Method, apparatus, device, readable storage medium and product for media content processing
WO2023045710A1 (en) * 2021-09-27 2023-03-30 北京字节跳动网络技术有限公司 Multimedia display and matching methods and apparatuses, device and medium
CN115941841A (en) * 2022-12-06 2023-04-07 北京字跳网络技术有限公司 Associated information display method, device, equipment, storage medium and program product
WO2024140239A1 (en) * 2022-12-30 2024-07-04 北京字跳网络技术有限公司 Page display method and apparatus, device, computer readable storage medium and product

Also Published As

Publication number Publication date
CN113870133B (en) 2024-03-12
WO2023045710A1 (en) 2023-03-30

Similar Documents

Publication Publication Date Title
US11895068B2 (en) Automated content curation and communication
WO2021238631A1 (en) Article information display method, apparatus and device and readable storage medium
US11321385B2 (en) Visualization of image themes based on image content
CN113870133B (en) Multimedia display and matching method, device, equipment and medium
CN111476871B (en) Method and device for generating video
CN109635680B (en) Multitask attribute identification method and device, electronic equipment and storage medium
US10157638B2 (en) Collage of interesting moments in a video
EP3815042B1 (en) Image display with selective depiction of motion
CN104637035B (en) Generate the method, apparatus and system of cartoon human face picture
CN107222795B (en) Multi-feature fusion video abstract generation method
CN113287118A (en) System and method for face reproduction
CN108701207A (en) For face recognition and video analysis to identify the personal device and method in context video flowing
US12073524B2 (en) Generating augmented reality content based on third-party content
CN113362263B (en) Method, apparatus, medium and program product for transforming an image of a virtual idol
US10642881B2 (en) System architecture for universal emotive autography
CN113806306B (en) Media file processing method, device, equipment, readable storage medium and product
US20210409614A1 (en) Generating augmented reality content based on user-selected product data
CN111967397A (en) Face image processing method and device, storage medium and electronic equipment
KR101757184B1 (en) System for automatically generating and classifying emotionally expressed contents and the method thereof
CN112990176A (en) Writing quality evaluation method and device and electronic equipment
CN113409208A (en) Image processing method, device, equipment and storage medium
KR102718174B1 (en) Display images that optionally depict motion
CN115861469A (en) Image identifier creating method and device and electronic equipment

Legal Events

Date Code Title Description
PB01 Publication
PB01 Publication
SE01 Entry into force of request for substantive examination
SE01 Entry into force of request for substantive examination
CB02 Change of applicant information
CB02 Change of applicant information

Address after: 100041 B-0035, 2 floor, 3 building, 30 Shixing street, Shijingshan District, Beijing.

Applicant after: Douyin Vision Co.,Ltd.

Address before: 100041 B-0035, 2 floor, 3 building, 30 Shixing street, Shijingshan District, Beijing.

Applicant before: Tiktok vision (Beijing) Co.,Ltd.

Address after: 100041 B-0035, 2 floor, 3 building, 30 Shixing street, Shijingshan District, Beijing.

Applicant after: Tiktok vision (Beijing) Co.,Ltd.

Address before: 100041 B-0035, 2 floor, 3 building, 30 Shixing street, Shijingshan District, Beijing.

Applicant before: BEIJING BYTEDANCE NETWORK TECHNOLOGY Co.,Ltd.

GR01 Patent grant
GR01 Patent grant