CN113870133B - Multimedia display and matching method, device, equipment and medium

Info

Publication number
CN113870133B
Authority
CN
China
Prior art keywords
multimedia data
multimedia
matched
target
special effect
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202111136435.5A
Other languages
Chinese (zh)
Other versions
CN113870133A
Inventor
黄造军
徐之俊
冯宇飞
邓子建
吴铭泽
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Douyin Vision Co Ltd
Original Assignee
Douyin Vision Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Douyin Vision Co Ltd
Priority to CN202111136435.5A
Publication of CN113870133A
Priority to PCT/CN2022/115521 (WO2023045710A1)
Application granted
Publication of CN113870133B
Legal status: Active


Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T5/00 Image enhancement or restoration
    • G06T5/77 Retouching; Inpainting; Scratch removal
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F16/00 Information retrieval; Database structures therefor; File system structures therefor
    • G06F16/40 Information retrieval of multimedia data, e.g. slideshows comprising image and additional audio data
    • G06F16/44 Browsing; Visualisation therefor
    • G06F16/48 Retrieval characterised by using metadata, e.g. metadata not derived from the content or metadata generated manually
    • G06F16/483 Retrieval characterised by using metadata automatically derived from the content

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Multimedia (AREA)
  • Data Mining & Analysis (AREA)
  • Databases & Information Systems (AREA)
  • General Engineering & Computer Science (AREA)
  • Library & Information Science (AREA)
  • Processing Or Creating Images (AREA)

Abstract

The present disclosure relates to a multimedia display and matching method, apparatus, device, and medium. The multimedia display method includes the following steps: receiving original multimedia data; performing special effect editing on the original multimedia data to obtain multimedia data to be matched; generating composite multimedia data based on the multimedia data to be matched and target multimedia data, wherein the target multimedia data is obtained by matching against the first multimedia feature of the multimedia data to be matched; and displaying the composite multimedia data. The embodiments of the present disclosure enrich the beautification effects available for multimedia data, make the display of multimedia data more engaging, and allow users to interact with one another through multimedia data, realizing diverse inter-user interaction and improving the user experience.

Description

Multimedia display and matching method, device, equipment and medium
Technical Field
The present disclosure relates to the field of multimedia processing technologies, and in particular, to a multimedia display and matching method, apparatus, device, and medium.
Background
With the rapid development of computer technology and mobile communication technology, various network platforms based on electronic devices are widely used, greatly enriching people's daily lives. More and more users are willing to beautify multimedia data such as images or videos on a network platform to obtain photos or videos with a satisfactory effect.
At present, although users can beautify multimedia data by using preset special effect templates, the mode of interaction among users is limited and lacks interest, which degrades the user experience.
Disclosure of Invention
In order to solve the above technical problems or at least partially solve the above technical problems, the present disclosure provides a multimedia display and matching method, apparatus, device and medium.
In a first aspect, the present disclosure provides a multimedia display method, including:
receiving original multimedia data;
performing special effect editing on the original multimedia data to obtain multimedia data to be matched;
generating composite multimedia data based on the multimedia data to be matched and target multimedia data, wherein the target multimedia data is obtained by matching the first multimedia features of the multimedia data to be matched;
and displaying the composite multimedia data.
In a second aspect, the present disclosure provides a multimedia matching method, including:
receiving multimedia data to be matched, wherein the multimedia data to be matched is obtained by performing special effect processing on the original multimedia data;
extracting a first multimedia feature from multimedia data to be matched;
acquiring a plurality of candidate multimedia data corresponding to the multimedia data to be matched;
and querying, in the plurality of candidate multimedia data, target multimedia data matching the first multimedia features, wherein the target multimedia data is used to generate composite multimedia data with the multimedia data to be matched.
In a third aspect, the present disclosure provides a multimedia display device comprising:
a data receiving unit configured to receive original multimedia data;
the special effect editing unit is configured to carry out special effect editing on the original multimedia data to obtain multimedia data to be matched;
the data synthesis unit is configured to generate synthesized multimedia data based on the multimedia data to be matched and target multimedia data, wherein the target multimedia data is obtained by matching the first multimedia features of the multimedia data to be matched;
and a data display unit configured to display the composite multimedia data.
In a fourth aspect, the present disclosure provides a multimedia matching apparatus, comprising:
the data receiving unit is configured to receive multimedia data to be matched, wherein the multimedia data to be matched is obtained by performing special effect processing on the original multimedia data;
the feature extraction unit is configured to extract first multimedia features from the multimedia data to be matched;
the data acquisition unit is configured to acquire a plurality of candidate multimedia data corresponding to the multimedia data to be matched;
and the data query unit is configured to query, in the plurality of candidate multimedia data, target multimedia data matching the first multimedia features, wherein the target multimedia data is used to generate composite multimedia data with the multimedia data to be matched.
In a fifth aspect, the present disclosure provides a computing device comprising:
a processor;
a memory for storing executable instructions;
wherein the processor is configured to read the executable instructions from the memory and execute the executable instructions to implement the multimedia display method of the first aspect or to implement the multimedia matching method of the second aspect.
In a sixth aspect, the present disclosure provides a computer-readable storage medium storing a computer program which, when executed by a processor, causes the processor to implement the multimedia display method of the first aspect, or to implement the multimedia matching method of the second aspect.
Compared with the prior art, the technical scheme provided by the embodiment of the disclosure has the following advantages:
the multimedia display and matching method, device, equipment and medium of the embodiment of the disclosure generate and display composite multimedia data based on the multimedia data to be matched and target multimedia data obtained by editing after performing special effect editing on the received original multimedia data. Because the target is obtained by matching the first multimedia features of the multimedia data to be matched, the synthesized multimedia data obtained by the original multimedia data can also comprise the content of the target multimedia data matched with the multimedia data to be matched besides the special effect in the multimedia data to be matched, so that the multimedia data image has various elements, the beautifying effect of the multimedia data is enriched, the interestingness of the multimedia data display is improved, and the user can interact through the multimedia data, so that the diversity interaction among the users is realized, and the use experience of the user is improved.
Drawings
The above and other features, advantages, and aspects of embodiments of the present disclosure will become more apparent by reference to the following detailed description when taken in conjunction with the accompanying drawings. The same or similar reference numbers will be used throughout the drawings to refer to the same or like elements. It should be understood that the figures are schematic and that elements and components are not necessarily drawn to scale.
Fig. 1 is an architecture diagram of a multimedia display system provided by an embodiment of the present disclosure;
Fig. 2 is an architecture diagram of another multimedia display system provided by an embodiment of the present disclosure;
Fig. 3 is a schematic flowchart of a multimedia display method provided by an embodiment of the present disclosure;
Fig. 4 is a schematic diagram of a shooting preview interface provided by an embodiment of the present disclosure;
Fig. 5 is a schematic diagram of a special effects editing interface provided by an embodiment of the present disclosure;
Fig. 6 is a schematic diagram of a display interface of multimedia data to be matched provided by an embodiment of the present disclosure;
Fig. 7 is a schematic diagram of matching logic for multimedia data provided by an embodiment of the present disclosure;
Fig. 8 is a schematic diagram of a display interface of composite multimedia data provided by an embodiment of the present disclosure;
Fig. 9 is a schematic flowchart of another multimedia display method provided by an embodiment of the present disclosure;
Fig. 10 is a schematic flowchart of yet another multimedia display method provided by an embodiment of the present disclosure;
Fig. 11 is a schematic flowchart of a multimedia matching method provided by an embodiment of the present disclosure;
Fig. 12 is a schematic structural diagram of a multimedia display device provided by an embodiment of the present disclosure;
Fig. 13 is a schematic structural diagram of a multimedia matching device provided by an embodiment of the present disclosure;
Fig. 14 is a schematic structural diagram of a computing device provided by an embodiment of the present disclosure.
Detailed Description
Embodiments of the present disclosure will be described in more detail below with reference to the accompanying drawings. While certain embodiments of the present disclosure are shown in the accompanying drawings, it should be understood that the present disclosure may be embodied in various forms and should not be construed as limited to the embodiments set forth herein; rather, these embodiments are provided so that the present disclosure will be understood more thoroughly and completely. It should be understood that the drawings and embodiments of the present disclosure are for illustration purposes only and are not intended to limit the scope of the present disclosure.
It should be understood that the various steps recited in the method embodiments of the present disclosure may be performed in a different order and/or performed in parallel. Furthermore, method embodiments may include additional steps and/or omit performing the illustrated steps. The scope of the present disclosure is not limited in this respect.
The term "including" and variations thereof as used herein are intended to be open-ended, i.e., including, but not limited to. The term "based on" is based at least in part on. The term "one embodiment" means "at least one embodiment"; the term "another embodiment" means "at least one additional embodiment"; the term "some embodiments" means "at least some embodiments. Related definitions of other terms will be given in the description below.
It should be noted that the terms "first," "second," and the like in this disclosure are merely used to distinguish between different devices, modules, or units and are not used to define an order or interdependence of functions performed by the devices, modules, or units.
It should be noted that references to "a" and "a plurality" in this disclosure are illustrative rather than limiting, and those of ordinary skill in the art will appreciate that they should be understood as "one or more" unless the context clearly indicates otherwise.
The names of messages or information interacted between the various devices in the embodiments of the present disclosure are for illustrative purposes only and are not intended to limit the scope of such messages or information.
With the rapid development of computer technology and mobile communication technology, various network platforms based on electronic devices are widely used, greatly enriching people's daily lives. More and more users are willing to beautify captured multimedia data such as images or videos on a network platform to obtain photos or videos with a satisfactory effect.
At present, users can beautify multimedia data by using preset special effect templates. For example, a preset sticker or a preset special effect can be added to a captured picture.
However, a user can only select a special effect template from the preset special effect tools, so the beautification effect is monotonous and lacks interest, which degrades the user experience.
In order to solve the above-mentioned problems, embodiments of the present disclosure provide a multimedia display and matching method, apparatus, device, and medium capable of displaying composite multimedia data generated by multimedia data to be matched and target multimedia data.
The multimedia display method provided by the present disclosure may be applied to the architectures shown in fig. 1 and fig. 2, which are described in detail below.
Fig. 1 shows an architecture diagram of a multimedia display system provided in an embodiment of the present disclosure.
As shown in fig. 1, the multimedia display system may include at least one client-side electronic device 101 and at least one server-side server 102. The electronic device 101 may establish a connection and exchange information with the server 102 via a network protocol such as Hypertext Transfer Protocol over Secure Socket Layer (HTTPS). The electronic device 101 may be a device with a communication function, such as a mobile phone, tablet computer, desktop computer, notebook computer, vehicle-mounted terminal, wearable device, all-in-one machine, or smart home device, or may be a device simulated by a virtual machine or a simulator. The server 102 may be a cloud server or a server cluster, or another device with storage and computing capabilities.
Based on the above architecture, a user can perform special effect editing on the original multimedia data through a specific service platform on the electronic device 101, and generate and display the composite multimedia data. The specific service platform may be a specific application program or a specific website, for example, a social platform or a video playing platform with social functions.
In some embodiments, after a user logs in to a specific service platform through the electronic device 101, the electronic device 101 may obtain original multimedia data, such as an image or a video, and perform special effect editing on it to obtain the multimedia data to be matched. After acquiring a plurality of candidate multimedia data including the target multimedia data P11 from the server 102, the electronic device 101 may query the target multimedia data P11 from the candidate multimedia data based on the first multimedia feature of the multimedia data to be matched. The electronic device 101 may then generate the composite multimedia data P12 based on the multimedia data to be matched and the matched target multimedia data. Optionally, with continued reference to fig. 1, the electronic device 101 may upload the generated composite multimedia data P12 to the server 102.
In other embodiments, the electronic device 101 may upload the multimedia data to be matched to the server 102. After receiving it, the server 102 may match the target multimedia data P11 from the plurality of candidate multimedia data and transmit the target multimedia data P11 to the electronic device 101. The electronic device 101 may then generate the composite multimedia data P12 based on the multimedia data to be matched and the target multimedia data obtained by matching against its first multimedia feature.
In addition, the multimedia display method provided by the present disclosure may be applied to a specific scenario where users of multiple electronic devices interact through multimedia data, and is described below with reference to the architecture shown in fig. 2.
Fig. 2 illustrates an architecture diagram of another multimedia display system provided by an embodiment of the present disclosure.
As shown in fig. 2, the multimedia display system may include at least one client-side first electronic device 201 and at least one client-side second electronic device 202, as well as at least one server-side server 203. The first electronic device 201, the second electronic device 202, and the server 203 may each establish a connection and exchange information through a network protocol such as HTTPS. The first electronic device 201 and the second electronic device 202 may be devices with a communication function, such as mobile phones, tablet computers, desktop computers, notebook computers, vehicle-mounted terminals, wearable devices, all-in-one machines, and smart home devices, or may be devices simulated by a virtual machine or a simulator. The server 203 may be a cloud server or a server cluster, or another device with storage and computing capabilities.
Based on the above architecture, a first user may log in to a specific service platform on the first electronic device 201, and a second user may log in to the same service platform on the second electronic device 202. During the interaction between the first user and the second user through the service platform, the second user may use the second electronic device 202 to send the target multimedia data P22, which is to be synthesized with the first user's data, to the first user through the server 203 of the platform. The specific service platform may be a specific application program or a specific website with social functions.
In one embodiment, after the second user transmits the special-effect-edited target multimedia data P22 to the server 203 through the second electronic device 202, the server 203 may transmit candidate multimedia data including the target multimedia data P22 to the first electronic device 201. If the first electronic device 201 determines that the target multimedia data P22 matches the multimedia data to be matched obtained after special effect processing, it may generate the composite multimedia data P23 and send it to the second electronic device 202 through the server 203.
In another embodiment, after the server 203 receives the special-effect-edited multimedia data to be matched P21 sent by the first user through the first electronic device 201 and the special-effect-edited target multimedia data P22 sent by the second user through the second electronic device 202, if the target multimedia data P22 matches the multimedia data to be matched P21, the server 203 sends the target multimedia data P22 to the first electronic device 201. After generating the composite multimedia data P23, the first electronic device 201 transmits it to the second electronic device 202 via the server 203.
Having described the architecture of the multimedia display system of the embodiments of the present disclosure with reference to figs. 1 and 2, the multimedia display method provided by the embodiments of the present disclosure is described below with reference to figs. 3 to 8.
Fig. 3 is a schematic flow chart of a multimedia display method according to an embodiment of the disclosure.
In the embodiment of the disclosure, the multimedia display method may be performed by an electronic device. The electronic device may include, but is not limited to, mobile terminals such as mobile phones, notebook computers, digital broadcast receivers, PDAs (personal digital assistants), PADs (tablet computers), PMPs (portable multimedia players), in-vehicle terminals (e.g., in-vehicle navigation terminals), and wearable devices, as well as stationary terminals such as digital TVs, desktop computers, and smart home devices.
As shown in fig. 3, the multimedia display method may include the following steps.
S310, receiving the original multimedia data.
In the embodiment of the disclosure, when the user wants to perform special effect editing on an image or a video, the user may trigger a related operation in a target application program, and the electronic device may receive the original multimedia data in response to the triggering operation. The target application program may be a social platform or a video publishing platform. Specifically, the original multimedia data may be multimedia data containing visual information, such as video data or image data.
In some embodiments, the original multimedia data may be collected by the user in real time. Accordingly, the related operation may be an operation of opening the shooting page, a shooting operation on the shooting page, or a triggering operation of the multimedia composition function on the live broadcast page or the shooting page.
In other embodiments, the original multimedia data may be stored locally by the electronic device. Accordingly, the related operation may be a selection operation of an image or video by the user in the electronic album.
In still other embodiments, the original multimedia data may be user-downloaded. Accordingly, the related operation may be a download operation for an image or video within a download page of a browser, a target application, or a third party application by a user.
In still other embodiments, the original multimedia data may be sent by other devices to the electronic device. Accordingly, the electronic device may take the multimedia data sent by the other device as the original multimedia data after receiving the multimedia data.
In some embodiments, the original multimedia data may be multimedia data including a target object such as a person, an animal, a plant, or an object. For example, the original multimedia data may be a photograph or video of the user. The original multimedia data may include a partial or whole image of the target object; for example, it may include only a person's face image, or an image of the face together with other body parts.
S320, performing special effect editing on the original multimedia data to obtain the multimedia data to be matched.
In the embodiment of the disclosure, after receiving the original multimedia data, the electronic device may perform special effect editing on it in response to the user triggering the special effect editing function or the multimedia composition function, to obtain the multimedia data to be matched.
In some embodiments, special effect editing may change the characteristics of the target object itself in the original multimedia data by adding features or replacing original features, or may change the characteristics of accessory components of the target object. Specifically, special effect editing may be performed on the original multimedia image through at least one special effect editing tool such as beautification, image retouching, special effect props, filters, image style transfer tools, and stickers. The special effect editing tool may be provided by the target application program, a third-party application program, a web page, and the like. It should be noted that, for convenience of description, the following parts of the embodiments of the present disclosure refer to the target object after special effect processing in the multimedia data to be matched as the first special effect object.
In one example, the electronic device may change the facial features of the target object by means of beautification, image retouching, special effect props, and the like. For example, features of the target object's facial contour, eyes, skin, nose, mouth, etc. may be adjusted.
In another example, characteristics of the target object such as height and overall or local weight may be changed through beautification, image retouching, special effect props, and similar functions.
In still other embodiments, features of accessory components such as the target object's apparel, headwear, glasses, makeup, mask, or facial special effects that do not change the original facial component features may be added or changed through stickers, special effect props, and the like. Facial special effects that do not change the original facial component features may include animal whiskers and the like.
In still other embodiments, the image style of the entire original multimedia data, or the entire or partial image style of the target object, may be migrated by an image style transfer tool or a filter. For example, the image style of the original multimedia data may be converted into an animation style; accordingly, the target object in the original multimedia data becomes a cartoon character.
In some embodiments, a target special effect template may be selected from a plurality of selectable special effect templates of the special effect editing tool to perform special effect editing on the original multimedia data. Specifically, if the original multimedia data is an image, static or dynamic special effect editing may be performed on a local image or the whole image of the original image using the target special effect template to generate multimedia data to be matched in image or video format. If the original multimedia data is a video, one or more key video frames may be extracted from the original video, and static or dynamic special effect editing may be performed on the local images or the whole images of the key video frames using the target special effect template to generate multimedia data to be matched in image or video format. Optionally, the key video frames may be video frames in the original video that contain the target object, as sketched below.
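As a rough illustration only (the patent does not prescribe an implementation), the key frames could be sampled as follows. The choice of OpenCV, the fixed sampling step, and the use of a face detector as the stand-in for target object detection are all assumptions:

```python
import cv2  # OpenCV for video decoding and face detection

# In this sketch a face detector stands in for "target object" detection.
_FACE_CASCADE = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

def contains_target_object(frame) -> bool:
    """Return True if the assumed target object (here, a face) is visible."""
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    return len(_FACE_CASCADE.detectMultiScale(gray)) > 0

def extract_key_frames(video_path: str, step: int = 30) -> list:
    """Sample every `step`-th frame and keep frames containing the target object."""
    frames = []
    cap = cv2.VideoCapture(video_path)
    index = 0
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        if index % step == 0 and contains_target_object(frame):
            frames.append(frame)
        index += 1
    cap.release()
    return frames
```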
Further, according to the manner of special effect editing, S320 may include at least the following two embodiments.
In some embodiments, S320 may specifically include: in response to a template selection operation on a target special effect template, performing special effect editing on the original multimedia data based on the target special effect template to obtain the multimedia data to be matched.
Specifically, after the original multimedia data is received, if the user wants to perform special effect editing on it, the electronic device may respond to a triggering operation in which the user selects a target special effect template from a plurality of selectable special effect templates of the special effect editing tool, and perform special effect editing on the original multimedia data using the selected template to obtain the multimedia data to be matched.
In other embodiments, S320 may specifically include: performing special effect editing on the original multimedia data based on a target special effect template corresponding to the original multimedia data to obtain the multimedia data to be matched.
Specifically, if the user has selected the target special effect template in advance, the template may be used directly to edit the original multimedia data to obtain the multimedia data to be matched. Alternatively, a suitable special effect template may be matched as the target special effect template based on the multimedia features of the original multimedia data. Or, if the user has selected the target special effect template, the multimedia data may be shot with the template applied; accordingly, the multimedia data to be matched is displayed directly on the shooting interface.
S330, generating composite multimedia data based on the multimedia data to be matched and the target multimedia data.
In the embodiment of the disclosure, after obtaining the multimedia data to be matched and the target multimedia data, the electronic device may add the target image portion of the multimedia data to be matched and the target image portion of the target multimedia data, directly or after a certain conversion, to a target image area in the target multimedia template by image stitching or image fusion, thereby obtaining the composite multimedia data. In this way, the composite multimedia data can carry at least part of the characteristics of the first special effect object in the multimedia data to be matched and of the second special effect object in the target multimedia data within the target multimedia template.
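A minimal sketch of the stitching path, assuming Pillow is available and that the template's target image areas are known rectangles; the file names and coordinates below are hypothetical:

```python
from PIL import Image

def compose(template_path: str, first_path: str, second_path: str,
            first_area: tuple, second_area: tuple) -> Image.Image:
    """Paste the first and second special effect object images into the
    target image areas of a scene template (simple stitching; image fusion
    such as alpha blending could be substituted)."""
    template = Image.open(template_path).convert("RGBA")
    for path, (x, y, w, h) in ((first_path, first_area),
                               (second_path, second_area)):
        obj = Image.open(path).convert("RGBA").resize((w, h))
        # The object's alpha channel serves as the paste mask, so only the
        # object pixels land in the template's target area.
        template.paste(obj, (x, y), obj)
    return template

# Hypothetical usage with assumed target-area rectangles (x, y, width, height).
result = compose("party_scene.png", "first_object.png", "second_object.png",
                 first_area=(40, 120, 200, 300), second_area=(320, 120, 200, 300))
result.save("composite.png")
```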
To facilitate the explanation of the composite multimedia data, the following parts of the embodiments of the present disclosure first explain the first multimedia feature of the multimedia data to be matched and the target multimedia data before introducing the composite multimedia data.
The first multimedia feature of the multimedia data to be matched may be a feature of the first special effect object itself or a feature of an accessory part.
In some embodiments, the features of the first special effect object itself may include its facial features or physical features such as height and weight.
Optionally, the facial features may include face-shape features such as head aspect ratio, face shape, chin-to-head-width ratio, and forehead-length-to-head-length ratio; eye features such as eye size, interocular distance, pupil color, pupil size, and eye shape; nose features such as nose length, nostril-wing width, bridge height, and bridge width; hair features such as hair length, hair color, and hair shape (curly or straight); and skin information such as skin color and skin roughness.
In some embodiments, the features of the accessory components may include whether glasses are worn, whether a mask is worn, whether accessories are worn, whether headwear is worn, whether makeup is applied, whether there is a facial special effect that does not alter the original facial component features, and the like. If the first special effect object wears an accessory component, specific features of that component may also be included. For example, if the first special effect object wears a mask, the first multimedia feature may also include the model, name, etc. of the mask.
Through the first multimedia features described above, target multimedia data having the same or matching features as the multimedia data to be matched can be obtained by matching.
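For illustration, the first multimedia feature could be represented as a simple record like the following; every field name here is an assumption, not the patent's data format:

```python
from dataclasses import dataclass, field
from typing import Optional

@dataclass
class FirstMultimediaFeature:
    """Illustrative container for the features described above."""
    # Features of the special effect object itself
    face_shape: Optional[str] = None        # e.g., "oval"
    eye_distance: Optional[float] = None    # normalized interocular distance
    nose_length: Optional[float] = None
    hair_color: Optional[str] = None
    skin_tone: Optional[str] = None
    # Features of accessory components
    wears_glasses: bool = False
    wears_mask: bool = False
    mask_model: Optional[str] = None        # model/name when a mask is worn
    accessories: list = field(default_factory=list)
```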
Regarding the target multimedia data, optionally, it may include an object edited with special effects. The object in the target multimedia data may be a different object from the target object in the original multimedia data. For example, the object in the target multimedia data may be an image of a second user after special effect editing, while the target object in the multimedia data to be matched may be an image of the first user after special effect editing. For ease of illustration, the edited object in the target multimedia data is referred to as the second special effect object.
In some embodiments, the target multimedia data may be data pre-stored in a multimedia database of the target application, a third party application, or a web page.
In other embodiments, the target multimedia data may be multimedia data uploaded by other users after special effects editing.
In addition, the manner of generating the target multimedia data by other users is similar to that of generating the multimedia data to be matched, and will not be described herein.
In some embodiments, the multimedia data to be matched and the target multimedia data may be the same type of multimedia data, for example, both images or both videos. Alternatively, they may be different types of multimedia data, for example, one an image and the other a video.
Having described the multimedia data to be matched and the target multimedia data in detail, the composite multimedia data is described below.
In some embodiments, if the target multimedia template is a scene template, the synthesized multimedia data may be used to present an interaction behavior or an interaction action of the first special effect object in the multimedia data to be matched and the second special effect object in the target multimedia data in the scene corresponding to the target multimedia template. The target multimedia template can be an image scene template or a video scene template, and the specific type of the target multimedia template is not limited.
In one example, the electronic device may generate the composite multimedia data based on a user operation to select a target multimedia template from a plurality of selectable scene templates. Wherein the selectable scene template may be obtained from a scene template library of the target application, the third party application or the web page.
In another example, the electronic device may determine the matched target scene template based on the features of the first special effect object in the multimedia data to be matched and the features of the second special effect object in the target multimedia data. The features of the first special effect object and the second special effect object may be their actions.
For example, if the action of the first special effect object is raising a cup and the action of the second special effect object is raising a cup, partial or whole images of the two objects may be added to scene templates such as parties or bars to generate composite multimedia data such as a toast scene.
For another example, if the action of the first special effect object is being carried and the action of the second special effect object is carrying, partial or whole images of the two objects may be added to romantic scene templates such as a wedding or a beautiful starry sky to generate composite multimedia data such as a spinning princess-carry scene.
For another example, if the action of the first special effect object is shooting at a goal and the action of the second special effect object is goalkeeping, local or global features of the two objects may be added to scene templates such as a sports stadium to generate composite multimedia data such as a football match.
In yet another example, the electronic device can generate the composite multimedia data based on the target multimedia template uploaded by the user.
In some embodiments, text, music, special effects, and the like may also be added to the composite multimedia data to increase interest.
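Tying the action-based examples above together, a minimal sketch of scene template selection might look like this; the action names and template names are assumptions for illustration:

```python
# Lookup from (first object's action, second object's action) to candidate
# scene templates, mirroring the examples above.
SCENE_TEMPLATES = {
    ("raise_cup", "raise_cup"): ["party", "bar"],
    ("being_carried", "carrying"): ["wedding", "starry_sky"],
    ("shoot_goal", "goalkeeping"): ["stadium"],
}

def match_scene_template(first_action: str, second_action: str) -> list:
    """Return candidate scene templates for the detected action pair; an
    empty list means the user would pick a template manually instead."""
    return SCENE_TEMPLATES.get((first_action, second_action), [])
```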
S340, displaying the composite multimedia data.
In the embodiment of the disclosure, the electronic device may display the composite multimedia data in response to the user's composition operation on the multimedia data or a triggering operation for displaying the composite multimedia data. Alternatively, the electronic device may display the composite multimedia data directly on the relevant interface after it is generated, without requiring a triggering operation.
After performing special effect editing on the received original multimedia data, the multimedia display method of the embodiments of the present disclosure generates and displays composite multimedia data based on the edited multimedia data to be matched and the target multimedia data. Because the target multimedia data is obtained by matching against the first multimedia feature of the multimedia data to be matched, the composite multimedia data derived from the original multimedia data can include, in addition to the special effect in the multimedia data to be matched, the content of the matched target multimedia data. The resulting multimedia image therefore contains diverse elements, which enriches the beautification effect of the multimedia data, makes the display more engaging, and lets users interact through multimedia data, realizing diverse inter-user interaction and improving the user experience.
For ease of understanding, embodiments of the present disclosure will be described in detail below with reference to fig. 4 to 8 for a multimedia display method provided by embodiments of the present disclosure.
Fig. 4 shows a schematic diagram of a shooting preview interface provided by an embodiment of the present disclosure.
As shown in fig. 4, the electronic device may display a target object 41 in the shooting preview interface 40, together with various special effect editing tools such as a filter tool 401, a beautification tool 402, and a special effects tool 403, and may also display a multimedia composition tool 404. The filter tool 401, the beautification tool 402, and the special effects tool 403 may each include one or more special effect templates.
When the user clicks on the special effects tool 403, the displayed interface may be as shown in FIG. 5. Fig. 5 shows a schematic diagram of a special effects editing interface provided by an embodiment of the present disclosure.
As shown in fig. 5, a plurality of special effect templates 4031 to 4034 of the special effects tool 403 may be displayed on the special effects editing interface 50. After the user selects the mask special effect template 4033, the generated multimedia data to be matched is shown in fig. 6. Fig. 6 is a schematic diagram of a display interface of multimedia data to be matched according to an embodiment of the disclosure.
As shown in fig. 6, the display interface 60 of the multimedia data to be matched may include the first special effect object 61 after special effect processing and the multimedia composition tool 404. After the user clicks on the multimedia composition tool 404, a matching step of the multimedia data may be performed by the electronic device or the server. Fig. 7 shows a schematic diagram of matching logic for multimedia data provided by an embodiment of the present disclosure.
As shown in fig. 7, if the target multimedia data P72 including the second special effect object 73 is obtained by matching for the multimedia data to be matched P71 including the first special effect object 61, the resulting composite multimedia data is shown in fig. 8.
Fig. 8 is a schematic diagram of a display interface of composite multimedia data according to an embodiment of the disclosure. As shown in fig. 8, the composite multimedia data P81 may present the first special effect object 61 and the second special effect object 73, in the form of images or videos, raising a toast in the composite scene.
Fig. 9 shows a schematic flowchart of another multimedia display method provided by an embodiment of the present disclosure.
In the embodiment of the disclosure, the multimedia display method may be performed by an electronic device. The electronic device may include, but is not limited to, mobile terminals such as mobile phones, notebook computers, digital broadcast receivers, PDAs (personal digital assistants), PADs (tablet computers), PMPs (portable multimedia players), in-vehicle terminals (e.g., in-vehicle navigation terminals), and wearable devices, as well as stationary terminals such as digital TVs, desktop computers, and smart home devices.
As shown in fig. 9, the multimedia display method may include the following steps.
S910, receiving the original multimedia data. The specific content of S910 is similar to that of S310, and will not be described again.
S920, performing special effect editing on the original multimedia data to obtain the multimedia data to be matched. The specific content of S920 is similar to that of S320, and will not be described again.
S930, extracting the first multimedia feature from the multimedia data to be matched.
In some embodiments, the first multimedia feature may be extracted from the multimedia data to be matched using an image feature extraction technique or a video frame feature extraction technique.
For the specific content of the first multimedia feature, refer to the related description of S330 above, which is not repeated here.
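As an assumed concrete instance of such extraction, a single geometric facial feature could be computed with OpenCV as follows; a real system would extract far more features, and the choice of detector is an assumption:

```python
import cv2  # OpenCV for image decoding and face detection

def extract_first_feature(image_path: str) -> dict:
    """Detect a face and derive one geometric feature (head aspect ratio)."""
    image = cv2.imread(image_path)
    gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)
    cascade = cv2.CascadeClassifier(
        cv2.data.haarcascades + "haarcascade_frontalface_default.xml")
    faces = cascade.detectMultiScale(gray)
    if len(faces) == 0:
        return {}                      # no target object found
    x, y, w, h = faces[0]              # take the first detected face
    return {"head_aspect_ratio": h / w}
```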
S940, a plurality of candidate multimedia data corresponding to the multimedia data to be matched are obtained.
In some embodiments, candidate multimedia data may be obtained from a multimedia database of a target application, a third party application, or a web page.
In one example, the candidate multimedia data may be data pre-stored in a multimedia database.
In another example, the candidate multimedia data may be multimedia data uploaded by other users after special effects editing.
S950, querying target multimedia data matched with the first multimedia feature in the plurality of candidate multimedia data.
In some embodiments, the target multimedia data may be determined from a plurality of candidate multimedia data by means of feature matching.
In one embodiment, S950 may specifically include the following steps.
Step A1: determining at least one feature tag corresponding to the first multimedia feature.
Optionally, a feature tag may be a tag obtained by classifying the first special effect object in one or more dimensions based on one feature or one class of the first multimedia features. For example, feature tags may classify the first special effect object along the dimensions of the object itself or of its accessory components.
For example, the feature tags of the first special effect object itself may include a tag characterizing the nose, a tag for the eyes, a gender tag, an action tag, a skin state tag, and other tags that classify the first special effect object by the person's own features.
For another example, the accessory tags of the first special effect object may include a tag for whether glasses are worn, a tag for whether a mask is worn, a makeup tag, and the like.
Step A2: for each candidate multimedia data, determining the common tags that are the same as the at least one feature tag.
That is, if the same tag exists between the tag of the multimedia data to be matched and the tag of the candidate multimedia data, the same tag may be used as a common tag of the multimedia data to be matched and the candidate multimedia data.
For example, the tags of user A include glasses, high nose, yellow skin, tall, and female; the tags of user B include no glasses, small mouth, thin, and male. The common tags may then include the glasses tag (with or without glasses) and the gender tag (male or female), as in the sketch below.
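Step A2 amounts to an intersection over tag categories; a sketch using the example above, with tag names as assumptions:

```python
def common_tag_categories(a: dict, b: dict) -> set:
    """Tag categories present in both items; the values may differ (one user
    wears glasses and the other does not, but both carry the category)."""
    return a.keys() & b.keys()

user_a = {"glasses": True, "nose": "high", "skin": "yellow",
          "height": "tall", "gender": "female"}
user_b = {"glasses": False, "mouth": "small", "build": "thin",
          "gender": "male"}
print(common_tag_categories(user_a, user_b))  # {'glasses', 'gender'}
```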
Step A3: calculating the tag matching score between the multimedia data to be matched and each candidate multimedia data according to the weight values corresponding to the common tags.
In one example, the weight value of the common tag may be preset.
In yet another example, the weight value of a common tag may be set according to the user's selection. For tags that are not of interest to the user, a low weight value a is set. For tags that the user pays attention to (for example, whether the user likes or dislikes others wearing glasses, or whether the hairstyle is a ponytail), a high weight value b may be set for the glasses tag and the hairstyle tag, where weight b is greater than weight a. Alternatively, a high weight value c may be set for tags the user is interested in and a low weight value d for tags the user dislikes, where weight value c is greater than weight value a, and weight value a is greater than weight value d.
The tag matching score is used for reflecting the matching degree of each candidate multimedia data and the multimedia data to be matched in the aspect of a feature or a class of features corresponding to the tag.
In some embodiments, for each feature tag, a tag score of the multimedia data corresponding to that tag may be generated from the feature. For example, for the glasses tag, if the first special effect object in the multimedia data to be matched wears glasses, the tag score of the glasses tag may be 100; if not, the tag score may be 0.
It should be noted that the tag score of the candidate multimedia data is calculated in the same way as that of the multimedia data to be matched, and is not described again here.
Accordingly, after the tag scores of the candidate multimedia data and of the multimedia data to be matched are acquired, a similarity score between the two can be calculated from their tag scores on each common tag, and the tag matching score between them is then calculated from the similarity scores and the weight values.
Optionally, for some feature tags, the closeness of the tag scores of a candidate multimedia data and the multimedia data to be matched is positively correlated with the similarity score between the two. That is, the closer the candidate's tag score is to that of the multimedia data to be matched, the higher the similarity score between them; for example, if both wear glasses, their matching score is high. For this class of feature tags, the similarity score may be equal to a preset value minus the target tag score difference, where the target tag score difference is the difference between the candidate's tag score and the tag score of the multimedia data to be matched on that tag.
For other feature tags, the closeness of the tag scores is inversely related to the similarity score between the two. That is, the greater the tag score difference between a candidate and the multimedia data to be matched, the higher the similarity score between them; for example, if the two genders are the same, the similarity score is low, and if they are opposite, the similarity score is high. For this class of feature tags, the similarity score may be equal to the target tag score difference itself.
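The two correlation rules can be written down directly; the preset full-score value of 100 is an assumption consistent with the tag score example above:

```python
PRESET = 100  # assumed full-score constant

def similarity_positive(score_a: float, score_b: float) -> float:
    """Closer tag scores give higher similarity (e.g., the glasses tag):
    preset value minus the target tag score difference."""
    return PRESET - abs(score_a - score_b)

def similarity_inverse(score_a: float, score_b: float) -> float:
    """A larger tag score difference gives higher similarity (e.g., the
    gender tag, where opposite values match): the difference itself."""
    return abs(score_a - score_b)
```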
In one example, the tag scores of the plurality of candidate multimedia data may be recorded in a matching table. Correspondingly, after the electronic device acquires the feature tags of the multimedia data to be matched and their tag scores, it calculates the tag matching scores between the multimedia data to be matched and each candidate based on the calculation method above, and thereby looks up the target multimedia data in the matching table.
In other embodiments, the tag matching score of each tag may be obtained according to the weight value of the tag and the feature matching score between each candidate multimedia data and the multimedia data to be matched. For example, the tag matching score of each tag may be equal to the product of the weight value of the tag and the feature matching score between each candidate multimedia data and the multimedia data to be matched.
Optionally, for some feature tags, the similarity between each candidate multimedia data and the multimedia data to be matched is positively correlated with the feature matching score between the two. That is, the higher the similarity between each candidate multimedia data and the multimedia data to be matched, the higher the feature matching degree score between the two. For example, if both wear glasses, the feature matching score is high.
For other feature tags, the similarity between each candidate multimedia data and the multimedia data to be matched is inversely related to the feature matching score between the two. That is, the lower the similarity between a candidate and the multimedia data to be matched, the higher the feature matching score between them. For example, if the two genders are the same, the feature matching score is low; if they are opposite, the feature matching score is high.
It should be noted that, the correlation between the similarity of each feature tag and the feature matching score may be set according to the actual scene and specific requirements, which is not limited specifically.
Step A4: sorting the candidate multimedia data by tag matching score and determining the target multimedia data.
In one example, the candidates may be ranked by tag matching score from high to low, and the candidate multimedia data with the highest score may be used as the target multimedia data, as in the sketch below. The tag matching scores between the multimedia data to be matched and the plurality of candidate multimedia data may be recorded in the matching table in descending or ascending order.
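Putting steps A2 to A4 together, a hedged sketch of the ranking, assuming numeric tag scores in [0, 100] per tag and a set of "inverse" tags (such as gender) whose similarity grows with the score difference:

```python
def tag_match_score(to_match: dict, candidate: dict, weights: dict,
                    inverse_tags: set) -> float:
    """Weighted sum over common tags of per-tag similarity (steps A2-A3)."""
    score = 0.0
    for tag in to_match.keys() & candidate.keys():        # common tags (A2)
        diff = abs(to_match[tag] - candidate[tag])
        sim = diff if tag in inverse_tags else 100 - diff
        score += weights.get(tag, 1.0) * sim              # weighted (A3)
    return score

def rank_candidates(to_match: dict, candidates: list, weights: dict,
                    inverse_tags: set) -> list:
    """Step A4: sort candidates by tag matching score, highest first; the
    top entry would serve as the target multimedia data."""
    return sorted(
        candidates,
        key=lambda c: tag_match_score(to_match, c, weights, inverse_tags),
        reverse=True)
```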
In some embodiments, to improve matching accuracy, the target multimedia data may also satisfy one or more of the following conditions.
Condition C1: the special effect editing mode of the target multimedia data is the same as that of the multimedia data to be matched. For example, if the target multimedia data and the multimedia data to be matched have both undergone special effect editing that adds a mask, their special effect editing modes are the same. As another example, if a certain special effect template corresponds to a first special effect and a second special effect, the target multimedia data adopts the first special effect, and the multimedia data to be matched adopts the second special effect, the editing modes of the two are also considered the same.
Condition C2: the user to which the target multimedia data belongs is an online user. That is, if the user has opened the interface of the target application program through the electronic device, or the target application program is running in the background on the electronic device, the user is considered an online user.
Condition C3: the position distance between the release position of the target multimedia data and the release position of the original multimedia data is less than or equal to a preset distance threshold.
For example, the distance threshold may be a system default value, or a target distance threshold selected by the user from a plurality of selectable distance thresholds. Alternatively, if the user to which the target multimedia data belongs and the user to which the original multimedia data belongs are in the same region, such as the same district, city, or province, their position distance is considered to be less than or equal to the preset distance threshold. The distance threshold may be set according to the actual situation or specific scene, which is not limited here.
Condition C4: the historical matching count of the target multimedia data satisfies a preset count screening condition. The count screening condition may be that the historical matching count falls within a preset value range, which may be a system default or a target range selected by the user from a plurality of selectable ranges.
In one example, to improve the flexibility of matching, if the target multimedia data cannot be screened out through steps A1 to A4, it may be selected from the candidate multimedia data using at least one of conditions C1 to C4.
In another example, to improve the accuracy of matching, if multiple candidates are screened out through steps A1 to A4, they may be further screened using at least one of conditions C1 to C4 to obtain the target multimedia data.
In still other examples, after the multimedia data to be matched is acquired, the target multimedia data may be screened from the plurality of candidate multimedia data directly using at least one of the conditions C1 to C4.
In yet another example, to increase the matching rate, the candidate multimedia data themselves may be obtained after screening with at least one of conditions C1 to C4.
In some embodiments, if at least two of conditions C1 to C4 are used to screen the target multimedia data, the candidate multimedia data may be screened sequentially in a preset order of conditions until the target multimedia data is obtained after the last condition is applied. Alternatively, screening may stop once the number of remaining candidates falls within a preset number range, as sketched below.
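A sketch of screening with conditions C1 to C4; the attribute names, the planar distance treatment of release positions, and the threshold defaults are all assumptions for illustration:

```python
from dataclasses import dataclass
from math import hypot

@dataclass
class Item:
    """Hypothetical record for a multimedia item; field names are assumptions."""
    effect_mode: str     # special effect editing mode (condition C1)
    owner_online: bool   # whether the owning user is online (condition C2)
    pos: tuple           # release position as planar coordinates (condition C3)
    match_count: int     # historical matching count (condition C4)

def passes_conditions(candidate: Item, to_match: Item, max_distance: float,
                      count_range: tuple) -> bool:
    """Check conditions C1 to C4 for a single candidate."""
    lo, hi = count_range
    distance = hypot(candidate.pos[0] - to_match.pos[0],
                     candidate.pos[1] - to_match.pos[1])
    return (candidate.effect_mode == to_match.effect_mode   # C1
            and candidate.owner_online                      # C2
            and distance <= max_distance                    # C3
            and lo <= candidate.match_count <= hi)          # C4

def screen(candidates: list, to_match: Item, max_distance: float = 10.0,
           count_range: tuple = (0, 50)) -> list:
    """Keep only candidates satisfying all four conditions; the default
    thresholds are placeholders, not values from the patent."""
    return [c for c in candidates
            if passes_conditions(c, to_match, max_distance, count_range)]
```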
In some embodiments, the target multimedia data is also derived from a second multimedia feature match of the original multimedia data.
In one example, to improve the flexibility of matching, if the user cannot screen out the target multimedia data through steps A1 to A4, the target multimedia data may be obtained by matching the second multimedia features of the original multimedia data from the candidate multimedia data.
In another example, to improve matching accuracy, if multiple items of multimedia data are screened out through steps A1 to A4, they may be further screened using the second multimedia feature of the original multimedia data to obtain the target multimedia data.
In some examples, the second multimedia feature may be a multimedia feature of a target object in the original multimedia data. The second multimedia feature is similar to the first multimedia feature, and the method of querying the target multimedia data using the second multimedia feature is similar to that using the first multimedia feature; details are not repeated here. A fallback of this kind is sketched below.
S960, generating composite multimedia data based on the multimedia data to be matched and target multimedia data, wherein the target multimedia data is obtained by matching the first multimedia features of the multimedia data to be matched. The specific content of S960 is similar to that of S330, and will not be described again.
S970, the synthesized multimedia data is displayed. The specific content of S970 is similar to that of S340, and will not be described again.
According to the multimedia display method described above, target multimedia data with the same features can be accurately matched from the candidate multimedia data by using the first multimedia feature of the multimedia data to be matched, so that the generated composite multimedia data includes a first special effect object and a second special effect object with a high degree of feature matching, which improves the interestingness of the multimedia display method.
Fig. 10 shows a schematic flowchart of yet another multimedia display method provided by an embodiment of the present disclosure.
In the embodiment of the present disclosure, the multimedia display method may be performed by an electronic device, which may include, but is not limited to, mobile terminals such as mobile phones, notebook computers, digital broadcast receivers, PDAs (personal digital assistants), PADs (tablet computers), PMPs (portable multimedia players), in-vehicle terminals (e.g., in-vehicle navigation terminals), and wearable devices, and stationary terminals such as digital TVs, desktop computers, and smart home devices.
As shown in fig. 10, the multimedia display method may include the following steps.
S1010, receiving the original multimedia data. The specific content of S1010 is similar to that of S310, and will not be described again.
S1020, performing special effect editing on the original multimedia data to obtain the multimedia data to be matched. The specific content of S1020 is similar to that of S320, and will not be described again.
And S1030, generating composite multimedia data based on the multimedia data to be matched and the target multimedia data, wherein the target multimedia data is obtained by matching the first multimedia features of the multimedia data to be matched. The specific content of S1030 is similar to that of S330, and will not be described again.
S1040, displaying the composite multimedia data. The specific content of S1040 is similar to that of S340, and will not be described again.
S1050, upon detecting a triggering operation on the composite multimedia data, distributing the composite multimedia data to the user to which the original multimedia data belongs and the user to which the target multimedia data belongs.

In some embodiments, the triggering operation on the composite multimedia data is performed when the user wants to interact with the user to which the target multimedia data belongs. The triggering operation may occur when the composite multimedia data is generated, or after the composite multimedia data is previewed; the trigger timing is not limited.
In one embodiment, the electronic device may distribute the composite multimedia data to the user to which the original multimedia data belongs and the user to which the target multimedia data belongs through the server.
In one embodiment, the electronic device may display the original multimedia data and the target multimedia data in the image/video favorites or display bar of the target application of the user to which the data belongs, and add an identifier to the corresponding icon to prompt the user to view the composite multimedia data.
In another embodiment, S1050 may include the following steps.
Step D1: sending first prompt information to the user to which the original multimedia data belongs, where the first prompt information is used to trigger display of the composite multimedia data and display of the social homepage of the user to which the target multimedia data belongs.
Optionally, the first prompt information may be delivered as text, a picture, voice, or in another manner, through a chat box, a display window on the interface, or a broadcast bar on the interface. A specific form of the first prompt information may be, for example: "You just joined a masquerade party with XXX (the scene corresponding to the composite multimedia video); go to their homepage to take a look / chat with them."
For example, the first prompt information may include a link, such as text or a two-dimensional code, to the composite multimedia data display interface, or the user may jump to the composite multimedia data display interface by triggering an information field of the first prompt information. Optionally, to facilitate interaction, the first prompt information may further include a link, such as text or a two-dimensional code, to the user to which the target multimedia data belongs. Alternatively, the composite multimedia data display interface may include a control for accessing the social homepage of the user to which the target multimedia data belongs, a control for adding that user as a friend, or a control for starting a chat with that user.
Step D2: sending second prompt information to the user to which the target multimedia data belongs, where the second prompt information is used to trigger playback of the composite multimedia data and display of the social homepage of the user to which the original multimedia data belongs.
The second prompt information is similar to the first prompt information, and will not be described in detail.
According to the embodiment of the present disclosure, the composite multimedia data can be published to the user to which the original multimedia data belongs and the user to which the target multimedia data belongs, so that the two users can interact with each other through the composite multimedia data, which improves the interestingness of the multimedia display and the user experience.
Fig. 11 shows a flowchart of a multimedia matching method according to an embodiment of the disclosure.
In the embodiment of the present disclosure, the multimedia matching method may be performed by a server. The server may be a cloud server, a server cluster, or another device with storage and computing functions.
As shown in fig. 11, the multimedia matching method may include the following steps.
S1110, receiving the multimedia data to be matched, wherein the multimedia data to be matched is obtained by performing special effect processing on the original multimedia data.
S1120, extracting the first multimedia features from the multimedia data to be matched.
S1130, obtaining a plurality of candidate multimedia data corresponding to the multimedia data to be matched.
S1140, querying target multimedia data matched with the first multimedia feature in the plurality of candidate multimedia data, wherein the target multimedia data is used for generating combined multimedia data with the multimedia data to be matched.
In some embodiments of the present disclosure, S1140 may comprise:
determining at least one feature tag corresponding to the first multimedia feature;
determining, for each candidate multimedia data, a common tag identical to the at least one feature tag;
according to the weight value corresponding to the common tag, calculating the tag matching score between the multimedia data to be matched and each candidate multimedia data;
and sorting the tag matching scores of the respective candidate multimedia data to determine the target multimedia data.
It should be noted that, the multimedia matching method shown in S1110 to S1140 is similar to the multimedia display method shown in S910 to S970, and will not be repeated here.
In some embodiments of the present disclosure, after S1140, the multimedia matching method may further include: receiving an issuing instruction for the composite multimedia data, and issuing the composite multimedia data to the user to which the original multimedia data belongs and the user to which the target multimedia data belongs. The issuing instruction is generated by the electronic device upon detecting a triggering operation on the composite multimedia data. This step is similar to S1050 and will not be described again.
In some embodiments of the present disclosure, after S1140, the multimedia matching method may further include:
and sending first prompt information to the user to which the original multimedia data belongs, wherein the first prompt information is used for triggering and displaying the synthesized multimedia data and displaying the social homepage of the user to which the target multimedia data belongs. The step is similar to the step D1, and will not be described herein.
And sending second prompt information to the user to which the target multimedia data belongs, wherein the second prompt information is used for triggering playing of the synthesized multimedia data and displaying of a social homepage of the user to which the original multimedia data belongs. The step is similar to the step D2, and will not be described herein.
According to the multimedia matching method of the embodiment of the present disclosure, after special effect editing is performed on the received original multimedia data, composite multimedia data is generated and displayed based on the multimedia data to be matched obtained by the editing and the target multimedia data. Because the target multimedia data is obtained by matching the first multimedia feature of the multimedia data to be matched, the composite multimedia data derived from the original multimedia data includes, in addition to the special effect in the multimedia data to be matched, the content of the target multimedia data matched with it. The multimedia data thus carries diverse elements, which enriches the beautifying effect of the multimedia data, improves the interestingness of the multimedia display, enables users to interact through the multimedia data, realizes diverse interaction among users, and improves the user experience.
The embodiment of the present disclosure further provides a multimedia display device for implementing the above-mentioned multimedia display method, and the following description is made with reference to fig. 12.
In an embodiment of the present disclosure, the multimedia display apparatus may be an electronic device, for example, the multimedia display apparatus may be the first electronic device 101 in the client shown in fig. 1. The electronic device may include a mobile phone, a tablet computer, a desktop computer, a notebook computer, a vehicle-mounted terminal, a wearable electronic device, an integrated machine, an intelligent home device, or a virtual machine or a simulator.
Fig. 12 is a schematic structural diagram of a multimedia display device according to an embodiment of the present disclosure.
As shown in fig. 12, the multimedia display apparatus 1200 may include a data receiving unit 1210, a special effect editing unit 1220, a data synthesizing unit 1230, and a data display unit 1240.
A data receiving unit 1210 configured to receive original multimedia data;
the special effect editing unit 1220 is configured to perform special effect editing on the original multimedia data to obtain multimedia data to be matched;
a data synthesis unit 1230 configured to generate synthesized multimedia data based on the multimedia data to be matched and target multimedia data, the target multimedia data being obtained by matching the first multimedia features of the multimedia data to be matched;
a data display unit 1240 configured to display the composite multimedia data.
After performing special effect editing on the received original multimedia data, the multimedia display apparatus of the embodiment of the present disclosure generates and displays composite multimedia data based on the multimedia data to be matched obtained by the editing and the target multimedia data. Because the target multimedia data is obtained by matching the first multimedia feature of the multimedia data to be matched, the composite multimedia data derived from the original multimedia data includes, in addition to the special effect in the multimedia data to be matched, the content of the target multimedia data matched with it. The multimedia data thus carries diverse elements, which enriches the beautifying effect of the multimedia data, improves the interestingness of the multimedia display, enables users to interact through the multimedia data, realizes diverse interaction among users, and improves the user experience.
In some embodiments of the present disclosure, the special effects editing unit 1220 may be further configured to: responding to the template selection operation of the target special effect template, and carrying out special effect editing on the original multimedia data based on the target special effect template to obtain the multimedia data to be matched;
in other embodiments of the present disclosure, the special effects editing unit 1220 may be further configured to: and performing special effect editing on the original multimedia data based on a target special effect template corresponding to the original multimedia data to obtain the multimedia data to be matched.
In some embodiments of the present disclosure, the multimedia display apparatus 1200 may further include a feature extraction unit, a data acquisition unit, and a data query unit.
A feature extraction unit configured to extract a first multimedia feature from multimedia data to be matched;
the data acquisition unit is configured to acquire a plurality of candidate multimedia data corresponding to the multimedia data to be matched;
and a data query unit configured to query target multimedia data matched with the first multimedia feature among the plurality of candidate multimedia data.
In some embodiments of the present disclosure, the data query unit may be further configured to:
determining at least one feature tag corresponding to the first multimedia feature;
Determining, for each candidate multimedia data, a common tag identical to the at least one feature tag;
according to the weight value corresponding to the common tag, calculating the tag matching score between the multimedia data to be matched and each candidate multimedia data;
and sorting the tag matching scores of the respective candidate multimedia data to determine the target multimedia data.
In some embodiments of the present disclosure, the target multimedia data satisfies at least one of:
the special effect editing mode of the target multimedia data is the same as that of the multimedia data to be matched;
the user to which the target multimedia data belongs is an online user;
the position distance between the release position of the target multimedia data and the release position of the original multimedia data is smaller than or equal to a preset distance threshold value;
the historical matching times of the target multimedia data meet preset times screening conditions.
In some embodiments of the present disclosure, the target multimedia data is also obtained from a second multimedia feature match of the original multimedia data.
In some embodiments of the present disclosure, the multimedia display apparatus 1200 may further include a data distribution unit.
The data distribution unit is configured to distribute the composite multimedia data to the user to which the original multimedia data belongs and the user to which the target multimedia data belongs when a triggering operation on the composite multimedia data is detected.

In some embodiments of the present disclosure, the data distribution unit may be further configured to:
sending first prompt information to a user to which the original multimedia data belongs, wherein the first prompt information is used for triggering and displaying the synthesized multimedia data and displaying a social homepage of the user to which the target multimedia data belongs;
and sending second prompt information to the user to which the target multimedia data belongs, wherein the second prompt information is used for triggering playing of the synthesized multimedia data and displaying of a social homepage of the user to which the original multimedia data belongs.
It should be noted that, the multimedia display device 1200 shown in fig. 12 may perform the steps in the method embodiments shown in fig. 3 to 10, and implement the processes and effects in the method embodiments shown in fig. 3 to 10, which are not described herein.
The embodiment of the present disclosure further provides a multimedia matching apparatus for implementing the above multimedia matching method, described below with reference to fig. 13. In the embodiment of the present disclosure, the multimedia matching apparatus may be a server; for example, it may be the server 102 shown in fig. 1. The server may be a cloud server, a server cluster, or another device with storage and computing functions.
Fig. 13 shows a schematic structural diagram of a multimedia matching device according to an embodiment of the present disclosure.
As shown in fig. 13, the multimedia matching apparatus 1300 may include a data receiving unit 1310, a feature extraction unit 1320, a data acquisition unit 1330, and a data query unit 1340.
The data receiving unit 1310 is configured to receive multimedia data to be matched, wherein the multimedia data to be matched is obtained by performing special effect processing on the original multimedia data;
a feature extraction unit 1320 configured to extract a first multimedia feature from multimedia data to be matched;
a data acquisition unit 1330 configured to acquire a plurality of candidate multimedia data corresponding to the multimedia data to be matched;
the data query unit 1340 is configured to query the target multimedia data matched with the first multimedia feature among the plurality of candidate multimedia data, where the target multimedia data is used to generate the combined multimedia data with the multimedia data to be matched.
According to the multimedia matching apparatus of the embodiment of the present disclosure, after special effect editing is performed on the received original multimedia data, composite multimedia data is generated and displayed based on the multimedia data to be matched obtained by the editing and the target multimedia data. Because the target multimedia data is obtained by matching the first multimedia feature of the multimedia data to be matched, the composite multimedia data derived from the original multimedia data includes, in addition to the special effect in the multimedia data to be matched, the content of the target multimedia data matched with it. The multimedia data thus carries diverse elements, which enriches the beautifying effect of the multimedia data, improves the interestingness of the multimedia display, enables users to interact through the multimedia data, realizes diverse interaction among users, and improves the user experience.
In some embodiments of the present disclosure, the data query unit 1340 may be further configured to:
determining at least one feature tag corresponding to the first multimedia feature;
determining, for each candidate multimedia data, a common tag identical to the at least one feature tag;
according to the weight value corresponding to the common tag, calculating the tag matching score between the multimedia data to be matched and each candidate multimedia data;
and sorting the tag matching scores of the respective candidate multimedia data to determine the target multimedia data.
In some embodiments of the present disclosure, the target multimedia data satisfies at least one of:
the special effect editing mode of the target multimedia data is the same as that of the multimedia data to be matched;
the user to which the target multimedia data belongs is an online user;
the position distance between the release position of the target multimedia data and the release position of the original multimedia data is smaller than or equal to a preset distance threshold value;
the historical matching times of the target multimedia data meet preset times screening conditions.
In some embodiments of the present disclosure, the target multimedia data is also derived from a second multimedia feature match of the original multimedia data.
In some embodiments of the present disclosure, the multimedia matching apparatus 1300 may further include a data distribution unit.
The data distribution unit is configured to distribute the composite multimedia data to the user to which the original multimedia data belongs and the user to which the target multimedia data belongs in response to an issuing instruction for the composite multimedia data. The issuing instruction is generated by the electronic device upon detecting a triggering operation on the composite multimedia data.

In some embodiments of the present disclosure, the data distribution unit may be further configured to:
sending first prompt information to a user to which the original multimedia data belongs, wherein the first prompt information is used for triggering and displaying the synthesized multimedia data and displaying a social homepage of the user to which the target multimedia data belongs;
and sending second prompt information to the user to which the target multimedia data belongs, wherein the second prompt information is used for triggering playing of the synthesized multimedia data and displaying of a social homepage of the user to which the original multimedia data belongs.
It should be noted that, the multimedia matching apparatus 1300 shown in fig. 13 may perform the steps in the method embodiment shown in fig. 11, and implement the processes and effects in the method embodiment shown in fig. 11, which are not described herein.
Embodiments of the present disclosure also provide a computing device that may include a processor and a memory that may be used to store executable instructions. The processor may be configured to read the executable instructions from the memory and execute the executable instructions to implement the multimedia display method and/or the multimedia matching method in the above embodiments.
Fig. 14 shows a schematic structural diagram of a computing device 1400 suitable for implementing embodiments of the present disclosure.
In some embodiments, in implementing the multimedia display method in the above embodiments, the computing device 1400 in the embodiments of the present disclosure may include, but is not limited to, mobile terminals such as mobile phones, notebook computers, digital broadcast receivers, PDAs (personal digital assistants), PADs (tablet computers), PMPs (portable multimedia players), in-vehicle terminals (e.g., in-vehicle navigation terminals), wearable devices, and the like, and fixed terminals such as digital TVs, desktop computers, smart home devices, and the like.
In other embodiments, in implementing the multimedia matching method in the above embodiments, the computing device 1400 in the embodiments of the disclosure may include, but is not limited to, a cloud server, a server cluster, or another device having storage and computing functions.
It should be noted that the computing device 1400 illustrated in fig. 14 is merely an example and should not be taken as limiting the functionality and scope of use of the disclosed embodiments.
As shown in fig. 14, the computing device 1400 may include a processing means (e.g., a central processor, a graphics processor, etc.) 1401, which may perform various appropriate actions and processes according to a program stored in a Read Only Memory (ROM) 1402 or a program loaded from a storage means 1408 into a Random Access Memory (RAM) 1403. In the RAM 1403, various programs and data required for operation of the computing device 1400 are also stored. The processing device 1401, the ROM 1402, and the RAM 1403 are connected to each other through a bus 1404. An input/output (I/O) interface 1405 is also connected to the bus 1404.
In general, the following devices may be connected to the I/O interface 1405: input devices 1406 including, for example, a touch screen, touch pad, keyboard, mouse, camera, microphone, accelerometer, gyroscope, and the like; an output device 1407 including, for example, a Liquid Crystal Display (LCD), a speaker, a vibrator, and the like; storage 1408 including, for example, magnetic tape, hard disk, etc.; and communication means 1409. The communication means 1409 may allow the computing device 1400 to communicate wirelessly or by wire with other devices to exchange data. While fig. 14 illustrates a computing device 1400 having various means, it is to be understood that not all illustrated means are required to be implemented or provided. More or fewer devices may be implemented or provided instead.
The present disclosure also provides a computer-readable storage medium storing a computer program which, when executed by a processor, causes the processor to implement the multimedia display method or the multimedia matching method in the above embodiments.
In particular, according to embodiments of the present disclosure, the processes described above with reference to flowcharts may be implemented as computer software programs.
The disclosed embodiments also provide a computer program product, which may include a computer program that, when executed by a processor, causes the processor to implement the multimedia display method or the multimedia matching method in the above embodiments.
For example, embodiments of the present disclosure include a computer program product comprising a computer program embodied on a non-transitory computer readable medium, the computer program comprising program code for performing the method shown in the flow chart. In such an embodiment, the computer program may be downloaded and installed from a network via the communication means 1409, or installed from the storage means 1408, or installed from the ROM 1402. When the computer program is executed by the processing apparatus 1401, the above-described functions defined in the multimedia display method or the multimedia matching method of the embodiment of the present disclosure are performed.
It should be noted that the computer readable medium described in the present disclosure may be a computer readable signal medium or a computer readable storage medium, or any combination of the two. The computer readable storage medium can be, for example, but not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or a combination of any of the foregoing. More specific examples of the computer-readable storage medium may include, but are not limited to: an electrical connection having one or more wires, a portable computer diskette, a hard disk, a Random Access Memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing. In the context of this disclosure, a computer-readable storage medium may be any tangible medium that can contain, or store a program for use by or in connection with an instruction execution system, apparatus, or device. In the present disclosure, however, the computer-readable signal medium may include a data signal propagated in baseband or as part of a carrier wave, with the computer-readable program code embodied therein. Such a propagated data signal may take any of a variety of forms, including, but not limited to, electro-magnetic, optical, or any suitable combination of the foregoing. A computer readable signal medium may also be any computer readable medium that is not a computer readable storage medium and that can communicate, propagate, or transport a program for use by or in connection with an instruction execution system, apparatus, or device. Program code embodied on a computer readable medium may be transmitted using any appropriate medium, including but not limited to: electrical wires, fiber optic cables, RF (radio frequency), and the like, or any suitable combination of the foregoing.
In some embodiments, the clients and servers may communicate using any currently known or future developed network protocol, such as HTTP, and may be interconnected with any form or medium of digital data communication (e.g., a communication network). Examples of communication networks include a local area network ("LAN"), a wide area network ("WAN"), internetworks (e.g., the Internet), and peer-to-peer networks (e.g., ad hoc peer-to-peer networks), as well as any currently known or future developed networks.
The computer readable medium may be embodied in the computing device; or may exist alone without being assembled into the computing device.
The computer readable medium carries one or more programs which, when executed by the computing device, cause the computing device to perform:
receiving original multimedia data; performing special effect editing on the original multimedia data to obtain multimedia data to be matched; generating composite multimedia data based on the multimedia data to be matched and target multimedia data, wherein the target multimedia data is obtained by matching the first multimedia features of the multimedia data to be matched; and displaying the synthesized multimedia data.
Or receiving the multimedia data to be matched, wherein the multimedia data to be matched is obtained based on special effect processing of the original multimedia data; extracting a first multimedia feature from multimedia data to be matched; acquiring a plurality of candidate multimedia data corresponding to the multimedia data to be matched; and querying target multimedia data matched with the first multimedia features in the plurality of candidate multimedia data, wherein the target multimedia data is used for generating combined multimedia data with the multimedia data to be matched.
In an embodiment of the present disclosure, computer program code for performing the operations of the present disclosure may be written in one or more programming languages, including but not limited to an object-oriented programming language such as Java, Smalltalk, C++, and conventional procedural programming languages, such as the "C" programming language or similar programming languages. The program code may execute entirely on the user's computer, partly on the user's computer, as a stand-alone software package, partly on the user's computer and partly on a remote computer or entirely on the remote computer or server. In the case of a remote computer, the remote computer may be connected to the user's computer through any kind of network, including a Local Area Network (LAN) or a Wide Area Network (WAN), or may be connected to an external computer (for example, through the Internet using an Internet service provider).
The flowcharts and block diagrams in the figures illustrate the architecture, functionality, and operation of possible implementations of systems, methods and computer program products according to various embodiments of the present disclosure. In this regard, each block in the flowchart or block diagrams may represent a module, segment, or portion of code, which comprises one or more executable instructions for implementing the specified logical function(s). It should also be noted that, in some alternative implementations, the functions noted in the block may occur out of the order noted in the figures. For example, two blocks shown in succession may, in fact, be executed substantially concurrently, or the blocks may sometimes be executed in the reverse order, depending upon the functionality involved. It will also be noted that each block of the block diagrams and/or flowchart illustration, and combinations of blocks in the block diagrams and/or flowchart illustration, can be implemented by special purpose hardware-based systems which perform the specified functions or acts, or combinations of special purpose hardware and computer instructions.
The units involved in the embodiments of the present disclosure may be implemented by means of software, or may be implemented by means of hardware. In some cases, the name of a unit does not constitute a limitation of the unit itself.
The functions described above herein may be performed, at least in part, by one or more hardware logic components. For example, without limitation, exemplary types of hardware logic components that may be used include: a Field Programmable Gate Array (FPGA), an Application Specific Integrated Circuit (ASIC), an Application Specific Standard Product (ASSP), a system on a chip (SOC), a Complex Programmable Logic Device (CPLD), and the like.
In the context of this disclosure, a machine-readable medium may be a tangible medium that can contain, or store a program for use by or in connection with an instruction execution system, apparatus, or device. The machine-readable medium may be a machine-readable signal medium or a machine-readable storage medium. The machine-readable medium may include, but is not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any suitable combination of the foregoing. More specific examples of a machine-readable storage medium would include an electrical connection based on one or more wires, a portable computer diskette, a hard disk, a Random Access Memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing.
The foregoing description is only of the preferred embodiments of the present disclosure and an explanation of the technical principles employed. It will be appreciated by persons skilled in the art that the scope of the disclosure is not limited to the specific combinations of features described above, but also covers other technical solutions formed by any combination of the above features or their equivalents without departing from the concept of the disclosure, for example, technical solutions formed by substituting the above features with technical features having similar functions disclosed in (but not limited to) the present disclosure.
Moreover, although operations are depicted in a particular order, this should not be understood as requiring that such operations be performed in the particular order shown or in sequential order. In certain circumstances, multitasking and parallel processing may be advantageous. Likewise, while several specific implementation details are included in the above discussion, these should not be construed as limiting the scope of the present disclosure. Certain features that are described in the context of separate embodiments can also be implemented in combination in a single embodiment. Conversely, various features that are described in the context of a single embodiment can also be implemented in multiple embodiments separately or in any suitable subcombination.
Although the subject matter has been described in language specific to structural features and/or methodological acts, it is to be understood that the subject matter defined in the appended claims is not necessarily limited to the specific features or acts described above. Rather, the specific features and acts described above are example forms of implementing the claims.

Claims (14)

1. A multimedia display method, comprising:
receiving original multimedia data;
performing special effect editing on the original multimedia data to obtain multimedia data to be matched, wherein the special effect editing is used for changing the characteristics of a target object in the original multimedia data or the characteristics of an accessory part of the target object;
generating synthetic multimedia data based on the multimedia data to be matched and target multimedia data, wherein the target multimedia data is obtained by matching a first multimedia feature of the multimedia data to be matched, the target multimedia data and the multimedia data to be matched have the same or matched features, the first multimedia feature comprises a feature of a first special effect object or a feature of an accessory part, and the first special effect object is an object obtained by editing the special effect of the target object in the multimedia data to be matched;
And displaying the synthesized multimedia data.
2. The method according to claim 1, wherein the performing special effect editing on the original multimedia data to obtain the multimedia data to be matched includes:
responding to template selection operation of a target special effect template, and carrying out special effect editing on the original multimedia data based on the target special effect template to obtain the multimedia data to be matched;
or, based on a target special effect template corresponding to the original multimedia data, carrying out special effect editing on the original multimedia data to obtain the multimedia data to be matched.
3. The method of claim 1, wherein prior to said generating composite multimedia data based on said multimedia data to be matched and target multimedia data, the method further comprises:
extracting the first multimedia features from the multimedia data to be matched;
acquiring a plurality of candidate multimedia data corresponding to the multimedia data to be matched;
and querying the target multimedia data matched with the first multimedia features in the candidate multimedia data.
4. The method of claim 3, wherein said querying the target multimedia data that matches the first multimedia feature among the plurality of candidate multimedia data comprises:
Determining at least one feature tag corresponding to the first multimedia feature;
determining, for each of the candidate multimedia data, a common tag that is identical to the at least one feature tag;
calculating a tag matching score between the multimedia data to be matched and each candidate multimedia data according to the weight value corresponding to the common tag;
and sequencing the tag matching scores of the candidate multimedia data to determine target multimedia data.
5. The method of claim 1, wherein the target multimedia data satisfies at least one of:
the special effect editing mode of the target multimedia data is the same as that of the multimedia data to be matched;
the user to which the target multimedia data belongs is an online user;
the position distance between the release position of the target multimedia data and the release position of the original multimedia data is smaller than or equal to a preset distance threshold value;
the historical matching times of the target multimedia data meet preset times screening conditions.
6. The method of claim 1, wherein the target multimedia data is further derived from a second multimedia feature match of the original multimedia data.
7. The method of claim 1, wherein after said displaying said composite multimedia data, said method further comprises:
and when the triggering operation on the synthesized multimedia data is detected, the synthesized multimedia data is released to the user to which the original multimedia data belongs and the user to which the target multimedia data belongs.
8. The method according to claim 7, wherein said publishing said composite multimedia data to the user to which said original multimedia data belongs and the user to which said target multimedia data belongs comprises:
sending first prompt information to a user to which the original multimedia data belongs, wherein the first prompt information is used for triggering and displaying the synthesized multimedia data and displaying a social homepage of the user to which the target multimedia data belongs;
and sending second prompt information to the user to which the target multimedia data belongs, wherein the second prompt information is used for triggering and playing the synthesized multimedia data and displaying a social homepage of the user to which the original multimedia data belongs.
9. A multimedia matching method, comprising:
receiving multimedia data to be matched, wherein the multimedia data to be matched is obtained based on special effect editing on original multimedia data, and the special effect editing is used for changing the characteristics of a target object in the original multimedia data or the characteristics of an accessory part of the target object;
Extracting a first multimedia feature from the multimedia data to be matched;
acquiring a plurality of candidate multimedia data corresponding to the multimedia data to be matched;
and querying target multimedia data matched with the first multimedia features in the candidate multimedia data, wherein the target multimedia data is used for generating combined multimedia data with the multimedia data to be matched, the target multimedia data and the multimedia data to be matched have the same or matched features, the first multimedia features comprise the features of a first special effect object or the features of an accessory part, and the first special effect object is an object obtained by carrying out special effect editing on the target object in the multimedia data to be matched.
10. The method of claim 9, wherein querying the target multimedia data that matches the first multimedia feature from the plurality of candidate multimedia data comprises:
determining at least one feature tag corresponding to the first multimedia feature;
determining, for each of the candidate multimedia data, a common tag that is identical to the at least one feature tag;
Calculating a tag matching score between the multimedia data to be matched and each candidate multimedia data according to the weight value corresponding to the common tag;
and sequencing the tag matching scores of the candidate multimedia data to determine target multimedia data.
11. A multimedia display device, comprising:
a data receiving unit configured to receive original multimedia data;
the special effect editing unit is configured to carry out special effect editing on the original multimedia data to obtain the multimedia data to be matched, wherein the special effect editing is used for changing the characteristics of a target object in the original multimedia data or the characteristics of an accessory part of the target object;
the data synthesis unit is configured to generate synthesized multimedia data based on the multimedia data to be matched and target multimedia data, the target multimedia data is obtained by matching a first multimedia feature of the multimedia data to be matched, the target multimedia data and the multimedia data to be matched have the same or matched features, the first multimedia feature comprises a feature of a first special effect object or a feature of an accessory part, and the first special effect object is an object obtained by carrying out special effect editing on the target object in the multimedia data to be matched;
And a data display unit configured to display the composite multimedia data.
12. A multimedia matching apparatus, comprising:
the data receiving unit is configured to receive multimedia data to be matched, wherein the multimedia data to be matched is obtained based on special effect editing on original multimedia data, and the special effect editing is used for changing the characteristics of a target object in the original multimedia data or the characteristics of an accessory part of the target object;
the feature extraction unit is configured to extract first multimedia features from the multimedia data to be matched;
the data acquisition unit is configured to acquire a plurality of candidate multimedia data corresponding to the multimedia data to be matched;
the data query unit is configured to query target multimedia data matched with the first multimedia features in the candidate multimedia data, wherein the target multimedia data is used for generating combined multimedia data with the multimedia data to be matched, the target multimedia data and the multimedia data to be matched have the same or matched features, the first multimedia features comprise the features of a first special effect object or the features of an accessory part, and the first special effect object is an object obtained by editing the special effect of the target object in the multimedia data to be matched.
13. A computing device, comprising:
a processor;
a memory for storing executable instructions;
wherein the processor is configured to read the executable instructions from the memory and execute the executable instructions to implement the multimedia display method of any one of the preceding claims 1-8 or the multimedia matching method of any one of the preceding claims 9-10.
14. A computer readable storage medium, characterized in that the storage medium stores a computer program, which when executed by a processor causes the processor to implement the multimedia display method of any one of the preceding claims 1-8 or the multimedia matching method of any one of the preceding claims 9-10.
CN202111136435.5A 2021-09-27 2021-09-27 Multimedia display and matching method, device, equipment and medium Active CN113870133B (en)

Priority Applications (2)

Application Number Priority Date Filing Date Title
CN202111136435.5A CN113870133B (en) 2021-09-27 2021-09-27 Multimedia display and matching method, device, equipment and medium
PCT/CN2022/115521 WO2023045710A1 (en) 2021-09-27 2022-08-29 Multimedia display and matching methods and apparatuses, device and medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202111136435.5A CN113870133B (en) 2021-09-27 2021-09-27 Multimedia display and matching method, device, equipment and medium

Publications (2)

Publication Number Publication Date
CN113870133A CN113870133A (en) 2021-12-31
CN113870133B true CN113870133B (en) 2024-03-12

Family

ID=78991263

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202111136435.5A Active CN113870133B (en) 2021-09-27 2021-09-27 Multimedia display and matching method, device, equipment and medium

Country Status (2)

Country Link
CN (1) CN113870133B (en)
WO (1) WO2023045710A1 (en)

Families Citing this family (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113806306B (en) * 2021-08-04 2024-01-16 北京字跳网络技术有限公司 Media file processing method, device, equipment, readable storage medium and product
CN113870133B (en) * 2021-09-27 2024-03-12 抖音视界有限公司 Multimedia display and matching method, device, equipment and medium
CN115941841A (en) * 2022-12-06 2023-04-07 北京字跳网络技术有限公司 Associated information display method, device, equipment, storage medium and program product
CN117370584A (en) * 2023-12-08 2024-01-09 中国信息通信研究院 Method and system for synthesizing multimedia data in depth

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112351327A (en) * 2019-08-06 2021-02-09 北京字节跳动网络技术有限公司 Face image processing method and device, terminal and storage medium
CN112528049A (en) * 2020-12-17 2021-03-19 北京达佳互联信息技术有限公司 Video synthesis method and device, electronic equipment and computer-readable storage medium
CN112597320A (en) * 2020-12-09 2021-04-02 上海掌门科技有限公司 Social information generation method, device and computer readable medium
CN112988671A (en) * 2019-12-13 2021-06-18 北京字节跳动网络技术有限公司 Media file processing method and device, readable medium and electronic equipment
CN113099129A (en) * 2021-01-27 2021-07-09 北京字跳网络技术有限公司 Video generation method and device, electronic equipment and storage medium

Family Cites Families (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2010183317A (en) * 2009-02-05 2010-08-19 Olympus Imaging Corp Imaging device, image composition and display device, image composition and display method, and program
CN105338242A (en) * 2015-10-29 2016-02-17 努比亚技术有限公司 Image synthesis method and device
CN106528588A (en) * 2016-09-14 2017-03-22 厦门幻世网络科技有限公司 Method and apparatus for matching resources for text information
CN108647245B (en) * 2018-04-13 2023-04-18 腾讯科技(深圳)有限公司 Multimedia resource matching method and device, storage medium and electronic device
CN110866086A (en) * 2018-12-29 2020-03-06 北京安妮全版权科技发展有限公司 Article matching system
CN113870133B (en) * 2021-09-27 2024-03-12 抖音视界有限公司 Multimedia display and matching method, device, equipment and medium

Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112351327A (en) * 2019-08-06 2021-02-09 北京字节跳动网络技术有限公司 Face image processing method and device, terminal and storage medium
CN112988671A (en) * 2019-12-13 2021-06-18 北京字节跳动网络技术有限公司 Media file processing method and device, readable medium and electronic equipment
CN112597320A (en) * 2020-12-09 2021-04-02 上海掌门科技有限公司 Social information generation method, device and computer readable medium
CN112528049A (en) * 2020-12-17 2021-03-19 北京达佳互联信息技术有限公司 Video synthesis method and device, electronic equipment and computer-readable storage medium
CN113099129A (en) * 2021-01-27 2021-07-09 北京字跳网络技术有限公司 Video generation method and device, electronic equipment and storage medium

Also Published As

Publication number Publication date
WO2023045710A1 (en) 2023-03-30
CN113870133A (en) 2021-12-31

Similar Documents

Publication Publication Date Title
CN113870133B (en) Multimedia display and matching method, device, equipment and medium
US11483268B2 (en) Content navigation with automated curation
WO2021238631A1 (en) Article information display method, apparatus and device and readable storage medium
US11321385B2 (en) Visualization of image themes based on image content
US10593085B2 (en) Combining faces from source images with target images based on search queries
US9478054B1 (en) Image overlay compositing
US20210303855A1 (en) Augmented reality item collections
CN113287118A (en) System and method for face reproduction
CN108701207A (en) For face recognition and video analysis to identify the personal device and method in context video flowing
CN104637035B (en) Generate the method, apparatus and system of cartoon human face picture
US11657575B2 (en) Generating augmented reality content based on third-party content
CN111491187B (en) Video recommendation method, device, equipment and storage medium
KR20210118437A (en) Image display selectively depicting motion
US11978110B2 (en) Generating augmented reality content based on user-selected product data
KR101757184B1 (en) System for automatically generating and classifying emotionally expressed contents and the method thereof
CN112235516B (en) Video generation method, device, server and storage medium
US20140153836A1 (en) Electronic device and image processing method
WO2022212672A1 (en) Generating modified user content that includes additional text content
CN115861469A (en) Image identifier creating method and device and electronic equipment
CN110136270A (en) The method and apparatus of adornment data are held in production

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
CB02 Change of applicant information

Address after: 100041 B-0035, 2 floor, 3 building, 30 Shixing street, Shijingshan District, Beijing.

Applicant after: Douyin Vision Co.,Ltd.

Address before: 100041 B-0035, 2 floor, 3 building, 30 Shixing street, Shijingshan District, Beijing.

Applicant before: Tiktok vision (Beijing) Co.,Ltd.

Address after: 100041 B-0035, 2 floor, 3 building, 30 Shixing street, Shijingshan District, Beijing.

Applicant after: Tiktok vision (Beijing) Co.,Ltd.

Address before: 100041 B-0035, 2 floor, 3 building, 30 Shixing street, Shijingshan District, Beijing.

Applicant before: BEIJING BYTEDANCE NETWORK TECHNOLOGY Co.,Ltd.

GR01 Patent grant