WO2024007988A1 - Image processing method and apparatus, electronic device, medium, and program product - Google Patents

Image processing method and apparatus, electronic device, medium, and program product

Info

Publication number
WO2024007988A1
Authority
WO
WIPO (PCT)
Prior art keywords
image
processed
effect material
target effect
dynamic
Prior art date
Application number
PCT/CN2023/104745
Other languages
English (en)
French (fr)
Inventor
林树超
张兴华
韩勖越
王诗涵
杨荣涛
Original Assignee
北京字跳网络技术有限公司
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 北京字跳网络技术有限公司
Publication of WO2024007988A1

Links

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 11/00 2D [Two Dimensional] image generation
    • G06T 11/001 Texturing; Colouring; Generation of texture or colour
    • G06T 11/60 Editing figures and text; Combining figures or text

Definitions

  • This application relates to the field of Internet technology, and in particular to an image processing method, device, electronic equipment, media and program products.
  • the video ecosystem caters strongly to user preferences. Users can generate video content and contribute it to the video ecosystem.
  • this application provides an image processing method, device, electronic equipment, media and program products.
  • an image processing method including:
  • video content is generated and displayed.
  • obtaining an image to be processed may include:
  • the automatic display of the dynamic image may include:
  • the dynamic image is automatically displayed on the preview interface of the currently captured image.
  • obtaining an image to be processed includes:
  • the automatic display of the dynamic image may include:
  • At least one of the dynamic images is automatically displayed.
  • obtaining target effect material matching the image to be processed includes:
  • target effect material matching the image to be processed is obtained.
  • the usage information may include: the last use time and/or the number of uses; and selecting a target effect material from the respective effect materials according to the usage information of each effect material in the material library may include:
  • a target effect material is selected from the respective effect materials.
  • rendering the image to be processed based on the target effect material and generating a dynamic image corresponding to the image to be processed may include:
  • the material texture and the image texture are texture-mixed, and combined with the rendering timestamp, a dynamic image corresponding to the image to be processed is generated.
  • generating video content in response to a click operation on the dynamic image may include:
  • the synthetic texture and the pulse code modulation data are synthesized and encoded to generate video content.
  • an image processing device which may include:
  • a to-be-processed image acquisition module, used to obtain the image to be processed;
  • a target effect material acquisition module used to acquire target effect materials matching the image to be processed
  • a dynamic image generation module configured to render the image to be processed based on the target effect material and generate a dynamic image corresponding to the image to be processed
  • a dynamic image display module used to automatically display the dynamic image
  • a video content generation module configured to generate video content in response to a click operation on the dynamic image
  • a video content display module is used to display the video content.
  • the to-be-processed image acquisition module may be further configured to acquire the currently captured image when a preview operation for the currently captured image is detected, and use the currently captured image as the image to be processed.
  • the dynamic image display module may be further configured to automatically display the dynamic image in the preview interface of the currently captured image.
  • the to-be-processed image acquisition module may be further configured to acquire a preset number of images closest to the current moment when entry into the photo album interface is detected, and use at least one of the acquired preset number of images as the image to be processed;
  • the dynamic image display module may be further configured to automatically display at least one of the dynamic images on the album interface.
  • the target effect material acquisition module may be further used to select the target effect material from the respective effect materials according to the usage information of each effect material in the material library; and/or to perform image recognition on the image to be processed to obtain the characteristic information of the image to be processed, and obtain, based on the characteristic information of the image to be processed, a target effect material matching the image to be processed.
  • the usage information includes: the last use time and/or the number of uses; and the target effect material acquisition module may be further used to select the target effect material from the respective effect materials according to the usage information of each effect material in the material library through the following steps:
  • a target effect material is selected from the respective effect materials.
  • the dynamic image generation module can further read a single target effect material; configure the rendering timestamp of the target effect material; create a material texture corresponding to the target effect material; create an image texture corresponding to the image to be processed; and mix the material texture and the image texture and, combined with the rendering timestamp, generate a dynamic image corresponding to the image to be processed.
  • the video content generation module can be further used to obtain audio data and decode the audio data to obtain pulse code modulation data; decode the image to be processed to obtain original image data and create a texture corresponding to the original image data; create a material texture corresponding to the target effect material; synthesize the texture corresponding to the original image data with the material texture corresponding to the target effect material to obtain a synthetic texture; and synthesize and encode the synthetic texture and the pulse code modulation data to generate video content.
  • an electronic device including: a processor and a memory.
  • the memory stores computer programs and/or instructions.
  • when the computer programs and/or instructions are executed by the processor, the method described in the first aspect of the present application is implemented.
  • a computer-readable storage medium is provided, with computer programs and/or instructions stored thereon; when the computer programs and/or instructions are executed by a processor, the method described in the first aspect of the present application is implemented.
  • a computer program product including a computer program and/or instructions.
  • when the computer program and/or instructions are run on a computer, the computer is caused to execute the method described in the first aspect of the present application.
  • the present disclosure provides a computer program, the computer program including program code, which when executed by a processor implements the method according to the first aspect of the present application.
  • Figure 1 shows a schematic diagram of the system architecture of an exemplary application environment that can be applied to the image processing method according to the embodiment of the present application;
  • FIG. 2 is a schematic diagram of the image processing method in the embodiment of the present application.
  • Figure 3 is a flow chart of the image processing method in the embodiment of the present application.
  • Figure 4 is a flow chart for generating dynamic images in an embodiment of the present application.
  • Figure 5 is a schematic diagram of a scene for generating dynamic images in an embodiment of the present application.
  • Figure 6 is a schematic structural diagram of an image processing device in an embodiment of the present application.
  • Figure 7 is a schematic structural diagram of an electronic device in an embodiment of the present application.
  • In Internet applications, many users share their work and life by making videos and uploading the videos to social platforms (such as short video platforms). For example, users can capture images and/or videos through a camera shooting application, edit the images and/or videos, generate videos through a video production tool, and upload them to social platforms.
  • embodiments of the present application provide an image processing method, device, electronic equipment, media and program products to improve the conversion rate of images into videos. Since the video ecosystem caters to user preferences, this can improve the communication value of image content captured by camera shooting applications.
  • the target effect material matching the image to be processed can be obtained, based on the target effect material, the image to be processed is rendered, and a dynamic image corresponding to the image to be processed is generated.
  • dynamic images are more vivid and expressive.
  • users can be guided to generate video content based on the images to be processed, improving the conversion rate of images into videos. Since the video ecology caters to the preferences of users, by increasing the conversion rate of images into videos, the communication value of image content produced by traditional camera shooting applications can be improved.
  • FIG. 1 shows a schematic diagram of the system architecture of an exemplary application environment that can be applied to the image processing method according to the embodiment of the present application.
  • System architecture 100 includes: terminal device 101, network 102 and server 103.
  • the network 102 is a medium used to provide a communication link between the terminal device 101 and the server 103 .
  • Network 102 may include various connection types, such as wired, wireless communication links, or fiber optic cables, among others.
  • Terminal devices 101 include but are not limited to desktop computers, portable computers, smart phones, tablet computers, and the like.
  • a camera shooting application can be installed in the terminal device 101, and the user can capture images or videos through the camera shooting application, and can also view the images or videos in the photo album.
  • the server 103 may be a server corresponding to the camera shooting application.
  • the number of terminal devices, networks and servers in Figure 1 is only illustrative. There may be any number of terminal devices, networks, and servers depending on implementation needs.
  • the server 103 may be a server cluster composed of multiple servers.
  • the image processing method provided by the embodiment of the present application can be executed by a camera shooting application in the terminal device 101 .
  • the user captures and generates an image through a camera application in the terminal device 101, and the image is the image to be processed.
  • the camera shooting application can obtain the image to be processed from the terminal device 101, and obtain the target effect material matching the image to be processed from the local material library of the terminal device 101 and/or the server 103.
  • the target effect material is an effect material used to generate dynamic images. Based on the target effect material, the image to be processed is rendered and a dynamic image corresponding to the image to be processed is generated.
  • users are guided to generate corresponding video content based on the images to be processed.
  • Figure 2 is a schematic diagram of an image processing method in an embodiment of the present application. It can be seen that by generating a dynamic image corresponding to the image to be processed and automatically displaying the dynamic image, the user can be guided to generate the video content based on the image to be processed, and the conversion rate of converting the image into a video can be improved. Furthermore, the communication value of the image to be processed is improved.
  • Figure 3 is a flow chart of an image processing method in an embodiment of the present application, which may include the following steps:
  • Step S310 Obtain the image to be processed.
  • Step S320 Obtain the target effect material matching the image to be processed.
  • Step S330 Render the image to be processed based on the target effect material, and generate a dynamic image corresponding to the image to be processed.
  • Step S340 Automatically display dynamic images.
  • Step S350 In response to a click operation on the dynamic image, video content is generated and displayed.
  • the image processing method of the embodiment of the present application can, for an image to be processed in a camera shooting application (such as an image captured by the user in real time, or an image stored in the album), obtain a target effect material matching the image to be processed, and render the image to be processed based on the target effect material to generate a dynamic image corresponding to the image to be processed.
  • dynamic images are more vivid and expressive.
  • users can be guided to generate the video content based on the images to be processed, improving the conversion rate of images into videos. Since the video ecology caters to the preferences of users, by increasing the conversion rate of images into videos, the communication value of image content produced by traditional camera shooting applications can be improved.
  • step S310 the image to be processed is obtained.
  • the core user path is for the user to take an image and preview the shooting effect of the image.
  • the image to be processed may be an image captured by the user in real time through a camera application.
  • the currently captured image may be acquired and used as an image to be processed.
  • users can also enter the album interface of the camera shooting application to edit the images that have been taken.
  • a preset number of images closest to the current moment are acquired, and at least one image among the acquired preset number of images is used as an image to be processed.
  • the preset quantity can be 1 or an integer greater than 1.
  • for example, the 8 images closest to the current moment can be obtained, that is, 8 images to be processed are obtained.
  • alternatively, the 8 images closest to the current moment can be obtained and several images can then be selected from them as images to be processed; for example, the selection can be random, or according to time order, image attributes, etc.
  • the same processing process can be performed on each image to be processed.
  • step S320 target effect material matching the image to be processed is obtained.
  • Developers of camera shooting applications can use special effects editing tools (i.e. editors) to generate effect materials for users to use.
  • Types of effect materials include: filters, makeup, special effects, animations, stickers, text, audio, etc.
  • the image is edited to generate an image containing the effect material.
  • the target effect material is the effect material that matches the image to be processed.
  • Each terminal device can have a corresponding material library locally, and a variety of effect materials are stored in the material library. For example, when editing an image, the user can first download the effect material and then use the effect material. The downloaded effect material can be stored in the local material library. It is understood that the material library can be continuously updated. For the effect materials in the material library, users do not need to download them from the Internet and can use them directly next time. In some embodiments, after user authorization, the usage information of each effect material in the material library can be obtained, and the target effect material is selected from each effect material according to the usage information of each effect material in the material library.
  • the usage information of a single effect material includes but is not limited to: the last use time of the effect material and/or the number of times the effect material has been used.
  • each effect material can be sorted according to the last use time to obtain the first sorting result. For example, each effect material is sorted according to the order of the last use time from nearest to farthest. The higher the effect material is sorted, the closer the last use time of the effect material is to the current moment.
  • each effect material in the material library can be sorted according to the number of uses to obtain a second sorting result. For example, sort according to the order of usage number from high to low to obtain the second sorting result. The higher the ranking of the effect material is, the more times the user uses the effect material, and the more the effect material is liked by users.
  • image recognition can also be performed on the image to be processed to obtain the characteristic information of the image to be processed.
  • the captured images can generally be divided into three categories: portraits, landscapes, and objects.
  • portrait characteristics can be divided into gender characteristics, appearance characteristics, age characteristics, etc.
  • landscape characteristics can be divided into urban architectural features, meteorological features, plant features, etc.
  • item features can be refined into interest features. For example, if the item contained in the image is a football, it can be inferred that the user may be interested in sports, and thus the interest features can be obtained.
  • target effect material matching the image to be processed can be obtained.
  • target effect materials can be obtained from the cloud; the effect materials in the cloud can be configured by operators and can include effect materials pre-configured based on network hotspots and holidays, which can be changed as network hotspots and holidays change, offering high flexibility.
  • step S330 the image to be processed is rendered based on the target effect material, and a dynamic image corresponding to the image to be processed is generated.
  • the target effect material can be directly used to render the image to be processed. You can also render the image to be processed according to the video template.
  • the video template can be a template generated by the video creator during the video production process.
  • the video template can contain a variety of different effect materials; other users can use this video template to generate video content that differs in content but is similar in format.
  • After obtaining the target effect material, a video template matching the target effect material can be obtained. Since one or more effect materials are used in a single video template, a corresponding relationship between video templates and effect materials can be established, and one video template can correspond to one or more effect materials. Based on this correspondence, the video template corresponding to the target effect material, that is, the video template matching the target effect material, can be obtained. Based on the target effect material and the video template, the image to be processed can be rendered using a rendering tool (such as OpenGL) to generate a dynamic image corresponding to the image to be processed.
  • the material animation rendering protocol can include relevant texture parameters, such as rendering width, rendering height, rendering level, rendering start timestamp, rendering end timestamp, etc.
  • the process of generating dynamic images is shown in Figure 4, which can include the following steps:
  • Step S410 Read a single target effect material.
  • Step S420 Configure the rendering timestamp of the target effect material.
  • Step S430 Create a material texture corresponding to the target effect material.
  • the texture parameters of the target effect material can be configured according to the material animation rendering protocol, for example, the texture width, texture height, texture transparency, etc. can be configured.
  • Step S440 Determine whether the target effect material is the last target effect material.
  • If the target effect material is the last target effect material, step S450 is executed; otherwise, the process returns to step S410 to read the next target effect material, and the above steps are repeated until the last target effect material is processed.
  • Step S450 Create a picture texture corresponding to the image to be processed.
  • Step S460 Mix the material texture and the image texture, and combine them with the rendering timestamp to generate a dynamic image corresponding to the image to be processed.
  • the number of target effect materials is two, which are filters and particle animations.
  • the material textures of the filters can be created in sequence, such as configuring the texture parameters and configuring the rendering timestamp of the filter.
  • then create the material texture for the particle animation, such as configuring its texture parameters and rendering timestamp, and finally convert the image into a particle animation with a filter effect.
  • step S340 the dynamic image is automatically displayed to guide the user to generate corresponding video content based on the image to be processed.
  • the dynamic image can be automatically displayed in the preview interface of the currently captured image. For example, dynamic images can be displayed through pop-ups or jumps. If the image to be processed is an image in an album, multiple dynamic images can be automatically displayed on the album interface. For example, you can display multiple dynamic images at the top of the album interface.
  • Figure 5 is a schematic diagram of a scene for generating dynamic images in an embodiment of the present application, which can include two scenes: a scene where the user directly captures an image through a camera shooting application, and a scene where the user enters the album interface. According to different scenarios, dynamic images can be displayed to users to guide users to generate video content.
  • step S350 video content is generated and displayed in response to a click operation on the dynamic image.
  • video content can also be generated and displayed in response to a click operation on the dynamic image (such as a single-click operation or a double-click operation, etc.).
  • a click operation on the dynamic image such as a single-click operation or a double-click operation, etc.
  • the user can slide to select the dynamic image that the user is interested in from multiple dynamic images to generate corresponding video content. It can be seen that the process of generating video content is simple and fast, and users can experience it directly without the need for secondary operations.
  • the method of generating video content is different from the aforementioned method of generating dynamic images.
  • decoding a video data stream is more complex than decoding an image; therefore, the process of generating video content is more complicated.
  • audio data (for example, audio data configured in the background) can be obtained and decoded to obtain pulse code modulation (PCM) data.
  • the image to be processed is decoded to obtain the original image data (for example, RGBA data).
  • a texture corresponding to the original image data is created, a material texture corresponding to the target effect material is created, and the texture corresponding to the original image data is synthesized with the material texture corresponding to the target effect material to obtain a synthetic texture.
  • the synthetic texture and pulse code modulation data are synthesized and encoded to generate video content.
  • the image processing method of the embodiment of the present application provides dynamic images of image-to-video conversion through local real-time rendering when the user triggers local generation of images or enters the album interface, and guides the user to generate corresponding video content.
  • By combining the characteristics of camera shooting applications, cloud services, and image features, users are provided with various effect materials, improving the richness and diversity of image-to-video conversion and further improving video conversion.
  • the conversion rate of users converting images into videos can be improved.
  • users can directly click on dynamic images to easily and quickly generate the corresponding video content. Since the video ecosystem caters to user preferences, this can increase the communication value of image content produced by traditional camera shooting applications.
  • the image processing device 600 includes:
  • the to-be-processed image acquisition module 610 is used to acquire the image to be processed;
  • the target effect material acquisition module 620 is used to obtain the target effect material that matches the image to be processed
  • the dynamic image generation module 630 is used to render the image to be processed based on the target effect material and generate a dynamic image corresponding to the image to be processed;
  • the dynamic image display module 640 is used to display dynamic images to guide users to generate corresponding video content based on the images to be processed;
  • a video content generation module 650 configured to generate video content in response to a click operation on a dynamic image
  • Video content display module 660 is used to display video content.
  • the to-be-processed image acquisition module 610 may be further configured to acquire the currently captured image when a preview operation for the currently captured image is detected, and use the currently captured image as the image to be processed;
  • the dynamic image display module 640 can be further used to automatically display dynamic images in the preview interface of the currently captured image.
  • the to-be-processed image acquisition module 610 may be further configured to acquire a preset number of images closest to the current moment when entry into the photo album interface is detected, and use at least one of the acquired preset number of images as the image to be processed;
  • the dynamic image display module 640 can be further used to automatically display at least one dynamic image on the album interface.
  • the target effect material acquisition module 620 can be further used to select the target effect material from the respective effect materials according to the usage information of each effect material in the material library; and/or to perform image recognition on the image to be processed to obtain the characteristic information of the image to be processed and, based on that characteristic information, obtain the target effect material matching the image to be processed; and/or to obtain effect materials pre-configured according to network hotspots or holidays.
  • the usage information includes: the last use time and/or the number of uses; and the target effect material acquisition module 620 can further select the target effect material from the respective effect materials according to the usage information of each effect material in the material library, by sorting the effect materials by last use time to obtain a first sorting result, sorting them by number of uses to obtain a second sorting result, and selecting the target effect material based on the first sorting result and/or the second sorting result.
  • the dynamic image generation module 630 may be further used to read a single target effect material; configure the rendering timestamp of the target effect material; create a material texture corresponding to the target effect material; and create an image texture corresponding to the image to be processed. ;Mix the material texture and the image texture, combined with the rendering timestamp, to generate a dynamic image corresponding to the image to be processed.
  • the video content generation module 650 can be further used to obtain audio data and decode the audio data to obtain pulse code modulation data; decode the image to be processed to obtain original image data and create a texture corresponding to the original image data; create a material texture corresponding to the target effect material; synthesize the texture corresponding to the original image data with the material texture corresponding to the target effect material to obtain a synthetic texture; and synthesize and encode the synthetic texture and the pulse code modulation data to generate video content.
  • an electronic device is also provided, including: a processor; and a memory for storing instructions executable by the processor; wherein the processor is configured to execute the executable instructions to implement the image processing method in the embodiments of the present application.
  • FIG. 7 is a schematic structural diagram of an electronic device in an embodiment of the present application, in which the image processing solution according to the embodiment of the present disclosure can be implemented. It should be noted that the electronic device 700 shown in FIG. 7 is only an example, and should not bring any limitations to the functions and scope of use of the embodiments of the present application.
  • the electronic device 700 includes a central processing unit (CPU) 701 that can perform various appropriate actions and processing according to a program stored in a read-only memory (ROM) 702 or a program loaded from a storage portion 708 into a random access memory (RAM) 703.
  • in the RAM 703, various programs and data required for system operation are also stored.
  • the central processing unit 701, ROM 702 and RAM 703 are connected to each other through a bus 704.
  • An input/output (I/O) interface 705 is also connected to bus 704.
  • the following components are connected to the I/O interface 705: an input section 706 including a keyboard, a mouse, etc.; an output section 707 including a cathode ray tube (CRT), a liquid crystal display (LCD), speakers, etc.; a storage section 708 including a hard disk, etc.; and a communication section 709 including a network interface card such as a local area network (LAN) card, a modem, etc.
  • the communication section 709 performs communication processing via a network such as the Internet.
  • Driver 710 is also connected to I/O interface 705 as needed.
  • Removable media 711 such as magnetic disks, optical disks, magneto-optical disks, semiconductor memories, etc., are installed on the drive 710 as needed, so that a computer program read therefrom is installed into the storage portion 708 as needed.
  • the process described above with reference to the flowchart may be implemented as a computer software program.
  • embodiments of the present application include a computer program product including a computer program carried on a computer-readable medium, the computer program containing program code for performing the method illustrated in the flowchart.
  • the computer program may be downloaded and installed from the network via communication portion 709 and/or installed from removable media 711 .
  • when the computer program is executed by the central processing unit 701, the various functions defined in the device of the present application are executed.
  • a computer-readable storage medium is also provided, on which a computer program is stored.
  • the computer program is executed by a processor, the above image processing method is implemented.
  • the computer-readable storage medium shown in this application may be, for example, but not limited to, an electrical, magnetic, optical, electromagnetic, infrared, or semiconductor system, device or component, or any combination thereof. More specific examples of computer-readable storage media may include, but are not limited to: an electrical connection having one or more wires, a portable computer disk, a hard drive, random access memory, read-only memory, erasable programmable read-only memory (EPROM or flash memory), optical fiber, portable compact disk read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the above.
  • a computer-readable storage medium may be any tangible medium that contains or stores a program for use by or in connection with an instruction execution system, apparatus, or device. Program code embodied on a computer-readable storage medium may be transmitted using any appropriate medium, including but not limited to: wireless, wire, optical cable, radio frequency, etc., or any suitable combination of the above.
  • a computer program product is also provided.
  • when the computer program product is run on a computer, it causes the computer to execute the above image processing method.
  • a computer program is also provided.
  • the computer program includes program code. When executed by a processor, the program code implements the image processing method described in the embodiment of the present disclosure.

Abstract

This application relates to an image processing method and apparatus, an electronic device, a medium, and a program product, applied to the field of Internet technology. The method includes: obtaining an image to be processed; obtaining a target effect material matching the image to be processed; rendering the image to be processed based on the target effect material to generate a dynamic image corresponding to the image to be processed; automatically displaying the dynamic image; and, in response to a click operation on the dynamic image, generating and displaying video content.

Description

Image processing method and apparatus, electronic device, medium, and program product
Cross-Reference to Related Applications
This application is based on the Chinese application with application number 202210798152.5, filed on July 6, 2022, and claims its priority; the disclosure of that Chinese application is hereby incorporated into this application in its entirety.
Technical Field
This application relates to the field of Internet technology, and in particular to an image processing method and apparatus, an electronic device, a medium, and a program product.
Background
At present, in the mainstream content ecosystem of the Internet, the video ecosystem caters strongly to user preferences. Users can generate video content and contribute it to the video ecosystem.
Summary
In order to solve the above technical problems, or at least partially solve them, this application provides an image processing method and apparatus, an electronic device, a medium, and a program product.
According to a first aspect of this application, an image processing method is provided, including:
obtaining an image to be processed;
obtaining a target effect material matching the image to be processed;
rendering the image to be processed based on the target effect material to generate a dynamic image corresponding to the image to be processed;
automatically displaying the dynamic image; and
in response to a click operation on the dynamic image, generating and displaying video content.
According to some embodiments of this application, obtaining an image to be processed may include:
when a preview operation for a currently captured image is detected, obtaining the currently captured image and using the currently captured image as the image to be processed;
and the automatic display of the dynamic image may include:
automatically displaying the dynamic image in the preview interface of the currently captured image.
According to some embodiments of this application, obtaining an image to be processed includes:
when entry into an album interface is detected, obtaining a preset number of images closest to the current moment, and using at least one of the obtained preset number of images as the image to be processed;
and the automatic display of the dynamic image may include:
automatically displaying at least one of the dynamic images in the album interface.
According to some embodiments of this application, obtaining the target effect material matching the image to be processed includes:
selecting the target effect material from respective effect materials in a material library according to usage information of each effect material; and/or
performing image recognition on the image to be processed to obtain feature information of the image to be processed;
obtaining the target effect material matching the image to be processed based on the feature information of the image to be processed.
According to some embodiments of this application, the usage information may include: the last use time and/or the number of uses; and selecting the target effect material from the respective effect materials according to the usage information of each effect material in the material library may include:
sorting the respective effect materials in the material library according to last use time to obtain a first sorting result;
sorting the respective effect materials in the material library according to number of uses to obtain a second sorting result;
selecting the target effect material from the respective effect materials according to the first sorting result and/or the second sorting result.
According to some embodiments of this application, rendering the image to be processed based on the target effect material to generate the dynamic image corresponding to the image to be processed may include:
reading a single target effect material;
configuring a rendering timestamp of the target effect material;
creating a material texture corresponding to the target effect material;
creating an image texture corresponding to the image to be processed;
mixing the material texture and the image texture and, in combination with the rendering timestamp, generating the dynamic image corresponding to the image to be processed.
According to some embodiments of this application, generating video content in response to a click operation on the dynamic image may include:
obtaining audio data and decoding the audio data to obtain pulse code modulation data;
decoding the image to be processed to obtain original image data, and creating a texture corresponding to the original image data;
creating a material texture corresponding to the target effect material;
synthesizing the texture corresponding to the original image data with the material texture corresponding to the target effect material to obtain a synthetic texture;
synthesizing and encoding the synthetic texture and the pulse code modulation data to generate video content.
According to a second aspect of this application, an image processing apparatus is provided, which may include:
a to-be-processed image acquisition module, used to obtain an image to be processed;
a target effect material acquisition module, used to obtain a target effect material matching the image to be processed;
a dynamic image generation module, used to render the image to be processed based on the target effect material and generate a dynamic image corresponding to the image to be processed;
a dynamic image display module, used to automatically display the dynamic image;
a video content generation module, used to generate video content in response to a click operation on the dynamic image; and
a video content display module, used to display the video content.
According to some embodiments of this application, the to-be-processed image acquisition module may be further used to, when a preview operation for a currently captured image is detected, obtain the currently captured image and use the currently captured image as the image to be processed;
and the dynamic image display module may be further used to automatically display the dynamic image in the preview interface of the currently captured image.
According to some embodiments of this application, the to-be-processed image acquisition module may be further used to, when entry into an album interface is detected, obtain a preset number of images closest to the current moment and use at least one of the obtained preset number of images as the image to be processed;
and the dynamic image display module may be further used to automatically display at least one of the dynamic images in the album interface.
According to some embodiments of this application, the target effect material acquisition module may be further used to select the target effect material from respective effect materials according to usage information of each effect material in a material library; and/or to perform image recognition on the image to be processed to obtain feature information of the image to be processed, and obtain the target effect material matching the image to be processed based on the feature information of the image to be processed.
According to some embodiments of this application, the usage information includes: the last use time and/or the number of uses; and the target effect material acquisition module may be further used to select the target effect material from the respective effect materials according to the usage information of each effect material in the material library through the following steps:
sorting the respective effect materials in the material library according to last use time to obtain a first sorting result;
sorting the respective effect materials in the material library according to number of uses to obtain a second sorting result; and
selecting the target effect material from the respective effect materials according to the first sorting result and/or the second sorting result.
According to some embodiments of this application, the dynamic image generation module may further read a single target effect material; configure a rendering timestamp of the target effect material; create a material texture corresponding to the target effect material; create an image texture corresponding to the image to be processed; and mix the material texture and the image texture and, in combination with the rendering timestamp, generate the dynamic image corresponding to the image to be processed.
According to some embodiments of this application, the video content generation module may be further used to obtain audio data and decode the audio data to obtain pulse code modulation data; decode the image to be processed to obtain original image data and create a texture corresponding to the original image data; create a material texture corresponding to the target effect material; synthesize the texture corresponding to the original image data with the material texture corresponding to the target effect material to obtain a synthetic texture; and synthesize and encode the synthetic texture and the pulse code modulation data to generate video content.
According to a third aspect of this application, an electronic device is provided, including: a processor and a memory, the memory storing computer programs and/or instructions which, when executed by the processor, implement the method described in the first aspect of this application.
According to a fourth aspect of this application, a computer-readable storage medium is provided, on which computer programs and/or instructions are stored which, when executed by a processor, implement the method described in the first aspect of this application.
According to a fifth aspect of this application, a computer program product is provided, including computer programs and/or instructions which, when run on a computer, cause the computer to execute the method described in the first aspect of this application.
According to a sixth aspect of this application, a computer program is provided, the computer program including program code which, when executed by a processor, implements the method described in the first aspect of this application.
Brief Description of the Drawings
The accompanying drawings here are incorporated into and constitute a part of this specification, illustrate embodiments consistent with this application, and together with the specification serve to explain the principles of this application.
In order to explain the technical solutions in the embodiments of this application or in the prior art more clearly, the drawings needed in the description of the embodiments or the prior art are briefly introduced below; obviously, those of ordinary skill in the art can obtain other drawings from these drawings without creative effort.
Figure 1 is a schematic diagram of the system architecture of an exemplary application environment to which the image processing method of the embodiments of this application can be applied;
Figure 2 is a schematic diagram of the image processing method in an embodiment of this application;
Figure 3 is a flow chart of the image processing method in an embodiment of this application;
Figure 4 is a flow chart of generating a dynamic image in an embodiment of this application;
Figure 5 is a schematic diagram of scenes for generating dynamic images in an embodiment of this application;
Figure 6 is a schematic structural diagram of an image processing apparatus in an embodiment of this application;
Figure 7 is a schematic structural diagram of an electronic device in an embodiment of this application.
Detailed Description
In order to more clearly understand the above objects, features, and advantages of this application, the solutions of this application are further described below. It should be noted that, without conflict, the embodiments of this application and the features in the embodiments may be combined with each other.
Many specific details are set forth in the following description to facilitate a full understanding of this application, but this application can also be implemented in other ways different from those described here; obviously, the embodiments in the specification are only a part of the embodiments of this application, not all of them.
In Internet applications, many users share their work and life by making videos and uploading the videos to social platforms (such as short video platforms). For example, a user can capture images and/or videos through a camera shooting application, edit the images and/or videos, generate a video through a video production tool, and upload it to a social platform.
However, in camera shooting applications, although video content attracts other users more easily, what users capture through camera shooting applications is mostly image content; moreover, after capturing an image, users usually just store it locally, so the communication value of the image content is low. Although users can generate video content from image content, the communication value of the image content produced by camera shooting applications remains low.
In order to solve the above problems, embodiments of this application provide an image processing method and apparatus, an electronic device, a medium, and a program product, to improve the conversion rate of images into videos. Since the video ecosystem caters to user preferences, this can improve the communication value of image content captured by camera shooting applications.
In particular, in some embodiments of this application, for an image to be processed in a camera shooting application (for example, an image captured by the user in real time, or an image already stored in the album), a target effect material matching the image to be processed can be obtained, and the image to be processed is rendered based on the target effect material to generate a dynamic image corresponding to the image to be processed. Compared with the static image to be processed, the dynamic image is more vivid and expressive; by automatically displaying the dynamic image, the user can be guided to generate video content based on the image to be processed, improving the conversion rate of images into videos. Since the video ecosystem caters to user preferences, improving the conversion rate of images into videos can improve the communication value of image content produced by traditional camera shooting applications.
Referring to Figure 1, Figure 1 is a schematic diagram of the system architecture of an exemplary application environment to which the image processing method of the embodiments of this application can be applied. The system architecture 100 includes: a terminal device 101, a network 102, and a server 103. The network 102 is the medium used to provide a communication link between the terminal device 101 and the server 103. The network 102 may include various connection types, such as wired or wireless communication links, or fiber-optic cables. The terminal device 101 includes, but is not limited to, a desktop computer, a portable computer, a smartphone, a tablet computer, and the like. A camera shooting application can be installed in the terminal device 101; the user can capture images or videos through the camera shooting application and can also view the images or videos in the album. The server 103 may be the server corresponding to the camera shooting application.
It should be understood that the numbers of terminal devices, networks, and servers in Figure 1 are merely illustrative. There may be any number of terminal devices, networks, and servers depending on implementation needs. For example, the server 103 may be a server cluster composed of multiple servers.
The image processing method provided by the embodiments of this application can be executed by the camera shooting application in the terminal device 101. For example, the user captures and generates an image through the camera shooting application in the terminal device 101, and this image is the image to be processed. The camera shooting application can obtain the image to be processed from the terminal device 101 and obtain, from the local material library of the terminal device 101 and/or the server 103, a target effect material matching the image to be processed; the target effect material is an effect material used to generate a dynamic image. Based on the target effect material, the image to be processed is rendered to generate a dynamic image corresponding to the image to be processed. The dynamic image is automatically displayed to guide the user to generate corresponding video content based on the image to be processed.
Referring to Figure 2, Figure 2 is a schematic diagram of the image processing method in an embodiment of this application. It can be seen that by generating a dynamic image corresponding to the image to be processed and automatically displaying the dynamic image, the user can be guided to generate the video content based on the image to be processed, improving the conversion rate of converting images into videos and, in turn, the communication value of the image to be processed.
Referring to Figure 3, Figure 3 is a flow chart of the image processing method in an embodiment of this application, which may include the following steps:
Step S310: Obtain an image to be processed.
Step S320: Obtain a target effect material matching the image to be processed.
Step S330: Render the image to be processed based on the target effect material, and generate a dynamic image corresponding to the image to be processed.
Step S340: Automatically display the dynamic image.
Step S350: In response to a click operation on the dynamic image, generate and display video content.
With the image processing method of the embodiments of this application, for an image to be processed in a camera shooting application (for example, an image captured by the user in real time, or an image already stored in the album), a target effect material matching the image to be processed can be obtained, and the image to be processed is rendered based on the target effect material to generate a dynamic image corresponding to the image to be processed. Compared with the static image to be processed, the dynamic image is more vivid and expressive; by automatically displaying the dynamic image, the user can be guided to generate the video content based on the image to be processed, improving the conversion rate of images into videos. Since the video ecosystem caters to user preferences, improving the conversion rate of images into videos can improve the communication value of image content produced by traditional camera shooting applications.
The image processing method of the embodiments of this application is introduced in more detail below.
In step S310, an image to be processed is obtained.
For a camera shooting application, the core user path is that the user captures an image and previews the shooting effect of the image. In some embodiments, the image to be processed may be an image captured by the user in real time through the camera shooting application. When a preview operation for the currently captured image is detected, the currently captured image can be obtained and used as the image to be processed.
In addition, the user can also enter the album interface of the camera shooting application to edit images that have already been taken. In some embodiments, when entry into the album interface is detected, a preset number of images closest to the current moment are obtained, and at least one of the obtained preset number of images is used as the image to be processed. The preset number can be 1 or an integer greater than 1. For example, the 8 images closest to the current moment can be obtained, that is, 8 images to be processed are obtained. As another example, the 8 images closest to the current moment can be obtained and several images can then be selected from them as images to be processed; the selection can be random, or according to time order, image attributes, and so on (see the sketch after this paragraph). When multiple images to be processed are obtained, the same processing can be performed on each of them.
In step S320, a target effect material matching the image to be processed is obtained.
Developers of camera shooting applications can generate effect materials for users through special-effects editing tools (i.e., editors). Types of effect materials include: filters, makeup, special effects, animations, stickers, text, audio, and so on. In this way, when capturing an image, the user can directly select an effect material of interest and generate an image containing that effect material; alternatively, after an image is captured and generated, the image can be edited to generate an image containing the effect material.
The target effect material is the effect material matching the image to be processed. Each terminal device can have a corresponding local material library in which a variety of effect materials are stored. For example, when editing an image, the user can first download an effect material and then use it; the downloaded effect material can be stored in the local material library. It is understood that the material library can be continuously updated. Effect materials already in the material library do not need to be downloaded from the Internet again and can be used directly next time. In some embodiments, after user authorization, the usage information of each effect material in the material library can be obtained, and the target effect material is selected from the respective effect materials according to this usage information.
According to some embodiments of this application, the usage information of a single effect material includes, but is not limited to: the last use time of the effect material and/or the number of times the effect material has been used. According to the usage information of each effect material in the material library, the respective effect materials can be sorted by last use time to obtain a first sorting result. For example, the effect materials are sorted in order of last use time from most recent to least recent: the higher an effect material ranks, the closer its last use time is to the current moment.
Similarly, the effect materials in the material library can be sorted by number of uses to obtain a second sorting result. For example, sorting in order of number of uses from high to low yields the second sorting result: the higher an effect material ranks, the more times the user has used it and the more it is liked by users.
Afterwards, the target effect material is selected from the respective effect materials according to the first sorting result and/or the second sorting result. For example, the top N1 effect materials can be taken from the first sorting result and the top N2 effect materials from the second sorting result, and the intersection of the two sets can be used as the target effect materials; alternatively, the union of the two sets can be used as the target effect materials.
In some embodiments, it is also possible to select, from the effect materials in the material library, those whose last use time is within a preset period of the current moment (for example, within one week) and those whose number of uses exceeds a preset count (for example, 5 times), and then take the intersection of the two as the target effect materials. A minimal sketch of these selection policies follows.
Besides obtaining the target effect material from the material library, with the user's permission, image recognition can also be performed on the image to be processed to obtain the feature information of the image to be processed. For example, given the characteristics of camera shooting applications, captured images can generally be divided into three categories: portraits, landscapes, and objects. Refined by image features, portrait features can be divided into gender features, appearance features, age features, etc.; landscape features can be divided into urban architectural features, meteorological features, plant features, etc.; and object features can be refined into interest features. For example, if the object contained in the image is a football, it can be inferred that the user may be interested in sports, yielding an interest feature. Based on the feature information of the image to be processed, a target effect material matching the image to be processed can be obtained. For example, the target effect material can be obtained from the cloud; the effect materials in the cloud can be configured by operators and can include effect materials pre-configured according to network hotspots and holidays, which can be changed as network hotspots and holidays change, offering high flexibility. A sketch of this matching step is given below.
In step S330, the image to be processed is rendered based on the target effect material, and a dynamic image corresponding to the image to be processed is generated.
In the embodiments of this application, the target effect material can be used directly to render the image to be processed. The image to be processed can also be rendered according to a video template. A video template can be a template generated by a video creator during video production; it can contain a variety of different effect materials, and other users can use the template to generate video content that differs in content but is similar in form.
After the target effect material is obtained, a video template matching the target effect material can be obtained. Since one or more effect materials are used in a single video template, a correspondence between video templates and effect materials can be established, with one video template corresponding to one or more effect materials (a small lookup sketch follows). Based on this correspondence, the video template corresponding to the target effect material, that is, the video template matching the target effect material, can be obtained. Based on the target effect material and the video template, the image to be processed can be rendered by a rendering tool (such as OpenGL) to generate the dynamic image corresponding to the image to be processed.
Since the rendering of a dynamic image spans multiple frames, the material information contained in a dynamic image can add the concept of a time interval; therefore, a material animation rendering protocol can be designed. The material animation rendering protocol can contain the relevant texture parameters, for example, rendering width, rendering height, rendering level, rendering start timestamp, rendering end timestamp, and so on. According to the material animation rendering protocol, the process of generating a dynamic image is shown in Figure 4 and can include the following steps:
Step S410: Read a single target effect material.
Step S420: Configure the rendering timestamp of the target effect material.
Step S430: Create the material texture corresponding to the target effect material.
In the embodiments of this application, the texture parameters of the target effect material can be configured according to the material animation rendering protocol; for example, the texture width, texture height, texture transparency, etc. can be configured.
Step S440: Determine whether this target effect material is the last target effect material.
If the target effect material is the last target effect material, step S450 is executed; otherwise, the process returns to step S410 to read the next target effect material, and the above steps are repeated until the last target effect material has been processed.
Step S450: Create the picture texture corresponding to the image to be processed.
Step S460: Mix the material texture and the picture texture and, in combination with the rendering timestamps, generate the dynamic image corresponding to the image to be processed.
For example, suppose there are two target effect materials, a filter and a particle animation. Following the above method, the material texture of the filter can first be created, for example by configuring its texture parameters and rendering timestamp; then the material texture of the particle animation is created in the same way; finally the image is converted into a particle animation with a filter effect. The loop is sketched below.
In step S340, the dynamic image is automatically displayed to guide the user to generate corresponding video content based on the image to be processed.
If the image to be processed is the currently captured image, the dynamic image can be automatically displayed in the preview interface of the currently captured image, for example through a pop-up window or a page jump. If the image to be processed is an image in the album, multiple dynamic images can be automatically displayed in the album interface, for example at the top of the album interface. Referring to Figure 5, Figure 5 is a schematic diagram of the scenes for generating dynamic images in an embodiment of this application, which can include two scenes: a scene in which the user captures an image directly through the camera shooting application, and a scene in which the user enters the album interface. In both scenes, dynamic images can be displayed to the user to guide the user to generate video content.
In step S350, in response to a click operation on the dynamic image, video content is generated and displayed.
After the dynamic image is displayed, video content can be generated and displayed in response to a click operation on the dynamic image (such as a single-click or double-click operation). For the dynamic images corresponding to multiple images to be processed, the user can swipe through the dynamic images and select the one of interest to generate the corresponding video content. It can be seen that the process of generating video content is simple and fast, and the user can experience it directly without a secondary operation.
It should be noted that the method of generating video content differs from the method of generating dynamic images described above: decoding a video data stream is more complex than decoding an image, so the process of generating video content is more complicated. In some embodiments, audio data (for example, audio data configured in the background) can be obtained and decoded to obtain pulse code modulation (PCM) data. The image to be processed is decoded to obtain the original image data (for example, RGBA data), and a texture corresponding to the original image data is created. A material texture corresponding to the target effect material is created, and the texture corresponding to the original image data is synthesized with the material texture corresponding to the target effect material to obtain a synthetic texture. The synthetic texture and the pulse code modulation data are synthesized and encoded to generate the video content. The pipeline is sketched below.
With the image processing method of the embodiments of this application, when the user triggers local generation of an image or enters the album interface, dynamic images for image-to-video conversion are provided through local real-time rendering, guiding the user to generate corresponding video content. By combining the characteristics of camera shooting applications, cloud services, and image features, various effect materials are provided to the user, improving the richness and diversity of image-to-video conversion and further improving video conversion. Through this guidance, the rate at which users convert images into videos can be improved. Moreover, the user can simply and quickly generate the corresponding video content by directly clicking on a dynamic image. Since the video ecosystem caters to user preferences, this can improve the communication value of image content produced by traditional camera shooting applications.
Corresponding to the above method embodiments, the embodiments of this application also provide an image processing apparatus. Referring to Figure 6, the image processing apparatus 600 includes:
a to-be-processed image acquisition module 610, used to obtain an image to be processed;
a target effect material acquisition module 620, used to obtain a target effect material matching the image to be processed;
a dynamic image generation module 630, used to render the image to be processed based on the target effect material and generate a dynamic image corresponding to the image to be processed;
a dynamic image display module 640, used to display the dynamic image to guide the user to generate corresponding video content based on the image to be processed;
a video content generation module 650, used to generate video content in response to a click operation on the dynamic image; and
a video content display module 660, used to display the video content.
According to some embodiments of this application, the to-be-processed image acquisition module 610 may be further used to, when a preview operation for a currently captured image is detected, obtain the currently captured image and use it as the image to be processed;
and the dynamic image display module 640 may be further used to automatically display the dynamic image in the preview interface of the currently captured image.
According to some embodiments of this application, the to-be-processed image acquisition module 610 may be further used to, when entry into the album interface is detected, obtain a preset number of images closest to the current moment and use at least one of the obtained preset number of images as the image to be processed;
and the dynamic image display module 640 may be further used to automatically display at least one dynamic image in the album interface.
According to some embodiments of this application, the target effect material acquisition module 620 may be further used to select the target effect material from respective effect materials according to the usage information of each effect material in the material library; and/or to perform image recognition on the image to be processed to obtain the feature information of the image to be processed and obtain, based on that feature information, a target effect material matching the image to be processed; and/or to obtain effect materials pre-configured according to network hotspots or holidays.
According to some embodiments of this application, the usage information includes: the last use time and/or the number of uses; and the target effect material acquisition module 620 may be further used to select the target effect material from the respective effect materials according to the usage information of each effect material in the material library through the following steps:
sorting the respective effect materials in the material library according to last use time to obtain a first sorting result;
sorting the respective effect materials in the material library according to number of uses to obtain a second sorting result; and
selecting the target effect material from the respective effect materials according to the first sorting result and/or the second sorting result.
According to some embodiments of this application, the dynamic image generation module 630 may be further used to read a single target effect material; configure the rendering timestamp of the target effect material; create the material texture corresponding to the target effect material; create the image texture corresponding to the image to be processed; and mix the material texture and the image texture and, in combination with the rendering timestamp, generate the dynamic image corresponding to the image to be processed.
According to some embodiments of this application, the video content generation module 650 may be further used to obtain audio data and decode the audio data to obtain pulse code modulation data; decode the image to be processed to obtain original image data and create a texture corresponding to the original image data; create a material texture corresponding to the target effect material; synthesize the texture corresponding to the original image data with the material texture corresponding to the target effect material to obtain a synthetic texture; and synthesize and encode the synthetic texture and the pulse code modulation data to generate the video content.
The specific details of each module or unit in the above apparatus have been described in detail in the corresponding method and are therefore not repeated here.
It should be noted that although several modules or units of the device for action execution are mentioned in the above detailed description, this division is not mandatory. In fact, according to the embodiments of this application, the features and functions of two or more modules or units described above may be embodied in one module or unit; conversely, the features and functions of one module or unit described above may be further divided and embodied by multiple modules or units.
In an exemplary embodiment of this application, an electronic device is also provided, including: a processor; and a memory for storing instructions executable by the processor; wherein the processor is configured to execute the executable instructions to implement the image processing method in the embodiments of this application.
Figure 7 is a schematic structural diagram of an electronic device in an embodiment of this application, in which the image processing solution according to the embodiments of the present disclosure can be implemented. It should be noted that the electronic device 700 shown in Figure 7 is only an example and should not impose any limitation on the functions and scope of use of the embodiments of this application.
As shown in Figure 7, the electronic device 700 includes a central processing unit (CPU) 701, which can perform various appropriate actions and processing according to a program stored in a read-only memory (ROM) 702 or a program loaded from a storage portion 708 into a random access memory (RAM) 703. In the RAM 703, various programs and data required for system operation are also stored. The central processing unit 701, the ROM 702, and the RAM 703 are connected to each other through a bus 704. An input/output (I/O) interface 705 is also connected to the bus 704.
The following components are connected to the I/O interface 705: an input portion 706 including a keyboard, a mouse, etc.; an output portion 707 including a cathode ray tube (CRT), a liquid crystal display (LCD), speakers, etc.; a storage portion 708 including a hard disk, etc.; and a communication portion 709 including a network interface card such as a local area network (LAN) card, a modem, etc. The communication portion 709 performs communication processing via a network such as the Internet. A drive 710 is also connected to the I/O interface 705 as needed. A removable medium 711, such as a magnetic disk, an optical disk, a magneto-optical disk, or a semiconductor memory, is installed on the drive 710 as needed, so that a computer program read from it is installed into the storage portion 708 as needed.
In particular, according to the embodiments of this application, the process described above with reference to the flow chart can be implemented as a computer software program. For example, the embodiments of this application include a computer program product that includes a computer program carried on a computer-readable medium, the computer program containing program code for executing the method shown in the flow chart. In such an embodiment, the computer program can be downloaded and installed from a network through the communication portion 709 and/or installed from the removable medium 711. When the computer program is executed by the central processing unit 701, the various functions defined in the apparatus of this application are executed.
In the embodiments of this application, a computer-readable storage medium is also provided, on which a computer program is stored; when the computer program is executed by a processor, the above image processing method is implemented.
It should be noted that the computer-readable storage medium shown in this application may be, for example, but is not limited to, an electrical, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any combination of the above. More specific examples of the computer-readable storage medium may include, but are not limited to: an electrical connection having one or more wires, a portable computer disk, a hard disk, a random access memory, a read-only memory, an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disk read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the above. In this application, the computer-readable storage medium may be any tangible medium that contains or stores a program that can be used by or in combination with an instruction execution system, apparatus, or device. The program code contained on the computer-readable storage medium may be transmitted using any appropriate medium, including but not limited to: wireless, wire, optical cable, radio frequency, etc., or any suitable combination of the above.
In the embodiments of this application, a computer program product is also provided; when the computer program product is run on a computer, it causes the computer to execute the above image processing method.
In the embodiments of this application, a computer program is also provided, the computer program including program code which, when executed by a processor, implements the image processing method described in the embodiments of the present disclosure.
It should be noted that, in this document, relational terms such as "first" and "second" are used only to distinguish one entity or operation from another and do not necessarily require or imply any such actual relationship or order between these entities or operations. Moreover, the terms "include", "comprise", or any other variants thereof are intended to cover non-exclusive inclusion, so that a process, method, article, or device that includes a series of elements includes not only those elements but also other elements not explicitly listed, or elements inherent to such a process, method, article, or device. Without further limitation, an element defined by the statement "includes a ..." does not exclude the existence of other identical elements in the process, method, article, or device that includes that element.
The above are only specific implementations of this application, enabling those skilled in the art to understand or implement this application. Various modifications to these embodiments will be obvious to those skilled in the art, and the general principles defined herein can be implemented in other embodiments without departing from the spirit or scope of this application. Therefore, this application will not be limited to the embodiments described herein, but is to be accorded the widest scope consistent with the principles and novel features disclosed herein.

Claims (12)

  1. An image processing method, the method comprising:
    obtaining an image to be processed;
    obtaining a target effect material matching the image to be processed;
    rendering the image to be processed based on the target effect material to generate a dynamic image corresponding to the image to be processed;
    automatically displaying the dynamic image; and
    in response to a click operation on the dynamic image, generating and displaying video content.
  2. The method according to claim 1, wherein obtaining an image to be processed comprises:
    when a preview operation for a currently captured image is detected, obtaining the currently captured image and using the currently captured image as the image to be processed;
    and the automatic display of the dynamic image comprises:
    automatically displaying the dynamic image in the preview interface of the currently captured image.
  3. The method according to claim 1, wherein obtaining an image to be processed comprises:
    when entry into an album interface is detected, obtaining a preset number of images closest to the current moment and using at least one of the obtained preset number of images as the image to be processed;
    and the automatic display of the dynamic image comprises:
    automatically displaying at least one dynamic image in the album interface.
  4. The method according to claim 1, wherein obtaining the target effect material matching the image to be processed comprises:
    selecting the target effect material from respective effect materials according to usage information of each effect material in a material library; and/or
    performing image recognition on the image to be processed to obtain feature information of the image to be processed;
    obtaining the target effect material matching the image to be processed based on the feature information of the image to be processed.
  5. The method according to claim 4, wherein the usage information comprises: the last use time and/or the number of uses; and
    selecting the target effect material from the respective effect materials according to the usage information of each effect material in the material library comprises:
    sorting the respective effect materials in the material library according to last use time to obtain a first sorting result;
    sorting the respective effect materials in the material library according to number of uses to obtain a second sorting result;
    selecting the target effect material from the respective effect materials according to the first sorting result and/or the second sorting result.
  6. The method according to claim 1, wherein rendering the image to be processed based on the target effect material to generate the dynamic image corresponding to the image to be processed comprises:
    reading a single target effect material;
    configuring a rendering timestamp of the target effect material;
    creating a material texture corresponding to the target effect material;
    creating an image texture corresponding to the image to be processed;
    mixing the material texture and the image texture and, in combination with the rendering timestamp, generating the dynamic image corresponding to the image to be processed.
  7. The method according to claim 1, wherein generating video content in response to a click operation on the dynamic image comprises:
    obtaining audio data and decoding the audio data to obtain pulse code modulation data;
    decoding the image to be processed to obtain original image data, and creating a texture corresponding to the original image data;
    creating a material texture corresponding to the target effect material;
    synthesizing the texture corresponding to the original image data with the material texture corresponding to the target effect material to obtain a synthetic texture;
    synthesizing and encoding the synthetic texture and the pulse code modulation data to generate video content.
  8. An image processing apparatus, the apparatus comprising:
    a to-be-processed image acquisition module, used to obtain an image to be processed;
    a target effect material acquisition module, used to obtain a target effect material matching the image to be processed;
    a dynamic image generation module, used to render the image to be processed based on the target effect material and generate a dynamic image corresponding to the image to be processed;
    a dynamic image display module, used to automatically display the dynamic image;
    a video content generation module, used to generate video content in response to a click operation on the dynamic image;
    a video content display module, used to display the video content.
  9. An electronic device, comprising: a processor and a memory, the memory storing computer programs and/or instructions which, when executed by the processor, implement the image processing method of any one of claims 1-7.
  10. A computer-readable storage medium, on which computer programs and/or instructions are stored, the computer programs and/or instructions, when executed by a processor, implementing the image processing method of any one of claims 1-7.
  11. A computer program product, comprising computer programs and/or instructions which, when run on a computer, cause the computer to execute the image processing method of any one of claims 1-7.
  12. A computer program, the computer program comprising program code which is executed by a processor to implement the image processing method of any one of claims 1-7.
PCT/CN2023/104745 2022-07-06 2023-06-30 Image processing method and apparatus, electronic device, medium, and program product WO2024007988A1 (zh)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN202210798152.5A CN117409108A (zh) 2022-07-06 2022-07-06 Image processing method and apparatus, electronic device, medium, and program product
CN202210798152.5 2022-07-06

Publications (1)

Publication Number Publication Date
WO2024007988A1 (zh)

Family

ID=89454367

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2023/104745 WO2024007988A1 (zh) 2022-07-06 2023-06-30 图像处理方法、装置、电子设备、介质及程序产品

Country Status (2)

Country Link
CN (1) CN117409108A (zh)
WO (1) WO2024007988A1 (zh)

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2002223332A (ja) * 2001-01-29 2002-08-09 Mitsubishi Electric Corp Image processing system, image processing method, and program
CN109168027A (zh) * 2018-10-25 2019-01-08 北京字节跳动网络技术有限公司 Instant video display method and apparatus, terminal device, and storage medium
CN110545476A (zh) * 2019-09-23 2019-12-06 广州酷狗计算机科技有限公司 Video synthesis method and apparatus, computer device, and storage medium
CN111787379A (zh) * 2020-07-06 2020-10-16 海信视像科技股份有限公司 Interaction method for generating a video highlights file, display device, and smart terminal
CN113852755A (zh) * 2021-08-24 2021-12-28 荣耀终端有限公司 Photographing method and device, computer-readable storage medium, and program product

Also Published As

Publication number Publication date
CN117409108A (zh) 2024-01-16

Similar Documents

Publication Publication Date Title
US9277198B2 (en) Systems and methods for media personalization using templates
CN101300567B (zh) Method for media sharing and authoring on the Web
US8667016B2 (en) Sharing of presets for visual effects or other computer-implemented effects
US9270926B2 (en) System and method for distributed media personalization
CN111294663B (zh) Bullet-comment processing method and apparatus, electronic device, and computer-readable storage medium
US10546010B2 (en) Method and system for storytelling on a computing device
US20100153520A1 (en) Methods, systems, and media for creating, producing, and distributing video templates and video clips
US20130272679A1 (en) Video Generator System
CN111935528B (zh) Video generation method and apparatus
CN107295377B (zh) Film production method, apparatus, and system
WO2019227429A1 (zh) Multimedia content generation method, apparatus, and device/terminal/server
US20210117471A1 (en) Method and system for automatically generating a video from an online product representation
US20180143741A1 (en) Intelligent graphical feature generation for user content
KR20210090273A (ko) Voice packet recommendation method, apparatus, device, and storage medium
US20140282000A1 (en) Animated character conversation generator
Jackson Digital video editing fundamentals
CN112802192B (zh) A real-time interactive three-dimensional graphics and image player
CN113590247B (zh) Text creation method and computer program product
US7610554B2 (en) Template-based multimedia capturing
EP4276828A1 (en) Integrated media processing pipeline
WO2024007988A1 (zh) Image processing method and apparatus, electronic device, medium, and program product
KR102099093B1 (ko) User-customized motion graphics video production system
WO2023207981A1 (zh) Configuration file generation method and apparatus, electronic device, medium, and program product
WO2024024727A1 (ja) Image processing device, image display device, image processing method, image display method, and program
RU2764375C1 (ru) Method for generating augmented- and virtual-reality images with the possibility of interaction within a virtual world containing virtual-world data

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 23834755

Country of ref document: EP

Kind code of ref document: A1