CN111951157A - Image processing method, apparatus and storage medium

Info

Publication number: CN111951157A
Application number: CN202010912720.0A
Authority: CN (China)
Prior art keywords: image, target, icon, original image, images
Legal status: Pending (assumed; not a legal conclusion)
Other languages: Chinese (zh)
Inventors: 张乐亨, 应贲
Current assignee: Shenzhen Microphone Holdings Co Ltd; Shenzhen Transsion Holdings Co Ltd
Original assignee: Shenzhen Microphone Holdings Co Ltd
Application filed by Shenzhen Microphone Holdings Co Ltd
Priority: CN202010912720.0A
Publication: CN111951157A

Classifications

    • G06T3/04
    • G: Physics
    • G06: Computing; Calculating or Counting
    • G06T: Image data processing or generation, in general
    • G06T7/00: Image analysis
    • G06T7/10: Segmentation; Edge detection
    • G06T2207/00: Indexing scheme for image analysis or image enhancement
    • G06T2207/10: Image acquisition modality
    • G06T2207/10016: Video; Image sequence

Abstract

The embodiments of the application provide an image processing method, an image processing device, and a storage medium. The method comprises the following steps: acquiring an original image; generating an image set from the original image, the image set comprising a first target image whose content is different from, but related to, that of the original image; and generating a video file from the original image and the image set. Through artificial intelligence techniques, the method automatically generates several new images that are similar in content to the original image but depict different time periods, or new images whose image style differs from that of the original image, and then automatically generates a new video stream from the original image and the new images. The resulting video stream is coherent and smooth, letting viewers experience a sense of time passing, and the whole image processing process requires no manual operation, saving time and labor.

Description

Image processing method, apparatus and storage medium
Technical Field
The present application relates to the field of image processing technologies, and in particular, to an image processing method, an image processing apparatus, and a storage medium.
Background
With the rapid development of internet technology and the wide adoption of smartphones, people no longer communicate online through text alone: pictures and short videos have emerged as a new and popular way of conveying information, greatly enriching online social life. Owing to their distinctive advantages as a medium, pictures and short videos have gradually become an important part of daily life and entertainment, which makes producing picture and short-video content all the more important.
However, most picture or short-video content is produced by reprocessing existing picture or video material to generate a new picture or a new video stream. This is usually done manually and is time-consuming and labor-intensive. Moreover, when original video material is lacking and only a single image is available, it is difficult to derive a coherent, smooth video stream from that image alone. Some solutions generate a new video stream by adding animation effects to the original image, but such effects do not substantially change the content of the original image, so it remains hard to achieve a coherent, smooth video.
The foregoing description is provided for general background information and is not admitted to be prior art.
Disclosure of Invention
To solve the above technical problems, embodiments of the present application disclose an image processing method, an image processing apparatus, and a storage medium. Several new images that are similar in content to the original image but depict different time periods are automatically generated by artificial intelligence techniques, and a new video stream is then automatically generated from the original image and these new images, so that the content of the video stream is coherent and smooth and viewers can experience a sense of time passing. Alternatively, a new image whose image style differs from that of the original image is generated automatically. The image processing requires no manual operation, saving time and labor.
In a first aspect, an embodiment of the present application discloses an image processing method, including:
acquiring an original image;
generating an image set according to the original image, wherein the image set comprises a first target image, and the content of the first target image is different from, but related to, the content of the original image;
and generating a video file according to the original image and the image set.
In this embodiment, an image set including a first target image is obtained by processing an original image, and a video file is obtained from the original image and the images in the image set. The image frames of the video file include the first target image, which is similar in content to the original image but depicts a different background period, along with several images similar to the first target image, so the resulting video file is coherent and smooth in content and viewers can experience a sense of time passing while browsing it. On the other hand, since a video is in essence a sequence of mutually related image frames, the first target image may also be obtained by applying only image style migration to the original image. Processing an image in this way can replace anachronistic objects in it or migrate the overall style of its picture, and the whole process requires no manual operation, saving time and labor.
In one possible implementation of the first aspect, the image frames of the video file comprise the original image and images of the set of images.
This embodiment further specifies the content of the resulting video file: besides the first target image obtained by processing the original image, the image frames of the video file include the original image and the images of the image set, which makes the generated video file more realistic and strengthens its association with the original image.
In a further possible implementation manner of the first aspect, the generating a set of images from the original image includes:
performing semantic segmentation processing on the original image to obtain a first candidate icon;
generating a first target icon by using an adversarial neural network model and the first candidate icon, wherein the background period of the first target icon is different from the background period of the first candidate icon;
fusing the first target icon and the original image using a fusion technique to generate the image set including the first target image, the first target image including the first target icon.
This embodiment details the process of obtaining the first target image from the original image: a first candidate icon is obtained by semantically segmenting the original image; an adversarial neural network model is used to generate a first target icon related to the first candidate icon; and the generated first target icon is fused with the original image to obtain the first target image. In this way an image set is generated that comprises several images similar to the first target image, which differ only in the local content associated with the first candidate icon and are otherwise identical. The video file generated from the images in this set is therefore more coherent and smooth, its association with the original image is stronger, it looks more realistic, and viewers can experience a sense of time passing while browsing it.
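As a rough illustration, the segmentation-generation-fusion pipeline described above might look like the following sketch. The `segment` and `generator` callables are hypothetical stand-ins, since the application names no concrete models, and OpenCV's Poisson blending is used here as one possible fusion technique:

```python
import cv2
import numpy as np

def make_first_target_image(original, segment, generator, period):
    # Semantic segmentation yields a bounding box for the first candidate
    # icon (e.g. a building). `segment` is a hypothetical callable.
    _, (x, y, w, h) = segment(original)
    candidate = original[y:y+h, x:x+w]

    # An adversarial network renders the same object as it would look in a
    # different background period. `generator` is likewise hypothetical.
    target_icon = generator(candidate, period)  # uint8 BGR patch, same size

    # Fuse the generated icon back into the scene; Poisson blending keeps
    # lighting and edges consistent with the surroundings.
    icon_mask = np.full(target_icon.shape[:2], 255, dtype=np.uint8)
    center = (x + w // 2, y + h // 2)
    return cv2.seamlessClone(target_icon, original, icon_mask,
                             center, cv2.NORMAL_CLONE)

# One first target image per background period yields the image set:
# image_set = [make_first_target_image(img, segment, gen, p) for p in periods]
```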
In another possible implementation manner of the first aspect, the generating a video file from the original image and the image set includes:
arranging, by using a frame interpolation technique, the original image and the images in the image set in the order of their background periods to generate the video file, wherein the image set comprises the first target image.
In this embodiment, the image set comprises the first target image and several images similar to it; each differs from the original image only in that the first candidate icon has been replaced with a first target icon of a different background period. The method uses a frame interpolation technique to arrange the original image and the images in the image set in the order of their background periods and generates the video file, so that the video file is coherent and smooth and viewers can experience a sense of time passing while browsing it.
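A minimal sketch of this ordering-and-interpolation step, assuming the frames are already sorted by background period; linear cross-fading stands in for whatever frame interpolation technique is intended, which the application does not specify:

```python
import cv2
import numpy as np

def write_timelapse(images, path, fps=30, n_interp=8):
    # `images` are same-size uint8 BGR frames already sorted by background
    # period (old to new, or new to old).
    h, w = images[0].shape[:2]
    out = cv2.VideoWriter(path, cv2.VideoWriter_fourcc(*"mp4v"), fps, (w, h))
    for a, b in zip(images, images[1:]):
        # Linear cross-fade as a stand-in for a learned frame interpolator.
        for t in np.linspace(0.0, 1.0, n_interp, endpoint=False):
            out.write(cv2.addWeighted(a, 1.0 - t, b, float(t), 0.0))
    out.write(images[-1])
    out.release()

# e.g. write_timelapse([original] + sorted_image_set, "timelapse.mp4")
```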
In a further possible implementation manner of the first aspect, the image set includes a second target image, and the content of the second target image is different from, but related to, the content of the original image and the content of the first target image; the generating of the set of images from the original image comprises:
performing semantic segmentation processing on the first target image to obtain a second candidate icon;
generating a second target icon by using the adversarial neural network model and the second candidate icon, wherein the background period of the second target icon is different from the background period of the second candidate icon;
fusing the second target icon with the first target image using a fusion technique to generate the image set including the second target image, the second target image including the second target icon.
This embodiment details the process of obtaining the second target image from the first target image: a second candidate icon is obtained by semantically segmenting the first target image; the adversarial neural network model is used to generate a second target icon related to the second candidate icon; and the generated second target icon is fused with the first target image to obtain the second target image. In this way an image set can be generated from the first target image that comprises several images similar to the second target image, which differ only in the local content associated with the second candidate icon and are otherwise identical. The video file generated from the images in this set is therefore more coherent and smooth, its association with the first target image is stronger, it looks more realistic, and viewers can experience a sense of time passing while browsing it.
In another possible implementation manner of the first aspect, the generating a video file from the original image and the image set includes:
arranging, by using the frame interpolation technique, the original image and the images in the image set in the order of their background periods to generate the video file, wherein the image set comprises the first target image and the second target image.
In this embodiment, the image set comprises the first target image and the second target image as well as several images similar to each. The images similar to the first target image differ from the original image only in that the first candidate icon has been replaced with a first target icon of a different background period; the images similar to the second target image differ from the first target image only in that the second candidate icon has been replaced with a second target icon of a different background period. The method uses the frame interpolation technique to arrange the original image and the images in the image set in the order of their background periods and generates the video file, which is not only coherent and smooth but also lets viewers experience a sense of time passing while browsing it.
In a further possible implementation of the first aspect, the set of images comprises a third target image whose image style is different from, but related to, the image style of the original image; the generating of the set of images from the original image comprises:
acquiring style migration data;
and performing image style migration on the original image according to the style migration data to obtain the image set comprising the third target image, wherein the image style of the third target image is the image style specified by the style migration data.
This embodiment describes the process of obtaining the third target image from the original image: style migration data is acquired, and image style migration is applied to the original image according to that data, yielding a third target image whose image style differs from the original image's and matches the style specified by the style migration data. In this way an image set can be generated from the original image that comprises several images similar to the third target image, identical to the original image except in image style. The video file generated from the images in this set is therefore more coherent and smooth, its association with the original image is stronger, it looks more realistic, and viewers can experience a sense of time passing while browsing it.
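The application does not say how the style migration itself is carried out; classic optimization-based neural style transfer (Gatys et al.) is one well-known way to realize it, sketched below for normalized [1, 3, H, W] tensors as an assumption:

```python
import torch
import torch.nn.functional as F
from torchvision import models

vgg = models.vgg19(weights=models.VGG19_Weights.DEFAULT).features.eval()
for p in vgg.parameters():
    p.requires_grad_(False)

def features(x, layers=(1, 6, 11, 20, 29)):  # relu1_1 .. relu5_1
    feats, h = [], x
    for i, layer in enumerate(vgg):
        h = layer(h)
        if i in layers:
            feats.append(h)
    return feats

def gram(f):  # batch size 1 assumed
    _, c, hgt, wdt = f.shape
    f = f.view(c, hgt * wdt)
    return f @ f.t() / (c * hgt * wdt)

def stylize(content, style, steps=300, style_weight=1e5):
    # `content` and `style` are normalized [1, 3, H, W] tensors.
    x = content.clone().requires_grad_(True)
    opt = torch.optim.Adam([x], lr=0.02)
    c_feats = features(content)
    s_grams = [gram(f) for f in features(style)]
    for _ in range(steps):
        opt.zero_grad()
        x_feats = features(x)
        loss = F.mse_loss(x_feats[3], c_feats[3])  # keep content (relu4_1)
        for xf, sg in zip(x_feats, s_grams):
            loss = loss + style_weight * F.mse_loss(gram(xf), sg)
        loss.backward()
        opt.step()
    return x.detach()
```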
In another possible implementation manner of the first aspect, the generating a video file from the original image and the image set includes:
arranging, by using the frame interpolation technique, the original image and the images in the image set in the order of their background periods to generate the video file, wherein the image set comprises the first target image and the third target image.
In this embodiment, the image set comprises the third target image and several images similar to it, which differ from the original image only in image style: each is obtained by migrating the original image to the image style specified by the style migration data. The method uses the frame interpolation technique to arrange the original image and the images in the image set in the order of their background periods and generates the video file, so that the video file is coherent and smooth and viewers can experience a sense of time passing while browsing it.
In another possible implementation manner of the first aspect, after obtaining the image set including the third target image, the method further includes:
performing semantic segmentation processing on the third target image to obtain a third candidate icon;
generating a third target icon by using an adversarial neural network model and the third candidate icon, wherein the background period of the third target icon is different from the background period of the third candidate icon and the same as the background period specified by the style migration data;
fusing the third target icon and the third target image by using a fusion technique to generate the image set comprising a fourth target image, wherein the fourth target image comprises the third target icon, the image style of the fourth target image is the same as that of the third target image, and the content of the fourth target image is different from, but related to, that of the third target image.
This embodiment explains the process of obtaining the fourth target image from the third target image: a third candidate icon is obtained by semantically segmenting the third target image; an adversarial neural network model is used to generate a third target icon related to the third candidate icon; and the generated third target icon is fused with the third target image to obtain the fourth target image. In this way an image set can be generated from the third target image that comprises several images similar to the fourth target image, which differ only in the local content associated with the third candidate icon and are otherwise identical. The video file generated from the images in this set is therefore more coherent and smooth, its association with the third target image is stronger, it looks more realistic, and viewers can experience a sense of time passing while browsing it.
In another possible implementation manner of the first aspect, the generating a video file from the original image and the image set includes:
arranging, by using the frame interpolation technique, the original image and the images in the image set in the order of their background periods to generate the video file, wherein the image set comprises the first target image, the third target image, and the fourth target image.
In this embodiment, the image set comprises the first target image, the third target image, and the fourth target image, as well as several images similar to each. The images similar to the first target image differ from the original image only in that the first candidate icon has been replaced with a first target icon of a different background period; the images similar to the third target image differ from the first target image only in image style, the style having been migrated to that specified by the style migration data; and the images similar to the fourth target image differ from the third target image only in that the third candidate icon has been replaced with a third target icon of a different background period. The frame interpolation technique is used to arrange the original image and the images in the image set in the order of their background periods and generate the video file, so that the video file is coherent and smooth and viewers can experience a sense of time passing while browsing it.
In a second aspect, an embodiment of the present application discloses another image processing method, including:
acquiring an original image and first style migration data;
and converting the original image into a target image, wherein the image style of the target image is the image style specified by the first style migration data and is different from the image style of the original image.
This embodiment provides another image processing method, namely a method of performing image style migration on an image: an original image and first style migration data are acquired, and the original image is converted, according to the first style migration data, into a target image whose image style matches the style specified by that data. The whole picture of the original image undergoes overall style migration, so the original image can take on a variety of distinctive artistic styles while retaining its original content, and the whole process requires no manual operation, saving time and labor.
In a possible implementation manner of the second aspect, the converting the original image into the target image includes:
performing semantic segmentation processing on the original image to obtain a first candidate icon set, wherein the first candidate icon set comprises at least one candidate icon, the at least one candidate icon comprises a first candidate icon, and the first candidate icon is a component of the original image;
acquiring a first target icon when the background period of the first candidate icon conflicts with the background period specified by the first style migration data, wherein the background period of the first target icon is consistent with the background period specified by the first style migration data;
fusing the first target icon and the original image by utilizing a fusion technology to generate a first image, wherein the first image comprises the first target icon;
and carrying out image style migration on the first image according to the first style migration data to generate the target image.
This embodiment refines the image style migration method: during style migration, the original image is semantically segmented, and it is judged whether the background period of a first candidate icon in the original image conflicts with the background period specified by the first style migration data. If it does, the first candidate icon is replaced with a first target icon to generate a new first image, and image style migration is then applied to the first image to obtain the target image. This avoids logical errors related to the background period of the image that style migration might otherwise produce, so the generated target image is more realistic and correct.
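A sketch of this conflict check under stated assumptions: every callable and attribute below (`segment`, `classify_period`, `get_target_icon`, `stylize`, `fuse`, `background_period`, `location`) is a hypothetical placeholder, since the application describes the steps without naming concrete models or APIs:

```python
def migrate_with_period_check(original, first_style, segment, classify_period,
                              get_target_icon, stylize, fuse):
    # First candidate icon set: the components of the original image.
    icons = segment(original)
    # Background period implied by the first style migration data,
    # e.g. "ancient" for a traditional ink-painting style.
    target_period = first_style.background_period
    image = original
    for icon in icons:
        # Conflict: the icon's period contradicts the style's period.
        if classify_period(icon) != target_period:
            replacement = get_target_icon(icon, target_period)  # GAN or database
            image = fuse(image, replacement, icon.location)     # the "first image"
    # Finally migrate the whole picture to the specified style.
    return stylize(image, first_style)
```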
In yet another possible implementation manner of the second aspect, the obtaining the first target icon includes:
generating the first target icon by using an adversarial neural network model and the first candidate icon, wherein the background period of the first target icon is different from the background period of the first candidate icon; or obtaining the first target icon from a database.
This embodiment provides several options for acquiring the first target icon: it may be generated by the adversarial neural network model, with a background period different from that of the first candidate icon, or it may be retrieved directly from a database as an existing icon whose background period differs from that of the first candidate icon.
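The two options might be combined as below; the `database.lookup` interface and the icon attributes are assumptions for illustration only:

```python
def get_target_icon(candidate, period, generator, database=None):
    # Option 1: look up an existing icon of the required background period
    # in a database. `database.lookup` is a hypothetical API.
    if database is not None:
        hit = database.lookup(kind=candidate.kind, period=period)
        if hit is not None:
            return hit
    # Option 2: synthesize one with the adversarial neural network model.
    return generator(candidate, period)
```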
In yet another possible embodiment of the second aspect, the method further comprises:
and when the background period of the candidate icon in the first candidate icon set does not conflict with the background period specified by the first style migration data, performing image style migration on the original image according to the first style migration data to generate the target image, wherein the image content of the target image is the same as the image content of the original image.
This embodiment describes style migration in the case where the background period of the first candidate icon in the original image does not conflict with the background period specified by the first style migration data: the whole picture of the original image undergoes overall style migration, so the original image takes on a variety of distinctive artistic styles while retaining its original content, and the whole process requires no manual operation, saving time and labor.
In yet another possible implementation manner of the second aspect, the image-style migrating the first image according to the first-style migration data includes:
obtaining second style migration data, wherein the second style migration data is different from the first style migration data;
dividing the first image into at least two regions, the at least two regions including a first region and a second region;
and carrying out image style migration on the first area of the first image according to the first style migration data, and carrying out image style migration on the second area of the first image according to the second style migration data to generate the target image.
This embodiment further extends the style migration method: second style migration data, different from the first style migration data, is acquired; the first image is divided into at least two regions; and different style migrations are applied to different regions, for example migrating the first region according to the first style migration data and the second region according to the second style migration data, to generate the target image. Migrating different regions of the first image to different styles lets the first image take on several distinctive artistic styles at once while retaining its original content, and the whole process requires no manual operation, saving time and labor.
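A toy sketch of the per-region migration, assuming a simple vertical half split (the application does not say how the regions are chosen) and a `stylize` callable such as the one sketched earlier:

```python
def stylize_by_region(first_image, style_a, style_b, stylize):
    # Split the array into two regions; a vertical half split is an assumption.
    h, w = first_image.shape[:2]
    out = first_image.copy()
    # First region migrated with the first style migration data,
    # second region with the second.
    out[:, : w // 2] = stylize(first_image[:, : w // 2], style_a)
    out[:, w // 2 :] = stylize(first_image[:, w // 2 :], style_b)
    return out
```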
In another possible implementation manner of the second aspect, the performing image style migration on the second area of the first image according to the second style migration data includes:
performing semantic segmentation on the second area of the first image to obtain a second candidate icon set, wherein the second candidate icon set comprises second candidate icons, and the second candidate icons are components of the second area;
acquiring a second target icon when the background period of the second candidate icon is in conflict with the background period specified by the second style migration data, wherein the background period of the second target icon is consistent with the background period specified by the second style migration data;
fusing the second target icon and the second area by utilizing a fusion technology to generate a target second area, wherein the target second area comprises the second target icon;
and carrying out image style migration on the target second area according to the second style migration data.
This embodiment refines the per-region style migration: the second region of the first image is semantically segmented, and it is judged whether the background period of a second candidate icon in that region conflicts with the background period specified by the second style migration data. If it does, the second candidate icon is replaced with a second target icon to generate a new target second region, and image style migration is then applied to the target second region according to the second style migration data to obtain the target image. This avoids logical errors related to the background period of the image that style migration might otherwise produce, so the generated target image is more realistic and correct.
In yet another possible implementation manner of the second aspect, the obtaining the second target icon includes:
generating the second target icon by using an adversarial neural network model and the second candidate icon, wherein the background period of the second target icon is different from the background period of the second candidate icon;
or, obtaining the second target icon from a database.
This embodiment provides several options for acquiring the second target icon: it may be generated by the adversarial neural network model, with a background period different from that of the second candidate icon, or it may be retrieved directly from a database as an existing icon whose background period differs from that of the second candidate icon.
In a third aspect, an embodiment of the present application discloses an electronic device for image processing, which includes a memory and a processor, where the memory stores a computer program, and when the computer program runs on the processor, the electronic device performs the method as described above.
In a fourth aspect, embodiments of the present application disclose a computer-readable storage medium having a computer program stored thereon, which, when run on one or more processors, performs the method as described above.
According to the embodiments of the application, a video file is obtained by processing an original image; the image frames of the video file include several target images obtained from the original image that are similar to it in content but depict different background periods, so the video file is coherent and smooth in content, its association with the original image is strengthened, it is more realistic and accurate, and viewers can experience a sense of time passing while browsing it. On the other hand, the original image can be converted, according to style migration data, into a target image whose image style matches the one specified by that data, so that the whole picture of the original image undergoes overall style migration and can take on a variety of distinctive artistic styles while retaining its original content. The whole image processing process requires no manual operation, saving time and labor.
Drawings
The accompanying drawings, which are incorporated in and constitute a part of this specification, illustrate embodiments consistent with the present application and, together with the description, serve to explain the principles of the application. To illustrate the technical solutions of the embodiments more clearly, the drawings used in describing the embodiments are briefly introduced below; those skilled in the art can derive other drawings from them without inventive effort.
Fig. 1 is a schematic hardware structure diagram of a mobile terminal implementing various embodiments of the present application;
fig. 2 is a communication network system architecture diagram according to an embodiment of the present application;
fig. 3 is a schematic flowchart of an image processing method according to an embodiment of the present application;
fig. 4a is a schematic flowchart of another image processing method according to an embodiment of the present application;
fig. 4b is a schematic diagram of a visual effect of image processing according to an embodiment of the present application;
fig. 4c is a schematic diagram of a visual effect of another image processing according to an embodiment of the present application;
fig. 4d is a schematic diagram of a visual effect of another image processing according to an embodiment of the present application;
fig. 5a is a schematic flowchart of another image processing method according to an embodiment of the present application;
fig. 5b is a schematic diagram of a visual effect of another image processing according to an embodiment of the present application;
fig. 6 is a schematic structural diagram of an image processing apparatus according to an embodiment of the present application;
fig. 7 is a schematic structural diagram of an image processing apparatus according to an embodiment of the present application.
The implementation, functional features, and advantages of the objectives of the present application will be further explained with reference to the accompanying drawings, which show specific embodiments of the present application described in more detail below. These drawings and the written description are not intended to limit the scope of the inventive concepts in any manner, but rather to illustrate them to those skilled in the art by reference to specific embodiments.
Detailed Description
It should be noted that, in this document, the terms "comprises," "comprising," or any other variation thereof are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements includes not only those elements but may also include other elements not expressly listed or inherent to such process, method, article, or apparatus. Without further limitation, the recitation of an element by the phrase "comprising a …" does not exclude the presence of additional identical elements in the process, method, article, or apparatus that comprises the element. Further, similarly named elements, features, or items in different embodiments of the disclosure may have the same meaning or different meanings; the particular meaning is determined by the interpretation in, or the further context of, the specific embodiment.
It should be understood that although the terms first, second, third, etc. may be used herein to describe various information, such information should not be limited by these terms, which are only used to distinguish one type of information from another. For example, first information may also be referred to as second information and, similarly, second information may also be referred to as first information, without departing from the scope herein. Depending on the context, the word "if" as used herein may be interpreted as "when …" or "while …" or "in response to a determination". Also, as used herein, the singular forms "a", "an" and "the" are intended to include the plural forms as well, unless the context indicates otherwise. It will be further understood that the terms "comprises," "comprising," "includes" and/or "including," when used in this specification, specify the presence of stated features, steps, operations, elements, components, items, species, and/or groups, but do not preclude the presence or addition of one or more other features, steps, operations, elements, components, items, species, and/or groups thereof. The terms "or" and "and/or" as used herein are to be construed as inclusive, meaning any one or any combination; thus, "A, B or C" or "A, B and/or C" means "any of the following: A; B; C; A and B; A and C; B and C; A, B and C". An exception to this definition occurs only when a combination of elements, functions, steps, or operations is inherently mutually exclusive in some way.
It should be understood that although the steps in the flowcharts of the embodiments of the present application are shown in the order indicated by the arrows, they are not necessarily performed in that order. Unless explicitly stated herein, the steps are not strictly limited to the order shown and may be performed in other orders. Moreover, at least some of the steps in the figures may comprise several sub-steps or stages that are not necessarily performed at the same moment but may be performed at different times and in different orders, alternately or in alternation with other steps or with sub-steps or stages of other steps.
It should be noted that step numbers such as 201 and 202 are used herein to describe the corresponding content more clearly and briefly and do not limit the order of execution; those skilled in the art may, in a specific implementation, perform 202 before 201, and such variations remain within the scope of the present application.
It should be understood that the specific embodiments described herein are merely illustrative of the present application and are not intended to limit the present application.
In the following description, suffixes such as "module", "component", or "unit" used to denote elements are used only for convenience of description of the present application and have no specific meaning in themselves. Thus, "module", "component", and "unit" may be used interchangeably.
The apparatus may be embodied in various forms. For example, the devices described in the present application may include mobile terminals such as mobile phones, tablet computers, notebook computers, palmtop computers, Personal Digital Assistants (PDAs), Portable Media Players (PMPs), navigation devices, wearable devices, smart bands, pedometers, and fixed terminals such as digital TVs, desktop computers, and the like.
The following description will be given taking a mobile terminal as an example, and it will be understood by those skilled in the art that the configuration according to the embodiment of the present application can be applied to a fixed type terminal in addition to elements particularly used for mobile purposes.
Referring to fig. 1, which is a schematic diagram of a hardware structure of a mobile terminal for implementing various embodiments of the present application, the mobile terminal 100 may include: RF (Radio Frequency) unit 101, WiFi module 102, audio output unit 103, a/V (audio/video) input unit 104, sensor 105, display unit 106, user input unit 107, interface unit 108, memory 109, processor 110, and power supply 111. Those skilled in the art will appreciate that the mobile terminal architecture shown in fig. 1 is not intended to be limiting of mobile terminals, which may include more or fewer components than those shown, or some components may be combined, or a different arrangement of components.
The following describes each component of the mobile terminal in detail with reference to fig. 1:
the radio frequency unit 101 may be configured to receive and transmit signals during information transmission and reception or during a call, and specifically, receive downlink information of a base station and then process the downlink information to the processor 110; in addition, the uplink data is transmitted to the base station. Typically, radio frequency unit 101 includes, but is not limited to, an antenna, at least one amplifier, a transceiver, a coupler, a low noise amplifier, a duplexer, and the like. In addition, the radio frequency unit 101 can also communicate with a network and other devices through wireless communication. The wireless communication may use any communication standard or protocol, including but not limited to GSM (global system for Mobile communications), GPRS (general packet Radio Service), CDMA2000(Code Division Multiple Access 2000), WCDMA (Wideband Code Division Multiple Access), TD-SCDMA (Time Division-Synchronous Code Division Multiple Access), FDD-LTE (Frequency Division duplex-Long Term Evolution), and TDD-LTE (Time Division duplex-Long Term Evolution).
WiFi is a short-range wireless transmission technology. Through the WiFi module 102, the mobile terminal can help a user receive and send e-mail, browse web pages, access streaming media, and the like, providing the user with wireless broadband internet access. Although fig. 1 shows the WiFi module 102, it is understood that it is not an essential component of the mobile terminal and may be omitted as needed within a scope that does not change the essence of the invention.
The audio output unit 103 may convert audio data received by the radio frequency unit 101 or the WiFi module 102 or stored in the memory 109 into an audio signal and output as sound when the mobile terminal 100 is in a call signal reception mode, a call mode, a recording mode, a voice recognition mode, a broadcast reception mode, or the like. Also, the audio output unit 103 may also provide audio output related to a specific function performed by the mobile terminal 100 (e.g., a call signal reception sound, a message reception sound, etc.). The audio output unit 103 may include a speaker, a buzzer, and the like.
The a/V input unit 104 is used to receive audio or video signals. The a/V input unit 104 may include a Graphics Processing Unit (GPU) 1041 and a microphone 1042. The graphics processor 1041 processes image data of still pictures or video obtained by an image capturing device (e.g., a camera) in a video capturing mode or an image capturing mode. The processed image frames may be displayed on the display unit 106, stored in the memory 109 (or another storage medium), or transmitted via the radio frequency unit 101 or the WiFi module 102. The microphone 1042 may receive sounds (audio data) in a phone call mode, a recording mode, a voice recognition mode, or the like, and can process such sounds into audio data. In the phone call mode, the processed audio (voice) data may be converted into a format transmittable to a mobile communication base station via the radio frequency unit 101. The microphone 1042 may implement various types of noise cancellation (or suppression) algorithms to cancel (or suppress) noise or interference generated in the course of receiving and transmitting audio signals.
The mobile terminal 100 also includes at least one sensor 105, such as a light sensor, a motion sensor, and other sensors. Specifically, the light sensor includes an ambient light sensor that may optionally adjust the brightness of the display panel 1061 according to the brightness of ambient light, and a proximity sensor that may turn off the display panel 1061 and/or the backlight when the mobile terminal 100 is moved to the ear. As one of the motion sensors, the accelerometer sensor can detect the magnitude of acceleration in each direction (generally, three axes), can detect the magnitude and direction of gravity when stationary, and can be used for applications of recognizing the posture of a mobile phone (such as horizontal and vertical screen switching, related games, magnetometer posture calibration), vibration recognition related functions (such as pedometer and tapping), and the like; as for other sensors such as a fingerprint sensor, a pressure sensor, an iris sensor, a molecular sensor, a gyroscope, a barometer, a hygrometer, a thermometer, and an infrared sensor, which can be configured on the mobile phone, further description is omitted here.
The display unit 106 is used to display information input by a user or information provided to the user. The display unit 106 may include a display panel 1061, and the display panel 1061 may be configured in the form of a Liquid Crystal Display (LCD), an Organic Light-Emitting Diode (OLED), or the like.
The user input unit 107 may be used to receive input numeric or character information and generate key signal inputs related to user settings and function control of the mobile terminal. Specifically, the user input unit 107 may include a touch panel 1071 and other input devices 1072. The touch panel 1071, also referred to as a touch screen, may collect a touch operation performed by a user on or near the touch panel 1071 (e.g., an operation performed by the user on or near the touch panel 1071 using a finger, a stylus, or any other suitable object or accessory), and drive a corresponding connection device according to a predetermined program. The touch panel 1071 may include two parts of a touch detection device and a touch controller. Optionally, the touch detection device detects a touch orientation of a user, detects a signal caused by a touch operation, and transmits the signal to the touch controller; the touch controller receives touch information from the touch sensing device, converts the touch information into touch point coordinates, sends the touch point coordinates to the processor 110, and can receive and execute commands sent by the processor 110. In addition, the touch panel 1071 may be implemented in various types, such as a resistive type, a capacitive type, an infrared ray, and a surface acoustic wave. In addition to the touch panel 1071, the user input unit 107 may include other input devices 1072. In particular, other input devices 1072 may include, but are not limited to, one or more of a physical keyboard, function keys (e.g., volume control keys, switch keys, etc.), a trackball, a mouse, a joystick, and the like, and are not limited to these specific examples.
Further, the touch panel 1071 may cover the display panel 1061, and when the touch panel 1071 detects a touch operation thereon or nearby, the touch panel 1071 transmits the touch operation to the processor 110 to determine the type of the touch event, and then the processor 110 provides a corresponding visual output on the display panel 1061 according to the type of the touch event. Although the touch panel 1071 and the display panel 1061 are shown in fig. 1 as two separate components to implement the input and output functions of the mobile terminal, in some embodiments, the touch panel 1071 and the display panel 1061 may be integrated to implement the input and output functions of the mobile terminal, and is not limited herein.
The interface unit 108 serves as an interface through which at least one external device is connected to the mobile terminal 100. For example, the external device may include a wired or wireless headset port, an external power supply (or battery charger) port, a wired or wireless data port, a memory card port, a port for connecting a device having an identification module, an audio input/output (I/O) port, a video I/O port, an earphone port, and the like. The interface unit 108 may be used to receive input (e.g., data information, power, etc.) from external devices and transmit the received input to one or more elements within the mobile terminal 100 or may be used to transmit data between the mobile terminal 100 and external devices.
The memory 109 may be used to store software programs as well as various data. The memory 109 may mainly include a program storage area and a data storage area, and optionally, the program storage area may store an operating system, an application program (such as a sound playing function, an image playing function, and the like) required by at least one function, and the like; the storage data area may store data (such as audio data, a phonebook, etc.) created according to the use of the cellular phone, and the like. Further, the memory 109 may include high speed random access memory, and may also include non-volatile memory, such as at least one magnetic disk storage device, flash memory device, or other volatile solid state storage device.
The processor 110 is a control center of the mobile terminal, connects various parts of the entire mobile terminal using various interfaces and lines, and performs various functions of the mobile terminal and processes data by operating or executing software programs and/or modules stored in the memory 109 and calling data stored in the memory 109, thereby performing overall monitoring of the mobile terminal. Processor 110 may include one or more processing units; preferably, the processor 110 may integrate an application processor and a modem processor, optionally, the application processor mainly handles operating systems, user interfaces, application programs, etc., and the modem processor mainly handles wireless communications. It will be appreciated that the modem processor described above may not be integrated into the processor 110.
The mobile terminal 100 may further include a power supply 111 (e.g., a battery) for supplying power to various components, and preferably, the power supply 111 may be logically connected to the processor 110 via a power management system, so as to manage charging, discharging, and power consumption management functions via the power management system.
Although not shown in fig. 1, the mobile terminal 100 may further include a bluetooth module or the like, which is not described in detail herein.
In order to facilitate understanding of the embodiments of the present application, a communication network system on which the mobile terminal of the present application is based is described below.
Referring to fig. 2, fig. 2 is an architecture diagram of a communication network system according to an embodiment of the present application. The communication network system is an LTE system of universal mobile telecommunications technology, comprising a UE (User Equipment) 201, an E-UTRAN (Evolved UMTS Terrestrial Radio Access Network) 202, an EPC (Evolved Packet Core) 203, and an operator's IP services 204, which are communicatively connected in sequence.
Specifically, the UE201 may be the terminal 100 described above, and is not described herein again.
The E-UTRAN 202 includes an eNodeB 2021 and other eNodeBs 2022, among others. Optionally, the eNodeB 2021 may be connected with the other eNodeBs 2022 through a backhaul (e.g., an X2 interface); the eNodeB 2021 is connected to the EPC 203 and may provide the UE 201 with access to the EPC 203.
The EPC 203 may include an MME (Mobility Management Entity) 2031, an HSS (Home Subscriber Server) 2032, other MMEs 2033, an SGW (Serving Gateway) 2034, a PGW (PDN Gateway) 2035, a PCRF (Policy and Charging Rules Function) 2036, and the like. Optionally, the MME 2031 is a control node that handles signaling between the UE 201 and the EPC 203, providing bearer and connection management. The HSS 2032 provides registers for managing functions such as a home location register (not shown) and holds subscriber-specific information about service characteristics, data rates, and the like. All user data may be sent through the SGW 2034; the PGW 2035 may provide IP address assignment for the UE 201 and other functions; and the PCRF 2036 is the policy and charging control decision point for traffic data flows and IP bearer resources, selecting and providing available policy and charging control decisions for a policy and charging enforcement function (not shown).
The IP services 204 may include the internet, intranets, IMS (IP Multimedia Subsystem), or other IP services, among others.
Although the LTE system is described as an example, it should be understood by those skilled in the art that the present application is not limited to the LTE system, but may also be applied to other wireless communication systems, such as GSM, CDMA2000, WCDMA, TD-SCDMA, and future new network systems.
Based on the above mobile terminal hardware structure and communication network system, various embodiments of the present application are provided.
In order to more clearly describe the scheme of the present application, some knowledge related to image processing is introduced below.
Image semantic segmentation: semantic segmentation is a very important field in computer vision. It refers to recognizing an image at the pixel level, i.e., marking each pixel in the image with the class of object it belongs to. For example, in a picture the pixels belonging to people form one class, the pixels belonging to a motorcycle form another class, and the background pixels form yet another class. Semantic segmentation differs from instance segmentation: if a photo contains several people, semantic segmentation only needs to assign all person pixels to one class, whereas instance segmentation further assigns the pixels of different people to different classes; that is, instance segmentation goes a step beyond semantic segmentation. At the macroscopic level, semantic segmentation is a high-level task that paves the way toward complete scene understanding. Scene understanding is a core computer vision problem, and ever more applications draw on it by inferring knowledge from images, including autonomous driving, human-computer interaction, and virtual reality. In recent years, with the rise of deep learning, many semantic segmentation problems have been tackled with deep architectures, most commonly convolutional neural networks, which have greatly advanced the field and far exceed other methods in accuracy and efficiency.
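For concreteness, a pretrained segmentation network is one common way to obtain the pixel-level labels described above; the sketch below uses torchvision's DeepLabV3, which the application itself does not mandate, and a placeholder file name:

```python
import torch
from torchvision import models, transforms
from PIL import Image

model = models.segmentation.deeplabv3_resnet50(
    weights=models.segmentation.DeepLabV3_ResNet50_Weights.DEFAULT).eval()
preprocess = transforms.Compose([
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406],
                         std=[0.229, 0.224, 0.225]),
])

img = Image.open("photo.jpg").convert("RGB")      # placeholder file name
with torch.no_grad():
    logits = model(preprocess(img).unsqueeze(0))["out"]  # [1, 21, H, W]
labels = logits.argmax(1).squeeze(0)  # per-pixel class id (person, bike, ...)
```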
Adversarial neural networks: an adversarial neural network, also known as a generative adversarial network, is not two machines competing for its own sake, but a network program that strengthens artificial intelligence through a special form of contest. It consists mainly of two parts. The first is a generation network, which makes the machine generate new content based on what it has seen. The second is a discrimination network, which judges the fake images produced by the generation network: it can accurately judge whether a generated image is real and how similar it is to real objects, and it feeds this back to the generation network, which then tries again to produce a fake image. As this process repeats, the images produced by the generation network become more and more lifelike, until the discrimination network can no longer tell them apart; at that point the generated images are realistic enough to pass for genuine. In plain terms, adversarial neural networks let machines produce results convincing enough to fool humans. They are an emerging artificial intelligence technology that still needs development, but even now the speech and pictures they generate can pass for real, which is proof enough of the technology's great potential.
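The generate-then-discriminate loop described above corresponds to the standard GAN training step sketched below; this is a generic illustration, as the application does not specify network architectures, and it assumes the generator maps noise to images while the discriminator outputs one logit per sample:

```python
import torch
import torch.nn as nn

def gan_step(G, D, real, opt_g, opt_d, z_dim=100):
    # Assumes G maps noise [B, z_dim] to images and D maps images to
    # a single logit per sample, shape [B, 1].
    bce = nn.BCEWithLogitsLoss()
    b = real.size(0)
    z = torch.randn(b, z_dim, device=real.device)
    fake = G(z)

    # Discrimination network: tell real images from generated ones.
    opt_d.zero_grad()
    d_loss = (bce(D(real), torch.ones(b, 1, device=real.device)) +
              bce(D(fake.detach()), torch.zeros(b, 1, device=real.device)))
    d_loss.backward()
    opt_d.step()

    # Generation network: try to fool the discriminator.
    opt_g.zero_grad()
    g_loss = bce(D(fake), torch.ones(b, 1, device=real.device))
    g_loss.backward()
    opt_g.step()
    return d_loss.item(), g_loss.item()
```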
Image fusion: image fusion means processing image data of the same target collected from multiple source channels, using image processing, computer techniques, and the like, to extract as much of the useful information in each channel as possible and finally synthesize a high-quality image. It raises the utilization of image information, improves the accuracy and reliability of computer interpretation, and enhances the spatial and spectral resolution of the original images, which facilitates monitoring. As a branch of information fusion, image fusion is a hot spot of current information fusion research. The data in image fusion takes the form of images containing scene features such as brightness, color, temperature, and distance; these images may be presented as a single frame or as a sequence. The aim of image fusion is to combine the relevant information as fully as possible for the practical application at hand while reducing the uncertainty and redundancy of the output. Its clear advantages are enlarging the spatial and temporal information contained in the image, reducing uncertainty, increasing reliability, and improving the robustness of the system.
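As a toy instance of the idea, two registered images of the same scene can be fused by simple weighting; real systems would use richer schemes such as pyramid or Poisson fusion, and the file names below are placeholders:

```python
import cv2
import numpy as np

a = cv2.imread("visible.jpg").astype(np.float32)    # placeholder file names;
b = cv2.imread("infrared.jpg").astype(np.float32)   # assumed registered, same size
fused = cv2.addWeighted(a, 0.6, b, 0.4, 0.0)        # fixed weights for the sketch
cv2.imwrite("fused.png", np.clip(fused, 0, 255).astype(np.uint8))
```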
The embodiments of the present application will be described below with reference to the drawings.
Referring to fig. 3, fig. 3 is a schematic flowchart of an image processing method according to an embodiment of the present application, where the method includes, but is not limited to, the following steps:
Step 301: An image processing apparatus acquires an original image.
The image processing device may be a network-connected intelligent terminal such as a mobile phone, a computer, or a tablet. A user may photograph or upload an original image in PNG, JPEG, GIF, or another format to the image processing device, and after obtaining the original image, the device processes it according to the user's requirements.
Step 302: an image set is generated from the original image, the image set including a first target image, the content of the first target image being different from and having an association with the original image.
If a user wishes to obtain a video associated with an original image by providing that image, the image processing device performs the corresponding operations on it. Specifically, it generates an image set from the original image, where the image set includes a plurality of first target images that are similar to the original image in content but lie in different background periods, and the device then generates a video file from the original image and the images in the set, ordered by the background periods in which they lie. In more detail, to generate a first target image from the original image, the image processing device may perform semantic segmentation on the original image to obtain a plurality of regions, each region corresponding to an icon on the original image, and select the icon of one region as a first candidate icon; for example, the first candidate icon may be a building. It then generates a plurality of first target icons using the antagonistic neural network model, where the first target icons have the same content as the first candidate icon but lie in different background periods; for example, each first target icon is the same building as the first candidate icon but from a different era, i.e., with a different degree of newness. Finally, it fuses each first target icon with the original image using an image fusion technique to obtain the corresponding first target images, where any one first target image differs from the original image in the degree of newness of the building it shows.
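The flow just described can be summarized in pseudocode; every helper name below (semantic_segmentation, pick_region, gan_generate, fuse) is hypothetical shorthand for the techniques introduced earlier, not a real API.

```python
# Pseudocode sketch of step 302: segment, generate era-shifted icons with a
# GAN, and fuse each one back into the original image.
def generate_image_set(original, num_periods=10):
    regions = semantic_segmentation(original)           # pixel-level regions
    candidate = pick_region(regions, label="building")  # first candidate icon

    image_set = []
    for period in range(num_periods):
        # Same building, different degree of newness (background period).
        target_icon = gan_generate(candidate, background_period=period)
        # Replace the candidate icon with the generated one by fusion.
        image_set.append(fuse(original, candidate, target_icon))
    return image_set
```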
Step 303: A video file is generated from the original image and the image set.
An image set is obtained through step 302. In this step, the plurality of first target images in the image set and the original image can be arranged by the background periods of the images using a frame interpolation technique; that is, the images are ordered by the degree of newness of the buildings they show, either from new to old or from old to new, to generate a complete video file. A video file obtained in this way is continuous and smooth in content, its association with the original image is strengthened, and it is more authentic and accurate, so that a person browsing the video file can experience the feeling of time elapsing, which satisfies the user's requirements well.
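A minimal sketch of this ordering-and-assembly step follows; a simple cross-fade stands in for a learned frame interpolation model, and the codec, frame rate, and paths are assumptions for illustration.

```python
# Sketch of step 303: order the images by background period and write a
# video file whose transitions are continuous and smooth.
import cv2
import numpy as np

def images_to_video(frames, path="timelapse.mp4", fps=25, blend_steps=12):
    """frames: list of same-size BGR uint8 images, already sorted by
    background period (e.g. building from new to old)."""
    h, w = frames[0].shape[:2]
    writer = cv2.VideoWriter(path, cv2.VideoWriter_fourcc(*"mp4v"), fps, (w, h))
    for a, b in zip(frames, frames[1:]):
        for t in np.linspace(0.0, 1.0, blend_steps, endpoint=False):
            # Intermediate frames approximate frame interpolation.
            writer.write(cv2.addWeighted(a, 1.0 - t, b, t, 0.0))
    writer.write(frames[-1])
    writer.release()
```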
On the other hand, by providing one original image the user can also obtain third target images with a variety of distinctive artistic styles, where the image style of a third target image is different from, yet associated with, the image style of the original image. The image processing device may then arrange the original image and the plurality of first target images and third target images in the image set by the background periods of the images using the frame interpolation technique and generate the video file. Specifically, the image processing device may convert the original image into a third target image according to style migration data, which may be provided by the user or obtained by the device from a database; the image style of the converted third target image is the style specified by the style migration data. A third target image obtained in this way applies an overall style migration to the whole picture of the original image, so the result retains the original content while taking on a variety of distinctive artistic styles, and the entire image processing process requires no manual operation, saving time and labor.
Further, after a third target image is generated from the original image, it can be processed further. The image processing device may perform semantic segmentation on the third target image to obtain a plurality of regions, each region corresponding to an icon on the third target image, and these icons form a candidate icon set. If the background period of a candidate icon in the set (which may be called a third candidate icon) conflicts with the background period specified by the style migration data (for example, a candidate icon showing a car from the nineties conflicts with an eighties background period specified by the style migration data), the image processing apparatus acquires a third target icon that has the same content as the third candidate icon but a different background period; in this example, the acquired third target icon would be a car from the eighties. The device then fuses the third target icon with the third target image using an image fusion technique to generate a fourth target image, which differs from the third target image in the background period of the car it shows, the car on the fourth target image now being consistent with the background period specified by the style migration data. Finally, the original image and the plurality of first target images, third target images, and fourth target images in the image set are arranged by the background periods of the images using the frame interpolation technique, and a video file can be generated. The video file obtained in this way is coherent and smooth in content, so a person can experience the feeling of time elapsing through the video stream; the image frames in the video file apply an overall style migration to the whole picture of the original image, so the original content is retained while taking on a variety of distinctive artistic styles, and the entire image processing process requires no manual operation, saving time and labor.
Referring to fig. 4a, fig. 4a is a schematic flowchart of another image processing method according to an embodiment of the present application, where the method includes, but is not limited to, the following steps:
Step 401: An image processing apparatus acquires an original image.
Consistent with step 301 described above.
Step 402: Semantic segmentation processing is performed on the original image to obtain a first candidate icon.
The image processing device performs semantic segmentation on the original image using an image semantic segmentation technique to obtain a plurality of regions, each region corresponding to an icon on the original image. Referring to fig. 4b, fig. 4b is a schematic diagram of a visual effect of image processing according to an embodiment of the present application. As shown in fig. 4b, semantically segmenting a street view image yields icons such as buildings, pedestrians, roads, vehicles, street trees, and street lamps. To obtain a video associated with the original image, with the effect that a person browsing the video experiences the feeling of time elapsing, one of the icons is selected as the first candidate icon for the subsequent image processing operations.
Step 403: a first target icon is generated using the antagonistic neural network model and the first candidate icon.
The selected first candidate icon serves as the input of the antagonistic neural network model; for example, a building in the original image is selected as the first candidate icon. The generation network in the antagonistic neural network generates new content from the building; the generated content is different from but associated with the first candidate icon, and may be a building of a similar style but a different degree of newness. The discrimination network in the antagonistic neural network then examines the fake building produced by the generation network: it accurately judges whether the generated building is real and how similar it is to the building serving as the first candidate icon, and feeds the judgement back to the generation network, which then simulates again to produce a new fake icon from the feedback and the first candidate icon. As this process repeats, the icons produced by the generation network come ever closer to the first candidate icon and become ever more lifelike, until the discrimination network can no longer identify them as fake; at that point, the newly generated icon is the first target icon.
Similarly, following this method of generating a first target icon from the antagonistic neural network model and the first candidate icon, a plurality of different first target icons can be generated. The contents of the first target icons differ from, but are associated with, the first candidate icon; that is, the first target icons and the first candidate icon may all be buildings, but the degree of newness of the building in each first target icon differs from that presented by the first candidate icon.
Step 404: The first target icon and the original image are fused using a fusion technique to generate a first target image.
The image processing device fuses the generated first target icon with the original image using an image fusion technique. The detailed implementation of the fusion is within the reach of a person skilled in the art and is not described here; outwardly, the effect is that the first candidate icon in the original image is replaced by the generated first target icon, producing a new first target image. The content of the first target image differs from, but is associated with, that of the original image; that is, the building in the first target image is the same building as in the original image, but its degree of newness is different.
Similarly, following this method of fusing a first target icon with the original image to generate a first target image, a plurality of different first target images can be generated; since a plurality of different first target icons can be obtained in step 403, in this step each first target icon can be fused with the original image to yield a different first target image. The contents of the first target images differ from, but are associated with, the original image; that is, they all show buildings, but the degree of newness of the building in each first target image differs from that presented by the original image, i.e., the original image and the first target images lie in different background periods.
At this point, the image processing apparatus may arrange the original image and the images in the image set by the background periods in which the images lie, using a frame interpolation technique, to generate a video file. The image set includes a plurality of images similar to the first target image, which differ from the original image only in the first candidate icon, the first candidate icon having been replaced by a first target icon of a different background period. Arranging the original image and the images in the image set in background-period order with the frame interpolation technique produces a video file that is continuous and smooth, so a person browsing it can experience the feeling of time elapsing.
Step 405: Semantic segmentation processing is performed on the first target image to obtain a second candidate icon.
Further, after the image set including the plurality of images similar to the first target image is obtained in step 404, the image processing apparatus need not immediately generate the video file from the original image and the images in the set arranged by background period. To give the finally generated video file more fluency, continuity, and realism, the image processing device may continue to process the image set, for example by performing semantic segmentation on a first target image in the set to obtain a plurality of regions, each region corresponding to an icon on the first target image, these icons including the first target icon. Here, any icon other than the first target icon may be selected as the second candidate icon for the subsequent image processing operations, such as a person standing in front of the building.
Step 406: A second target icon is generated using the antagonistic neural network model and the second candidate icon.
Similarly to step 403 above, the selected second candidate icon serves as the input of the antagonistic neural network model; for example, the person standing in front of the building in the first target image is selected as the second candidate icon. The generation network in the antagonistic neural network generates new content from the person; the generated content is different from but associated with the second candidate icon, and may be the same person at a different age. The discrimination network in the antagonistic neural network examines the fake person produced by the generation network: it accurately judges whether the generated person is real and how similar it is to the person serving as the second candidate icon, then feeds the judgement back to the generation network, which simulates again to produce a new fake icon from the feedback and the second candidate icon. As this process repeats, the icons produced by the generation network come ever closer to the second candidate icon and become ever more lifelike, until the discrimination network can no longer identify them as fake; at that point, the newly generated icon is the second target icon, presenting a person with the same appearance as the second candidate icon but at a different stage of life.
Similarly, following this method of generating a second target icon from the antagonistic neural network model and the second candidate icon, a plurality of different second target icons can be generated. The contents of the second target icons differ from, but are associated with, the second candidate icon; that is, the second target icons and the second candidate icon show the same person, but the stage of life represented by each second target icon differs from that represented by the second candidate icon.
Step 407: The second target icon and the first target image are fused using a fusion technique to generate a second target image.
The image processing device fuses the generated second target icon with the first target image using an image fusion technique; outwardly, the effect is that the second candidate icon in the first target image is replaced by the generated second target icon, producing a new second target image. The content of the second target image differs from, but is associated with, that of the first target image; that is, the person in the second target image and the person in the first target image are the same person at different stages of life.
Similarly, following this method of fusing a second target icon with the first target image to generate a second target image, a plurality of different second target images can be generated; since a plurality of different second target icons can be obtained in step 406, in this step each second target icon is fused with the first target image to yield a different second target image. The contents of the second target images differ from, but are associated with, the first target image; that is, they show the same person, but the stage of life represented in each second target image differs from that in the first target image, i.e., the first target image and the plurality of second target images lie in different background periods.
Step 408: The original image and the images in the image set are arranged by the background periods of the images using a frame interpolation technique to generate a video file.
At this point, the image processing apparatus may arrange the original image and the images in the image set by the background periods in which the images lie, using the frame interpolation technique, to generate a video file. The image set includes a plurality of images similar to the first target image and to the second target image: the images similar to the first target image differ from the original image in the first candidate icon, which has been replaced by a first target icon of a different background period, and the images similar to the second target image differ from the first target image in the second candidate icon, which has been replaced by a second target icon of a different background period. Arranging the original image and the images in the image set in background-period order with the frame interpolation technique yields a video file that is coherent and smooth, strengthens its association with the original image, makes it more authentic and accurate, and lets a person browsing it experience the feeling of time elapsing.
On the other hand, the image processing method provided in fig. 4a also corresponds to a visual effect diagram; refer to fig. 4c, which is a schematic view of a visual effect of another image processing provided in an embodiment of the present application. As shown in fig. 4c, image A0 is an original image provided by a user or acquired by an image processing device, and the image processing device processes it by the image processing method provided in fig. 3 or fig. 4a to obtain the target image An. Comparing the original image A0 with the target image An, the approximate contents of the two images are the same, being composed of main icons such as buildings, people, and roads; the difference is that the buildings in A0 and the buildings in An differ markedly in degree of newness. In human terms, the background periods of A0 and An are different, that of A0 lying earlier than that of An. Similarly, processing the original image by the image processing method provided in fig. 3 or fig. 4a yields a plurality of images similar to the target image, such as image A1 and image A(n-1), and the newly generated images constitute an image set. The image processing apparatus arranges the original image A0 and the images in the set (A1, ..., A(n-1), An) by the background periods in which they lie, using the frame interpolation technique, to generate one video file. Through the differing degrees of newness of the building across the images, the video file simulates the dynamic process of the building gradually changing from new to old; the video file is continuous and smooth, its association with the original image is strengthened, it is more authentic and accurate, and a person browsing it can experience the feeling of time elapsing.
Specifically, referring to fig. 4d, fig. 4d is a schematic view of a visual effect of another image processing provided by an embodiment of the present application. As shown in fig. 4d, image B0 is an original image provided by a user or obtained by an image processing device, and the image processing device processes it by the image processing method provided in fig. 3 or fig. 4a to obtain the target image Bn. Comparing the original image B0 with the target image Bn, the two images are the same in general content, being composed of primary icons such as the pavilion, the trees, people, and the river; the difference is that the pavilion and trees in B0 and those in Bn are clearly in different seasons, B0 showing spring and Bn showing winter. Similarly, processing the original image by the image processing method provided in fig. 3 or fig. 4a yields a plurality of images similar to the target image, such as image B1 and image B(n-1); it can be seen that B1 shows summer and B(n-1) shows autumn. The newly generated images form an image set, and the image processing apparatus arranges the original image B0 and the images in the set (B1, ..., B(n-1), Bn) by the background periods in which they lie, using the frame interpolation technique, to generate a video file. Through the differing states of the pavilion and trees across the images, the video file simulates the dynamic process of time changing through the four seasons; the video file is smooth, its association with the original image is strengthened, it is more authentic and accurate, and a person browsing it can experience the feeling of time elapsing.
Referring to fig. 5a, fig. 5a is a schematic flowchart of another image processing method according to an embodiment of the present application, where the method includes, but is not limited to, the following steps:
Step 501: The image processing apparatus acquires an original image and first style migration data.
The image processing device may be a network-connected intelligent terminal such as a mobile phone, a computer, or a tablet. A user may photograph or upload an original image in PNG, JPEG, GIF, or another format to the image processing device, and after obtaining the original image, the device processes it according to the user's requirements. The image processing apparatus also acquires first style migration data, which specifies an image style different from the image style of the original image.
Step 502: Semantic segmentation processing is performed on the original image to obtain a first candidate icon set comprising a first candidate icon.
The image processing device performs semantic segmentation on the original image using an image semantic segmentation technique to obtain a plurality of regions, each region corresponding to an icon on the original image. Referring to fig. 4b, fig. 4b is a schematic diagram of a visual effect of image processing according to an embodiment of the present application. As shown in fig. 4b, semantically segmenting a street view image yields icons of buildings, pedestrians, roads, vehicles, street trees, street lamps, and the like; these icons form the first candidate icon set, which includes a first candidate icon, for example a vehicle.
Step 503: it is determined whether the background period in which the first candidate icon is located conflicts with the background period specified by the first style migration data.
The first candidate icon is obtained by the semantic segmentation processing of the original image in the preceding step, and the image processing device judges whether the background period in which the first candidate icon lies conflicts with the background period specified by the first style migration data. For example, take the first candidate icon to be a vehicle produced in the eighties while the background period specified by the first style migration data is a sixties style: a vehicle produced in the eighties is an obvious logical error inside a sixties image style, since such a vehicle could not have appeared there, so the two conflict. The first style migration data may be provided by the user or acquired by the image processing device from a database. If the judgement is that they conflict, the following step 504 is executed; if the judgement is that they do not conflict, the following step 507 is executed.
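The judgement in this step can be pictured with a tiny sketch; representing a background period as a single year (or decade) is an assumption made here purely for illustration.

```python
# Sketch of the conflict test in step 503: an icon from a later era cannot
# plausibly appear inside an earlier-era image style.
def background_conflicts(icon_era: int, style_era: int) -> bool:
    return icon_era > style_era

print(background_conflicts(icon_era=1980, style_era=1960))  # True  -> step 504
print(background_conflicts(icon_era=1950, style_era=1960))  # False -> step 507
```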
Step 504: a first target icon is obtained that coincides with a background period specified by the first style migration data.
When it is judged that the background period in which the first candidate icon lies conflicts with the background period specified by the first style migration data, the image processing apparatus acquires a first target icon that matches the background period specified by the first style migration data. In the example of step 503, a vehicle produced in the eighties conflicts with a sixties image style; in that case the first target icon acquired by the image processing apparatus may be a vehicle produced in the sixties, which matches the sixties image style. The image processing apparatus may acquire the first target icon in various ways: it may generate, using the antagonistic neural network model and the first candidate icon, a first target icon whose background period matches the background period specified by the first style migration data, or it may acquire such a first target icon from an existing icon database.
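The two acquisition routes mentioned above (icon database or antagonistic neural network) can be sketched as a lookup with a generative fallback; icon_db.lookup and gan_generate are hypothetical names used only for illustration.

```python
# Sketch of step 504: obtain a first target icon whose background period
# matches the era specified by the first style migration data.
def acquire_target_icon(candidate_icon, style_era, icon_db):
    match = icon_db.lookup(label=candidate_icon.label, era=style_era)
    if match is not None:
        return match          # e.g. a stock image of a sixties car
    # Otherwise generate an era-consistent icon from the candidate.
    return gan_generate(candidate_icon, background_period=style_era)
```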
Step 505: The first target icon and the original image are fused using a fusion technique to generate a first image.
The image processing device fuses the generated first target icon with the original image using an image fusion technique. The detailed implementation of the fusion is within the reach of a person skilled in the art and is not described here; outwardly, the effect is that the first candidate icon in the original image is replaced by the generated first target icon, producing a new first image. The content of the first image differs from, but is associated with, that of the original image; continuing the vehicle example, the vehicle in the first image is similar to the vehicle in the original image but from a different era.
Step 506: Image style migration is performed on the first image according to the first style migration data to generate a target image.
The image processing device converts the first image, according to the first style migration data, into a target image whose image style is the style specified by the first style migration data. In this way an overall style migration can be applied to the whole picture of the first image, so the result retains the original image content while taking on a variety of distinctive artistic styles, and the entire image processing process requires no manual operation, saving time and labor.
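As a very crude, runnable stand-in for such learned style migration, the following sketch applies Reinhard-style colour transfer, giving the first image the colour statistics of a style exemplar; a production system would use a trained style-transfer network instead, and the file names are placeholders.

```python
# Rough style-migration sketch: match per-channel Lab mean and standard
# deviation of the content image to those of a style exemplar.
import cv2
import numpy as np

def transfer_style(content_path, style_path):
    content = cv2.cvtColor(cv2.imread(content_path),
                           cv2.COLOR_BGR2LAB).astype(np.float32)
    style = cv2.cvtColor(cv2.imread(style_path),
                         cv2.COLOR_BGR2LAB).astype(np.float32)
    c_mean, c_std = content.mean((0, 1)), content.std((0, 1))
    s_mean, s_std = style.mean((0, 1)), style.std((0, 1))
    out = (content - c_mean) / (c_std + 1e-6) * s_std + s_mean
    out = np.clip(out, 0, 255).astype(np.uint8)
    return cv2.cvtColor(out, cv2.COLOR_LAB2BGR)

# target = transfer_style("first_image.jpg", "sixties_poster.jpg")
```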
In one possible embodiment, the first image obtained in step 505 is not immediately subjected to image style migration according to the first style migration data. The image processing apparatus first acquires second style migration data different from the first style migration data, and then divides the first image into at least two areas; here the division into a first area and a second area is taken as an example, and at this point the background periods of the icons in the first and second areas of the first image are the same as the background period designated by the first style migration data. The image processing device performs image style migration on the first area of the first image according to the first style migration data, and performs image style migration on the second area according to the second style migration data, thereby generating the target image. The target image thus carries different image styles in different areas, so the first image retains its original content while taking on a variety of distinctive artistic styles, and the entire image processing process requires no manual operation, saving time and labor.
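The two-region variant can be sketched as applying two style functions and recombining the results with the region mask; stylize_a and stylize_b stand for any image-to-image style migration functions (for instance the transfer_style sketch above, adapted to take and return image arrays and bound to two different style exemplars).

```python
# Sketch of region-wise style migration: one style for the first area,
# another for the second, recombined along the segmentation mask.
import numpy as np

def two_style_migration(first_image, mask, stylize_a, stylize_b):
    """mask: boolean (H, W) array, True where the first area lies."""
    styled_a = stylize_a(first_image)  # style per first migration data
    styled_b = stylize_b(first_image)  # style per second migration data
    out = np.where(mask[..., None], styled_a, styled_b)
    return out.astype(first_image.dtype)
```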
Further, in performing image style migration on the second area of the first image according to the second style migration data, the background period in which some icons in the second area lie may conflict with the background period specified by the second style migration data. In that case, semantic segmentation processing is performed on the second area to obtain a second candidate icon set, which includes second candidate icons, such as the icons of buildings, pedestrians, roads, vehicles, street trees, and street lamps in fig. 4b. It is then judged whether the background period of a second candidate icon conflicts with the background period specified by the second style migration data. If there is no conflict, image style migration is performed directly on the second area according to the second style migration data to obtain the target image. If there is a conflict, the image processing apparatus acquires a second target icon whose background period matches the background period specified by the second style migration data. The image processing device then fuses the second target icon with the second area of the first image using a fusion technique to obtain a new target second area containing the second target icon; outwardly, the second candidate icon in the second area of the first image is replaced by the generated second target icon, producing a target second area whose content differs from, but is associated with, that of the second area. Finally, the image processing device performs image style migration processing on the target second area of the first image according to the second style migration data, and the target image is obtained.
Step 507: Image style migration is performed on the original image according to the first style migration data to generate a target image.
When it is judged that the background period in which the first candidate icon lies does not conflict with the background period specified by the first style migration data, the image processing device directly converts the original image, according to the first style migration data, into a target image whose image style is the style specified by the first style migration data. In this way an overall style migration can be applied to the whole picture of the original image, so the result retains the original image content while taking on a variety of distinctive artistic styles, and the entire image processing process requires no manual operation, saving time and labor.
On the other hand, the image processing method provided in fig. 5a also corresponds to a visual effect diagram; refer to fig. 5b, which is a schematic view of a visual effect of another image processing provided in an embodiment of the present application. As shown in fig. 5b, image C0 is an original image provided by the user or acquired by the image processing apparatus; in addition, the image processing apparatus acquires style migration data whose image style is that of a publicity oil painting published in the sixties. The image processing apparatus processes the original image by the image processing method provided in fig. 3 or fig. 5a to obtain the target image C3. The specific process is as follows. The image processing device performs semantic segmentation on the original image C0 to obtain a plurality of regions, each region corresponding to an icon on C0, such as a dog, a bicycle, and a car; these icons form a candidate icon set that includes a first candidate icon, here the car. The image processing apparatus judges whether the background period in which the first candidate icon lies conflicts with the background period specified by the style migration data. Evidently the car represented by the segmented first candidate icon is a car produced in the eighties, while the background period specified by the style migration data is a sixties style; a car produced in the eighties is an obvious logical error inside a sixties image style, since it could not have appeared there, so the two conflict, as shown in image C1. The image processing device therefore needs to acquire a first target icon matching the background period specified by the style migration data, that is, a car produced in the sixties. It may obtain the icon of such a car in various ways: it may generate, using the antagonistic neural network model and the first candidate icon, a first target icon whose background period matches the background period specified by the style migration data, or it may acquire one from an existing icon database. The image processing apparatus then fuses the first target icon obtained as described above with the original image C0 using an image fusion technique to generate a new first image C2, whose content differs from, but is associated with, that of C0; that is, the car in C2 and the car in C0 are similar but from different eras. Finally, the image processing device converts the first image C2, according to the style migration data, into the target image C3, whose image style is the style specified by the style migration data, so that C3 becomes a poster image in a sixties style that retains the original image content while carrying a distinctive artistic style; the entire image processing process requires no manual operation, saving time and labor.
The method of the embodiments of the present application is explained in detail above, and the apparatus of the embodiments of the present application is provided below.
Referring to fig. 6, fig. 6 is a schematic structural diagram of an image processing apparatus 60 according to an embodiment of the present disclosure, where the image processing apparatus 60 may include an obtaining unit 601, a generating unit 602, and a dividing unit 603, where the units are described as follows:
an acquisition unit 601 configured to acquire an original image;
a generating unit 602, configured to generate an image set from the original image, where the image set includes a first target image whose content is different from and has a correlation with the content of the original image;
the generating unit 602 is further configured to generate a video file according to the original image and the image set.
In the embodiment of the application, an image set including a first target image can be obtained by processing an original image, and a video file can be obtained from the original image and the images in the image set. The image frames of the video file include the first target image obtained by processing the original image; the first target image is similar to the original image in content but lies in a different background period, and the obtained video file includes a plurality of images similar to the first target image, so the content of the video file obtained in this way is continuous and smooth, and a person browsing it can experience the feeling of time elapsing. On the other hand, since the essence of a video is a continuous sequence of mutually associated image frames, image style migration processing may also be carried out on the original image alone to obtain the first target image; an image obtained by the processing in this method can have anachronistic objects in it replaced and an overall style migration applied to its picture, and the entire image processing process requires no manual operation, saving time and labor.
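For orientation only, the unit structure of fig. 6 can be pictured as the following skeleton; the method bodies are placeholders and do not represent the claimed implementation.

```python
# Structural sketch of the apparatus of fig. 6 (placeholders only).
class ImageProcessingDevice:
    def acquire(self, source):
        """Obtaining unit 601: acquire the original image (and, where
        applicable, style migration data)."""
        ...

    def generate(self, original):
        """Generating unit 602: build the image set from the original
        image, then the video file."""
        ...

    def segment(self, image):
        """Segmentation unit 603: semantic segmentation yielding candidate
        icons, one per region."""
        ...
```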
In one possible implementation, the image frames of the video file include the original image and the images in the set of images.
In the embodiment of the present application, the content of the obtained video file is further explained: besides the first target image obtained by processing the original image, the image frames of the video file include the original image and the images in the above image set, so the generated video file is more realistic and its association with the original image is strengthened.
In yet another possible implementation manner, the segmentation unit 603 is configured to perform semantic segmentation on the original image to obtain a first candidate icon;
the generating unit 602 is further configured to generate a first target icon by using an antagonistic neural network model and the first candidate icon, where a background period of the first target icon is different from a background period of the first candidate icon;
the generating unit 602 is further configured to fuse the first target icon and the original image by using a fusion technique, and generate the image set including the first target image, where the first target image includes the first target icon.
In the embodiment of the present application, the process of processing the original image to obtain the first target image is further explained: a first candidate icon is obtained by semantically segmenting the original image, a first target icon associated with the first candidate icon is generated using the antagonistic neural network model, and finally the generated first target icon is fused with the original image to obtain the first target image. In this way, an image set can be generated from the original image comprising a plurality of images similar to the first target image, which differ only in the local content associated with the first candidate icon while the rest of the content is the same. The video file generated from the images in this image set is more coherent and smooth in content, its association with the original image is strengthened, it is more realistic, and a person browsing it can experience the feeling of time elapsing.
In yet another possible implementation manner, the generating unit 602 is further configured to arrange the original image and the images in the image set in order of a background period in which the images are located by using an interpolation technique, and generate the video file, where the image set includes the first target image.
In the embodiment of the application, the image set includes the first target image and a plurality of images similar to it, which differ from the original image in the first candidate icon, the first candidate icon having been replaced by first target icons of different background periods. The method arranges the original image and the images in the image set in background-period order using the frame interpolation technique to generate the video file, so the video file is continuous and smooth and a person browsing it can experience the feeling of time elapsing.
In yet another possible implementation manner, the segmentation unit 603 is further configured to perform semantic segmentation processing on the first target image to obtain a second candidate icon;
the generating unit 602 is further configured to generate a second target icon by using an antagonistic neural network model and the second candidate icon, where a background period of the second target icon is different from a background period of the second candidate icon;
the generating unit 602 is further configured to fuse the second target icon and the first target image by using a fusion technique, and generate the image set including the second target image, where the second target image includes the second target icon.
In the embodiment of the present application, the process of processing the first target image to obtain the second target image is further explained: a second candidate icon is obtained by semantically segmenting the first target image, a second target icon associated with the second candidate icon is generated using the antagonistic neural network model, and finally the generated second target icon is fused with the first target image to obtain the second target image. In this way, an image set can be generated from the first target image comprising a plurality of images similar to the second target image, which differ only in the local content associated with the second candidate icon while the rest of the content is the same. The video file generated from the images in this image set is more coherent and smooth in content, its association with the first target image is strengthened, it is more realistic, and a person browsing it can experience the feeling of time elapsing.
In yet another possible implementation, the generating unit 602 is further specifically configured to arrange the original image and the images in the image set according to a background period sequence in which the images are located by using an interpolation technique, and generate the video file, where the image set includes the first target image and the second target image.
In the embodiment of the present application, the image set includes the first target image and the second target image, together with a plurality of images similar to them: the images similar to the first target image differ from the original image in the first candidate icon, which has been replaced by a first target icon of a different background period, and the images similar to the second target image differ from the first target image in the second candidate icon, which has been replaced by a second target icon of a different background period. The method arranges the original image and the images in the image set in background-period order using the frame interpolation technique to generate the video file; besides making the video file continuous and smooth, this lets a person browsing it experience the feeling of time elapsing.
In yet another possible embodiment, the set of images includes a third target image having an image style different from and associated with the image style of the original image;
the obtaining unit 601 is further configured to obtain style migration data;
the generating unit 602 is further configured to perform image style migration on the original image according to the style migration data to obtain the image set including the third target image, where the image style of the third target image is the image style specified by the style migration data.
In the embodiment of the present application, the process of processing the original image to obtain the third target image is described: style migration data is obtained, and image style migration is then performed on the original image according to that data to obtain the third target image, whose image style differs from that of the original image and is the style specified by the style migration data. In this way, an image set can be generated from the original image comprising a plurality of images similar to the third target image, which are the same as the original image except for the image style. The video file generated from the images in this image set is more coherent and smooth in content, its association with the original image is strengthened, it is more realistic, and a person browsing it can experience the feeling of time elapsing.
In yet another possible implementation manner, the generating unit 602 is further configured to arrange the original image and the images in the image set in order of a background period in which the images are located by using an interpolation technique, and generate the video file, where the image set includes the first target image and the third target image.
In the embodiment of the application, the image set includes the third target image and a plurality of images similar to it, which differ from the original image in image style, having been obtained by migrating the original image's style to the style specified by the style migration data. The method arranges the original image and the images in the image set in background-period order using the frame interpolation technique to generate a video file that is continuous and smooth, so a person browsing it can experience the feeling of time elapsing.
In yet another possible implementation manner, the segmentation unit 603 is further configured to perform semantic segmentation processing on the third target image to obtain a third candidate icon;
the generating unit 602 is further configured to generate a third target icon by using an antagonistic neural network model and the third candidate icon, where the background period of the third target icon is different from the background period of the third candidate icon and is the same as the background period specified by the style migration data;
the generating unit 602 is further configured to fuse the third target icon and the third target image by using a fusion technique, and generate the image set including a fourth target image, where the fourth target image includes the third target icon, an image style of the fourth target image is the same as an image style of the third target image, and a content of the fourth target image is different from and has a relationship with a content of the third target image.
In the embodiment of the present application, the process of processing the third target image to obtain the fourth target image is explained: a third candidate icon is obtained by semantically segmenting the third target image, a third target icon associated with the third candidate icon is generated using the antagonistic neural network model, and finally the generated third target icon is fused with the third target image to obtain the fourth target image. In this way, an image set can be generated from the third target image comprising a plurality of images similar to the fourth target image, which differ only in the local content associated with the third candidate icon while the rest of the content is the same. The video file generated from the images in this image set is more coherent and smooth in content, its association with the third target image is strengthened, it is more realistic, and a person browsing it can experience the feeling of time elapsing.
In yet another possible implementation, the generating unit 602 is further configured to arrange the original image and the images in the image set in order of a background period in which the images are located by using an interpolation technique, and generate the video file, where the image set includes the first target image, the third target image, and the fourth target image.
In the embodiment of the present application, the image set includes the first target image, the third target image, and the fourth target image, together with a plurality of images similar to them: the images similar to the first target image differ from the original image in the first candidate icon, which has been replaced by a first target icon of a different background period; the images similar to the third target image differ from the first target image in image style, the style of the first target image having been migrated to the style specified by the style migration data; and the images similar to the fourth target image differ from the third target image in the third candidate icon, which has been replaced by a third target icon of a different background period. The original image and the images in the image set are arranged in background-period order using the frame interpolation technique to generate the video file, which makes the video file coherent and smooth and lets a person browsing it experience the feeling of time elapsing.
According to the embodiment of the present application, the units in the apparatus shown in fig. 6 may be respectively or entirely combined into one or several other units, or one or more of them may be further split into multiple functionally smaller units; this can achieve the same operation without affecting the technical effect of the embodiment of the present application. The units are divided based on logical functions; in practical applications, the function of one unit may be realized by multiple units, or the functions of multiple units may be realized by one unit. In other embodiments of the present application, the apparatus may also include other units, and in practical applications these functions may also be realized with the assistance of other units and through the cooperation of multiple units.
It should be noted that the implementation of each unit may also correspond to the corresponding description of the method embodiments shown in fig. 3 and 4 a.
In the image processing apparatus depicted in fig. 6, a video file is obtained by processing an original image, and the image frames of the video file include a plurality of target images obtained from that processing. The target images are similar to the original image in content but lie in different background periods, so the content of the video file obtained in this way is coherent and smooth, its association with the original image is strengthened, it is more authentic and accurate, and a person browsing it can experience the feeling of time elapsing.
On the other hand, the obtaining unit 601, the generating unit 602, and the segmentation unit 603 in the image processing apparatus 60 provided in fig. 6 may also be described as follows:
an acquisition unit 601 configured to acquire an original image and first style migration data;
a generating unit 602, configured to convert the original image into a target image, where an image style of the target image is an image style specified by the first style migration data and is different from an image style of the original image.
In the embodiment of the application, an image processing device is provided, that is, a device for performing image style migration on an image. It acquires an original image and first style migration data, converts the original image according to the first style migration data into a target image whose image style is the style specified by that data, and applies an overall style migration to the whole picture of the original image, so the original image retains its content while taking on a variety of distinctive artistic styles; the entire image processing process requires no manual operation, saving time and labor.
In a possible implementation manner, the segmentation unit 603 is further configured to perform semantic segmentation processing on the original image to obtain a first candidate icon set, where the first candidate icon set includes at least one candidate icon, the at least one candidate icon includes a first candidate icon, and the first candidate icon is a component of the original image;
the obtaining unit 601 is further configured to obtain a first target icon when a background period in which the first candidate icon is located conflicts with a background period specified by the first style migration data, where the background period in which the first target icon is located coincides with the background period specified by the first style migration data;
the generating unit 602 is further configured to fuse the first target icon and the original image by using a fusion technique to generate a first image, where the first image includes the first target icon;
the generating unit 602 is further configured to perform image style migration on the first image according to the first style migration data, and generate the target image.
In the embodiment of the present application, the device for performing image style migration on an image is further supplemented and improved: in the course of image style migration, the original image is semantically segmented, and it is judged whether the background period in which a first candidate icon in the original image lies conflicts with the background period specified by the first style migration data. If it conflicts, a first target icon is used to replace the first candidate icon, generating a new first image, and image style migration is then performed on the first image to obtain the target image. In this way, logical errors related to the background period of the image that might arise during style migration can be avoided, and the generated target image is more authentic and correct.
In yet another possible implementation, the generating unit 602 is further configured to generate the first target icon by using an antagonistic neural network model and the first candidate icon, where the first target icon is located in a background period different from a background period of the first candidate icon; or, the obtaining unit 601 is further configured to obtain the first target icon from a database.
In the embodiment of the present application, several options for acquiring the first target icon are provided: the first target icon may be generated by the generative adversarial network model so that its background period differs from that of the first candidate icon, or an existing icon whose background period differs from that of the first candidate icon may be obtained directly from a database.
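As a rough illustration of these two options, the sketch below assumes a hypothetical trained generator `icon_generator` and a hypothetical icon database `icon_db`; neither name nor interface comes from the disclosure:

```python
import torch

def acquire_target_icon(candidate_icon: torch.Tensor,
                        target_period: str,
                        icon_generator=None,
                        icon_db=None) -> torch.Tensor:
    """Return an icon whose background period matches target_period.

    Exactly one of icon_generator / icon_db must be supplied.
    """
    if icon_generator is not None:
        # Option 1: synthesize a replacement icon with the adversarial
        # generator, conditioned on the desired background period.
        with torch.no_grad():
            return icon_generator(candidate_icon.unsqueeze(0),
                                  target_period).squeeze(0)
    # Option 2: look up an existing period-consistent icon in the database.
    # The (category, period) key shown here is purely illustrative.
    return icon_db[("vehicle", target_period)]
```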
In yet another possible implementation, the generating unit 602 is further configured to, when a background period in which a candidate icon in the first candidate icon set is located does not conflict with a background period specified by the first style migration data, perform image style migration on the original image according to the first style migration data, and generate the target image, where image content of the target image is the same as image content of the original image.
In the embodiment of the application, the handling of the case where the background period of the first candidate icon in the original image does not conflict with the background period specified by the first style migration data is described: image style migration is applied directly to the original image, realizing overall style migration of its whole picture, so the original image can take on multiple distinctive artistic styles while retaining its original content, and the whole image processing process requires no manual operation, saving time and labor.
In yet another possible implementation manner, the obtaining unit 601 is further configured to obtain second style migration data, where the second style migration data is different from the first style migration data;
the segmentation unit 603 is further configured to divide the first image into at least two regions, where the at least two regions include a first region and a second region;
the generating unit 602 is further configured to perform image style migration on the first region of the first image according to the first style migration data, perform image style migration on the second region of the first image according to the second style migration data, and generate the target image.
In this embodiment, the method for performing image style migration on an image is further extended: second style migration data different from the first style migration data is acquired, the first image is divided into at least two regions, and a different style migration is applied to each region; for example, image style migration is performed on the first region according to the first style migration data and on the second region according to the second style migration data, to generate the target image. Applying style migration separately to different regions lets the first image carry multiple distinctive artistic styles while its original content is retained, and the whole image processing process requires no manual operation, saving time and labor.
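A minimal sketch of such region-wise migration follows, assuming two hypothetical pretrained style networks and a simple left/right split standing in for whatever region partition the device actually uses:

```python
import torch

def migrate_by_region(first_image: torch.Tensor,
                      style_net_a: torch.nn.Module,
                      style_net_b: torch.nn.Module) -> torch.Tensor:
    """Apply the first style to one region and the second style to the other."""
    width = first_image.shape[-1]
    first_region = first_image[..., : width // 2]
    second_region = first_image[..., width // 2:]
    with torch.no_grad():
        out_a = style_net_a(first_region.unsqueeze(0)).squeeze(0)
        out_b = style_net_b(second_region.unsqueeze(0)).squeeze(0)
    # Reassemble the two stylized regions into the target image.
    return torch.cat([out_a, out_b], dim=-1)
```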
In yet another possible implementation, the segmentation unit 603 is further configured to perform semantic segmentation on the second region of the first image to obtain a second candidate icon set, where the second candidate icon set includes a second candidate icon, and the second candidate icon is a component of the second region;
the obtaining unit 601 is further configured to obtain a second target icon when a background period in which the second candidate icon is located conflicts with a background period specified by the second style migration data, where the background period in which the second target icon is located coincides with the background period specified by the second style migration data;
the generating unit 602 is further configured to fuse the second target icon and the second region by using a fusion technique to generate a target second region, where the target second region includes the second target icon;
the generating unit 602 is further configured to perform image style migration on the target second region according to the second style migration data.
In the embodiment of the present application, the method for performing image style migration on an image is further refined: during style migration, the second region of the first image is semantically segmented, and it is determined whether the background period of a second candidate icon in that region conflicts with the background period specified by the second style migration data. If they conflict, the second candidate icon is replaced with a second target icon to generate a new target second region, and image style migration is then performed on the target second region according to the second style migration data to obtain the target image. In this way, logic errors related to the background period of the image that could otherwise arise during style migration are avoided, and the generated target image is more realistic and correct.
In yet another possible implementation, the generating unit 602 is further configured to generate the second target icon by using a generative adversarial network model and the second candidate icon, where the background period of the second target icon is different from the background period of the second candidate icon;
or, the obtaining unit 601 is further configured to obtain the second target icon from a database.
In the embodiment of the present application, several options for obtaining the second target icon are provided: the second target icon may be generated by the generative adversarial network model so that its background period differs from that of the second candidate icon, or an existing icon whose background period differs from that of the second candidate icon may be obtained directly from a database.
It should be noted that the implementation of each unit may also correspond to the corresponding description of the method embodiment shown in fig. 5a.
In the image processing apparatus described in fig. 6, the original image is converted into a target image having the image style specified by the style migration data, so that the entire picture of the original image undergoes overall style migration; the original image can take on multiple distinctive artistic styles while retaining its original content, and the entire image processing process requires no manual operation, saving time and labor.
Referring to fig. 7, fig. 7 is a schematic structural diagram of an image processing apparatus 70 according to an embodiment of the present disclosure, where the image processing apparatus 70 may include a memory 701 and a processor 702. Further optionally, a bus 703 may be included, wherein the memory 701 and the processor 702 are connected via the bus 703.
The memory 701 is used to provide a storage space, and data such as an operating system and a computer program may be stored in the storage space. The memory 701 includes, but is not limited to, a Random Access Memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM), or a portable read-only memory (CD-ROM).
The processor 702 is a module that performs arithmetic and logical operations, and may be a single processing module or a combination of several, such as a central processing unit (CPU), a graphics processing unit (GPU), or a microprocessor unit (MPU).
The memory 701 stores a computer program, and the processor 702 calls the computer program stored in the memory 701 to perform the following operations:
acquiring an original image;
generating an image set according to the original image, wherein the image set comprises a first target image, and the content of the first target image is different from that of the original image and has relevance;
and generating a video file according to the original image and the image set.
In the embodiment of the application, an image set including a first target image is obtained by processing an original image, and a video file is generated from the original image and the images in the image set. The image frames of the video file include the first target image, which is similar to the original image in content but belongs to a different background period, and the video file contains a plurality of such images, so its content is coherent and smooth and a viewer can experience the feeling of time passing by browsing it. On the other hand, since a video is in essence a sequence of mutually associated image frames, image style migration may also be performed on the original image alone to obtain the first target image; the processing can replace objects that are anachronistic for the target background period and apply overall style migration to the picture, and the whole image processing process requires no manual operation, saving time and labor.
In one possible implementation, the image frames of the video file include the original image and the images in the set of images.
In one possible implementation, in generating the image set from the original image, the processor 702 is specifically configured to:
performing semantic segmentation processing on the original image to obtain a first candidate icon;
generating a first target icon by using a generative adversarial network model and the first candidate icon, wherein the background period of the first target icon is different from the background period of the first candidate icon;
fusing the first target icon and the original image using a fusion technique to generate the image set including the first target image, the first target image including the first target icon.
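The segment-replace-fuse procedure above might be sketched as follows; `segment` and `generate_icon` are hypothetical stand-ins for the semantic-segmentation model and the adversarial generator, and OpenCV's `cv2.seamlessClone` Poisson blending is used as one plausible fusion technique:

```python
import cv2
import numpy as np

def build_first_target_image(original: np.ndarray, segment, generate_icon) -> np.ndarray:
    """Segment the candidate icon, replace it, and fuse the result back in."""
    # segment() is assumed to return an 8-bit binary mask of the candidate
    # icon together with its bounding box inside the original image.
    mask, (x, y, w, h) = segment(original)
    candidate_icon = original[y:y + h, x:x + w]
    # generate_icon() stands in for the adversarial generator that produces
    # an icon of similar content but a different background period.
    target_icon = generate_icon(candidate_icon)
    # Poisson blending hides the seam between the new icon and the image.
    icon_mask = mask[y:y + h, x:x + w]
    center = (x + w // 2, y + h // 2)
    return cv2.seamlessClone(target_icon, original, icon_mask, center,
                             cv2.NORMAL_CLONE)
```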
In one possible embodiment, in generating a video file from the original image and the set of images, the processor 702 is specifically configured to:
arranging, by using a frame interpolation technology, the original image and the images in the image set in order of their background periods, and generating the video file, wherein the image set comprises the first target image.
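As one plausible reading of this step, the sketch below orders the images by background period and uses simple cross-fading with `cv2.addWeighted` as a stand-in for a learned frame-interpolation model:

```python
import cv2
import numpy as np

def write_video(images_by_period, path: str, fps: int = 25, steps: int = 24):
    """Cross-fade between consecutive images and write the result as video."""
    # images_by_period: the original image plus the generated images, as
    # uint8 BGR arrays of equal size, already sorted by background period.
    height, width = images_by_period[0].shape[:2]
    writer = cv2.VideoWriter(path, cv2.VideoWriter_fourcc(*"mp4v"),
                             fps, (width, height))
    for current, following in zip(images_by_period, images_by_period[1:]):
        for t in np.linspace(0.0, 1.0, steps, endpoint=False):
            frame = cv2.addWeighted(current, 1.0 - float(t),
                                    following, float(t), 0.0)
            writer.write(frame)
    writer.write(images_by_period[-1])
    writer.release()
```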
In a possible implementation, the image set includes a second target image, whose content is different from and has relevance to the content of the original image and the content of the first target image; in generating the image set from the original image, the processor 702 is specifically configured to:
performing semantic segmentation processing on the first target image to obtain a second candidate icon;
generating a second target icon by using the generative adversarial network model and the second candidate icon, wherein the background period of the second target icon is different from the background period of the second candidate icon;
fusing the second target icon with the first target image using a fusion technique to generate the image set including the second target image, the second target image including the second target icon.
In one possible embodiment, in generating a video file from the original image and the set of images, the processor 702 is specifically configured to:
arranging, by using a frame interpolation technology, the original image and the images in the image set in order of their background periods, and generating the video file, wherein the image set comprises the first target image and the second target image.
In a possible embodiment, the image set includes a third target image, whose image style is different from and has relevance to the image style of the original image; in generating the image set from the original image, the processor 702 is specifically configured to:
acquiring style migration data;
and performing image style migration on the original image according to the style migration data to obtain the image set comprising the third target image, wherein the image style of the third target image is the image style specified by the style migration data.
In one possible embodiment, in generating a video file from the original image and the set of images, the processor 702 is specifically configured to:
arranging, by using a frame interpolation technology, the original image and the images in the image set in order of their background periods, and generating the video file, wherein the image set comprises the first target image and the third target image.
In a possible implementation, after obtaining the image set including the third target image, the processor 702 is specifically configured to:
performing semantic segmentation processing on the third target image to obtain a third candidate icon;
generating a third target icon by using a generative adversarial network model and the third candidate icon, the third target icon being in a background period that is different from the background period of the third candidate icon and the same as the background period specified by the style migration data;
fusing the third target icon and the third target image by using a fusion technology to generate the image set comprising a fourth target image, wherein the fourth target image comprises the third target icon, the image style of the fourth target image is the same as that of the third target image, and the content of the fourth target image is different from that of the third target image and has relevance.
In one possible embodiment, in generating a video file from the original image and the set of images, the processor 702 is specifically configured to:
arranging, by using a frame interpolation technology, the original image and the images in the image set in order of their background periods, and generating the video file, wherein the image set comprises the first target image, the third target image and the fourth target image.
It should be noted that the specific implementation of the image processing apparatus may also correspond to the corresponding description of the method embodiments shown in figs. 3 and 4a.
In the image processing device 70 described in fig. 7, a video file is obtained by processing an original image, and the image frames of the video file include a plurality of target images derived from the original image. These target images are similar to the original image in content but belong to different background periods, so the resulting video file is coherent and smooth, its relevance to the original image is strengthened, it is more realistic and accurate, and a viewer can experience the feeling of time passing by browsing it.
On the other hand, the memory 701 stores a computer program, and the processor 702 calls the computer program stored in the memory 701, and may further perform the following operations:
acquiring an original image and first style migration data;
and converting the original image into a target image, wherein the image style of the target image is the image style specified by the first style migration data and is different from the image style of the original image.
In the embodiment of the application, another image processing method is provided, namely a method for performing image style migration on an image: an original image and first style migration data are first acquired, and the original image is converted, according to the first style migration data, into a target image whose image style matches the style specified by that data. Overall style migration is applied to the whole picture of the original image, so the original image can take on multiple distinctive artistic styles while retaining its original content, and the whole image processing process requires no manual operation, saving time and labor.
In one possible implementation, in converting the original image into the target image, the processor 702 is specifically configured to:
performing semantic segmentation processing on the original image to obtain a first candidate icon set, wherein the first candidate icon set comprises at least one candidate icon, the at least one candidate icon comprises a first candidate icon, and the first candidate icon is a component of the original image;
acquiring a first target icon when the background period of the first candidate icon conflicts with the background period specified by the first style migration data, wherein the background period of the first target icon is consistent with the background period specified by the first style migration data;
fusing the first target icon and the original image by utilizing a fusion technology to generate a first image, wherein the first image comprises the first target icon;
and carrying out image style migration on the first image according to the first style migration data to generate the target image.
In a possible implementation manner, in acquiring the first target icon, the processor 702 is specifically configured to:
generating the first target icon by using a generative adversarial network model and the first candidate icon, wherein the background period of the first target icon is different from the background period of the first candidate icon; or, obtaining the first target icon from a database.
In a possible implementation manner, the processor 702 is specifically further configured to:
and when the background period of the candidate icon in the first candidate icon set does not conflict with the background period specified by the first style migration data, performing image style migration on the original image according to the first style migration data to generate the target image, wherein the image content of the target image is the same as the image content of the original image.
In a possible implementation manner, in terms of performing the image style migration on the first image according to the first style migration data, the processor 702 is further specifically configured to:
obtaining second style migration data, wherein the second style migration data is different from the first style migration data;
dividing the first image into at least two regions, the at least two regions including a first region and a second region;
and carrying out image style migration on the first area of the first image according to the first style migration data, and carrying out image style migration on the second area of the first image according to the second style migration data to generate the target image.
In a possible implementation manner, in terms of performing image style migration on the second region of the first image according to the second style migration data, the processor 702 is further specifically configured to:
performing semantic segmentation on the second area of the first image to obtain a second candidate icon set, wherein the second candidate icon set comprises second candidate icons, and the second candidate icons are components of the second area;
acquiring a second target icon when the background period of the second candidate icon is in conflict with the background period specified by the second style migration data, wherein the background period of the second target icon is consistent with the background period specified by the second style migration data;
fusing the second target icon and the second area by utilizing a fusion technology to generate a target second area, wherein the target second area comprises the second target icon;
and carrying out image style migration on the target second area according to the second style migration data.
In a possible implementation manner, in obtaining the second target icon, the processor 702 is specifically further configured to:
generating the second target icon by using a generative adversarial network model and the second candidate icon, wherein the background period of the second target icon is different from the background period of the second candidate icon;
or, obtaining the second target icon from a database.
It should be noted that the specific implementation of the image processing apparatus may also correspond to the corresponding description of the method embodiment shown in fig. 5a.
In the image processing apparatus 70 described in fig. 7, by converting the original image into a target image having the image style specified by the style migration data, overall style migration of the entire picture of the original image is achieved; the original image can take on multiple distinctive artistic styles while retaining its original content, and the entire image processing process requires no manual operation, saving time and labor.
The present application further provides an apparatus, comprising: a memory, a processor and a computer program stored on the memory and executable on the processor, which computer program, when executed by the processor, carries out the steps of the method as described above.
Embodiments of the present application further provide a computer-readable storage medium, in which a computer program is stored, and when the computer program runs on one or more processors, the image processing method as described above may be implemented.
Embodiments of the present application further provide a computer program product, which when running on a processor, can implement the image processing method as described above.
An embodiment of the present application further provides a chip, which includes a memory and a processor, where the memory is used to store a computer program, and the processor is used to call and run the computer program from the memory, so that a device in which the chip is installed executes the method described in the above various possible embodiments.
In summary, by implementing the embodiments of the present application, a video file is obtained by processing an original image, and the image frames of the video file include a plurality of target images derived from the original image; these target images are similar to the original image in content but belong to different background periods, so the resulting video file is coherent and smooth, its relevance to the original image is strengthened, it is more realistic and accurate, and a viewer can experience the feeling of time passing by browsing it. On the other hand, the original image is converted into a target image having the image style specified by the style migration data, so that the whole picture of the original image undergoes overall style migration; the original image can take on multiple distinctive artistic styles while retaining its original content, and the whole image processing process requires no manual operation, saving time and labor.
Through the above description of the embodiments, those skilled in the art will clearly understand that the methods of the above embodiments can be implemented by software plus a necessary general-purpose hardware platform, or alternatively by hardware, although in many cases the former is the better implementation. Based on this understanding, the technical solutions of the present application may be embodied in the form of a software product, which is stored in a storage medium (e.g., ROM/RAM, magnetic disk, optical disk) and includes instructions for enabling a terminal device (e.g., a mobile phone, computer, server, controlled terminal, or network device) to execute the method of each embodiment of the present application.
The above description is only a preferred embodiment of the present application, and not intended to limit the scope of the present application, and all modifications of equivalent structures and equivalent processes, which are made by the contents of the specification and the drawings of the present application, or which are directly or indirectly applied to other related technical fields, are included in the scope of the present application.
Other embodiments of the present application will be apparent to those skilled in the art from consideration of the specification and practice of the application disclosed herein. The embodiments of the present application are intended to cover any variations, uses, or adaptations of the application following, in general, the principles of the application and including such departures from the present disclosure as come within known or customary practice within the art to which the application pertains. It is intended that the specification and examples be considered as exemplary only, with a true scope and spirit of the application being indicated by the following claims.
It will be understood that the present application is not limited to the precise arrangements described above and shown in the drawings and that various modifications and changes may be made without departing from the scope thereof. The scope of the application is limited only by the appended claims.

Claims (19)

1. An image processing method, comprising:
acquiring an original image;
generating an image set according to the original image, wherein the image set comprises a first target image, and the content of the first target image is different from that of the original image and has relevance;
and generating a video file according to the original image and the image set.
2. The method of claim 1, wherein the image frames of the video file comprise the original image and the images in the set of images.
3. The method of claim 1 or 2, wherein the generating a set of images from the original image comprises:
performing semantic segmentation processing on the original image to obtain a first candidate icon;
generating a first target icon by using a generative adversarial network model and the first candidate icon, wherein the background period of the first target icon is different from the background period of the first candidate icon;
fusing the first target icon and the original image using a fusion technique to generate the image set including the first target image, the first target image including the first target icon.
4. The method of claim 3, wherein generating a video file from the original image and the set of images comprises:
arranging, by using a frame interpolation technology, the original image and the images in the image set in order of their background periods, and generating the video file, wherein the image set comprises the first target image.
5. The method according to claim 1 or 2, wherein the image set comprises a second target image, the content of the second target image is different from and has relevance to the content of the original image and the content of the first target image; the generating of the set of images from the original image comprises:
performing semantic segmentation processing on the first target image to obtain a second candidate icon;
generating a second target icon by using the generative adversarial network model and the second candidate icon, wherein the background period of the second target icon is different from the background period of the second candidate icon;
fusing the second target icon with the first target image using a fusion technique to generate the image set including the second target image, the second target image including the second target icon.
6. The method of claim 5, wherein generating a video file from the original image and the set of images comprises:
arranging, by using a frame interpolation technology, the original image and the images in the image set in order of their background periods, and generating the video file, wherein the image set comprises the first target image and the second target image.
7. The method according to claim 1 or 2, wherein the set of images comprises a third target image having an image style different from and having a correlation with an image style of the original image; the generating of the set of images from the original image comprises:
acquiring style migration data;
and performing image style migration on the original image according to the style migration data to obtain the image set comprising the third target image, wherein the image style of the third target image is the image style specified by the style migration data.
8. The method of claim 7, wherein generating a video file from the original image and the set of images comprises:
arranging, by using a frame interpolation technology, the original image and the images in the image set in order of their background periods, and generating the video file, wherein the image set comprises the first target image and the third target image.
9. The method of claim 7, wherein after obtaining the set of images including the third target image, the method further comprises:
performing semantic segmentation processing on the third target image to obtain a third candidate icon;
generating a third target icon by using a generative adversarial network model and the third candidate icon, the third target icon being in a background period that is different from the background period of the third candidate icon and the same as the background period specified by the style migration data;
fusing the third target icon and the third target image by using a fusion technology to generate the image set comprising a fourth target image, wherein the fourth target image comprises the third target icon, the image style of the fourth target image is the same as that of the third target image, and the content of the fourth target image is different from that of the third target image and has relevance.
10. The method of claim 9, wherein generating a video file from the original image and the set of images comprises:
arranging, by using a frame interpolation technology, the original image and the images in the image set in order of their background periods, and generating the video file, wherein the image set comprises the first target image, the third target image and the fourth target image.
11. An image processing method, comprising:
acquiring an original image and first style migration data;
and converting the original image into a target image, wherein the image style of the target image is the image style specified by the first style migration data and is different from the image style of the original image.
12. The method of claim 11, wherein converting the original image into the target image comprises:
performing semantic segmentation processing on the original image to obtain a first candidate icon set, wherein the first candidate icon set comprises at least one candidate icon, the at least one candidate icon comprises a first candidate icon, and the first candidate icon is a component of the original image;
acquiring a first target icon when the background period of the first candidate icon conflicts with the background period specified by the first style migration data, wherein the background period of the first target icon is consistent with the background period specified by the first style migration data;
fusing the first target icon and the original image by utilizing a fusion technology to generate a first image, wherein the first image comprises the first target icon;
and carrying out image style migration on the first image according to the first style migration data to generate the target image.
13. The method of claim 12, wherein said obtaining a first target icon comprises:
generating the first target icon by using a generative adversarial network model and the first candidate icon, wherein the background period of the first target icon is different from the background period of the first candidate icon;
or, the first target icon is obtained from a database.
14. The method according to claim 12 or 13, characterized in that the method further comprises:
and when the background period of the candidate icon in the first candidate icon set does not conflict with the background period specified by the first style migration data, performing image style migration on the original image according to the first style migration data to generate the target image, wherein the image content of the target image is the same as the image content of the original image.
15. The method of claim 12, wherein said image-style migrating the first image according to the first-style migration data comprises:
obtaining second style migration data, wherein the second style migration data is different from the first style migration data;
dividing the first image into at least two regions, the at least two regions including a first region and a second region;
and carrying out image style migration on the first area of the first image according to the first style migration data, and carrying out image style migration on the second area of the first image according to the second style migration data to generate the target image.
16. The method of claim 15, wherein said image-style migrating said second region of said first image according to said second-style migration data comprises:
performing semantic segmentation on the second area of the first image to obtain a second candidate icon set, wherein the second candidate icon set comprises second candidate icons, and the second candidate icons are components of the second area;
acquiring a second target icon when the background period of the second candidate icon is in conflict with the background period specified by the second style migration data, wherein the background period of the second target icon is consistent with the background period specified by the second style migration data;
fusing the second target icon and the second area by utilizing a fusion technology to generate a target second area, wherein the target second area comprises the second target icon;
and carrying out image style migration on the target second area according to the second style migration data.
17. The method of claim 16, wherein said obtaining a second target icon comprises:
generating the second target icon by using a generative adversarial network model and the second candidate icon, wherein the background period of the second target icon is different from the background period of the second candidate icon;
or, obtaining the second target icon from a database.
18. An electronic device, comprising: a processor and a memory, wherein the memory stores program instructions; the program instructions, when executed by the processor, cause the processor to perform the method of any of claims 1 to 17.
19. A computer-readable storage medium, characterized in that a computer program is stored in the computer-readable storage medium; the computer program, when run on one or more processors, performs the method of any one of claims 1 to 17.
CN202010912720.0A 2020-09-02 2020-09-02 Image processing method, apparatus and storage medium Pending CN111951157A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010912720.0A CN111951157A (en) 2020-09-02 2020-09-02 Image processing method, apparatus and storage medium

Publications (1)

Publication Number Publication Date
CN111951157A (en) 2020-11-17

Family

ID=73366543

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010912720.0A Pending CN111951157A (en) 2020-09-02 2020-09-02 Image processing method, apparatus and storage medium

Country Status (1)

Country Link
CN (1) CN111951157A (en)

Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20200380639A1 (en) * 2019-05-31 2020-12-03 Apple Inc. Enhanced Image Processing Techniques for Deep Neural Networks
CN110598781A (en) * 2019-09-05 2019-12-20 Oppo广东移动通信有限公司 Image processing method, image processing device, electronic equipment and storage medium
CN110956654A (en) * 2019-12-02 2020-04-03 Oppo广东移动通信有限公司 Image processing method, device, equipment and storage medium
CN111586321A (en) * 2020-05-08 2020-08-25 Oppo广东移动通信有限公司 Video generation method and device, electronic equipment and computer-readable storage medium
CN111768425A (en) * 2020-07-23 2020-10-13 腾讯科技(深圳)有限公司 Image processing method, device and equipment

Cited By (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114765692A (en) * 2021-01-13 2022-07-19 北京字节跳动网络技术有限公司 Live broadcast data processing method, device, equipment and medium
CN114765692B (en) * 2021-01-13 2024-01-09 北京字节跳动网络技术有限公司 Live broadcast data processing method, device, equipment and medium
WO2022171114A1 (en) * 2021-02-09 2022-08-18 北京字跳网络技术有限公司 Image processing method and apparatus, and device and medium
CN112991158A (en) * 2021-03-31 2021-06-18 商汤集团有限公司 Image generation method, device, equipment and storage medium
CN113096000A (en) * 2021-03-31 2021-07-09 商汤集团有限公司 Image generation method, device, equipment and storage medium
WO2022206158A1 (en) * 2021-03-31 2022-10-06 商汤集团有限公司 Image generation method and apparatus, device, and storage medium

Similar Documents

Publication Publication Date Title
EP3996346B1 (en) Message pushing method, storage medium, and server
CN111951157A (en) Image processing method, apparatus and storage medium
CN112840376B (en) Image processing method, device and equipment
KR20210078539A (en) Target detection method and apparatus, model training method and apparatus, apparatus and storage medium
CN106845390A (en) Video title generation method and device
CN103971391A (en) Animation method and device
WO2020207413A1 (en) Content pushing method, apparatus, and device
CN107729540B (en) Method, apparatus and computer-readable storage medium for photo classification
CN111209423B (en) Image management method and device based on electronic album and storage medium
CN110555171B (en) Information processing method, device, storage medium and system
CN108628985B (en) Photo album processing method and mobile terminal
CN108764051B (en) Image processing method and device and mobile terminal
CN111491123A (en) Video background processing method and device and electronic equipment
CN110865756A (en) Image labeling method, device, equipment and storage medium
CN114037692A (en) Image processing method, mobile terminal and storage medium
CN112950525A (en) Image detection method and device and electronic equipment
CN109859115A (en) A kind of image processing method, terminal and computer readable storage medium
CN114943976B (en) Model generation method and device, electronic equipment and storage medium
CN108255389B (en) Image editing method, mobile terminal and computer readable storage medium
CN116320721A (en) Shooting method, shooting device, terminal and storage medium
CN116071614A (en) Sample data processing method, related device and storage medium
WO2022063189A1 (en) Salient element recognition method and apparatus
CN114065168A (en) Information processing method, intelligent terminal and storage medium
CN113793407A (en) Dynamic image production method, mobile terminal and storage medium
CN110764852B (en) Screenshot method, terminal and computer readable storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination