CN117808668A - Image processing method and device and electronic equipment - Google Patents
- Publication number
- CN117808668A (application number CN202311853084.9A)
- Authority
- CN
- China
- Prior art keywords
- image
- target
- processed
- mask
- repaired
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Classifications
- G—PHYSICS
  - G06—COMPUTING; CALCULATING OR COUNTING
    - G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
      - G06T11/00—2D [Two Dimensional] image generation
        - G06T11/001—Texturing; Colouring; Generation of texture or colour
        - G06T11/40—Filling a planar surface by adding surface attributes, e.g. colour or texture
      - G06T2207/00—Indexing scheme for image analysis or image enhancement
        - G06T2207/10—Image acquisition modality
          - G06T2207/10016—Video; Image sequence
Landscapes
- Physics & Mathematics (AREA)
- General Physics & Mathematics (AREA)
- Engineering & Computer Science (AREA)
- Theoretical Computer Science (AREA)
- Image Processing (AREA)
Abstract
The embodiment of the application discloses an image processing method, an image processing apparatus, and an electronic device. The method includes: obtaining an intermediate image corresponding to an image to be processed, where the intermediate image is obtained by removing a target object from the image to be processed; filling target content into a target area of the intermediate image to obtain a corresponding image to be repaired, where the target area is the area where the target object was originally located; repairing the target area in the image to be repaired based on the image to be repaired and a corresponding mask image to obtain a repaired image, where the mask image identifies the area the target object originally occupied in the image to be processed; changing image parameters of the repaired image to obtain a parameter-changed image; and adding the target object to the parameter-changed image based on the mask image and the image to be processed to obtain a corresponding target image. In this way, the resulting target image is more natural and harmonious.
Description
Technical Field
The present disclosure relates to the field of computer technologies, and in particular, to an image processing method, an image processing device, and an electronic device.
Background
Image style migration is a technique that applies a specific style to an object (e.g., a person) in an image (e.g., a video frame), enabling the object to exhibit different artistic styles such as oil painting, watercolor, or sketch. The technique has broad application prospects in fields such as artistic creation, film and television post-production, and game design, and can provide users with more creativity and choice. However, related image style migration methods have the problem that the processing effect needs improvement.
Disclosure of Invention
In view of the above, the present application proposes an image processing method, an image processing apparatus, and an electronic device to address the above problem.
In a first aspect, the present application provides an image processing method, the method including: obtaining an intermediate image corresponding to an image to be processed, wherein the intermediate image is an image obtained by removing a target object in the image to be processed; filling target content into a target area of the intermediate image to obtain a corresponding image to be repaired, wherein the target area is an area where the target object is originally located; repairing a target area in the image to be repaired based on the image to be repaired and a corresponding mask image to obtain a repaired image, wherein the mask image is used for marking an area of the target object originally in the image to be processed; performing image parameter change on the repaired image to obtain a parameter-changed image; and adding the target object to the image with the changed parameters based on the mask image and the image to be processed so as to obtain a corresponding target image.
In a second aspect, the present application provides an image processing apparatus, the apparatus comprising: the device comprises an intermediate image acquisition unit, a processing unit and a processing unit, wherein the intermediate image acquisition unit is used for acquiring an intermediate image corresponding to an image to be processed, and the intermediate image is an image obtained by removing a target object in the image to be processed; the content filling unit is used for filling target content into a target area of the intermediate image to obtain a corresponding image to be repaired, wherein the target area is an area where the target object is originally located; the image restoration unit is used for restoring the target area in the image to be restored based on the image to be restored and the corresponding mask image so as to obtain a restored image, wherein the mask image is used for identifying the area of the target object originally in the image to be processed; an image parameter changing unit, configured to perform image parameter changing on the repaired image, so as to obtain a parameter-changed image; and the image synthesis unit is used for adding the target object to the image with the changed parameters based on the mask image and the image to be processed so as to obtain a corresponding target image.
In a third aspect, the present application provides an electronic device comprising a processor and a memory, where one or more programs are stored in the memory and configured to be executed by the processor to implement the method described above.
In a fourth aspect, the present application provides a computer-readable storage medium having program code stored therein, where the program code, when executed by a processor, performs the method described above.
According to the image processing method, apparatus, and electronic device of the present application, the image obtained by removing the target object from the image to be processed serves as the intermediate image, and the target area of the intermediate image is then filled with target content to obtain the corresponding image to be repaired. The target area in the image to be repaired is then repaired based on the image to be repaired and the corresponding mask image to obtain a repaired image. Image parameters of the repaired image are then changed to obtain a parameter-changed image, and the target object is added to the parameter-changed image based on the mask image and the image to be processed to obtain the corresponding target image. In this way, the image repair operation restores the image (the intermediate image) left after the target object is removed, so that the target area created by the removal is filled and jagged edges and noise are eliminated, making the repaired image more complete and smoother. Furthermore, when image parameters are changed (for example, image style parameters) based on the repaired image to obtain the parameter-changed image, adding the target object to the parameter-changed image yields a target image that is more natural and harmonious.
Drawings
In order to more clearly illustrate the technical solutions of the embodiments of the present application, the drawings that are needed in the description of the embodiments will be briefly introduced below, it being obvious that the drawings in the following description are only some embodiments of the present application, and that other drawings may be obtained according to these drawings without inventive effort for a person skilled in the art.
Fig. 1 is a schematic diagram of an application scenario of an image processing method according to an embodiment of the present application;
fig. 2 is a schematic diagram illustrating another application scenario of the image processing method according to the embodiment of the present application;
FIG. 3 is a flowchart of an image processing method according to an embodiment of the present application;
FIG. 4 shows a schematic diagram of an intermediate image in an embodiment of the present application;
FIG. 5 shows a schematic diagram of a mask image in an embodiment of the present application;
FIG. 6 shows a schematic diagram of an image to be repaired in an embodiment of the present application;
FIG. 7 shows a schematic representation of an image after repair in an embodiment of the present application;
FIG. 8 shows a schematic diagram of a target image in an embodiment of the present application;
FIG. 9 illustrates a schematic diagram of changing the position of a target object in an embodiment of the present application;
Fig. 10 shows a flowchart of an image processing method according to still another embodiment of the present application;
FIG. 11 shows a flowchart for obtaining a target video according to an embodiment of the present application;
fig. 12 shows a flowchart of obtaining a moving picture according to an embodiment of the present application;
fig. 13 is a flowchart of an image processing method according to still another embodiment of the present application;
fig. 14 is a block diagram showing the structure of an image processing apparatus according to an embodiment of the present application;
fig. 15 shows a block diagram of another electronic device of the present application for performing an image processing method according to an embodiment of the present application;
fig. 16 shows a storage unit for storing or carrying program code that implements the image processing method according to an embodiment of the present application.
Detailed Description
The following description of the embodiments of the present application will be made clearly and fully with reference to the accompanying drawings, in which it is evident that the embodiments described are only some, but not all, of the embodiments of the present application. All other embodiments, which can be made by one of ordinary skill in the art based on the embodiments herein without making any inventive effort, are intended to be within the scope of the present application.
Image processing is a technique that involves performing various operations on, and analysis of, an image. It is widely applied in many fields, including computer vision, medical image analysis, digital image processing, and remote sensing image processing. By processing an image, various tasks can be performed, such as image restoration, image enhancement, feature extraction, and object detection and recognition. For example, by performing image style migration on an image, a particular style may be applied to a target object (e.g., a person) in the image (e.g., a video frame). Image style migration enables images to exhibit different artistic styles such as oil painting, watercolor, and sketch. The technique has broad application prospects in fields such as artistic creation, film and television post-production, and game design, and can provide users with more creativity and choice.
However, the inventors found in their research that related image style migration methods have a problem: the processing effect needs improvement. Having identified this problem, the inventors propose in the present application an image processing method, apparatus, and electronic device that can address it. In the method, an image obtained by removing a target object from an image to be processed serves as an intermediate image, and a target area of the intermediate image is then filled with target content to obtain a corresponding image to be repaired. The target area in the image to be repaired is then repaired based on the image to be repaired and the corresponding mask image to obtain a repaired image. Image parameters of the repaired image are then changed to obtain a parameter-changed image, and the target object is added to the parameter-changed image based on the mask image and the image to be processed to obtain a corresponding target image.
In this way, the image repair operation restores the image (the intermediate image) left after the target object is removed, so that the target area created by the removal is filled and jagged edges and noise are eliminated, making the repaired image more complete and smoother. Furthermore, when image parameters are changed (for example, image style parameters) based on the repaired image to obtain the parameter-changed image, adding the target object to the parameter-changed image yields a target image that is more natural and harmonious.
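To make the flow above concrete, the following is a minimal NumPy sketch of steps S110-S140 on a toy image. All function names, the mean-colour fill, and the 0.9 scaling used as a stand-in for the parameter change are illustrative assumptions, not the patent's actual algorithms.

```python
import numpy as np

def remove_target(image, mask):
    """S110: zero out the target region to form the intermediate image."""
    out = image.copy()
    out[mask] = 0  # target area defaults to black
    return out

def fill_target(inter, mask, image):
    """S120: replace the black fill with the mean colour of the rest of the image."""
    out = inter.copy()
    out[mask] = image[~mask].mean(axis=0)
    return out

def paste_target(changed, mask, image):
    """Final synthesis: copy the original target object back in."""
    out = changed.copy()
    out[mask] = image[mask]
    return out

# Toy 4x4 RGB image with a 2x2 "target object".
img = np.full((4, 4, 3), 200.0)
mask = np.zeros((4, 4), dtype=bool)
mask[1:3, 1:3] = True
img[mask] = 50.0

inter = remove_target(img, mask)          # S110
to_repair = fill_target(inter, mask, img) # S120
changed = to_repair * 0.9                 # stand-in for S140 parameter change
target = paste_target(changed, mask, img) # synthesis of the target image
```

Pasting the untouched target object back after the parameter change is what keeps the object itself unaltered while the background is restyled.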
Before describing embodiments of the present application in further detail, embodiments of the present application are described with reference to an application environment.
The application scenario according to the embodiment of the present application will be described first.
In the embodiment of the application, the provided image processing method may be executed by an electronic device. In this case, all steps of the image processing method provided in the embodiment of the present application may be performed by the electronic device. For example, as shown in fig. 1, all steps may be executed by a processor of the electronic device 100.
Alternatively, the image processing method provided in the embodiment of the present application may be executed by a server. Correspondingly, in this server-executed manner, the server may begin executing the steps of the method in response to a trigger instruction. The trigger instruction may be sent by an electronic device used by a user, or may be triggered locally by the server in response to some automated event.
In addition, the image processing method provided by the embodiment of the application may be executed cooperatively by the electronic device and the server. In this cooperative manner, part of the steps of the method are executed by the electronic device and the remaining steps are executed by the server. For example, as shown in fig. 2, after obtaining the intermediate image corresponding to the image to be processed, the electronic device 100 may transmit the intermediate image to the server 200; after receiving the intermediate image, the server 200 may execute the subsequent steps to obtain a target image and then transmit it back to the electronic device 100, which may store, display, or share the target image after receiving it.
In this cooperative manner, the division of steps between the electronic device and the server is not limited to the above example; in practical applications, it may be adjusted dynamically according to the actual situation. For example, in one manner, the electronic device 100 may acquire a plurality of target video frame images included in a video to be processed and obtain the images to be processed based on them; the electronic device 100 may then transmit these images to the server 200, and after receiving them, the server 200 may execute the subsequent steps to obtain the target video.
It should be noted that the electronic device 100 may be, besides the smartphone shown in fig. 1 and fig. 2, a tablet computer, a wearable device, an intelligent voice assistant, or another device. The server 200 may be an independent physical server, a server cluster or distributed system formed by a plurality of physical servers, or a cloud server providing basic cloud computing services such as cloud services, cloud computing, cloud storage, network services, cloud communication, middleware services, CDNs (Content Delivery Networks), and artificial intelligence platforms. Where the image processing method provided in the embodiment of the present application is executed by a server cluster or a distributed system formed by a plurality of physical servers, different steps of the method may be executed by different physical servers, or executed in a distributed manner by servers built on the distributed system.
Embodiments of the present application will be described in detail below with reference to the accompanying drawings.
Referring to fig. 3, an image processing method provided in an embodiment of the present application includes:
s110: and obtaining an intermediate image corresponding to the image to be processed, wherein the intermediate image is an image obtained by removing the target object in the image to be processed.
In the embodiment of the present application, the image to be processed may be an image obtained by executing the method provided in the embodiment of the present application and corresponding to the target image. Alternatively, the image to be processed may be understood as an image to be subjected to an image parameter change.
In embodiments of the present application, there are a variety of ways to determine the image to be processed.
As one way, one or more frames of a selected video may be used as the image to be processed. For example, a user may wish to obtain an image in a particular style from a video. In this case, after selecting the video, the user may choose one frame from it, via the device, as the image to be processed. The video may be recorded by the device through a camera, transmitted by another device, or obtained by the device from a network.
As another way, the preview image currently displayed by the electronic device may be taken as the image to be processed. In this way, the electronic device can perform image acquisition in real time through the camera and display the acquired image in the preview area, and further, the user can operate the electronic device to take the image (one frame image) displayed in the preview area as the image to be processed.
As yet another approach, the electronic device may obtain the image to be processed from the album. In this way, the electronic device may display pictures in the album in the case of starting the album, and select a picture from the album as the image to be processed in response to a user's selection.
As a further way, the electronic device may take a picture transmitted by another device as an image to be processed. For example, in the case where the user of the electronic device chat with other friends through the instant messaging program, the other friends may send a picture through the instant messaging program, and in this case, the electronic device may use the sent picture as the image to be processed.
In the embodiment of the application, image parameters may be changed mainly for part of the image content in the image to be processed. The partial image content whose parameters are changed may be the content other than the target object in the image to be processed. Alternatively, where the target object is a foreground object in the image to be processed, this partial content can be understood as the background content. The target object may also be an object selected by the user, for example a person, animal, or plant.
Where the partial image content whose parameters are changed is the content other than the target object, the target object may first be removed from the image to be processed to obtain a corresponding intermediate image. Correspondingly, the intermediate image can be understood as an image retaining the content of the image to be processed that is subject to the parameter change. For example, as shown in fig. 4, where the two persons in the image to be processed are both target objects (for example, foreground objects), the two persons may be removed from the image to be processed to obtain the corresponding intermediate image. In the embodiment of the application, the region the target object originally occupied in the image to be processed may be regarded as the target region; in this case, the target region in the resulting intermediate image may be filled with black by default.
As one way, removal of the target object from the image to be processed may be achieved by means of a mask image. In this way, after the image to be processed is obtained, semantic segmentation may be performed on it to obtain the corresponding mask image. The mask image then identifies the region (the target region) of the target object to be removed from the image to be processed. Illustratively, as shown in fig. 5, in the embodiment of the present application the mask image may be understood as a binary image of the same size as the original image (e.g., the image to be processed). The binary image contains pixels of only two colors, black and white. The mask image may include a mask region, which identifies the region where the target object is located. For example, the pixels of the mask region may all be white, and the pixels of the non-mask region may all be black, where the non-mask region is all of the area outside the mask region.
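As a hedged illustration, a binary mask of this kind can be derived from a per-pixel semantic segmentation map by selecting the label of the target object; the label value 1 standing for the target class is an assumption for the toy example.

```python
import numpy as np

# Toy semantic-segmentation output: 0 = background, 1 = target object.
seg = np.array([[0, 0, 0, 0],
                [0, 1, 1, 0],
                [0, 1, 1, 0],
                [0, 0, 0, 0]], dtype=np.uint8)

TARGET_LABEL = 1  # assumed label of the target object

# Binary mask image: white (255) = mask region, black (0) = non-mask region.
mask = np.where(seg == TARGET_LABEL, 255, 0).astype(np.uint8)
```

The resulting `mask` has the same size as the segmented image and contains only the two values 0 and 255, matching the black-and-white binary mask described above.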
S120: filling target content in a target area of the intermediate image to obtain a corresponding image to be repaired.
When the intermediate image is obtained, the target region is filled with black by default. This black fill, however, affects the overall result after the subsequent parameter change. For example, when a style migration algorithm is used to change the image parameters, the processed target area may exceed its original extent, so that when the target object is pasted back onto the parameter-changed image, a visible band remains around the edge of the target object, which looks jarring and unnatural. In addition, when the target region is simply left filled with black, the whole image after style migration (the parameter-changed image) comes out darker than the original.
To address the foregoing, after the intermediate image is obtained, the target area in the intermediate image is filled with target content. The target content is understood to be image content other than the black fill.
In the embodiment of the application, the target content may be determined according to the image to be processed. As one way, the target content may be determined from the entirety of the image to be processed. Alternatively, the target content may be determined based on a partial region of the image to be processed.
Illustratively, based on the intermediate image shown in fig. 4, the resulting image to be repaired may be as shown in fig. 6. As shown in fig. 6, where the target area in the intermediate image is black, the color of the target area changes after it is filled with the target content. For example, when the target content is determined based on the image to be processed, the colors of the target region and the other regions of the image to be repaired can be brought closer together, which improves the effect of the subsequent repair operation.
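One simple, illustrative choice of target content (the patent does not mandate this particular rule) is the mean colour of the pixels outside the target area, which brings the filled region closer to the rest of the image:

```python
import numpy as np

def fill_with_mean(intermediate, mask):
    """Replace the masked (black) region with the mean colour of the non-mask pixels."""
    out = intermediate.astype(np.float64).copy()
    mean_colour = out[mask == 0].mean(axis=0)  # average over non-mask pixels
    out[mask == 255] = mean_colour
    return out

# Toy intermediate image: bluish background, target area filled black by default.
inter = np.zeros((4, 4, 3))
inter[:, :, 2] = 120.0
mask = np.zeros((4, 4), dtype=np.uint8)
mask[1:3, 1:3] = 255
inter[mask == 255] = 0.0

repaired_input = fill_with_mean(inter, mask)  # image to be repaired
```

After the fill, the former black hole carries the background's average colour, so the later style change no longer darkens the whole image.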
S130: and repairing the target area in the image to be repaired based on the image to be repaired and the corresponding mask image to obtain a repaired image.
After the image to be repaired is obtained, the target area in the image to be repaired can be repaired. In this embodiment of the present application, repairing an image (for example, an image to be repaired) may be understood as enabling a missing portion (for example, a target area) in the image to be more coordinated with image contents of other areas, so that an image obtained after repairing may be more complete and natural.
For example, as shown in fig. 7, in the repaired image obtained through the repair operation, the content at the target area is no longer the target content filled into the image to be repaired, but content more similar to the areas outside the target area, so that the repaired image is visually more natural and complete. In the example shown in fig. 7, the target area originally contained two persons surrounded by sea water and beach; after the repair operation, the area where the two persons originally stood is filled with the surrounding sea water.
In the embodiment of the application, the mask image is an image for identifying an area where the target object is originally located in the image to be processed, and in the process of performing the repairing operation, an area (i.e., a target area) in the image to be repaired, which needs to be repaired, can be identified through the mask image, so as to complete the repairing operation.
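The repair step itself would normally be performed by a dedicated inpainting model; as a hedged stand-in, the toy sketch below repairs the masked region by repeatedly averaging each hole pixel with its four neighbours, showing how the mask steers which pixels are rewritten:

```python
import numpy as np

def diffuse_inpaint(img, mask, iters=200):
    """Toy diffusion repair: masked pixels converge to their neighbourhood average."""
    out = img.astype(np.float64).copy()
    hole = mask.astype(bool)
    for _ in range(iters):
        # 4-neighbour average computed via padded shifts.
        p = np.pad(out, ((1, 1), (1, 1), (0, 0)), mode="edge")
        avg = (p[:-2, 1:-1] + p[2:, 1:-1] + p[1:-1, :-2] + p[1:-1, 2:]) / 4.0
        out[hole] = avg[hole]  # only the target area is rewritten
    return out

# Toy image: uniform background with one masked (damaged) pixel.
img = np.full((5, 5, 3), 100.0)
mask = np.zeros((5, 5), dtype=bool)
mask[2, 2] = True
img[2, 2] = 0.0

repaired = diffuse_inpaint(img, mask)
```

Only the pixels the mask marks as belonging to the target area are changed; everything outside the mask is left untouched, which is exactly the role the mask image plays in the repair operation described above.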
As one way, to make the mask image cover the target area better, the mask area in the mask image may be enlarged, and the target area in the image to be repaired may then be repaired based on the image to be repaired and the mask image with the enlarged mask area to obtain the repaired image. It should be noted that although the mask image clearly identifies the region to be repaired, errors in the repair algorithm may leave parts of the target region unrepaired. To mitigate this, the mask area in the mask image can be enlarged so that it is slightly larger than the target area in the image to be repaired, ensuring the whole target area is covered when the repair operation is performed.
Alternatively, the mask area in the mask image may be increased based on a convolution kernel to obtain a mask image with increased mask area, wherein the size of the convolution kernel is determined based on the size of the image to be processed. It should be noted that, in the embodiment of the present application, the operation of increasing the mask area in the mask image may be understood as performing an expansion etching operation on the mask area, where the size of the convolution kernel may affect the fine granularity of the expansion etching operation, if the size of the convolution kernel is too large, the coverage range of the convolution kernel is wide, but the final expansion effect may be relatively coarse, and if the size of the convolution kernel is too small, the final expansion effect may be finer, but the influence range is small. Therefore, in the embodiment of the application, the size of the convolution kernel can be adaptively updated according to the size of the image to be processed, so that the size of the convolution kernel is ensured to be a relatively proper value. Alternatively, the formula for obtaining the size of the convolution kernel may be as follows:
where H and W are the height and width of the image to be processed, and kernel_size is the size of the convolution kernel to be determined. With this adaptive calculation, if the size of the image to be processed is 600×800, the convolution kernel size may be 7, which generally achieves a good effect.
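As an illustrative sketch only (the adaptive rule `(H + W) // 200` below is a hypothetical stand-in, not the formula of this application, chosen so that a 600×800 image yields a kernel size of 7), the dilation of the mask area with an adaptively sized kernel may look like:

```python
import numpy as np

def adaptive_kernel_size(h, w):
    # Hypothetical adaptive rule (not the patent's actual formula):
    # scale the kernel with image size, keep it odd and at least 3.
    # For a 600x800 image this yields (600 + 800) // 200 = 7.
    k = (h + w) // 200
    return max(3, k | 1)

def dilate_mask(mask, k):
    # Morphological dilation with a k x k all-ones kernel: each output
    # pixel is the maximum over its k x k neighborhood, which enlarges
    # the (255-valued) mask area.
    pad = k // 2
    padded = np.pad(mask, pad, mode="constant")
    out = np.empty_like(mask)
    h, w = mask.shape
    for i in range(h):
        for j in range(w):
            out[i, j] = padded[i:i + k, j:j + k].max()
    return out
```

In practice a library routine such as OpenCV's `cv2.dilate` would replace the explicit loops.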
S140: and carrying out image parameter change on the repaired image to obtain the image with the parameter changed.
After the restored image is obtained, an image parameter change may be performed on the restored image. In the embodiment of the present application, changing the image parameters may be understood as giving the parameter-changed image (or the target image obtained later) a different visual experience. The change of image parameters may include changing parameters related to the image style, or may include changing the hue of the image, and so on.
For example, the image to be processed may be a live-action image captured by a camera, and changing the image parameters may include changing parameters related to the image style, so that the resulting parameter-changed image differs from the image to be processed in image style. For example, in the case where the image to be processed is a live-action image acquired by a camera, the image style of the parameter-changed image may be an ink style, a watercolor style, a sketch style, or the like.
In the embodiment of the application, the image parameters of the repaired image can be changed according to the determined adjustment target, so that the image with the changed parameters can meet the adjustment target. For example, if the determined adjustment target is a cartoon style, the image style of the obtained image with the parameters changed will be the cartoon style. For another example, if the determined adjustment target is a cold tone, the resulting color of the parameter-altered image will be a cold tone.
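As a minimal, hypothetical sketch of a parameter change toward a "cold tone" adjustment target (a simple per-channel shift, not the style-migration processing contemplated by this application; RGB channel order is assumed):

```python
import numpy as np

def shift_tone(image, warmth):
    # warmth > 0 pushes the image toward warm tones (more red, less blue);
    # warmth < 0 pushes it toward cold tones (more blue, less red).
    out = image.astype(np.int16)
    out[..., 0] += warmth  # red channel (RGB order assumed)
    out[..., 2] -= warmth  # blue channel
    return np.clip(out, 0, 255).astype(np.uint8)
```

A richer adjustment target (e.g., an ink or cartoon style) would instead be realized by a dedicated style-transfer model.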
There are a number of ways in which the adjustment target can be determined.
As one way, the adjustment target may be determined by the user. In this manner, alternative adjustment options may be provided by the device, and the user may select an adjustment target from among the alternative adjustment options. For example, the alternative adjustment options include style F1, style F2, style F3, and style F4. If the user selects style F4, style F4 may be used as the adjustment target.
As one way, it may be determined according to the preference of the user. The preference category for the style of the image is different for different users. For example, some users prefer ink-style images, and some users prefer sketch-style images. In this case, the device may then obtain the user's preferences, and then determine the adjustment objective based on the user's preferences. Alternatively, in embodiments of the present application, the device may determine the user's preference in terms of image processing according to the user's historical editing operations on the image.
As a way, it may be determined automatically by the device from the image to be processed. In this manner, at least one characteristic of the content, color, light, composition, and theme of the image to be processed may be analyzed, and then the adjustment target is determined based on the at least one characteristic of the image to be processed.
As one way, the adjustment target may be determined based on where the user is located. It should be noted that the visual experience perceived by the user may differ at different locations. For example, if the user is in a forest or a bamboo grove, the user is more likely to perceive a visual experience of nature and freshness. As another example, if the user is in a children's amusement park, the user is more likely to perceive a visual experience of cuteness. In this way, determining the adjustment target by the user's position makes the finally obtained parameter-changed image or target image closer to the user's visual perception of the current environment, thereby improving the user experience.
After the device determines the adjustment target, the user can be notified by means of a prompt message, so that the user can confirm whether to use the adjustment target determined by the device.
S150: and adding the target object to the image with the changed parameters based on the mask image and the image to be processed to obtain a corresponding target image.
After the parameter-changed image is obtained, the processing of the image to be processed is basically finished, and the previously determined target object can then be added back to the parameter-changed image, so that the image content other than the target object is processed while the original display effect of the target object is retained. Since the mask region in the mask image marks the original position of the target object in the image to be processed, the target object can be accurately added back to its original position by using the mask image. Adding the target object to the parameter-changed image to obtain the corresponding target image may be performed based on the following formula:
Result=Image×(Mask÷255)+Background×(1-(Mask÷255))
where Image is the image to be processed, Background is the parameter-changed image, Mask is the mask image, and Result is the resulting target image.
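The compositing formula above can be written directly in NumPy; this sketch assumes an 8-bit RGB image and a single-channel 0-255 mask:

```python
import numpy as np

def composite(image, background, mask):
    # Result = Image * (Mask / 255) + Background * (1 - Mask / 255)
    alpha = mask.astype(np.float64) / 255.0
    if alpha.ndim == 2:
        alpha = alpha[..., None]  # broadcast the mask over the color channels
    result = image * alpha + background * (1.0 - alpha)
    return np.clip(result, 0, 255).astype(np.uint8)
```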
As shown in fig. 8, for example, two persons in the image to be processed are target objects, and comparing the image to be processed and the target image can find that the display effect of the two persons in the target image is consistent with the display effect in the image to be processed. However, the image style of the portions of the image content other than the two persons is changed by the processing of the method provided in the present application. In addition, as shown in fig. 8, the method provided in this embodiment not only can ensure the migration effect of the style of the background (the image content of the part other than the target object), but also can process the edge transition between the foreground (the target object) and the background, so that the obtained target image is more natural and smooth.
In addition, in the embodiment of the present application, when adding the target object to the parameter-changed image, the target object may also be added to another area (i.e., an area other than the area in the image to be processed where the target object was originally located). As one way, the position of the mask region in the mask image may be changed to obtain a mask image with the mask region position changed, and the target object may then be added to the parameter-changed image based on this changed mask image and the image to be processed, so as to obtain the corresponding target image. In embodiments of the present application, there are a number of ways to determine the area to which the target object needs to be moved. As one way, it may be chosen by the user of the device according to his own preferences. In this manner, the device may display an editing interface in which a marker characterizing the target object is shown, and the background of the editing interface may be the image to be processed. The user may drag the marker to move it within the editing interface; because the background of the editing interface is the image to be processed, dragging the marker in the editing interface can be understood as moving it within the image to be processed. In this case, the area to which the marker is finally moved may be regarded as the area to which the target object needs to be moved, and the corresponding mask image may be adjusted accordingly.
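Translating the mask region to the area the user dragged the marker to can be sketched as follows (a hypothetical helper; the offsets `dy` and `dx` would come from the drag distance in the editing interface):

```python
import numpy as np

def move_mask_region(mask, dy, dx):
    # Shift every mask pixel by (dy, dx); pixels moved outside the
    # image are dropped, and vacated positions become background (0).
    out = np.zeros_like(mask)
    h, w = mask.shape
    ys, xs = np.nonzero(mask)
    ys2, xs2 = ys + dy, xs + dx
    keep = (ys2 >= 0) & (ys2 < h) & (xs2 >= 0) & (xs2 < w)
    out[ys2[keep], xs2[keep]] = mask[ys[keep], xs[keep]]
    return out
```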
As illustrated in fig. 9, in the image to be processed, both persons are target objects and are located at the two ends of the frame. By adjusting the mask area in the corresponding mask image, the target image shown in fig. 9 can be obtained, in which the position of the target objects is moved from the two ends to the middle.
According to the image processing method, the image from which the target object has been removed (the intermediate image) can be repaired by means of the image repairing operation, so that the target area left by the target object is filled in and jagged edges and noise are eliminated, making the resulting repaired image more complete and smoother. Further, after the image parameters are changed based on the repaired image (for example, the image style parameters are changed) to obtain the parameter-changed image, the target object is added to the parameter-changed image, and the corresponding target image obtained in this way is more natural and harmonious.
Referring to fig. 10, an image processing method provided in an embodiment of the present application includes:
S210: and acquiring a plurality of target video frame images included in the video to be processed.
In the embodiment of the present application, the video to be processed may be understood as a video to be subjected to image parameter change. The image parameter change may include changing a video style parameter of the video, or may include changing a color tone of the video.
As one way, the video currently recorded by the device may be taken as the video to be processed. As yet another way, the video to be processed may be obtained from an album of the device. In this way, the device may display the videos in the album when the album is opened, and select one video from the album as the video to be processed in response to the user's selection. As yet another way, the electronic device may use a video transmitted from another device as the video to be processed. For example, when the user of the device chats with friends through an instant messaging program, a friend may send a video through the instant messaging program, and in this case the electronic device may use the sent video as the video to be processed.
In this embodiment of the present application, the plurality of target video frame images may be understood as all video frame images included in the video to be processed, and may also be understood as part of video frame images included in the video to be processed.
S220: an image to be processed is acquired based on a plurality of target video frame images.
In this embodiment, the image to be processed may be understood as an image in which image parameter changes are performed among a plurality of target video frame images.
As one way, a part of the target video frame images among the plurality of target video frame images may be taken as the image to be processed. Alternatively, all of the plurality of target video frame images may be regarded as the image to be processed.
Alternatively, the selected image to be processed may be determined according to the final processing target. For example, in the case where the final processing target is to obtain one video, all target video frame images in the plurality of target video frame images may be regarded as the images to be processed. For example, in the case where the final processing target is to obtain one moving picture, at least two target video frame images among the plurality of target video frame images may be taken as the image to be processed. For another example, in the case where the final processing target is to obtain one picture, one target video frame image of a plurality of target video frame images may be taken as the image to be processed.
S230: and obtaining an intermediate image corresponding to the image to be processed, wherein the intermediate image is an image obtained by removing the target object in the image to be processed.
S240: filling target content into a target area of the intermediate image to obtain a corresponding image to be repaired, wherein the target area is an area where a target object is originally located.
S250: repairing a target area in the image to be repaired based on the image to be repaired and a corresponding mask image to obtain a repaired image, wherein the mask image is used for identifying the area of the target object originally in the image to be processed.
S260: and carrying out image parameter change on the repaired image to obtain the image with the parameter changed.
S270: and adding the target object to the image with the changed parameters based on the mask image and the image to be processed to obtain a corresponding target image.
In one manner, when all of the plurality of target video frame images are taken as images to be processed, the user may want to perform overall style migration on the video to be processed, for example, migrating its style to an ink style or a sketch style. In this case, after a target image is obtained for each of the plurality of target video frame images (the plurality of images to be processed), a plurality of target images are obtained, and the target video can then be generated based on the plurality of target images. Generating the target video based on a plurality of target images can be understood as composing the video frames of the generated target video from the plurality of target images. In this case, playing the target video can be understood as sequentially displaying the plurality of target images at a predetermined frame rate.
As illustrated in fig. 11, the plurality of target video frame images includes target video frame images F1, F2, F3, and so on. In the case where each target video frame image is taken as an image to be processed, steps S230 to S270 may be performed for each target video frame image, so that target images T1, T2, T3, and so on are obtained. Then, the target video may be generated based on the target images T1, T2, T3, and so on.
It should be noted that the plurality of images (that is, the video frames) included in a video are arranged in a certain order, so that when the video is played, they are played in that order. In this case, as one way, the arrangement order of the target images in the target video may be the same as the arrangement order of their corresponding target video frame images in the video to be processed. For example, if the target image T1 is the first frame image in the target video (the image arranged at the first position), the target video frame image F1 corresponding to the target image T1 is also the first frame image in the video to be processed.
Alternatively, the order of arrangement of the target images in the target video may be reversed with respect to the order of arrangement of the target video frame images corresponding to the target images in the video to be processed. For example, the target video frame image F1 is arranged at a first position in the video to be processed, the target image T1 corresponding to the target video frame image F1 may be arranged at a last position in the target video, the target video frame image F2 is arranged at a second position in the video to be processed, the target image T2 corresponding to the target video frame image F2 may be arranged at a penultimate position in the target video, and so on.
In this embodiment of the present application, whether the arrangement order of the target images in the target video is the same as, or opposite to, the arrangement order of the corresponding target video frame images in the video to be processed may be determined by the user of the device according to his own needs.
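The two ordering options described above amount to a simple mapping from the target images to their positions in the target video; as a sketch:

```python
def order_target_frames(target_images, reverse=False):
    # Keep the same order as in the video to be processed, or the
    # reverse of it, depending on the user's choice.
    frames = list(target_images)
    return frames[::-1] if reverse else frames
```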
As one way, in the case where only some of the plurality of target video frame images are taken as images to be processed, the user may want to generate a moving picture from the video to be processed, for example, a moving picture in Gif format. As illustrated in fig. 12, the plurality of target video frame images includes target video frame images F1, F2, F3, and so on. The target video frame images F1, F2, and F3 may be selected as images to be processed, so that target images T1, T2, and T3 are obtained, and a moving picture may then be generated based on the target images T1, T2, and T3.
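Assembling the processed target images into a Gif-format moving picture can be sketched with the Pillow library (the choice of Pillow is an assumption of this sketch; any image library with animated-GIF support would do):

```python
from PIL import Image

def make_gif(frames, path, duration_ms=100):
    # frames: a list of PIL.Image objects (the target images T1, T2, T3, ...).
    # The first frame carries the rest via append_images; loop=0 repeats forever.
    first, *rest = frames
    first.save(path, save_all=True, append_images=rest,
               duration=duration_ms, loop=0)
```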
After the target image is obtained, if no target video or moving picture is to be generated based on it, the target image can be output directly. Outputting the target image may be understood as storing it, displaying it, or transmitting it to another device. Likewise, when a target video or moving picture is obtained, it may be output, i.e., stored, displayed, or shared with other devices.
According to the image processing method provided by this embodiment, the image from which the target object has been removed (the intermediate image) can be repaired by means of the image repairing operation, so that the target area left by the target object is filled in and jagged edges and noise are eliminated, making the resulting repaired image more complete and smoother. Further, after the image parameters are changed based on the repaired image (for example, the image style parameters are changed) to obtain the parameter-changed image, the target object is added to the parameter-changed image, and the corresponding target image obtained in this way is more natural and harmonious. In addition, in this embodiment, the images to be processed may be acquired from a plurality of target video frame images; in this case, after a corresponding target image is obtained for each image to be processed, the target video may be obtained by combining the corresponding target images. This applies the image processing method provided in the embodiment of the present application to the field of video processing, realizes the design of an algorithm scheme and application presentation for video, and creates an artistic video scheme of "people in the midst of the picture".
Referring to fig. 13, an image processing method provided in an embodiment of the present application includes:
S310: and obtaining an intermediate image corresponding to the image to be processed, wherein the intermediate image is an image obtained by removing the target object in the image to be processed.
S320: and obtaining target content based on the image to be processed.
As one way, obtaining the target content based on the image to be processed may include: acquiring a color value of each pixel in the image to be processed, obtaining a first color mean value based on the color values of the pixels, and obtaining the target content based on the first color mean value. Obtaining the target content based on the first color mean value can be understood as setting the color value of each pixel in the target content to the first color mean value.
Alternatively, a partial region in the image to be processed may be acquired, then a color value of each pixel in the partial region may be acquired, and then a second color average value may be obtained based on the color value of each pixel in the partial region, and the target content may be obtained based on the second color average value.
It should be noted that, in the embodiment of the present application, the manner of obtaining the target content may be determined based on the content complexity of the image to be processed. Content complexity may be understood as the complexity of the image content in the image to be processed. The device may determine the content complexity from the kinds of objects included in the image to be processed. Alternatively, the device may determine the content complexity from the number of distinct color values included in the image to be processed.
Alternatively, if it is detected that the content complexity is greater than a complexity threshold, the target content may be determined based on the pixels in a partial region of the image to be processed. If it is detected that the content complexity is not greater than the complexity threshold, the target content may be determined based on the color value of each pixel in the image to be processed.
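The two ways of obtaining the target content can be sketched as follows (the distinct-color count used as the content-complexity measure and the top-left-quarter sub-region are both hypothetical choices for illustration):

```python
import numpy as np

def mean_fill_color(image, region=None):
    # Mean color over the whole image, or over a (y0, y1, x0, x1) sub-region.
    if region is not None:
        y0, y1, x0, x1 = region
        image = image[y0:y1, x0:x1]
    return image.reshape(-1, image.shape[-1]).mean(axis=0)

def choose_fill_color(image, complexity_threshold=64):
    # Hypothetical complexity measure: the number of distinct colors.
    colors = np.unique(image.reshape(-1, image.shape[-1]), axis=0)
    if len(colors) > complexity_threshold:
        # Complex content: average only a sub-region (here, the top-left quarter).
        h, w = image.shape[:2]
        return mean_fill_color(image, (0, h // 2, 0, w // 2))
    return mean_fill_color(image)
```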
S330: filling target content into a target area of the intermediate image to obtain a corresponding image to be repaired, wherein the target area is an area where a target object is originally located.
S340: repairing a target area in the image to be repaired based on the image to be repaired and a corresponding mask image to obtain a repaired image, wherein the mask image is used for identifying the area of the target object originally in the image to be processed.
S350: and carrying out image parameter change on the repaired image to obtain the image with the parameter changed.
S360: and adding the target object to the image with the changed parameters based on the mask image and the image to be processed to obtain a corresponding target image.
As one way, performing an image parameter change on the restored image to obtain a parameter-changed image, including: adding background elements into the repaired image to obtain an element added image; and carrying out image parameter change on the image of the added element to obtain an image with changed parameters.
In the embodiment of the present application, adding the background element to the repaired image may be understood as overlaying a map (sticker) on the repaired image, so that the display content of the parameter-changed image (or the finally obtained target image) can be more diversified.
In the embodiment of the present application, as one way, the background element added to the repaired image may be selected by the user. Alternatively, the background element to be added may be selected by the user when the image to be processed or the video to be processed is determined. Then, in the process of executing the processing method provided by the embodiment of the present application on the image or video to be processed, when the repaired image is obtained, the background element selected by the user can be added to it. Because the background element is added before the parameters are changed, the added background element undergoes the image parameter change together with the rest of the image, so that the styles of the image contents remain consistent.
Alternatively, the added background element may be determined adaptively by the device. Optionally, the device may determine the added background element from the image content in the image to be processed. In this way, the device can analyze the image content originally included in the image to be processed and use an element adapted to that content as the background element to be added.
Alternatively, the device may first determine the subject matter of the image to be processed. The theme may be scenery, figures, animals, food, etc. Then, a map related to the theme may be selected as a background element for the augmentation. For example, if the subject of the image to be processed is a landscape, a map of the landscape class may be selected as the background element for the addition; if the subject of the image to be processed is food, a map of the food class may be selected as the background element for the augmentation.
Alternatively, the device may determine the style of the image to be processed first, and then select a map adapted to the style of the image to be processed as the background element for the addition. For example, if the image to be processed is a conciseness style, a succinct, clean map may be selected as the background element for augmentation; if the image to be processed is a lovely style, a lovely and interesting map may be selected as the background element for the augmentation.
In the embodiment of the present application, the size and position of the background element to be added may be determined according to the composition and aesthetics of the image to be processed, so that the added background element blends into the image to be processed and the overall image is more harmonious.
Optionally, when adding the background element in the repaired image, the transparency and the size of the background element can be adjusted according to the requirement.
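Adding a background element with adjustable transparency and position can be sketched as a simple alpha blend (a minimal hypothetical helper; real map assets would typically also carry their own per-pixel alpha channel):

```python
import numpy as np

def add_background_element(image, element, alpha, top_left):
    # Blend `element` onto `image` at `top_left` with scalar opacity `alpha`
    # (0.0 = fully transparent, 1.0 = fully opaque).
    y, x = top_left
    h, w = element.shape[:2]
    out = image.copy()
    roi = out[y:y + h, x:x + w].astype(np.float64)
    blended = alpha * element.astype(np.float64) + (1.0 - alpha) * roi
    out[y:y + h, x:x + w] = blended.astype(np.uint8)
    return out
```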
According to the image processing method provided by this embodiment, the image from which the target object has been removed (the intermediate image) can be repaired by means of the image repairing operation, so that the target area left by the target object is filled in and jagged edges and noise are eliminated, making the resulting repaired image more complete and smoother. Further, after the image parameters are changed based on the repaired image (for example, the image style parameters are changed) to obtain the parameter-changed image, the target object is added to the parameter-changed image, and the corresponding target image obtained in this way is more natural and harmonious. Moreover, in the present embodiment, the target content to be filled in may be obtained based on the image to be processed, so that the determined target content better matches the image to be processed.
Referring to fig. 14, an image processing apparatus 400 provided in an embodiment of the present application, the apparatus 400 includes:
the intermediate image obtaining unit 410 is configured to obtain an intermediate image corresponding to the image to be processed, where the intermediate image is an image obtained by removing the target object in the image to be processed.
The content filling unit 420 is configured to fill a target area of the intermediate image with target content to obtain a corresponding image to be repaired, where the target area is an area where the target object is originally located.
The image restoration unit 430 is configured to restore the target area in the image to be restored based on the image to be restored and the corresponding mask image, so as to obtain a restored image, where the mask image is used to identify the area of the target object originally in the image to be processed.
The image parameter changing unit 440 is configured to perform image parameter changing on the restored image to obtain a parameter-changed image.
An image synthesis unit 450, configured to add the target object to the image after the parameter change based on the mask image and the image to be processed, so as to obtain a corresponding target image.
As one way, the intermediate image obtaining unit 410 is further configured to obtain a plurality of target video frame images included in the video to be processed; an image to be processed is acquired based on a plurality of target video frame images.
Optionally, the intermediate image obtaining unit 410 is further specifically configured to use each of the plurality of target video frame images as the image to be processed. The image synthesis unit 450 is further configured to obtain a target video based on the target images corresponding to the plurality of target video frame images.
As one way, the image restoration unit 430 is specifically configured to obtain the target content based on the image to be processed, and fill the target content into the target area of the intermediate image to obtain the corresponding image to be repaired. Optionally, the image restoration unit 430 is specifically configured to obtain a color value of each pixel in the image to be processed, obtain a first color mean value based on the color values of the pixels, and obtain the target content based on the first color mean value.
As one way, the image repairing unit 430 is specifically configured to enlarge a mask area in the mask image, so as to obtain a mask image with the mask area enlarged; and repairing the target area in the image to be repaired based on the image to be repaired and the mask image of the enlarged mask area so as to obtain a repaired image. Optionally, the image restoration unit 430 is specifically configured to increase the mask area in the mask image based on the convolution kernel to obtain a mask image with the mask area increased, where the size of the convolution kernel is determined based on the size of the image to be processed.
As one way, the image synthesis unit 450 is specifically configured to change the position of the mask region in the mask image, so as to obtain a mask image with the position of the mask region changed; and adding the target object to the image with the changed parameters based on the mask image with the changed mask area position and the image to be processed so as to obtain a corresponding target image.
As one way, the image parameter changing unit 440 is specifically configured to add a background element to the restored image, so as to obtain an image with the added element; and carrying out image parameter change on the image of the added element to obtain an image with changed parameters.
According to the image processing apparatus provided by this embodiment, the image from which the target object has been removed (the intermediate image) can be repaired so as to fill in the target area left by the removal of the target object and eliminate jagged edges and noise, making the resulting repaired image more complete and smoother. Further, after the image parameters are changed based on the repaired image (for example, the image style parameters are changed) to obtain the parameter-changed image, the target object is added to the parameter-changed image, and the corresponding target image obtained in this way is more natural and harmonious.
It should be noted that, in the present application, the device embodiment and the foregoing method embodiment correspond to each other, and specific principles in the device embodiment may refer to the content in the foregoing method embodiment, which is not described herein again.
An electronic device provided in the present application will be described with reference to fig. 15.
Referring to fig. 15, based on the above-mentioned image processing method and apparatus, an electronic device 1000 capable of executing the above-mentioned image processing method is further provided in the embodiments of the present application. The electronic device 1000 comprises one or more (only one is shown in the figure) processors 105, a memory 104, an audio playback module 106 and an audio acquisition means 108 coupled to each other. The memory 104 stores therein a program capable of executing the contents of the foregoing embodiments, and the processor 105 can execute the program stored in the memory 104.
The processor 105 may include one or more processing cores. The processor 105 uses various interfaces and lines to connect the various parts of the electronic device 1000, and performs the various functions of the electronic device 1000 and processes data by executing or running instructions, programs, code sets, or instruction sets stored in the memory 104 and invoking data stored in the memory 104. Alternatively, the processor 105 may be implemented in hardware in at least one of digital signal processing (Digital Signal Processing, DSP), field-programmable gate array (Field-Programmable Gate Array, FPGA), and programmable logic array (Programmable Logic Array, PLA). The processor 105 may integrate one or a combination of a central processing unit (Central Processing Unit, CPU), a graphics processing unit (Graphics Processing Unit, GPU), a modem, and the like. The CPU mainly handles the operating system, the user interface, application programs, and the like; the GPU is responsible for rendering and drawing display content; and the modem handles wireless communication. It will be appreciated that the modem may also not be integrated into the processor 105 and instead be implemented by a separate communication chip.
The memory 104 may include random access memory (Random Access Memory, RAM) or read-only memory (Read-Only Memory, ROM). The memory 104 may be used to store instructions, programs, code sets, or instruction sets. The memory 104 may include a program storage area and a data storage area, wherein the program storage area may store instructions for implementing an operating system, instructions for implementing at least one function (e.g., a touch function, a sound playing function, an image playing function, etc.), instructions for implementing the various method embodiments described above, and the like.
Further, the electronic device 1000 may include a network module 110 and a sensor module 112 in addition to the devices shown above.
The network module 110 is configured to implement information interaction between the electronic device 1000 and other devices; for example, it may establish a connection with other audio playing devices or other electronic devices and exchange information over the established connection. In one implementation, the network module 110 of the electronic device 1000 is a radio frequency module configured to receive and transmit electromagnetic waves and to convert between electromagnetic waves and electrical signals, so as to communicate with a communication network or other devices. The radio frequency module may include various existing circuit elements for performing these functions, such as an antenna, a radio frequency transceiver, a digital signal processor, an encryption/decryption chip, a subscriber identity module (SIM) card, a memory, and the like. For example, the radio frequency module may interact with external devices through the electromagnetic waves it transmits or receives.
The sensor module 112 may include at least one sensor. Specifically, the sensor module 112 may include, but is not limited to: pressure sensors, motion sensors, acceleration sensors, and other sensors.
The pressure sensor may detect pressure generated by pressing against the electronic device 1000, that is, pressure generated by contact or pressing between the user and the electronic device 1000, for example, between the user's ear and the electronic device 1000. Thus, the pressure sensor may be used to determine whether contact or pressing has occurred between the user and the electronic device 1000, as well as the magnitude of that pressure.
The acceleration sensor may detect acceleration in each direction (typically, three axes), and may detect the magnitude and direction of gravity when stationary. It may be used in applications that recognize the posture of the electronic device 1000 (such as landscape/portrait switching, related games, and magnetometer posture calibration), in vibration recognition related functions (such as a pedometer and tap detection), and so on. In addition, the electronic device 1000 may further be configured with other sensors such as a gyroscope, a barometer, a hygrometer, and a thermometer, which are not described here.
The audio acquisition device 108 is used for acquiring audio signals. Optionally, the audio acquisition device 108 includes a plurality of acquisition elements, which may be microphones.
Referring to fig. 16, a block diagram of a computer readable storage medium according to an embodiment of the present application is shown. The computer readable storage medium 800 stores program code which can be invoked by a processor to perform the methods described in the foregoing method embodiments.
The computer readable storage medium 800 may be an electronic memory such as a flash memory, an EEPROM (electrically erasable programmable read-only memory), an EPROM, a hard disk, or a ROM. Optionally, the computer readable storage medium 800 comprises a non-transitory computer-readable storage medium. The computer readable storage medium 800 has storage space for program code 810 that performs any of the method steps described above. The program code may be embodied in one or more computer program products. The program code 810 may, for example, be compressed in a suitable form.
In summary, according to the image processing method, the image processing device, and the electronic device provided by the present application, an image obtained by removing a target object from an image to be processed is acquired as an intermediate image, and target content is then filled into a target area of the intermediate image to obtain a corresponding image to be repaired. Next, the target area in the image to be repaired is repaired based on the image to be repaired and the corresponding mask image, to obtain a repaired image. The image parameters of the repaired image are then changed to obtain a parameter-changed image, and the target object is added to the parameter-changed image based on the mask image and the image to be processed, to obtain a corresponding target image. In this way, the image restoration operation can be used to restore the image from which the target object has been removed (the intermediate image), so that the target area left by the removal of the target object is filled and jagged edges and noise are eliminated, making the repaired image more complete and smoother. Further, when the image parameters are changed based on the repaired image (for example, when image style parameters are changed) to obtain the parameter-changed image, the target object added to the parameter-changed image blends more naturally and harmoniously into the corresponding target image.
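As a purely illustrative sketch (not the patent's actual implementation), the five steps summarized above can be strung together in NumPy. All function names are ours; a simple box blur stands in for the real restoration model, and the `restyle` callback stands in for the image parameter change (e.g., style migration):

```python
import numpy as np

def box_blur(img, k=5):
    # Naive k x k box blur used below as a stand-in for a repair model.
    pad = k // 2
    padded = np.pad(img, ((pad, pad), (pad, pad), (0, 0)), mode="edge")
    out = np.zeros(img.shape, dtype=np.float64)
    for dy in range(k):
        for dx in range(k):
            out += padded[dy:dy + img.shape[0], dx:dx + img.shape[1]]
    return out / (k * k)

def remove_and_restyle(image, mask, restyle):
    """Hypothetical sketch of the summarized pipeline:
    1) remove the masked target object, 2) fill the hole with the image's
    mean colour, 3) 'repair' the hole (here: a blur as a stand-in for a
    real inpainting model), 4) change image parameters via `restyle`,
    5) paste the original target object back through the mask."""
    mask3 = mask[..., None].astype(bool)              # HxW -> HxWx1
    # Steps 1-2: image to be repaired, hole filled with the mean colour
    mean_colour = image.reshape(-1, 3).mean(axis=0)
    to_repair = np.where(mask3, mean_colour, image)
    # Step 3: crude repair of the target area only
    repaired = np.where(mask3, box_blur(to_repair), to_repair)
    # Step 4: image parameter change (style migration in the patent)
    changed = restyle(repaired)
    # Step 5: add the target object back from the original image
    return np.where(mask3, image, changed)
```

A real system would replace `box_blur` with a learned inpainting model and `restyle` with a style transfer network; the masking logic, however, is the structural point.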
The scheme provided by the embodiments of the present application can be understood as a deep-learning-based image style migration technology that separates foreground from background, so that fine style conversion is performed on the background content while the sharpness and realism of the foreground are preserved. For example, in the video style migration process provided by the present application, a user only needs to input a video (for example, a portrait video) and select one of a plurality of provided styles to generate an artistic video in which a target object (for example, a person) appears immersed in a famous painting. The scheme provided by the embodiments of the present application not only ensures the migration effect of the background style but also handles the edge transition between foreground and background, making the result more natural and smooth. In addition, the scheme runs fast, consumes few computing resources, and adapts to different hardware environments; it runs easily on both computers and mobile devices. The user does not need to make complex settings: simply selecting a favorite style and clicking once to generate yields an impressive video work (for example, a target video). Compared with other video generation applications, the scheme provided by the embodiments of the present application is simpler, easier to use, and closer to users' needs, making it a very user-friendly image style migration technical scheme.
In addition, in the embodiments of the present application, when the image parameters are changed to obtain the parameter-changed image, the hue of the parameter-changed image may differ from that of the image to be processed. If the user does not currently intend to change the hue, this may introduce slight visual discordance. To improve this, the foreground (the target object) and the background (for example, the parameter-changed image) can be coordinated through image color harmonization (Image Harmonization), achieving a better overall picture effect.
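Learned Image Harmonization models are what the paragraph above refers to, but the underlying idea can be illustrated with a much cruder statistic-matching sketch (all names and the `strength` parameter are our assumptions, not the patent's method): shift the foreground's per-channel mean towards the background's.

```python
import numpy as np

def harmonize_foreground(composite, mask, strength=0.5):
    """Crude stand-in for image harmonization: nudge the masked
    foreground's per-channel mean towards the background's mean.
    strength=0 leaves the composite unchanged; 1.0 matches means."""
    fg = mask.astype(bool)
    out = composite.astype(np.float64).copy()
    for c in range(3):
        ch = out[..., c]
        fg_mean = ch[fg].mean()
        bg_mean = ch[~fg].mean()
        ch[fg] += strength * (bg_mean - fg_mean)  # shift foreground only
    return np.clip(out, 0, 255).astype(np.uint8)
```

This only matches first-order colour statistics; a harmonization network additionally adapts local contrast and lighting, which is why the patent suggests it for the foreground/background hue mismatch.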
Also, in the embodiments of the present application, the step of processing the image to be processed to obtain the intermediate image may be performed by an image segmentation model or an image matting model, and the step of changing the image parameters of the repaired image may be performed by an image parameter update model (for example, an image style migration model). In this case, the method provided by the embodiments of the present application can be understood as a method of performing image processing by combining multiple models.
Finally, it should be noted that the above embodiments are only intended to illustrate the technical solutions of the present application, not to limit them. Although the present application has been described in detail with reference to the foregoing embodiments, those of ordinary skill in the art will appreciate that the technical solutions described in the foregoing embodiments may still be modified, or some of their technical features may be replaced by equivalents, and that such modifications and substitutions do not cause the essence of the corresponding technical solutions to depart from the spirit and scope of the technical solutions of the embodiments of the present application.
Claims (12)
1. An image processing method, the method comprising:
obtaining an intermediate image corresponding to an image to be processed, wherein the intermediate image is an image obtained by removing a target object in the image to be processed;
filling target content into a target area of the intermediate image to obtain a corresponding image to be repaired, wherein the target area is an area where the target object is originally located;
repairing a target area in the image to be repaired based on the image to be repaired and a corresponding mask image to obtain a repaired image, wherein the mask image is used for marking an area of the target object originally in the image to be processed;
performing image parameter change on the repaired image to obtain a parameter-changed image;
and adding the target object to the image with the changed parameters based on the mask image and the image to be processed so as to obtain a corresponding target image.
2. The method according to claim 1, wherein before the step of obtaining the intermediate image corresponding to the image to be processed, the method further comprises:
acquiring a plurality of target video frame images included in a video to be processed;
And acquiring an image to be processed based on the plurality of target video frame images.
3. The method of claim 2, wherein the acquiring the image to be processed based on the plurality of target video frame images comprises:
taking the target video frame images as images to be processed;
wherein after the step of adding the target object to the parameter-changed image based on the mask image and the image to be processed to obtain a corresponding target image, the method further comprises:
and obtaining a target video based on the target images corresponding to the target video frame images.
4. The method according to claim 1, wherein said filling the target area of the intermediate image with target content to obtain a corresponding image to be repaired comprises:
obtaining target content based on the image to be processed;
and filling target contents into the target area of the intermediate image to obtain a corresponding image to be repaired.
5. The method of claim 4, wherein the obtaining target content based on the image to be processed comprises:
acquiring a color value of each pixel in the image to be processed;
determining a first color mean value based on the color value of each pixel;
And obtaining target content based on the first color mean value.
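For illustration only (not part of the claims), the mean-colour fill of claims 4-5 could be sketched as follows; `fill_with_mean` and its signature are our own naming, not the patent's:

```python
import numpy as np

def fill_with_mean(image_to_process, mask, intermediate=None):
    """Compute the per-channel colour mean of the image to be processed
    (claim 5's 'first color mean value') and use it as the target
    content for the masked target area of the intermediate image."""
    if intermediate is None:
        intermediate = image_to_process
    mean = image_to_process.reshape(-1, image_to_process.shape[-1]).mean(axis=0)
    # Fill the hole (mask == 1) with the mean colour, keep the rest
    return np.where(mask[..., None].astype(bool), mean, intermediate)
```

Filling with a global mean gives the subsequent repair step a neutral starting point instead of the hard black hole left by object removal.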
6. The method according to claim 1, wherein repairing the target area in the image to be repaired based on the image to be repaired and the corresponding mask image to obtain a repaired image comprises:
enlarging a mask region in the mask image to obtain a mask image with an enlarged mask region;
and repairing the target area in the image to be repaired based on the image to be repaired and the mask image with the enlarged mask region, so as to obtain a repaired image.
7. The method of claim 6, wherein the enlarging the mask area in the mask image to obtain the mask image with the enlarged mask area comprises:
and enlarging a mask area in the mask image based on a convolution kernel to obtain a mask image with an enlarged mask area, wherein the size of the convolution kernel is determined based on the size of the image to be processed.
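For illustration only, the mask enlargement of claims 6-7 corresponds to morphological dilation. In the sketch below the rule "kernel is about 1% of the image's short side" is our assumption; the claims only say the kernel size is determined from the image size:

```python
import numpy as np

def dilate_mask(mask, image_shape):
    """Enlarge a binary mask by dilating it with a square kernel whose
    side scales with the image size (assumed here: ~1% of the short
    side, forced odd and at least 3)."""
    k = max(3, int(0.01 * min(image_shape[:2])) | 1)  # odd, >= 3
    pad = k // 2
    padded = np.pad(mask, pad, mode="constant")
    h, w = mask.shape
    out = np.zeros_like(mask)
    # A pixel is set if any pixel under the k x k window is set
    for dy in range(k):
        for dx in range(k):
            out = np.maximum(out, padded[dy:dy + h, dx:dx + w])
    return out
```

Enlarging the mask before repair makes the inpainting region overlap the object's former boundary, which is what removes the jagged edges and noise mentioned in the summary.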
8. The method according to claim 1, wherein adding the target object to the parameter-changed image based on the mask image and the image to be processed to obtain a corresponding target image comprises:
Changing the position of a mask region in the mask image to obtain a mask image with the position of the mask region changed;
and adding the target object to the image with the changed parameters based on the mask image with the changed mask area position and the image to be processed so as to obtain a corresponding target image.
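As a non-authoritative sketch of claim 8, moving the mask region and then pasting the target object at the new position can be expressed with a simple shift-and-composite; the `dy`/`dx` offsets and all names are hypothetical:

```python
import numpy as np

def composite_with_shifted_mask(changed, original, mask, dy=0, dx=0):
    """Move the mask region by (dy, dx), then add the target object
    from the original image at the shifted position on top of the
    parameter-changed image. np.roll is a toy stand-in for a proper
    translation with boundary handling."""
    shifted_mask = np.roll(np.roll(mask, dy, axis=0), dx, axis=1)
    shifted_src = np.roll(np.roll(original, dy, axis=0), dx, axis=1)
    m = shifted_mask[..., None].astype(bool)
    return np.where(m, shifted_src, changed)
```

With `dy = dx = 0` this reduces to the plain compositing step of claim 1: mask pixels come from the original image, all others from the parameter-changed image.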
9. The method of claim 1, wherein said performing an image parameter change on said repaired image to obtain a parameter-changed image comprises:
adding background elements into the repaired image to obtain an element added image;
and carrying out image parameter change on the image of the added element to obtain an image with changed parameters.
10. An image processing apparatus, characterized in that the apparatus comprises:
the device comprises an intermediate image acquisition unit, a processing unit and a processing unit, wherein the intermediate image acquisition unit is used for acquiring an intermediate image corresponding to an image to be processed, and the intermediate image is an image obtained by removing a target object in the image to be processed;
the content filling unit is used for filling target content into a target area of the intermediate image to obtain a corresponding image to be repaired, wherein the target area is an area where the target object is originally located;
The image restoration unit is used for restoring the target area in the image to be restored based on the image to be restored and the corresponding mask image so as to obtain a restored image, wherein the mask image is used for identifying the area of the target object originally in the image to be processed;
an image parameter changing unit, configured to perform image parameter changing on the repaired image, so as to obtain a parameter-changed image;
and the image synthesis unit is used for adding the target object to the image with the changed parameters based on the mask image and the image to be processed so as to obtain a corresponding target image.
11. An electronic device comprising a processor and a memory; one or more programs are stored in the memory and configured to be executed by the processor to implement the method of any of claims 1-9.
12. A computer readable storage medium, characterized in that the computer readable storage medium has stored therein a program code, wherein the program code, when being executed by a processor, performs the method of any of claims 1-9.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202311853084.9A CN117808668A (en) | 2023-12-28 | 2023-12-28 | Image processing method and device and electronic equipment |
Publications (1)
Publication Number | Publication Date |
---|---|
CN117808668A true CN117808668A (en) | 2024-04-02 |
Family
ID=90419723
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202311853084.9A Pending CN117808668A (en) | 2023-12-28 | 2023-12-28 | Image processing method and device and electronic equipment |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN117808668A (en) |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||