CN111292227A - Image processing method and device - Google Patents

Image processing method and device

Info

Publication number
CN111292227A
CN111292227A (application CN201811497920.3A)
Authority
CN
China
Prior art keywords
image
image processing
target object
template
acquiring
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN201811497920.3A
Other languages
Chinese (zh)
Inventor
刘高 (Liu Gao)
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Beijing ByteDance Network Technology Co Ltd
Original Assignee
Beijing ByteDance Network Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Beijing ByteDance Network Technology Co Ltd filed Critical Beijing ByteDance Network Technology Co Ltd
Priority to CN201811497920.3A
Publication of CN111292227A
Legal status: Pending

Classifications

    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T3/00 - Geometric image transformations in the plane of the image
    • G06T3/04 - Context-preserving transformations, e.g. by using an importance map

Landscapes

  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Image Analysis (AREA)

Abstract

The present disclosure provides an image processing method and apparatus, an electronic device, and a computer-readable storage medium. The image processing method includes: receiving an image processing configuration instruction and configuring image processing parameters according to the instruction; acquiring a first image; identifying a target object in the first image; acquiring a template image; and blending the target object with the template image according to the image processing parameters to generate a processed image. By making the blending of the target object and the template image configurable through image processing parameters, the disclosed embodiments address the prior-art problems of a single, fixed processing mode and inflexible modification of processing effects.

Description

Image processing method and device
Technical Field
The present disclosure relates to the field of image processing, and in particular, to an image processing method and apparatus, an electronic device, and a computer-readable storage medium.
Background
With the development of computer technology, intelligent terminals have found an ever-wider range of applications: they can play music and games, support online chat, take photographs, and so on. As for photographing, the camera resolution of intelligent terminals now exceeds ten million pixels, offering high definition and a photographing effect comparable to that of a professional camera.
At present, when photographing with an intelligent terminal, a user can not only obtain conventional photographing effects using the camera software built in at the factory, but can also download application programs (APPs) from the network to obtain additional effects, for example APPs providing dark-light detection, beauty cameras, super-resolution, and the like. Special effects such as skin smoothing, filters, eye enlargement, and face slimming can all be composed from combinations of basic image-processing operations.
Existing image special effects are generally preset: when an effect is used, the preset effect is loaded onto the image to be processed. Each effect must be produced in advance, and modifying an effect is inflexible.
Disclosure of Invention
In a first aspect, an embodiment of the present disclosure provides an image processing method, including: receiving an image processing configuration instruction, and configuring image processing parameters according to the configuration instruction; acquiring a first image; identifying a target object in the first image; acquiring a template image; and mixing the target object with the template image according to the image processing parameters to generate a processed image.
Further, the receiving an image processing configuration instruction, and configuring image processing parameters according to the configuration instruction, includes: receiving an image processing configuration instruction, and configuring one or more of the type of the target object, the acquisition trigger condition of the template image, the acquisition address of the template image, the trigger condition of image mixing and the mode of image mixing according to the configuration instruction.
Further, the acquiring the first image includes: acquiring a video image, and taking a current video image frame of the video image as the first image.
Further, the acquiring the template image includes: acquiring an acquisition address of the template image according to a first parameter in the image processing parameters; and acquiring a template image according to the acquisition address.
Further, the acquiring the template image includes: acquiring an acquisition address of the template image according to a first parameter in the image processing parameters; acquiring an acquisition triggering condition of the template image according to a second parameter in the image processing parameters; and acquiring the template image from the acquisition address in response to the acquisition trigger condition being triggered.
Further, the identifying the target object in the first image includes: acquiring preset target object characteristic points; and identifying the target object in the first image according to the characteristic points.
Further, the acquiring of the preset target object feature point includes: and acquiring the type of a preset target object and the characteristic point corresponding to the target object according to a third parameter in the image processing parameters.
Further, the mixing the target object with the template image according to the image processing parameter to generate a processed image includes: acquiring a triggering condition of the image mixing according to a fourth parameter in the image processing parameters; and in response to the image mixing triggering condition being triggered, mixing the target object with the template image to generate a processed image.
Further, the mixing the target object with the template image according to the image processing parameter to generate a processed image includes: acquiring an image mixing mode according to a fifth parameter in the image processing parameters; and mixing the target object with the template image according to the mixing mode to generate a processed image.
Further, the mixing the target object with the template image to generate a processed image includes: randomly blending a portion of the pixels of the target object with the template image; and repeating the random blending step to complete, within a preset time, the blending of the target object with the template image, generating a processed image.
In a second aspect, an embodiment of the present disclosure provides an image processing apparatus, including:
the configuration module is used for receiving an image processing configuration instruction and configuring image processing parameters according to the configuration instruction;
the first image acquisition module is used for acquiring a first image;
a target object identification module for identifying a target object in the first image;
the template image acquisition module is used for acquiring a template image;
and the image mixing module is used for mixing the target object with the template image according to the image processing parameters to generate a processed image.
Further, the configuration module is further configured to: receiving an image processing configuration instruction, and configuring one or more of the type of the target object, the acquisition trigger condition of the template image, the acquisition address of the template image, the trigger condition of image mixing and the mode of image mixing according to the configuration instruction.
Further, the first image obtaining module is further configured to:
the method comprises the steps of obtaining a video image, and taking a current video image frame of the video image as a first image.
Further, the template image obtaining module is further configured to:
acquiring an acquisition address of the template image according to a first parameter in the image processing parameters;
and acquiring a template image according to the acquisition address.
Further, the template image obtaining module further includes:
the address acquisition module is used for acquiring an acquisition address of the template image according to a first parameter in the image processing parameters;
the trigger condition acquisition module is used for acquiring the trigger condition of the template image according to a second parameter in the image processing parameters;
and the template image acquisition sub-module is used for responding to the acquisition triggering condition to be triggered and acquiring the template image from the acquisition address.
Further, the target object identification module further includes:
the characteristic point acquisition module is used for acquiring preset target object characteristic points;
and the identification submodule is used for identifying the target object in the first image according to the characteristic points.
Further, the feature point obtaining module is further configured to:
and acquiring the type of a preset target object and the characteristic point corresponding to the target object according to a third parameter in the image processing parameters.
Further, the image blending module is further configured to:
acquiring a triggering condition of the image mixing according to a fourth parameter in the image processing parameters;
and in response to the image mixing triggering condition being triggered, mixing the target object with the template image to generate a processed image.
Further, the image blending module is further configured to:
acquiring an image mixing mode according to a fifth parameter in the image processing parameters;
and mixing the target object with the template image according to the mixing mode to generate a processed image.
Further, the image blending module is further configured to:
randomly blending a portion of the pixels of the target object with the template image;
and repeating the random blending step to complete, within a preset time, the blending of the target object with the template image, generating a processed image.
In a third aspect, an embodiment of the present disclosure provides an electronic device, including: at least one processor; and
a memory communicatively coupled to the at least one processor; wherein the memory stores instructions executable by the at least one processor to enable the at least one processor to perform the image processing method of any of the preceding first aspects.
In a fourth aspect, the present disclosure provides a non-transitory computer-readable storage medium, which stores computer instructions for causing a computer to execute the image processing method according to any one of the foregoing first aspects.
The foregoing is a summary of the present disclosure. So that the technical means of the disclosure may be clearly understood, it should be noted that the disclosure may be embodied in other specific forms without departing from its spirit or essential attributes.
Drawings
To illustrate the embodiments of the present disclosure or the technical solutions in the prior art more clearly, the drawings needed in the description of the embodiments are briefly introduced below. The drawings described below depict only some embodiments of the present disclosure; those skilled in the art can derive other drawings from them without creative effort.
Fig. 1 is a flowchart of an embodiment of an image processing method provided in an embodiment of the present disclosure;
fig. 2 is a flowchart of an embodiment of step S104 in an embodiment of an image processing method provided in the present disclosure;
fig. 3 is a schematic structural diagram of an embodiment of an image processing apparatus according to an embodiment of the present disclosure;
fig. 4 is a schematic structural diagram of an embodiment of a template image obtaining module in an embodiment of an image processing apparatus provided in an embodiment of the present disclosure;
fig. 5 is a schematic structural diagram of an electronic device provided according to an embodiment of the present disclosure.
Detailed Description
The embodiments of the present disclosure are described below through specific examples, and other advantages and effects of the disclosure will be readily apparent to those skilled in the art from this specification. The described embodiments are merely some, not all, of the embodiments of the disclosure. The disclosure may be embodied or carried out in various other specific embodiments, and various modifications and changes may be made in the details herein without departing from the spirit of the disclosure. The features of the following embodiments and examples may be combined with each other in the absence of conflict. All other embodiments obtained by a person skilled in the art from the disclosed embodiments without creative effort fall within the protection scope of the present disclosure.
It is noted that various aspects of the embodiments are described below within the scope of the appended claims. It should be apparent that the aspects described herein may be embodied in a wide variety of forms and that any specific structure and/or function described herein is merely illustrative. Based on the disclosure, one skilled in the art should appreciate that one aspect described herein may be implemented independently of any other aspects and that two or more of these aspects may be combined in various ways. For example, an apparatus may be implemented and/or a method practiced using any number of the aspects set forth herein. Additionally, such an apparatus may be implemented and/or such a method may be practiced using other structure and/or functionality in addition to one or more of the aspects set forth herein.
It should be noted that the drawings provided in the following embodiments are only for illustrating the basic idea of the present disclosure, and the drawings only show the components related to the present disclosure rather than the number, shape and size of the components in actual implementation, and the type, amount and ratio of the components in actual implementation may be changed arbitrarily, and the layout of the components may be more complicated.
In addition, in the following description, specific details are provided to facilitate a thorough understanding of the examples. However, it will be understood by those skilled in the art that the aspects may be practiced without these specific details.
Fig. 1 is a flowchart of an embodiment of an image processing method provided in an embodiment of the present disclosure, where the image processing method provided in this embodiment may be executed by an image processing apparatus, the image processing apparatus may be implemented as software, or implemented as a combination of software and hardware, and the image processing apparatus may be integrated in a certain device in an image processing system, such as an image processing server or an image processing terminal device. As shown in fig. 1, the method comprises the steps of:
step S101, receiving an image processing configuration instruction, and configuring parameters of image processing according to the configuration instruction;
In this embodiment, the image processing system may receive the image processing configuration instruction through a human-machine interface or a configuration file; the human-machine interface may include buttons, selection fields, input fields, and the like, which are not described again here. The parameters of the image processing are the image processing parameters related to the image processing mode in the image processing method.
The receiving an image processing configuration instruction, and configuring image processing parameters according to the configuration instruction, includes: receiving an image processing configuration instruction, and configuring one or more of the type of the target object, the acquisition trigger condition of the template image, the acquisition address of the template image, the trigger condition of image mixing and the mode of image mixing according to the configuration instruction.
It is to be understood that the above-mentioned image processing parameters are only examples and do not constitute a limitation to the present disclosure, and practically any image processing parameters may be used in the present disclosure and will not be described herein again.
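As a concrete illustration of such configurable parameters, the sketch below collects the five parameter types enumerated above into one structure. All names and default values here are hypothetical, chosen for illustration only, and are not taken from the patent:

```python
from dataclasses import dataclass

@dataclass
class ImageProcessingParams:
    """Illustrative container for the five configurable parameters named above."""
    target_object_type: str = "human_body"          # type of target object
    template_trigger: str = "mouth_open"            # acquisition trigger condition of the template image
    template_address: str = "assets/template.png"   # acquisition address (local path or URL)
    blend_trigger: str = "always"                   # trigger condition of image blending
    blend_mode: str = "alpha:0.5"                   # mode of image blending

def configure(instruction: dict) -> ImageProcessingParams:
    """Apply a configuration instruction: keys present override the defaults
    (so one or more parameters may be configured); unknown keys are ignored."""
    params = ImageProcessingParams()
    for key, value in instruction.items():
        if hasattr(params, key):
            setattr(params, key, value)
    return params

params = configure({"target_object_type": "face", "blend_mode": "replace"})
```

A configuration instruction arriving from a human-machine interface or a configuration file would, under this sketch, simply be reduced to such a key/value mapping before being applied.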
Step S102: acquiring a first image;
in this embodiment, acquiring the first image may be through an image sensor, which refers to various devices that can capture images, typical image sensors being video cameras, still cameras, etc. In this embodiment, the image sensor may be a camera on the terminal device, such as a front-facing or rear-facing camera on a smart phone, and an image acquired by the camera may be directly displayed on a display screen of the smart phone.
In an embodiment, the acquiring the first image may be acquiring a current image frame of a video currently captured by the terminal device, and since the video is composed of a plurality of image frames, the processing of the image in this embodiment may be processing the image frame of the video.
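The per-frame treatment described above can be sketched as a loop over video frames, each frame serving in turn as the first image. Frames are represented here as nested lists of grayscale values purely for illustration; a real system would pull frames from a camera:

```python
def frames_of(video):
    """Yield frames one by one, as a camera-driven capture loop would."""
    for frame in video:
        yield frame

video = [[[10, 20], [30, 40]],   # frame 0
         [[50, 60], [70, 80]]]   # frame 1

processed = []
for first_image in frames_of(video):
    # Each frame is processed independently, so a per-frame effect
    # applies to the whole video stream.
    processed.append([[p + 1 for p in row] for row in first_image])
```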
Step S103: identifying a target object in the first image;
in this embodiment, the first image includes a target object, and the target object may be any object in the first image, and optionally, the target object is a human body image. In this embodiment, the target object may be a target object to be segmented from a first image, and image segmentation is generally divided into interactive image segmentation and automatic image segmentation, and conventional image processing generally uses interactive image segmentation, which requires human intervention in image segmentation. In the present disclosure, automatic image segmentation is used, and the following description will be given taking human body image segmentation as an example.
Generally, automatic human image segmentation methods fall into the following categories:
(1) Model-based segmentation: a human face is first detected using prior knowledge of faces; a torso model is then used to search for the torso below the face; the position of the lower body is estimated from the segmented torso; and finally the estimated torso and leg/upper-limb regions provide seed points for image segmentation, completing the segmentation of the human body image.
(2) Hierarchical-tree-based segmentation: adjacent body parts are modeled first, and then the whole body posture; different postures of the human body are modeled as the summation of nodes along different paths in a hierarchical detection tree, whose layers correspond to models of adjacent body parts and whose paths correspond to different postures. Detection proceeds downward from the root of the tree, and different postures are segmented along different paths.
(3) Segmentation based on independent component analysis with a reference signal: a face is first detected from prior knowledge of faces, a torso model finds the torso below the face, a reference signal is derived from the detected torso, and independent component analysis with that reference signal makes the torso stand out from the image, completing torso segmentation; the remaining body parts are segmented similarly, completing the segmentation of the whole human body image.
(4) Segmentation based on the expectation-maximization algorithm: the human pose in the image is first estimated with a pictorial structure model to obtain a probability map of the pose, and a final human segmentation image is then obtained by applying an image segmentation method on top of that probability map.
Of course, other human body image segmentation methods may also be used, which are not described in detail here; any image segmentation method may be introduced into the present disclosure to segment the target object from the first image.
In one embodiment, the identifying a target object in the first image comprises: acquiring preset target object feature points; and identifying the target object in the first image according to the feature points. The acquiring of the preset target object feature points includes: acquiring, according to a third parameter in the image processing parameters, a preset target object type and the feature points corresponding to that target object. In this embodiment, the target object may be identified through the target object type in the image processing parameters and the feature points corresponding to that type; the identification may use the model-based human body image segmentation method described above, with the feature points serving as the prior knowledge of the human face, so that the human body image can be identified and segmented accordingly.
It can be understood that the type of the target object and the feature point of the target object may be preset, so that the identification and segmentation of different target objects may be realized through setting, and any object type and identification and segmentation method may be referred to in this disclosure, and will not be described herein again.
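As a minimal sketch of feature-point-driven identification, the hypothetical helper below simply marks the bounding box of the detected feature points (plus a margin) as the target region. A real implementation would use one of the segmentation methods above rather than a box; this only illustrates how preset feature points can yield a target mask:

```python
def bounding_box_mask(feature_points, width, height, margin=1):
    """Rough stand-in for segmentation: mark every pixel inside the
    bounding box of the detected feature points (plus a margin) as target."""
    xs = [x for x, y in feature_points]
    ys = [y for x, y in feature_points]
    x0, x1 = max(min(xs) - margin, 0), min(max(xs) + margin, width - 1)
    y0, y1 = max(min(ys) - margin, 0), min(max(ys) + margin, height - 1)
    return [[1 if x0 <= x <= x1 and y0 <= y <= y1 else 0
             for x in range(width)]
            for y in range(height)]

# Two hypothetical face feature points in an 8x6 image.
mask = bounding_box_mask([(2, 2), (4, 3)], width=8, height=6)
```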
Step S104: acquiring a template image;
in this embodiment, the template image may be a picture, or may be a series of video frames or image frames; when the first image acquired in S102 is a plurality of image frames, the image frames of the template image may correspond to the image frames of the first image one to one.
In one embodiment, the acquiring a template image comprises: acquiring an acquisition address of the template image according to a first parameter in the image processing parameters; and acquiring a template image according to the acquisition address. In this embodiment, the first parameter is an acquisition address of the template image or a parameter related to the acquisition address, a storage address of the template image can be obtained through the first parameter, the storage address may be a local storage address or a network storage address, and the template image is acquired according to the acquisition address.
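A small sketch of resolving the first parameter: the acquisition address is classified as either a network address (to be fetched, e.g., via `urllib.request`) or a local file path. The function name and the two-way classification are assumptions for illustration, not the patent's API:

```python
from urllib.parse import urlparse

def classify_template_address(address: str) -> str:
    """Decide how the template image would be fetched: an http/https URL
    means a network fetch; anything else is treated as a local file path."""
    scheme = urlparse(address).scheme
    return "network" if scheme in ("http", "https") else "local"
```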
Step S105: and mixing the target object with the template image according to the image processing parameters to generate a processed image.
In one embodiment, the mixing the target object with the template image according to the image processing parameters to generate a processed image includes: acquiring a triggering condition of the image blending according to a fourth parameter in the image processing parameters; and, in response to the triggering condition being triggered, blending the target object with the template image to generate the processed image. The trigger condition may include a first trigger condition: when it occurs, the template image is used as a first background image and blended with the second background image in the first image to form a blended background; the target object serves as the foreground image, and the blended background and the foreground are composited into the processed image. This is equivalent to processing the background of the first image without occluding the target object. Optionally, the target object is a human body image and the first image is a video image captured by an image acquisition device; the processing result is then to replace the background of the captured video with the template image or the blended background, with the human body image remaining in the foreground. The trigger condition may further include a second trigger condition: when it occurs, the template image is used as the foreground image and blended with the target object and the background of the first image to generate the processed image, which is equivalent to rendering the target object invisible.
Optionally, with a human body image as the target object and a captured video image as the first image, the processing result is to replace the foreground of the captured video with the template image, or with a blend of the template image and the human body image over the background, presenting a stealth effect for the human body image.
In one embodiment, the mixing the target object with the template image according to the image processing parameters to generate a processed image includes: acquiring an image blending mode according to a fifth parameter in the image processing parameters; and blending the target object with the template image according to that blending mode to generate the processed image. In this embodiment, the blending mode includes a blending intensity of the target object and the template image: the intensity may be the proportion in which the color values of corresponding pixels of the target object and the template image contribute to the blended image, or the pixel values of the blended image may be computed from those color values by a predetermined function. Optionally, the color values of the pixels of the target object and the template image may be mixed in a 1:1 ratio, presenting the effect of the target object being embedded into the template image; optionally, the color values of the pixels of the template image replace the pixel values of the target object, presenting the stealth effect of the target object. It is understood that any blending mode may be applied to the present disclosure and will not be described in detail herein.
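The blending modes just described (proportional mixing and replacement) can be sketched with a per-pixel weighted average, applied only where the target-object mask is set. Pixel values are plain integers and all names are illustrative:

```python
def blend_pixel(target, template, alpha):
    """Weighted mix of two color values; alpha is the share kept from the
    target object (alpha=0.5 gives the 1:1 'embedded' effect; alpha=0 is
    full replacement by the template, i.e. the 'stealth' effect)."""
    return round(alpha * target + (1 - alpha) * template)

def blend_images(target, template, mask, alpha):
    """Blend only where the target-object mask is set; elsewhere keep the
    template (background) pixel unchanged."""
    return [[blend_pixel(t, s, alpha) if m else s
             for t, s, m in zip(trow, srow, mrow)]
            for trow, srow, mrow in zip(target, template, mask)]

target   = [[100, 100], [100, 100]]
template = [[200, 200], [200, 200]]
mask     = [[1, 0], [0, 1]]
mixed = blend_images(target, template, mask, alpha=0.5)  # [[150, 200], [200, 150]]
```

With `alpha=0.0`, every masked pixel takes the template's value, which reproduces the replacement ("stealth") mode on the same code path.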
It is to be understood that the plurality of trigger conditions described in this disclosure may be combined arbitrarily, or may be used individually to achieve different effects, which is not described herein again.
In one embodiment, the blending the target object with the template image to generate a processed image includes: randomly blending a portion of the pixels of the target object with the template image; and repeating the random blending step so that the blending of the target object with the template image completes within a preset time, generating the processed image. In an embodiment where the template image covers the target object, the random blending step changes the target object gradually, presenting the effect that the target object slowly dissolves into the template image or slowly disappears.
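A hedged sketch of this random, stepwise blending: each step blends a batch of randomly chosen, not-yet-blended pixel positions into the template, and any remainder is finished at the end so the effect completes within the preset number of steps. The batching scheme is an assumption for illustration, not the patent's method:

```python
import random

def progressive_blend(target, template, steps, rng=None):
    """Randomly replace batches of target pixels with template pixels over
    `steps` iterations; after the last step the target has fully
    'dissolved' into the template."""
    rng = rng or random.Random(0)   # seeded for reproducibility of the demo
    h, w = len(target), len(target[0])
    out = [row[:] for row in target]
    remaining = [(y, x) for y in range(h) for x in range(w)]
    rng.shuffle(remaining)
    per_step = max(1, len(remaining) // steps)
    for _ in range(steps):
        batch, remaining = remaining[:per_step], remaining[per_step:]
        for y, x in batch:
            out[y][x] = template[y][x]
    # Blend whatever is left so the effect completes within the preset time.
    for y, x in remaining:
        out[y][x] = template[y][x]
    return out

faded = progressive_blend([[0, 0], [0, 0]], [[9, 9], [9, 9]], steps=3)
```

Rendering the intermediate `out` after each step (rather than only the final result) would produce the gradual-disappearance animation described above.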
As shown in fig. 2, an embodiment of acquiring the template image in step S104 in the above embodiment of the image processing method is provided, where the step S104 further includes:
s201: acquiring an acquisition address of the template image according to a first parameter in the image processing parameters;
s202: acquiring an acquisition triggering condition of the template image according to a second parameter in the image processing parameters;
s203: and acquiring the template image from the acquisition address in response to the acquisition trigger condition being triggered.
In this embodiment, the first parameter is the acquisition address of the template image, or a parameter related to that address, from which a storage address of the template image can be obtained; the storage address may be a local storage address or a network storage address. In this embodiment a second parameter is also obtained, which gives the acquisition trigger condition of the template image: the template image is acquired from the acquisition address only when that trigger condition occurs. Optionally, the trigger condition may be an action or state of the target object; for example, when the target object is a human body, recognized facial actions such as opening the mouth, blinking, or smiling, or hand actions such as a leftward or rightward swipe of the palm. The trigger condition thus gates the image processing flow: while it has not occurred, the template image is not acquired, so the subsequent image blending does not take place either, which indirectly controls the blending. Moreover, because the trigger is placed before template acquisition, this saves resources compared with setting a trigger condition directly on the image blending step.
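The gating behaviour described above (fetching the template only once the trigger condition occurs, and not before) can be sketched as follows; event names and the fetch callback are hypothetical:

```python
def gated_acquire(events, trigger, fetch):
    """Fetch the template only when the configured trigger condition
    (second parameter) first occurs; before that, no fetch happens, and
    hence no blending work either, which is what saves resources."""
    fetched = None
    active = []                     # per-event record: is blending active yet?
    for event in events:
        if event == trigger and fetched is None:
            fetched = fetch()
        active.append(fetched is not None)
    return fetched, active

calls = []
def fake_fetch():
    calls.append(1)                 # stands in for loading from the acquisition address
    return "template-image"

template, active = gated_acquire(
    ["idle", "mouth_open", "idle"], trigger="mouth_open", fetch=fake_fetch)
```

Note that the fetch happens exactly once even if the trigger fires again later, matching the idea that the template is acquired in response to the trigger rather than on every frame.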
It is to be understood that the above triggering conditions are only examples and do not constitute a limitation to the present disclosure, and practically any triggering conditions may be applied to the present disclosure, and are not described herein again.
The disclosure discloses an image processing method, an image processing apparatus, an electronic device, and a computer-readable storage medium. The image processing method comprises the following steps: receiving an image processing configuration instruction, and configuring image processing parameters according to the configuration instruction; acquiring a first image; identifying a target object in the first image; acquiring a template image; and mixing the target object with the template image according to the image processing parameters to generate a processed image. By configuring the image processing parameters, the embodiments of the disclosure mix the target object in the image with the template image, solving the technical problems in the prior art of a single image processing mode and inflexible modification of the processing effect.
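The five steps of the method above can be sketched as a minimal pipeline. All function bodies below are placeholder stand-ins chosen for illustration; the disclosure does not prescribe any particular data structures or function names:

```python
def configure(instruction):
    # Step 1: receive an image processing configuration instruction and
    # turn it into a dictionary of image processing parameters.
    return dict(instruction)

def identify_target(first_image, params):
    # Step 3: identify the target object in the first image according to
    # the configured target type (placeholder logic).
    return {"type": params.get("target_type", "face"), "source": first_image}

def blend(target, template, params):
    # Step 5: mix the target object with the template image in the
    # configured blending mode (placeholder logic).
    return {"target": target, "template": template,
            "mode": params.get("blend_mode", "normal")}

def process(instruction, first_image, template_image):
    params = configure(instruction)                # step 1: configure parameters
    target = identify_target(first_image, params)  # steps 2-3: acquire image, identify target
    return blend(target, template_image, params)   # steps 4-5: acquire template, blend

result = process({"target_type": "face", "blend_mode": "alpha"},
                 "current_frame", "template.png")
```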
Although the steps in the above method embodiments are described in the sequence given above, it should be clear to those skilled in the art that the steps of the embodiments of the present disclosure are not necessarily performed in that sequence; they may also be performed in other sequences, such as reversed, in parallel, or interleaved. Moreover, on the basis of the above steps, those skilled in the art may add further steps. These obvious modifications or equivalent substitutions are also included in the protection scope of the present disclosure and are not described again here.
Fig. 3 is a schematic structural diagram of an embodiment of an image processing apparatus according to the present disclosure, and as shown in fig. 3, the apparatus 300 includes: a configuration module 301, a first image acquisition module 302, a target object recognition module 303, a template image acquisition module 304, and an image blending module 305. Wherein,
a configuration module 301, configured to receive an image processing configuration instruction, and configure an image processing parameter according to the configuration instruction;
a first image obtaining module 302, configured to obtain a first image;
a target object identification module 303, configured to identify a target object in the first image;
a template image obtaining module 304, configured to obtain a template image;
an image blending module 305, configured to blend the target object with the template image according to the image processing parameter, so as to generate a processed image.
Further, the configuration module 301 is further configured to: receiving an image processing configuration instruction, and configuring one or more of the type of the target object, the acquisition trigger condition of the template image, the acquisition address of the template image, the trigger condition of image mixing, and the mode of image mixing according to the configuration instruction.
Further, the first image obtaining module 302 is further configured to:
the method comprises the steps of obtaining a video image, and taking a current video image frame of the video image as a first image.
Further, the template image obtaining module 304 is further configured to:
acquiring an acquisition address of the template image according to a first parameter in the image processing parameters;
and acquiring a template image according to the acquisition address.
Further, the target object identifying module 303 further includes:
the characteristic point acquisition module is used for acquiring preset target object characteristic points;
and the identification submodule is used for identifying the target object in the first image according to the characteristic points.
Further, the feature point obtaining module is further configured to:
and acquiring the type of a preset target object and the characteristic point corresponding to the target object according to a third parameter in the image processing parameters.
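A hedged sketch of how the third parameter might select the preset target object type together with its corresponding feature points. The type names, point names, and counts below are illustrative assumptions, not values from the disclosure:

```python
# Illustrative presets: each target object type carries its feature points.
FEATURE_POINTS = {
    "face": ["left_eye", "right_eye", "nose_tip", "mouth_left", "mouth_right"],
    "hand": ["wrist", "thumb_tip", "index_tip", "middle_tip", "ring_tip"],
}

def get_target_feature_points(image_processing_params):
    # third parameter -> preset target object type
    target_type = image_processing_params["third_parameter"]
    # the feature points corresponding to that target object type
    return target_type, FEATURE_POINTS[target_type]

target_type, points = get_target_feature_points({"third_parameter": "face"})
```

The identification submodule would then match these named points against the first image to locate the target object.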
Further, the image blending module 305 is further configured to:
acquiring a triggering condition of the image mixing according to a fourth parameter in the image processing parameters;
and in response to the image mixing triggering condition being triggered, mixing the target object with the template image to generate a processed image.
Further, the image blending module 305 is further configured to:
acquiring an image mixing mode according to a fifth parameter in the image processing parameters;
and mixing the target object with the template image according to the mixing mode to generate a processed image.
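As one possible reading, the fifth parameter selects among named per-pixel blending modes. The two modes below (alpha blend and multiply) are common compositing conventions used here for illustration; the disclosure does not limit the blending modes to these:

```python
def blend_pixel(a, b, mode, alpha=0.5):
    """Blend two pixel intensities in [0, 255] according to the named mode."""
    if mode == "alpha":
        # weighted average of the target pixel a and the template pixel b
        return alpha * a + (1 - alpha) * b
    if mode == "multiply":
        # multiply blend, renormalized to the [0, 255] range
        return a * b / 255.0
    raise ValueError(f"unknown blend mode: {mode}")

def blend_images(target_pixels, template_pixels, image_processing_params):
    # fifth parameter -> the image blending mode
    mode = image_processing_params["fifth_parameter"]
    return [blend_pixel(a, b, mode)
            for a, b in zip(target_pixels, template_pixels)]

mixed = blend_images([100, 200], [50, 150], {"fifth_parameter": "alpha"})
# mixed == [75.0, 175.0]
```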
Further, the image blending module 305 is further configured to:
randomly blending a portion of the pixels of the target object with the template image;
and repeating the random blending step so that the blending of the target object with the template image is completed within a preset time, generating the processed image.
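The progressive random blending described above can be sketched as follows. The step count (which in a real-time setting would be derived from the preset time and the frame rate) and the fixed 50/50 mix are assumptions for illustration:

```python
import random

def progressive_random_blend(target, template, steps=5, alpha=0.5, seed=0):
    """Blend random subsets of pixels per step until every pixel is mixed.

    Each step picks a random batch of not-yet-blended pixel indices and
    mixes them, so the template appears gradually; after the last step
    the blending of the whole image is complete.
    """
    rng = random.Random(seed)
    result = list(target)
    remaining = list(range(len(target)))
    rng.shuffle(remaining)                      # randomize the blending order
    per_step = max(1, len(remaining) // steps)  # batch size per step
    while remaining:
        batch, remaining = remaining[:per_step], remaining[per_step:]
        for i in batch:
            # mix this target pixel with the corresponding template pixel
            result[i] = alpha * target[i] + (1 - alpha) * template[i]
    return result

out = progressive_random_blend([0, 0, 0, 0], [100, 100, 100, 100], steps=2)
# out == [50.0, 50.0, 50.0, 50.0]
```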
The apparatus shown in fig. 3 can perform the method of the embodiment shown in fig. 1, and reference may be made to the related description of the embodiment shown in fig. 1 for a part of this embodiment that is not described in detail. The implementation process and technical effect of the technical solution refer to the description in the embodiment shown in fig. 1, and are not described herein again.
Fig. 4 is a schematic structural diagram of an embodiment of a template image obtaining module 304 in an embodiment of an image processing apparatus provided in an embodiment of the present disclosure, as shown in fig. 4, the template image obtaining module 304 includes: an address acquisition module 401, a trigger condition acquisition module 402, and a template image acquisition sub-module 403. Wherein,
an address obtaining module 401, configured to obtain an obtaining address of the template image according to a first parameter in the image processing parameters;
a trigger condition obtaining module 402, configured to obtain a trigger condition for obtaining the template image according to a second parameter of the image processing parameters;
a template image obtaining sub-module 403, configured to obtain the template image from the obtaining address in response to the obtaining trigger condition being triggered.
The module shown in fig. 4 may perform the method of the embodiment shown in fig. 2, and reference may be made to the related description of the embodiment shown in fig. 2 for a part not described in detail in this embodiment. The implementation process and technical effect of the technical solution refer to the description in the embodiment shown in fig. 2, and are not described herein again.
Referring now to FIG. 5, a block diagram of an electronic device 500 suitable for use in implementing embodiments of the present disclosure is shown. The electronic devices in the embodiments of the present disclosure may include, but are not limited to, mobile terminals such as mobile phones, notebook computers, digital broadcast receivers, PDAs (personal digital assistants), PADs (tablet computers), PMPs (portable multimedia players), in-vehicle terminals (e.g., car navigation terminals), and the like, and fixed terminals such as digital TVs, desktop computers, and the like. The electronic device shown in fig. 5 is only an example, and should not bring any limitation to the functions and the scope of use of the embodiments of the present disclosure.
As shown in fig. 5, the electronic device 500 may include a processing means 501 (e.g., a central processing unit, a graphics processor, etc.), which may perform various appropriate actions and processes according to a program stored in a read-only memory (ROM) 502 or a program loaded from a storage means 508 into a random access memory (RAM) 503. The RAM 503 also stores various programs and data necessary for the operation of the electronic device 500. The processing means 501, the ROM 502, and the RAM 503 are connected to one another through a bus 504. An input/output (I/O) interface 505 is also connected to the bus 504.
Generally, the following devices may be connected to the I/O interface 505: input devices 506 including, for example, a touch screen, touch pad, keyboard, mouse, image sensor, microphone, accelerometer, gyroscope, etc.; output devices 507 including, for example, a Liquid Crystal Display (LCD), speakers, vibrators, and the like; storage devices 508 including, for example, magnetic tape, hard disk, etc.; and a communication device 509. The communication means 509 may allow the electronic device 500 to communicate with other devices wirelessly or by wire to exchange data. While fig. 5 illustrates an electronic device 500 having various means, it is to be understood that not all illustrated means are required to be implemented or provided. More or fewer devices may alternatively be implemented or provided.
In particular, according to an embodiment of the present disclosure, the processes described above with reference to the flowcharts may be implemented as computer software programs. For example, embodiments of the present disclosure include a computer program product comprising a computer program embodied on a computer readable medium, the computer program comprising program code for performing the method illustrated in the flow chart. In such an embodiment, the computer program may be downloaded and installed from a network via the communication means 509, or installed from the storage means 508, or installed from the ROM 502. The computer program performs the above-described functions defined in the methods of the embodiments of the present disclosure when executed by the processing device 501.
It should be noted that the computer readable medium in the present disclosure can be a computer readable signal medium or a computer readable storage medium or any combination of the two. A computer readable storage medium may be, for example, but not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any combination of the foregoing. More specific examples of the computer readable storage medium may include, but are not limited to: an electrical connection having one or more wires, a portable computer diskette, a hard disk, a Random Access Memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing. In the present disclosure, a computer readable storage medium may be any tangible medium that can contain, or store a program for use by or in connection with an instruction execution system, apparatus, or device. In contrast, in the present disclosure, a computer readable signal medium may comprise a propagated data signal with computer readable program code embodied therein, either in baseband or as part of a carrier wave. Such a propagated data signal may take many forms, including, but not limited to, electro-magnetic, optical, or any suitable combination thereof. A computer readable signal medium may also be any computer readable medium that is not a computer readable storage medium and that can communicate, propagate, or transport a program for use by or in connection with an instruction execution system, apparatus, or device. Program code embodied on a computer readable medium may be transmitted using any appropriate medium, including but not limited to: electrical wires, optical cables, RF (radio frequency), etc., or any suitable combination of the foregoing.
The computer readable medium may be embodied in the electronic device; or may exist separately without being assembled into the electronic device.
The computer readable medium carries one or more programs which, when executed by the electronic device, cause the electronic device to: acquiring at least two internet protocol addresses; sending a node evaluation request comprising the at least two internet protocol addresses to node evaluation equipment, wherein the node evaluation equipment selects the internet protocol addresses from the at least two internet protocol addresses and returns the internet protocol addresses; receiving an internet protocol address returned by the node evaluation equipment; wherein the obtained internet protocol address indicates an edge node in the content distribution network.
Alternatively, the computer readable medium carries one or more programs which, when executed by the electronic device, cause the electronic device to: receiving a node evaluation request comprising at least two internet protocol addresses; selecting an internet protocol address from the at least two internet protocol addresses; returning the selected internet protocol address; wherein the received internet protocol address indicates an edge node in the content distribution network.
Computer program code for carrying out operations of the present disclosure may be written in any combination of one or more programming languages, including object-oriented programming languages such as Java, Smalltalk, and C++, as well as conventional procedural programming languages such as the "C" programming language or similar languages. The program code may execute entirely on the user's computer, partly on the user's computer, as a stand-alone software package, partly on the user's computer and partly on a remote computer, or entirely on a remote computer or server. In the remote computer case, the remote computer may be connected to the user's computer through any type of network, including a Local Area Network (LAN) or a Wide Area Network (WAN), or the connection may be made to an external computer (for example, through the Internet using an Internet service provider).
The flowchart and block diagrams in the figures illustrate the architecture, functionality, and operation of possible implementations of systems, methods and computer program products according to various embodiments of the present disclosure. In this regard, each block in the flowchart or block diagrams may represent a module, segment, or portion of code, which comprises one or more executable instructions for implementing the specified logical function(s). It should also be noted that, in some alternative implementations, the functions noted in the block may occur out of the order noted in the figures. For example, two blocks shown in succession may, in fact, be executed substantially concurrently, or the blocks may sometimes be executed in the reverse order, depending upon the functionality involved. It will also be noted that each block of the block diagrams and/or flowchart illustration, and combinations of blocks in the block diagrams and/or flowchart illustration, can be implemented by special purpose hardware-based systems which perform the specified functions or acts, or combinations of special purpose hardware and computer instructions.
The units described in the embodiments of the present disclosure may be implemented by software or hardware. Where the name of a unit does not in some cases constitute a limitation of the unit itself, for example, the first retrieving unit may also be described as a "unit for retrieving at least two internet protocol addresses".
The foregoing description presents only the preferred embodiments of the disclosure and illustrates the principles of the technology employed. Those skilled in the art will appreciate that the scope of the disclosure is not limited to technical solutions formed by the particular combination of features described above, but also covers other technical solutions formed by any combination of the above features or their equivalents without departing from the concept of the disclosure — for example, solutions formed by interchanging the above features with (but not limited to) features with similar functions disclosed in this disclosure.

Claims (13)

1. An image processing method, comprising:
receiving an image processing configuration instruction, and configuring image processing parameters according to the configuration instruction;
acquiring a first image;
identifying a target object in the first image;
acquiring a template image;
and mixing the target object with the template image according to the image processing parameters to generate a processed image.
2. The image processing method of claim 1, wherein said receiving an image processing configuration instruction, configuring image processing parameters according to said configuration instruction, comprises:
receiving an image processing configuration instruction, and configuring one or more of the type of the target object, the acquisition trigger condition of the template image, the acquisition address of the template image, the trigger condition of image mixing, and the mode of image mixing according to the configuration instruction.
3. The image processing method of claim 1, wherein said acquiring a first image comprises:
the method comprises the steps of obtaining a video image, and taking a current video image frame of the video image as a first image.
4. The image processing method of claim 1, wherein said obtaining a template image comprises:
acquiring an acquisition address of the template image according to a first parameter in the image processing parameters;
and acquiring a template image according to the acquisition address.
5. The image processing method of claim 1, wherein said obtaining a template image comprises:
acquiring an acquisition address of the template image according to a first parameter in the image processing parameters;
acquiring an acquisition triggering condition of the template image according to a second parameter in the image processing parameters;
and acquiring the template image from the acquisition address in response to the acquisition trigger condition being triggered.
6. The image processing method of claim 1, wherein the identifying the target object in the first image comprises:
acquiring preset target object characteristic points;
and identifying the target object in the first image according to the characteristic points.
7. The image processing method according to claim 6, wherein the obtaining of the preset target object feature point comprises:
and acquiring the type of a preset target object and the characteristic point corresponding to the target object according to a third parameter in the image processing parameters.
8. The image processing method of claim 1, wherein the mixing the target object with the template image according to the image processing parameters to generate a processed image comprises:
acquiring a triggering condition of the image mixing according to a fourth parameter in the image processing parameters;
and in response to the image mixing triggering condition being triggered, mixing the target object with the template image to generate a processed image.
9. The image processing method of claim 1, wherein the mixing the target object with the template image according to the image processing parameters to generate a processed image comprises:
acquiring an image mixing mode according to a fifth parameter in the image processing parameters;
and mixing the target object with the template image according to the mixing mode to generate a processed image.
10. The image processing method of claim 1, wherein the blending the target object with the template image to generate a processed image comprises:
randomly blending a portion of the pixels of the target object with the template image;
and repeating the random blending step so that the blending of the target object with the template image is completed within a preset time, generating the processed image.
11. An image processing apparatus characterized by comprising:
the configuration module is used for receiving an image processing configuration instruction and configuring image processing parameters according to the configuration instruction;
the first image acquisition module is used for acquiring a first image;
a target object identification module for identifying a target object in the first image;
the template image acquisition module is used for acquiring a template image;
and the image mixing module is used for mixing the target object with the template image according to the image processing parameters to generate a processed image.
12. An electronic device, comprising:
a memory for storing non-transitory computer readable instructions; and
a processor for executing the computer readable instructions such that the processor when executing implements the image processing method according to any of claims 1-10.
13. A computer-readable storage medium storing non-transitory computer-readable instructions which, when executed by a computer, cause the computer to perform the image processing method of any one of claims 1-10.
CN201811497920.3A 2018-12-07 2018-12-07 Image processing method and device Pending CN111292227A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201811497920.3A CN111292227A (en) 2018-12-07 2018-12-07 Image processing method and device

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201811497920.3A CN111292227A (en) 2018-12-07 2018-12-07 Image processing method and device

Publications (1)

Publication Number Publication Date
CN111292227A true CN111292227A (en) 2020-06-16

Family

ID=71025506

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201811497920.3A Pending CN111292227A (en) 2018-12-07 2018-12-07 Image processing method and device

Country Status (1)

Country Link
CN (1) CN111292227A (en)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111986131A (en) * 2020-07-31 2020-11-24 北京达佳互联信息技术有限公司 Image synthesis method and device and electronic equipment
CN114398133A (en) * 2022-01-14 2022-04-26 北京字跳网络技术有限公司 Display method, display device, electronic equipment and storage medium

Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104918107A (en) * 2015-05-29 2015-09-16 小米科技有限责任公司 Video file identification processing method and device
CN108022207A (en) * 2017-11-30 2018-05-11 广东欧珀移动通信有限公司 Image processing method, device, storage medium and electronic equipment


Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111986131A (en) * 2020-07-31 2020-11-24 北京达佳互联信息技术有限公司 Image synthesis method and device and electronic equipment
CN111986131B (en) * 2020-07-31 2024-03-12 北京达佳互联信息技术有限公司 Image synthesis method and device and electronic equipment
CN114398133A (en) * 2022-01-14 2022-04-26 北京字跳网络技术有限公司 Display method, display device, electronic equipment and storage medium

Similar Documents

Publication Publication Date Title
CN110070551B (en) Video image rendering method and device and electronic equipment
CN110070063B (en) Target object motion recognition method and device and electronic equipment
CN110062157B (en) Method and device for rendering image, electronic equipment and computer readable storage medium
CN110070496B (en) Method and device for generating image special effect and hardware device
CN110069974B (en) Highlight image processing method and device and electronic equipment
CN110084154B (en) Method and device for rendering image, electronic equipment and computer readable storage medium
CN110035236A (en) Image processing method, device and electronic equipment
EP4276738A1 (en) Image display method and apparatus, and device and medium
CN110221822A (en) Merging method, device, electronic equipment and the computer readable storage medium of special efficacy
CN110070555A (en) Image processing method, device, hardware device
CN111833461A (en) Method and device for realizing special effect of image, electronic equipment and storage medium
CN116934577A (en) Method, device, equipment and medium for generating style image
CN114913058A (en) Display object determination method and device, electronic equipment and storage medium
CN111292227A (en) Image processing method and device
WO2020077912A1 (en) Image processing method, device, and hardware device
CN113163135B (en) Animation adding method, device, equipment and medium for video
CN110069641B (en) Image processing method and device and electronic equipment
CN111292247A (en) Image processing method and device
CN110209861A (en) Image processing method, device, electronic equipment and computer readable storage medium
CN110047126B (en) Method, apparatus, electronic device, and computer-readable storage medium for rendering image
US11810336B2 (en) Object display method and apparatus, electronic device, and computer readable storage medium
CN111292276B (en) Image processing method and device
CN111200705B (en) Image processing method and device
CN111223105B (en) Image processing method and device
CN110097622B (en) Method and device for rendering image, electronic equipment and computer readable storage medium

Legal Events

Date Code Title Description
PB01 Publication
PB01 Publication
SE01 Entry into force of request for substantive examination
SE01 Entry into force of request for substantive examination