CN113313788A - Image processing method and apparatus, electronic device, and computer-readable storage medium - Google Patents


Info

Publication number
CN113313788A
CN113313788A (Application CN202010120903.9A)
Authority
CN
China
Prior art keywords
frame image
current frame
image
video
main body
Prior art date
Legal status
Pending
Application number
CN202010120903.9A
Other languages
Chinese (zh)
Inventor
邢达明
武小军
Current Assignee
Beijing Xiaomi Mobile Software Co Ltd
Original Assignee
Beijing Xiaomi Mobile Software Co Ltd
Priority date
Filing date
Publication date
Application filed by Beijing Xiaomi Mobile Software Co Ltd filed Critical Beijing Xiaomi Mobile Software Co Ltd
Priority to CN202010120903.9A
Publication of CN113313788A
Legal status: Pending

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 11/00 2D [Two Dimensional] image generation
    • G06T 11/60 Editing figures and text; Combining figures or text
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 7/00 Image analysis
    • G06T 7/10 Segmentation; Edge detection
    • G06T 7/194 Segmentation; Edge detection involving foreground-background segmentation
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 2207/00 Indexing scheme for image analysis or image enhancement
    • G06T 2207/10 Image acquisition modality
    • G06T 2207/10016 Video; Image sequence

Abstract

The present disclosure relates to an image processing method and apparatus. The method is applied to an electronic device and includes the following steps: acquiring a video captured by the electronic device or selected from an album, the video including a current frame image and one or more preceding frame images of the current frame image; determining a subject in the video; based on the determined subject, obtaining a subject region of the current frame image and background regions of the current frame image and the preceding frame images through a semantic segmentation algorithm; obtaining an afterimage of the current frame image based on the background regions of the current frame image and the preceding frame images; and fusing the subject region of the current frame image with the afterimage of the current frame image to obtain an output image of the current frame image. The subject region and the background regions are obtained through a semantic segmentation algorithm, the afterimage is formed through inter-frame fusion, and the afterimage is then fused with the subject region of the current frame image to obtain the output image, so that a high-quality picture or video with an afterimage effect can be captured and formed under any environment and conditions.

Description

Image processing method and apparatus, electronic device, and computer-readable storage medium
Technical Field
The present disclosure relates to the field of image processing, and in particular, to an image processing method, an image processing apparatus, an electronic device, and a computer-readable storage medium.
Background
In some current shooting scenarios, images with a smear effect produced by moving objects, that is, afterimages, are obtained through time-lapse photography. In such photography the exposure time of the image sensor is extended: as the photographed scene changes, light reflected from the photographed object is continuously exposed (imaged) onto the image sensor, presenting a continuous motion trail. However, due to the sensitivity limitations of the shooting equipment, time-lapse photography cannot be used directly when light is plentiful, or the resulting image quality is not ideal.
Disclosure of Invention
To overcome the problems in the related art, the present disclosure provides an image processing method, an image processing apparatus, an electronic device, and a computer-readable storage medium.
According to a first aspect of the embodiments of the present disclosure, there is provided an image processing method applied to an electronic device, the method including: capturing a video with a camera of the electronic device or selecting a video from an album of the electronic device, the video including a current frame image and one or more preceding frame images of the current frame image; determining a subject in the video; based on the determined subject, obtaining a subject region of the current frame image and background regions of the current frame image and the preceding frame images through a semantic segmentation algorithm; obtaining an afterimage of the current frame image based on the background regions of the current frame image and the preceding frame images; and fusing the subject region of the current frame image with the afterimage of the current frame image to obtain an output image of the current frame image.
In one embodiment, determining the subject in the video includes: in response to an external instruction, determining the subject in any one frame of the video; and determining the subject throughout the video through a target tracking algorithm based on the subject of that frame.
In one embodiment, obtaining the afterimage of the current frame image based on the background regions of the current frame image and the preceding frame images includes: convolving the background region of the current frame image with the background regions of a preset number of preceding frame images, taken in sequence, to obtain the afterimage of the current frame image.
In one embodiment, obtaining the afterimage of the current frame image based on the background regions of the current frame image and the preceding frame images includes: performing weighted fusion of the background region of the current frame image with the afterimage of the preceding frame image adjacent to the current frame image to obtain the afterimage of the current frame image.
In one embodiment, fusing the subject region of the current frame image with the afterimage of the current frame image to obtain the output image of the current frame image includes: performing edge fusion between the edge of the subject region of the current frame image and the afterimage of the current frame image.
According to a second aspect of the embodiments of the present disclosure, there is provided an image processing apparatus applied to an electronic device, the apparatus including: an acquisition unit configured to capture a video with a camera of the electronic device or select a video from an album of the electronic device, the video including a current frame image and one or more preceding frame images of the current frame image; a confirmation unit configured to determine a subject in the video; a segmentation unit configured to obtain, based on the determined subject, a subject region of the current frame image and background regions of the current frame image and the preceding frame images through a semantic segmentation algorithm; an inter-frame fusion unit configured to obtain an afterimage of the current frame image based on the background regions of the current frame image and the preceding frame images; and a current-frame fusion unit configured to fuse the subject region of the current frame image with the afterimage of the current frame image to obtain an output image of the current frame image.
In one embodiment, the confirmation unit is configured to: in response to an external instruction, determine the subject in any one frame of the video; and determine the subject throughout the video through a target tracking algorithm based on the subject of that frame.
In one embodiment, the inter-frame fusion unit is configured to convolve the background region of the current frame image with the background regions of a preset number of preceding frame images, taken in sequence, to obtain the afterimage of the current frame image.
In one embodiment, the inter-frame fusion unit is configured to perform weighted fusion of the background region of the current frame image with the afterimage of the preceding frame image adjacent to the current frame image to obtain the afterimage of the current frame image.
In one embodiment, the current-frame fusion unit is configured to perform edge fusion between the edge of the subject region of the current frame image and the afterimage of the current frame image.
According to a third aspect of the embodiments of the present disclosure, there is provided an electronic apparatus including: a memory to store instructions; and a processor for calling the instructions stored in the memory to execute the image processing method of the first aspect.
According to a fourth aspect of embodiments of the present disclosure, there is provided a computer-readable storage medium storing instructions that, when executed by a processor, perform the image processing method of the first aspect.
The technical solution provided by the embodiments of the present disclosure may have the following beneficial effects: the subject is determined; a semantic segmentation algorithm is applied based on the determined subject to obtain a subject region and a background region; the background regions are fused across frames to form an afterimage; and the afterimage is then fused with the subject region of the current frame image to obtain a fused output image. This ensures both the afterimage effect and a sharp subject without extending the exposure time, so a high-quality picture or video with an afterimage effect can be captured and formed under any environment and conditions.
It is to be understood that both the foregoing general description and the following detailed description are exemplary and explanatory only and are not restrictive of the disclosure.
Drawings
The accompanying drawings, which are incorporated in and constitute a part of this specification, illustrate embodiments consistent with the present disclosure and together with the description, serve to explain the principles of the disclosure.
FIG. 1 is a flow diagram illustrating an image processing method according to an exemplary embodiment.
FIG. 2 is a flow diagram illustrating another image processing method according to an exemplary embodiment.
Figs. 3, 4, and 5 are schematic diagrams of a semantic segmentation algorithm according to an exemplary embodiment.
Figs. 6, 7, and 8 are schematic diagrams illustrating the caching of preceding frame images according to an exemplary embodiment.
Fig. 9 is a schematic diagram of a subject region hard-segmented by binary values.
Fig. 10 is a schematic diagram of the subject region after edge fusion.
Fig. 11 is a schematic block diagram illustrating an image processing apparatus according to an exemplary embodiment.
FIG. 12 is a schematic block diagram illustrating an apparatus in accordance with an exemplary embodiment.
FIG. 13 is a schematic block diagram illustrating an electronic device in accordance with an exemplary embodiment.
Detailed Description
Reference will now be made in detail to the exemplary embodiments, examples of which are illustrated in the accompanying drawings. When the following description refers to the accompanying drawings, like numbers in different drawings represent the same or similar elements unless otherwise indicated. The implementations described in the exemplary embodiments below are not intended to represent all implementations consistent with the present disclosure. Rather, they are merely examples of apparatus and methods consistent with certain aspects of the present disclosure, as detailed in the appended claims.
In some related arts, an image with an afterimage effect is obtained through time-lapse photography. The principle is to extend the exposure time of the image sensor: as the photographed scene changes, light reflected from the photographed object is continuously exposed onto the image sensor, presenting a continuous motion trail. When the ambient light is strong, even with the sensor sensitivity set to its lowest and the light intake set to its minimum, the picture is still overexposed.
In other technologies, a gray-scale (neutral-density) filter is used for such photography. This filter is a semi-transparent medium, a kind of photographic filter used to reduce brightness during shooting: when light enters the medium it is attenuated, so that most of the light is blocked and filtered and only a small amount passes through to expose and image on the sensor. However, this approach reduces the amount of incoming light, which degrades the color of the picture.
To solve the above problems, the embodiments of the present disclosure provide an image processing method 10 that lets ordinary image capture produce the afterimage effect of time-lapse photography while keeping the subject sharp and the overall image quality high. As shown in fig. 1, the image processing method 10 may be applied to an electronic device; the electronic device may have a camera for capturing video and a storage device capable of storing video. The method includes steps S11 to S15:
In step S11, a video is captured with the camera of the electronic device or selected from the album of the electronic device, the video including a current frame image and one or more preceding frame images ahead of the current frame image.
In the embodiments of the present disclosure, the video may be captured in real time by the camera of the electronic device, retrieved from a local album of the electronic device, or received from another device. To guarantee the effect during real-time capture, the camera should be kept stable: full-frame smear caused by movement of the whole picture would degrade the final image.
In step S12, a subject in the video is determined.
After the continuous video is acquired, a subject in the video is determined. The subject is the person or object that should appear sharp in the finally generated image or video, with no smear effect applied to it; hence the subject in the video needs to be determined.
In one embodiment, as shown in fig. 2, step S12 may include: step S121, in response to an external instruction, determining the subject in any one frame of the video; and step S122, determining the subject throughout the video through a target tracking algorithm based on the subject of that frame. In this embodiment, the user may designate the subject in one frame of the video through an external instruction, for example in a tap-to-focus manner, and after the subject of that frame is confirmed, the subject in every frame of the video is determined through a target tracking algorithm. Target tracking, a branch of computer vision, is the task of continuously locating an object in the remaining frames of a video given its position in the first frame or in any single frame. The accuracy of a target tracking algorithm is mainly affected by: changes in lighting, changes in the object's shape or angle, and occlusion of the object. Therefore, to make the image processing method 10 of the embodiments of the present disclosure more reliable and accurate, the ambient light should be kept as stable as possible during shooting, the angle of the subject relative to the camera should be kept as consistent as possible, and occlusion should be avoided as much as possible.
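The disclosure does not prescribe a particular tracking algorithm. As an illustration only, the following sketch locates the subject's bounding box in the next frame by brute-force sum-of-squared-differences template matching over a small search window; the function name `track_subject` and the box convention are hypothetical, and a production tracker would be far more robust to the lighting, shape, and occlusion changes mentioned above.

```python
import numpy as np

def track_subject(prev_frame, next_frame, box, search=5):
    """Locate the subject's bounding box in next_frame by searching a small
    neighborhood for the best sum-of-squared-differences (SSD) match.
    box = (row, col, height, width) of the subject in prev_frame."""
    r, c, h, w = box
    template = prev_frame[r:r + h, c:c + w].astype(np.float64)
    best, best_pos = np.inf, (r, c)
    for dr in range(-search, search + 1):
        for dc in range(-search, search + 1):
            rr, cc = r + dr, c + dc
            # skip candidate windows that fall outside the frame
            if rr < 0 or cc < 0 or rr + h > next_frame.shape[0] or cc + w > next_frame.shape[1]:
                continue
            patch = next_frame[rr:rr + h, cc:cc + w].astype(np.float64)
            score = np.sum((patch - template) ** 2)
            if score < best:
                best, best_pos = score, (rr, cc)
    return (*best_pos, h, w)
```

For example, a bright 4x4 block shifted by two rows and one column between frames is re-found at its new position.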
In step S13, based on the determined subject, a subject region of the current frame image and background regions of the current frame image and the preceding frame images are obtained through a semantic segmentation algorithm.
For a video frame, a semantic segmentation algorithm separates the subject from the background: a subject region is formed by hard segmentation along the subject's boundary, and the background region is the remainder of the picture outside the subject region. Building on image classification and object detection, the semantic segmentation algorithm can determine the boundaries of objects in an image, so that the objects and the background are decomposed into separate entities. As an example shown in figs. 3, 4, and 5, the target subject in the video frame is a human body: fig. 3 is one frame of the video; applying a semantic segmentation algorithm to that frame yields the segmentation shown in fig. 4; and performing binary segmentation along the boundary, that is, hard segmentation based on the determined subject, yields the binary segmentation image of subject and background shown in fig. 5.
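A minimal sketch of the hard (binary) segmentation described above, assuming the semantic segmentation model has already produced a per-pixel class-label map; `hard_segment` and `split_regions` are illustrative names, not part of the disclosure.

```python
import numpy as np

def hard_segment(class_map, subject_class):
    """Binarize a per-pixel class-label map: 1 for subject pixels, 0 for background."""
    return (class_map == subject_class).astype(np.uint8)

def split_regions(frame, mask):
    """Split a frame into subject and background regions using the binary mask."""
    # add a channel axis so the 2-D mask broadcasts over RGB frames
    m = mask[..., None] if frame.ndim == 3 else mask
    subject = frame * m
    background = frame * (1 - m)
    return subject, background
```

Everything inside the mask is kept in the subject region; the complement forms the background region that feeds the inter-frame fusion below.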
The order of steps S12 and S13 is not fixed. In some embodiments, step S13 may be performed after the subjects of all frames have been determined in step S12; in other embodiments, step S12 may determine the subject for one frame and step S13 may then run the semantic segmentation algorithm on that frame before moving on to the next frame. The order of steps S12 and S13 may be adjusted according to actual requirements.
In step S14, an afterimage of the current frame image is obtained based on the background regions of the current frame image and the preceding frame images.
The background outside the subject region may contain moving objects such as people, animals, or vehicles, and fusing the background regions produces the smear effect desired for the background.
In one embodiment, step S14 may include: convolving the background region of the current frame image with the background regions of a preset number of preceding frame images, taken in sequence, to obtain the afterimage of the current frame image.
In this embodiment, the number of preceding frame images used may be determined by a preset number. It may be all preceding frame images before the current frame image, or a fixed number, for example the five frames preceding the current frame image; if the first frames of the video cannot satisfy the fixed number, all available preceding frame images may be used. A cache may be employed: the preset number of preceding frame images is stored in the cache, and while processing consecutive frames, the current frame image enters the cache after being processed and the oldest preceding frame image is evicted, so that the cache always holds the preset number of preceding frame images when the next frame is processed. For example, with a preset number of five frames: as shown in fig. 6, when the current frame image is the third frame of the video, the two frames before it, that is, the first and second frames of the video, are stored in the cache; as the video advances, as shown in fig. 7, when the current frame image is the fifth frame, the current frame image and its four preceding frames are stored in the cache; as the video advances further, as shown in fig. 8, when the current frame image is the sixth frame, the current frame image is stored in the cache while the frame farthest from it, that is, the first frame of the video, is deleted from the cache, keeping five frames in the cache. Thus, when step S14 is performed on any current frame image, all video frames currently in the cache can be used for processing.
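The cache behavior described above (newest frame pushed, oldest evicted, fixed capacity) matches a bounded FIFO buffer. A minimal sketch using Python's `collections.deque`; the class and method names are illustrative only:

```python
from collections import deque

class FrameBuffer:
    """Hold at most `maxlen` recent background regions; appending to a full
    buffer automatically evicts the oldest entry, as in figs. 6 to 8."""
    def __init__(self, maxlen=5):
        self.frames = deque(maxlen=maxlen)

    def push(self, background):
        self.frames.append(background)

    def contents(self):
        # oldest first, newest last
        return list(self.frames)
```

With a capacity of five, pushing a sixth frame drops the first, mirroring the fig. 8 example where the first frame of the video is deleted from the cache.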
The convolution calculation may be an element-wise multiplication of the image with a filter matrix followed by summation, which is equivalent to moving one two-dimensional function over all positions of another. In other words, the background regions of multiple frames are weighted-averaged to generate a new afterimage: the pixel value at each coordinate in every background region is multiplied by the weight of its frame, and the products are summed to obtain the weighted-average pixel value at the corresponding coordinate of the afterimage. If the image is in RGB format, the three channel values R, G, and B are weighted-averaged; if the image is grayscale, the gray values are weighted-averaged. In the convolution calculation, the weight of each frame may be the same or different. In one embodiment of the present disclosure, when computing the afterimage of the current frame image, the background region of the current frame image has the highest weight, and the weight of a preceding frame image decreases with its distance from the current frame image; in other words, among the preceding frame images, the one adjacent to the current frame image has the highest weight, the frame before it the next highest, and so on.
Because moving targets may exist in the background region and the smear effect is generated from them, the position of a moving target in a preceding frame image close to the current frame image is closest to its position in the current frame image, while in a preceding frame image far from the current frame image its position may be farther away; assigning relatively low weights to the distant frames therefore shows the smear effect better. For example, with a preset number of 3: the weight of the background region of the current frame image is 0.5, the weight of the background region of the immediately preceding frame image is 0.34, the weight of the one before that is 0.14, and the weight of the one before that is 0.02. The weights may be fitted by any monotonically increasing curve, such as a normal distribution or one of various easing functions.
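The weighted averaging of background regions can be sketched as follows, using the example weights above (0.5, 0.34, 0.14, 0.02, ordered from the current frame to the oldest preceding frame); the function name is hypothetical:

```python
import numpy as np

def fuse_backgrounds(backgrounds, weights):
    """Weighted per-pixel average of background regions.
    backgrounds[0] is the current frame's background region, followed by the
    preceding frames' background regions from newest to oldest."""
    assert len(backgrounds) == len(weights)
    stack = np.stack([b.astype(np.float64) for b in backgrounds])
    # reshape weights to broadcast over (frame, row, col[, channel])
    w = np.asarray(weights, dtype=np.float64).reshape(-1, *([1] * (stack.ndim - 1)))
    return np.sum(stack * w, axis=0)
```

Since the example weights sum to 1, the sum of products is itself the weighted average; the same code handles RGB frames because the weights broadcast over the channel axis.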
In another embodiment, step S14 may instead include: performing weighted fusion of the background region of the current frame image with the afterimage of the preceding frame image adjacent to the current frame image, to obtain the afterimage of the current frame image. When multiple frames are processed continuously, an already processed preceding frame image can serve as a data source for subsequent processing, so the afterimage of the current frame image can also be generated from the afterimage of the previous preceding frame image. In this embodiment, the afterimage of each frame may be stored in the cache and used as the basis for generating the next afterimage. Specifically, when the current frame image is processed, the afterimage of the previous preceding frame image and the background region of the current frame image are weighted and fused with preset weights: for example, the background region of the current frame image has weight 0.5 or 0.6 and the afterimage of the previous frame has weight 0.5 or 0.4. With this method, the afterimage of each frame carries image information from all earlier frames, while the weighting makes preceding frame images farther from the current frame image contribute less to its afterimage, making it easy to generate an image with a high-quality smear effect.
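The recursive variant can be sketched as a running blend of the current background with the previous frame's afterimage. With a fixed current-frame weight this is exponential smoothing, so older frames decay geometrically, consistent with the fading influence described above; the function name and the handling of the first frame are assumptions:

```python
import numpy as np

def update_afterimage(prev_afterimage, current_bg, w_current=0.5):
    """Blend the current background region with the previous frame's
    afterimage: after = w*bg + (1-w)*prev. Each older frame's contribution
    shrinks by a factor of (1-w) per step."""
    if prev_afterimage is None:            # first frame: no afterimage yet
        return current_bg.astype(np.float64)
    return (w_current * current_bg.astype(np.float64)
            + (1.0 - w_current) * prev_afterimage)
```

With equal weights of 0.5, a frame two steps back contributes only 0.25 of its value, matching the intent that distant frames affect the afterimage less.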
In step S15, the subject region of the current frame image and the afterimage of the current frame image are fused to obtain an output image of the current frame image.
The subject region of the current frame image is the sharp region obtained by applying the semantic segmentation algorithm to the current frame image, and the afterimage of the current frame image is obtained from the background regions of the current frame image and the preceding frame images by the method of any of the above embodiments; fusing the two finally forms an output image with a sharp subject and a smear effect on the moving targets in the background.
In one embodiment, step S15 may further include: performing edge fusion between the edge of the subject region of the current frame image and the afterimage of the current frame image. Because the semantic segmentation is a hard segmentation, the edge of the subject region may be blended with the afterimage to make the output image more natural, for example by blurring the edge with alpha blending or another matting technique. Figs. 9 and 10 illustrate fusing the subject region and the afterimage with alpha edge fusion, where fig. 9 shows the subject hard-segmented by binary values; the alpha-blending superposition formula is:
P = α·F + (1 - α)·B
where P is the output pixel value; F is the foreground pixel value, i.e., the subject-region pixel value in the embodiments of the present disclosure; B is the background pixel value, i.e., the afterimage pixel value in the embodiments of the present disclosure; and α is a parameter equivalent to transparency: if its value is 1, the foreground directly covers the background. Performing alpha edge fusion with this formula produces the edge shown in fig. 10, so that the fused image edge is not sharp but softer, and the finished image looks better.
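The superposition formula and the edge blurring can be sketched as follows; `feather` is a hypothetical stand-in for whatever blur the implementation actually uses to soften the hard mask near the boundary:

```python
import numpy as np

def alpha_blend(subject, afterimage, alpha):
    """P = alpha*F + (1-alpha)*B per pixel; alpha is 1 inside the subject,
    0 in the background, and fractional along the feathered edge."""
    a = alpha[..., None] if subject.ndim == 3 else alpha
    return a * subject.astype(np.float64) + (1.0 - a) * afterimage.astype(np.float64)

def feather(mask, k=1):
    """Soften a hard binary mask by box-averaging a (2k+1)^2 neighborhood,
    producing fractional alpha values along the subject boundary."""
    m = mask.astype(np.float64)
    padded = np.pad(m, k, mode='edge')
    out = np.zeros_like(m)
    for dr in range(-k, k + 1):
        for dc in range(-k, k + 1):
            out += padded[k + dr:k + dr + m.shape[0], k + dc:k + dc + m.shape[1]]
    return out / (2 * k + 1) ** 2
```

With α = 1 everywhere the output is the subject region unchanged, and with α = 0 it is the afterimage, exactly as the formula states; the feathered mask gives the soft transition shown in fig. 10.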
Through any of the above embodiments, an ordinary captured video can be turned into a single image or a video clip of consecutive frames with a smear effect; the shooting conditions are relaxed, the operation is simple, and the quality of the generated image is high. A mobile phone can preview and capture, in real time, pictures and videos with a prominent subject and a trailing background. Moreover, in video shooting, passing pedestrians are rendered as smears, creating a strong contrast between static and moving content while protecting the pedestrians' privacy.
Based on the same inventive concept, fig. 11 shows an image processing apparatus 100 applied to an electronic device. The image processing apparatus 100 includes: an acquiring unit 110 configured to capture a video with a camera of the electronic device or select a video from an album of the electronic device, the video including a current frame image and one or more preceding frame images of the current frame image; a confirmation unit 120 configured to determine a subject in the video; a segmentation unit 130 configured to obtain, based on the determined subject, a subject region of the current frame image and background regions of the current frame image and the preceding frame images through a semantic segmentation algorithm; an inter-frame fusion unit 140 configured to obtain an afterimage of the current frame image based on the background regions of the current frame image and the preceding frame images; and a current-frame fusion unit 150 configured to fuse the subject region of the current frame image with the afterimage of the current frame image to obtain an output image of the current frame image.
In one embodiment, the confirmation unit 120 is configured to: in response to an external instruction, determine the subject in any one frame of the video; and determine the subject throughout the video through a target tracking algorithm based on the subject of that frame.
In one embodiment, the inter-frame fusion unit 140 is configured to convolve the background region of the current frame image with the background regions of a preset number of preceding frame images, taken in sequence, to obtain the afterimage of the current frame image.
In one embodiment, the inter-frame fusion unit 140 is configured to perform weighted fusion of the background region of the current frame image with the afterimage of the preceding frame image adjacent to the current frame image to obtain the afterimage of the current frame image.
In one embodiment, the current-frame fusion unit 150 is configured to perform edge fusion between the edge of the subject region of the current frame image and the afterimage of the current frame image.
With regard to the image processing apparatus 100 in the above-described embodiment, the specific manner in which each module performs the operation has been described in detail in the embodiment related to the method, and will not be elaborated here.
Fig. 12 is a schematic block diagram illustrating an apparatus of any of the previous embodiments in accordance with an exemplary embodiment. For example, the apparatus 300 may be a mobile phone, a computer, a digital broadcast terminal, a messaging device, a game console, a tablet device, a medical device, an exercise device, a personal digital assistant, and the like.
Referring to fig. 12, the apparatus 300 may include one or more of the following components: processing component 302, memory 304, power component 306, multimedia component 308, audio component 310, input/output (I/O) interface 312, sensor component 314, and communication component 316.
The processing component 302 generally controls overall operation of the device 300, such as operations associated with display, telephone calls, data communications, camera operations, and recording operations. The processing components 302 may include one or more processors 320 to execute instructions to perform all or a portion of the steps of the methods described above. Further, the processing component 302 can include one or more modules that facilitate interaction between the processing component 302 and other components. For example, the processing component 302 may include a multimedia module to facilitate interaction between the multimedia component 308 and the processing component 302.
The memory 304 is configured to store various types of data to support operations at the apparatus 300. Examples of such data include instructions for any application or method operating on device 300, contact data, phonebook data, messages, pictures, videos, and so forth. The memory 304 may be implemented by any type or combination of volatile or non-volatile memory devices, such as Static Random Access Memory (SRAM), electrically erasable programmable read-only memory (EEPROM), erasable programmable read-only memory (EPROM), programmable read-only memory (PROM), read-only memory (ROM), magnetic memory, flash memory, magnetic or optical disks.
The power component 306 provides power to the various components of the apparatus 300. The power component 306 may include a power management system, one or more power supplies, and other components associated with generating, managing, and distributing power for the apparatus 300.
The multimedia component 308 includes a screen providing an output interface between the apparatus 300 and the user. In some embodiments, the screen may include a liquid crystal display (LCD) and a touch panel (TP). If the screen includes a touch panel, the screen may be implemented as a touch screen to receive input signals from the user. The touch panel includes one or more touch sensors to sense touches, swipes, and gestures on the touch panel. The touch sensors may not only sense the boundary of a touch or swipe action, but also detect the duration and pressure associated with the touch or swipe action. In some embodiments, the multimedia component 308 includes a front camera and/or a rear camera. The front camera and/or the rear camera may receive external multimedia data when the apparatus 300 is in an operating mode, such as a shooting mode or a video mode. Each of the front camera and the rear camera may be a fixed optical lens system or have focus and optical zoom capability.
The audio component 310 is configured to output and/or input audio signals. For example, audio component 310 includes a Microphone (MIC) configured to receive external audio signals when apparatus 300 is in an operating mode, such as a call mode, a recording mode, and a voice recognition mode. The received audio signals may further be stored in the memory 304 or transmitted via the communication component 316. In some embodiments, audio component 310 also includes a speaker for outputting audio signals.
The I/O interface 312 provides an interface between the processing component 302 and peripheral interface modules, which may be keyboards, click wheels, buttons, etc. These buttons may include, but are not limited to: a home button, a volume button, a start button, and a lock button.
The sensor assembly 314 includes one or more sensors for providing various aspects of status assessment for the device 300. For example, sensor assembly 314 may detect an open/closed state of device 300, the relative positioning of components, such as a display and keypad of device 300, the change in position of device 300 or a component of device 300, the presence or absence of user contact with device 300, the orientation or acceleration/deceleration of device 300, and the change in temperature of device 300. Sensor assembly 314 may include a proximity sensor configured to detect the presence of a nearby object without any physical contact. The sensor assembly 314 may also include a light sensor, such as a CMOS or CCD image sensor, for use in imaging applications. In some embodiments, the sensor assembly 314 may also include an acceleration sensor, a gyroscope sensor, a magnetic sensor, a pressure sensor, or a temperature sensor.
The communication component 316 is configured to facilitate wired or wireless communication between the apparatus 300 and other devices. The device 300 may access a wireless network based on a communication standard, such as WiFi, 2G or 3G, or a combination thereof. In an exemplary embodiment, the communication component 316 receives a broadcast signal or broadcast related information from an external broadcast management system via a broadcast channel. In an exemplary embodiment, the communication component 316 further includes a Near Field Communication (NFC) module to facilitate short-range communications. For example, the NFC module may be implemented based on Radio Frequency Identification (RFID) technology, infrared data association (IrDA) technology, Ultra Wideband (UWB) technology, Bluetooth (BT) technology, and other technologies.
In an exemplary embodiment, the apparatus 300 may be implemented by one or more Application Specific Integrated Circuits (ASICs), Digital Signal Processors (DSPs), Digital Signal Processing Devices (DSPDs), Programmable Logic Devices (PLDs), Field Programmable Gate Arrays (FPGAs), controllers, micro-controllers, microprocessors or other electronic components for performing the above-described methods.
In an exemplary embodiment, a computer-readable storage medium comprising instructions, such as the memory 304 comprising instructions, executable by the processor 320 of the apparatus 300 to perform the above-described method is also provided. For example, the computer readable storage medium may be a ROM, a Random Access Memory (RAM), a CD-ROM, a magnetic tape, a floppy disk, an optical data storage device, and the like.
Fig. 13 is a block diagram illustrating an electronic device 400 according to an example embodiment. For example, the apparatus 400 may be provided as a server. Referring to fig. 13, apparatus 400 includes a processing component 422 that further includes one or more processors and memory resources, represented by memory 442, for storing instructions, such as application programs, that are executable by processing component 422. The application programs stored in memory 442 may include one or more modules that each correspond to a set of instructions. Further, the processing component 422 is configured to execute instructions to perform the above-described methods.
The apparatus 400 may also include a power component 426 configured to perform power management of the apparatus 400, a wired or wireless network interface 450 configured to connect the apparatus 400 to a network, and an input/output (I/O) interface 458. The apparatus 400 may operate based on an operating system stored in the memory 442, such as Windows Server™, Mac OS X™, Unix™, Linux™, FreeBSD™, or the like.
Other embodiments of the invention will be apparent to those skilled in the art from consideration of the specification and practice of the invention disclosed herein. This application is intended to cover any variations, uses, or adaptations of the invention following, in general, the principles of the invention and including such departures from the present disclosure as come within known or customary practice within the art to which the invention pertains. It is intended that the specification and examples be considered as exemplary only, with a true scope and spirit of the invention being indicated by the following claims.
It will be understood that the invention is not limited to the precise arrangements described above and shown in the drawings and that various modifications and changes may be made without departing from the scope thereof. The scope of the invention is limited only by the appended claims.

Claims (12)

1. An image processing method applied to an electronic device, the method comprising:
capturing a video with a camera of the electronic device, or selecting a video from an album of the electronic device, wherein the video comprises a current frame image and one or more preceding frame images of the current frame image;
determining a subject in the video;
obtaining, based on the determined subject, a subject region of the current frame image and background regions of the current frame image and the preceding frame images through a semantic segmentation algorithm;
obtaining an afterimage of the current frame image based on the background regions of the current frame image and the preceding frame images; and
fusing the subject region of the current frame image with the afterimage of the current frame image to obtain an output image of the current frame image.
2. The image processing method of claim 1, wherein determining the subject in the video comprises:
determining the subject of any frame in the video in response to an external instruction; and
determining the subject in the video by a target tracking algorithm based on the subject of that frame.
3. The image processing method according to claim 1, wherein obtaining the afterimage of the current frame image based on the background regions of the current frame image and the preceding frame images comprises:
performing convolution on the background region of the current frame image and the background regions of a preset number of preceding frame images of the current frame image in sequence, to obtain the afterimage of the current frame image.
4. The image processing method according to claim 1, wherein obtaining the afterimage of the current frame image based on the background regions of the current frame image and the preceding frame images comprises:
performing weighted fusion of the background region of the current frame image with the afterimage of the preceding frame image adjacent to the current frame image, to obtain the afterimage of the current frame image.
5. The image processing method according to claim 1, wherein fusing the subject region of the current frame image with the afterimage of the current frame image to obtain the output image of the current frame image comprises:
performing edge fusion between the edge of the subject region of the current frame image and the afterimage of the current frame image.
6. An image processing apparatus applied to an electronic device, the apparatus comprising:
an acquisition unit, configured to capture a video with a camera of the electronic device or select a video from an album of the electronic device, wherein the video comprises a current frame image and one or more preceding frame images of the current frame image;
a confirmation unit, configured to determine a subject in the video;
a segmentation unit, configured to obtain, based on the determined subject, a subject region of the current frame image and background regions of the current frame image and the preceding frame images through a semantic segmentation algorithm;
an inter-frame fusion unit, configured to obtain an afterimage of the current frame image based on the background regions of the current frame image and the preceding frame images; and
a current frame image fusion unit, configured to fuse the subject region of the current frame image with the afterimage of the current frame image to obtain an output image of the current frame image.
7. The image processing apparatus according to claim 6, wherein the confirmation unit is configured to:
determine the subject of any frame in the video in response to an external instruction; and
determine the subject in the video by a target tracking algorithm based on the subject of that frame.
8. The image processing apparatus according to claim 6, wherein the inter-frame fusion unit is configured to:
perform convolution on the background region of the current frame image and the background regions of a preset number of preceding frame images of the current frame image in sequence, to obtain the afterimage of the current frame image.
9. The image processing apparatus according to claim 6, wherein the inter-frame fusion unit is configured to:
perform weighted fusion of the background region of the current frame image with the afterimage of the preceding frame image adjacent to the current frame image, to obtain the afterimage of the current frame image.
10. The image processing apparatus according to claim 6, wherein the current frame image fusion unit is configured to:
perform edge fusion between the edge of the subject region of the current frame image and the afterimage of the current frame image.
11. An electronic device, comprising:
a memory to store instructions; and
a processor, configured to invoke the instructions stored in the memory to perform the image processing method of any one of claims 1 to 5.
12. A computer-readable storage medium storing instructions which, when executed by a processor, cause the processor to perform the image processing method of any one of claims 1 to 5.
CN202010120903.9A 2020-02-26 2020-02-26 Image processing method and apparatus, electronic device, and computer-readable storage medium Pending CN113313788A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010120903.9A CN113313788A (en) 2020-02-26 2020-02-26 Image processing method and apparatus, electronic device, and computer-readable storage medium

Publications (1)

Publication Number Publication Date
CN113313788A 2021-08-27

Family

ID=77370729

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010120903.9A Pending CN113313788A (en) 2020-02-26 2020-02-26 Image processing method and apparatus, electronic device, and computer-readable storage medium

Country Status (1)

Country Link
CN (1) CN113313788A (en)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113923368A (en) * 2021-11-25 2022-01-11 维沃移动通信有限公司 Shooting method and device
WO2023246844A1 (en) * 2022-06-21 2023-12-28 北京字跳网络技术有限公司 Video processing method and apparatus, and device and medium

Citations (18)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104766361A (en) * 2015-04-29 2015-07-08 腾讯科技(深圳)有限公司 Ghosting effect realization method and device
KR101553589B1 (en) * 2015-04-10 2015-09-18 주식회사 넥스파시스템 Appratus and method for improvement of low level image and restoration of smear based on adaptive probability in license plate recognition system
US20160205291A1 (en) * 2015-01-09 2016-07-14 PathPartner Technology Consulting Pvt. Ltd. System and Method for Minimizing Motion Artifacts During the Fusion of an Image Bracket Based On Preview Frame Analysis
CN107333056A (en) * 2017-06-13 2017-11-07 努比亚技术有限公司 Image processing method, device and the computer-readable recording medium of moving object
CN107707835A (en) * 2017-09-11 2018-02-16 广东欧珀移动通信有限公司 Image processing method and device, electronic installation and computer-readable recording medium
CN108055477A (en) * 2017-11-23 2018-05-18 北京美摄网络科技有限公司 A kind of method and apparatus for realizing smear special efficacy
CN108282612A (en) * 2018-01-12 2018-07-13 广州市百果园信息技术有限公司 Method for processing video frequency and computer storage media, terminal
CN108574794A (en) * 2018-03-30 2018-09-25 京东方科技集团股份有限公司 Image processing method, device and display equipment, computer readable storage medium
CN108717701A (en) * 2018-05-24 2018-10-30 北京金山安全软件有限公司 Method, device, electronic equipment and medium for manufacturing special effect of movie ghost
CN108898567A (en) * 2018-09-20 2018-11-27 北京旷视科技有限公司 Image denoising method, apparatus and system
CN108933905A (en) * 2018-07-26 2018-12-04 努比亚技术有限公司 video capture method, mobile terminal and computer readable storage medium
CN109040837A (en) * 2018-07-27 2018-12-18 北京市商汤科技开发有限公司 Method for processing video frequency and device, electronic equipment and storage medium
CN109671047A (en) * 2017-10-16 2019-04-23 无锡威莱斯电子有限公司 A kind of Vibe Detection dynamic target method based on depth transducer
CN109688346A (en) * 2018-12-28 2019-04-26 广州华多网络科技有限公司 A kind of hangover special efficacy rendering method, device, equipment and storage medium
WO2019084712A1 (en) * 2017-10-30 2019-05-09 深圳市大疆创新科技有限公司 Image processing method and apparatus, and terminal
CN109756680A (en) * 2019-01-30 2019-05-14 Oppo广东移动通信有限公司 Image composition method, device, electronic equipment and readable storage medium storing program for executing
CN109961453A (en) * 2018-10-15 2019-07-02 华为技术有限公司 A kind of image processing method, device and equipment
CN110782469A (en) * 2019-10-25 2020-02-11 北京达佳互联信息技术有限公司 Video frame image segmentation method and device, electronic equipment and storage medium


Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
MOHAMMADREZA BABAEE ET AL: "A deep convolutional neural network for video sequence background subtraction", Pattern Recognition, pages 635-649 *
ZHANG PENG ET AL: "Implementation algorithm of 'blade-light and sword-shadow' special effects in 3D games", Computer Systems & Applications, vol. 20, no. 7, pages 192-194 *


Similar Documents

Publication Publication Date Title
CN107798669B (en) Image defogging method and device and computer readable storage medium
EP3010226B1 (en) Method and apparatus for obtaining photograph
CN106131441B (en) Photographing method and device and electronic equipment
CN108154466B (en) Image processing method and device
CN109784164B (en) Foreground identification method and device, electronic equipment and storage medium
CN110796012B (en) Image processing method and device, electronic equipment and readable storage medium
CN113313788A (en) Image processing method and apparatus, electronic device, and computer-readable storage medium
CN111968052A (en) Image processing method, image processing apparatus, and storage medium
CN109784327B (en) Boundary box determining method and device, electronic equipment and storage medium
CN110876014B (en) Image processing method and device, electronic device and storage medium
CN111741187B (en) Image processing method, device and storage medium
CN106469446B (en) Depth image segmentation method and segmentation device
CN112188096A (en) Photographing method and device, terminal and storage medium
CN107451972B (en) Image enhancement method, device and computer readable storage medium
CN114666490B (en) Focusing method, focusing device, electronic equipment and storage medium
CN114500821B (en) Photographing method and device, terminal and storage medium
CN114422687B (en) Preview image switching method and device, electronic equipment and storage medium
US11252341B2 (en) Method and device for shooting image, and storage medium
CN113315903B (en) Image acquisition method and device, electronic equipment and storage medium
CN116866495A (en) Image acquisition method, device, terminal equipment and storage medium
CN111461950B (en) Image processing method and device
CN109447929B (en) Image synthesis method and device
CN115953422B (en) Edge detection method, device and medium
US20240005521A1 (en) Photographing method and apparatus, medium and chip
CN115118950B (en) Image processing method and device

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination