CN116152114A - Image digital content generation method, device, terminal equipment and storage medium - Google Patents


Publication number
CN116152114A
Authority
CN
China
Prior art keywords
image
target
moving object
difference
historical
Legal status
Pending
Application number
CN202310342011.7A
Other languages
Chinese (zh)
Inventor
唐昊铭
梁文彬
马宜天
杜健聪
Current Assignee
Guangzhou Leafun Culture Science and Technology Co Ltd
Original Assignee
Guangzhou Leafun Culture Science and Technology Co Ltd
Application filed by Guangzhou Leafun Culture Science and Technology Co Ltd
Priority: CN202310342011.7A
Publication: CN116152114A

Classifications

    • G06T5/70
    • G06T3/04
    • G06T5/00 Image enhancement or restoration
    • G06T5/50 Image enhancement or restoration by the use of more than one image, e.g. averaging, subtraction
    • G06T7/10 Segmentation; Edge detection
    • G06T7/20 Analysis of motion
    • G06T2207/10016 Video; Image sequence
    • G06T2207/20092 Interactive image processing based on input by user
    • G06T2207/20104 Interactive definition of region of interest [ROI]
    • G06T2207/20212 Image combination
    • G06T2207/20224 Image subtraction

Abstract

The embodiment of the application discloses an image digital content generation method, device, terminal equipment and storage medium. A first image corresponding to a target moment and multiple frames of second images continuously collected before the target moment are acquired, where the first image and the second images each contain a background area and a moving object. The multiple frames of second images are superimposed to obtain a third image, the third image is differentiated according to the first image, and the image content of the third image that differs from the first image is retained to obtain a first avatar image. Because the moving object is in motion, the image content corresponding to the moving object in the multi-frame second images differs from that in the first image in spatial position and/or form, while the background area is stationary and identical. After the differentiation processing, the first avatar image therefore contains the superimposed image content of the moving object from the multiple second images but not the content of the third image that is identical to the background area of the first image, so the first avatar image can represent the moving object, improving the image display effect after stylization.

Description

Image digital content generation method, device, terminal equipment and storage medium
Technical Field
The present invention relates to the field of image processing technologies, and in particular, to a method and apparatus for generating digital content of an image, a terminal device, and a storage medium.
Background
At present, an image acquired by an image capturing apparatus generally contains both a background area and a moving object. When such an image is stylized, the background area and the moving object are processed together, so the stylization cannot focus on the moving object, which degrades the display effect of the stylized image.
Disclosure of Invention
The embodiment of the application discloses an image digital content generation method, device, terminal device and storage medium, which can generate an avatar image corresponding to a moving object in a moving state, thereby improving the image display effect after stylization.
The embodiment of the application discloses an image digital content generation method, which comprises the following steps:
acquiring a first image corresponding to a target moment and a plurality of frames of second images continuously acquired before the target moment, wherein the first image and the second image both comprise a background area and a moving object;
performing superposition processing on the multi-frame second image to obtain a third image;
and performing differentiation processing on the third image according to the first image, and retaining image content of the third image that differs from the first image, so as to obtain a first avatar image corresponding to the moving object.
As an optional implementation manner, the differentiating of the third image according to the first image and retaining of the image content of the third image that differs from the first image, so as to obtain the first avatar image corresponding to the moving object, includes:
performing differentiation processing on the third image according to the first image, and reserving image content, which is different from the first image, in the third image to obtain a target difference image corresponding to the target moment;
obtaining a cached multi-frame historical difference image, wherein the historical difference image is a difference image corresponding to a historical moment before the target moment;
superposing the multi-frame historical difference images and the target difference image to obtain the first avatar image;
the method further comprises the steps of:
and caching the target difference image corresponding to the target moment.
As an optional implementation manner, after obtaining the cached multi-frame historical difference images, the method further includes:
adjusting the transparency of each frame of historical difference image according to the historical moment corresponding to that frame, to obtain multiple frames of adjusted historical difference images, where the transparency of a historical difference image with an earlier historical moment is smaller than that of a historical difference image with a later historical moment;
The superposing of the multi-frame historical difference images and the target difference image to obtain the first avatar image includes:
and superposing the multi-frame adjusted historical difference images and the target difference image to generate the first avatar image.
As an optional implementation manner, for the adjusted historical difference images, the ratio of the transparencies of the two frames of historical difference images corresponding to any two adjacent historical moments is a preset parameter.
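A fixed transparency ratio between adjacent historical moments amounts to a geometric fading of older difference images. The sketch below is our own illustration (function name, the choice that the newest frame gets weight 1.0, and the default ratio are all assumptions; the patent only says the ratio is a preset parameter):

```python
def history_weights(n_frames: int, ratio: float = 0.8) -> list[float]:
    """Geometric transparency weights for n cached difference frames.

    The newest frame gets weight 1.0; each older frame's weight is
    `ratio` times that of the next newer frame, so earlier moments
    fade out. `ratio` plays the role of the patent's preset parameter;
    the default 0.8 is an illustrative guess.
    """
    # index 0 = oldest frame, index n_frames - 1 = newest frame
    return [ratio ** (n_frames - 1 - i) for i in range(n_frames)]

weights = history_weights(4, ratio=0.5)
# oldest -> newest: [0.125, 0.25, 0.5, 1.0]
```

With these weights, any two adjacent frames keep a constant transparency ratio, matching the preset-parameter constraint above.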
As an alternative embodiment, the method further comprises:
and eliminating the image content of the first avatar image whose gray value is smaller than a preset gray value, to obtain a second avatar image.
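This elimination step can be sketched as a simple threshold that zeroes dim pixels. The function name and the threshold value are our own illustration; the patent does not specify a particular value:

```python
import numpy as np

def remove_dim_content(avatar: np.ndarray, min_gray: int = 30) -> np.ndarray:
    """Zero out pixels of the first avatar image whose gray value is
    below the preset threshold, yielding the second avatar image.

    `min_gray=30` is an illustrative value, not taken from the patent.
    """
    out = avatar.copy()
    out[out < min_gray] = 0  # dim content is treated as residual noise
    return out

img = np.array([[10, 40], [29, 200]], dtype=np.uint8)
cleaned = remove_dim_content(img, 30)
# -> [[0, 40], [0, 200]]
```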
As an alternative embodiment, the method further comprises:
performing softening processing on the first avatar image to obtain a second avatar image.
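The patent does not name a particular softening filter; a minimal sketch, assuming a 3x3 box blur stands in for the softening step:

```python
import numpy as np

def soften(avatar: np.ndarray) -> np.ndarray:
    """Soften the first avatar image with a 3x3 box blur (one possible
    choice; the patent does not prescribe a specific filter)."""
    img = avatar.astype(np.float32)
    padded = np.pad(img, 1, mode="edge")  # replicate borders
    acc = np.zeros_like(img)
    # sum the 3x3 neighborhood of every pixel via shifted slices
    for dy in (0, 1, 2):
        for dx in (0, 1, 2):
            acc += padded[dy:dy + img.shape[0], dx:dx + img.shape[1]]
    return (acc / 9.0).astype(np.uint8)
```

A Gaussian blur would serve equally well here; the box blur merely keeps the sketch dependency-free.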
As an optional implementation manner, the acquiring the first image corresponding to the target moment and the multiple frames of the second images acquired continuously before the target moment includes:
acquiring a first shooting image shot by the image pickup device at the target moment and a plurality of frames of second shooting images shot by the image pickup device continuously before the target moment;
performing gray-scale processing on the first shot image to obtain the first image;
and performing gray-scale processing on each second shot image to obtain each second image.
The embodiment of the application discloses an image digital content generating device, which comprises:
the acquisition module is used for acquiring a first image corresponding to a target moment and a plurality of frames of second images continuously acquired before the target moment, wherein the first image and the second image both comprise a background area and a moving object;
the superposition module is used for carrying out superposition processing on the plurality of second images so as to obtain a third image;
and the processing module is used for performing differentiation processing on the third image according to the first image, and retaining image content of the third image that differs from the first image, so as to obtain a first avatar image corresponding to the moving object.
The embodiment of the application discloses a terminal device, which comprises a memory and a processor, wherein the memory stores a computer program, and when the computer program is executed by the processor, the processor realizes any one of the image digital content generation methods disclosed by the embodiment of the application.
The embodiment of the application discloses a computer readable storage medium, on which a computer program is stored, wherein the computer program, when executed by a processor, implements any one of the image digital content generation methods disclosed in the embodiment of the application.
Compared with the related art, the embodiment of the application has the following beneficial effects:
according to the image digital content generation method, a first image corresponding to a target moment and a plurality of frames of second images continuously acquired before the target moment are acquired, wherein the first image and the second image both comprise a target background area and a moving object, the plurality of frames of second images are subjected to superposition processing to obtain a third image, the third image is subjected to differentiation processing according to the first image, and image contents, different from the first image, in the third image are reserved to obtain a first image. Because the moving object is in a moving state, the image content corresponding to the moving object contained in the multi-frame second image is different from the image content corresponding to the moving object contained in the first image in space position and/or form, and the background area is static and different, so that after the differentiation processing is carried out, the first image comprises the image content corresponding to the moving object after the multi-frame second image is overlapped, but does not comprise the image content identical to the image content corresponding to the background area of the first image in the third image, the moving object can be represented, so that the moving object can be focused in the process of carrying out the stylized processing on the obtained first image, and the image display effect after the stylized processing is improved.
Drawings
In order to more clearly illustrate the technical solutions of the embodiments of the present application, the drawings that are needed in the embodiments will be briefly described below, and it is obvious that the drawings in the following description are only some embodiments of the present application, and that other drawings may be obtained according to these drawings without inventive effort for a person skilled in the art.
Fig. 1 is a schematic flow chart of an image digital content generation method disclosed in an embodiment of the present application;
FIG. 2 is a schematic diagram of an image processing flow of an image digital content generation method according to an embodiment of the present application;
FIG. 3 is a schematic flow chart of a differentiation processing method disclosed in the embodiment of the present application;
fig. 4 is a schematic image processing flow diagram of a differentiation processing method disclosed in an embodiment of the present application;
FIG. 5 is a flow chart of another method for generating image digital content disclosed in an embodiment of the present application;
FIG. 6 is a flow chart of another method for generating image digital content disclosed in an embodiment of the present application;
FIG. 7 is a flow chart of another method for generating image digital content disclosed in an embodiment of the present application;
fig. 8 is a schematic structural view of an image digital content generating apparatus disclosed in an embodiment of the present application;
Fig. 9 is a schematic structural diagram of a terminal device disclosed in an embodiment of the present application.
Detailed Description
The following description of the technical solutions in the embodiments of the present application will be made clearly and completely with reference to the drawings in the embodiments of the present application, and it is apparent that the described embodiments are only some embodiments of the present application, not all embodiments. All other embodiments, which can be made by one of ordinary skill in the art without undue burden from the present disclosure, are within the scope of the present disclosure.
It should be noted that the terms "comprising" and "having" and any variations thereof in the embodiments and figures herein are intended to cover a non-exclusive inclusion. For example, a process, method, system, article, or apparatus that comprises a list of steps or elements is not limited to only those listed steps or elements but may include other steps or elements not listed or inherent to such process, method, article, or apparatus.
Practice shows that, without a green-screen background, an image obtained by shooting a moving object with an image pickup device also contains image content corresponding to the background. Because of this background, later stylization cannot focus on the moving object, which affects the image display effect. In the related art, a corresponding SDK (Software Development Kit) package must be built in the processing engine for each different background, so that the image content corresponding to the moving object is identified through feature points and an avatar image of the moving object is obtained. However, while constructing such an SDK package, the algorithm must be repeatedly adjusted and rewritten according to experimental results to ensure that the avatar image of the moving object is determined correctly. The time required to construct the SDK package is therefore relatively long, resulting in a relatively high up-front time cost for acquiring an avatar image of the moving object.
In view of this, the embodiments of the present application provide an image digital content generation method that obtains an avatar image corresponding to a moving object in a moving state by superimposing multiple frames of second images to represent the moving object, without needing feature points to identify the moving object, which reduces the up-front time cost of acquiring the avatar image. When the avatar image (including the first avatar image and the second avatar image) obtained by this method is used as the input of the stylization processing, the stylization can focus on the moving object, so the image displayed on the display device is the stylized image focused on the moving object, improving the image display effect.
Referring to fig. 1, a method for generating digital image content according to an embodiment of the present application is shown, and the method may be applied to a terminal device, where the terminal device may include, but is not limited to, a personal computer, a notebook computer, a tablet computer, and the like. As shown in fig. 1, the method may include steps S100 to S300.
S100, acquiring a first image corresponding to the target moment and a plurality of frames of second images continuously acquired before the target moment.
It should be noted that, as shown in fig. 2, the first image 100 and the second images 200 each include a background area (the square in fig. 2) and a moving object (the triangle in fig. 2). The background area may be understood as content that is stationary or not of interest (e.g. requiring no stylization), while the moving object may be understood as content that is in motion or of interest (e.g. requiring stylization). The first image 100 corresponds to the target moment; the multiple frames of second images 200 correspond to different, consecutive historical moments. Because the moving object is in motion, the image content corresponding to the moving object in each frame of second image 200 exhibits spatial displacement and/or morphological change. It will be appreciated that the number of frames of second images 200 to be acquired may be selected according to the actual situation, which is not limited in the embodiments of the present application.
In one embodiment, the first image and the second images are captured images shot in real time by the image pickup apparatus. Acquiring the first image corresponding to the target moment and the multiple frames of second images continuously collected before the target moment may include: acquiring multiple frames of second shot images shot continuously by the image pickup apparatus before the target moment and a first shot image shot at the target moment, taking the first shot image as the first image and the multiple frames of second shot images as the multiple frames of second images. It should be noted that the shooting region of the image pickup apparatus covers the background area and the moving object in front of it, and the image pickup apparatus is connected to the terminal device so that the terminal device can acquire the captured images in real time; when the moving object is in front of the background area, the captured image contains both the background area and the moving object. Optionally, the terminal device may store the captured images in order to generate the avatar image corresponding to the moving object. Optionally, the image pickup apparatus may include, but is not limited to, a monocular camera or a fisheye camera.
In one embodiment, after acquiring the plurality of frames of second captured images captured continuously by the image capturing apparatus before the target time and the first captured images captured at the target time, the terminal apparatus may perform at least one of denoising processing, calibration correction processing, clipping processing, smoothing processing, and the like on the plurality of frames of second captured images and the first captured images. It should be noted that the denoising process is used to reduce noise in an image, and the denoising process may include, but is not limited to, reducing noise in an image by using a spatial domain method or a frequency domain method. The calibration correction process is used to recover the distorted image. The cropping process is used to acquire an image of the target region or target shape. Smoothing is used to blur an image or reduce interference of an image to improve the quality of the image.
In one embodiment, after acquiring the multiple frames of second shot images shot continuously by the image pickup apparatus before the target moment and the first shot image shot at the target moment, the method includes: performing gray-scale processing on the first shot image to obtain the first image, and performing gray-scale processing on each second shot image to obtain each second image. In this embodiment, gray-scale processing of the first and second shot images avoids interference of their colors with the subsequent processing.
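One common way to perform such gray-scale processing is a luma-weighted combination of the color channels. The sketch below uses the ITU-R BT.601 weights, which is one standard choice; the patent only requires some form of gray-scale processing:

```python
import numpy as np

def to_gray(rgb: np.ndarray) -> np.ndarray:
    """Convert an H x W x 3 RGB capture to a single-channel gray image.

    Uses the ITU-R BT.601 luma weights (0.299, 0.587, 0.114) -- one
    standard choice; the patent does not prescribe a formula.
    """
    weights = np.array([0.299, 0.587, 0.114], dtype=np.float32)
    # contract the channel axis, round, and return 8-bit gray values
    return np.rint(rgb.astype(np.float32) @ weights).astype(np.uint8)
```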
In another embodiment, the first image and the second images are images corresponding to different moments within the same video content. Optionally, the video content may be stored in the terminal device, or in a server connected to the terminal device. Acquiring the first image corresponding to the target moment and the multiple frames of second images continuously collected before the target moment may include: acquiring a first video image corresponding to the target moment in the video content and multiple continuous frames of second video images before the target moment, taking the first video image as the first image and the multiple frames of second video images as the multiple frames of second images. It will be appreciated that the background of the video content should remain the same, so that after the differentiation processing the resulting first avatar image does not include the image content corresponding to the background area in the third image.
In one embodiment, acquiring the first video image corresponding to the target moment in the video content and the multiple continuous frames of second video images before the target moment includes: performing gray-scale processing on the first video image to obtain the first image, and performing gray-scale processing on each second video image to obtain each second image. In this embodiment, gray-scale processing of the first and second video images avoids interference of their colors with the subsequent processing.
And S200, performing superposition processing on the multi-frame second image to obtain a third image.
It should be noted that, as shown in fig. 2, the multiple frames of second images 200 are superimposed so that the image content corresponding to the moving object in each frame is combined into the third image 300. The third image 300 therefore contains the motion information of the moving object across the multiple second images 200; this motion information reflects the spatial position and/or motion form of the moving object at the historical moment corresponding to each frame of second image 200. As a result, the area of image content corresponding to the moving object is larger in the third image 300, and more motion information of the moving object is included.
In one embodiment, superimposing the multiple frames of second images to obtain the third image includes: determining target second images among the multiple frames of second images, and superimposing the target second images to obtain the third image, where the target second images comprise at least two frames. Optionally, determining the target second images among the multiple frames of second images includes: selecting second images according to the historical moment corresponding to each frame of second image and a preset time interval, each selected second image serving as a target second image. The third image in this embodiment is obtained by superimposing the target second images; compared with superimposing second images at consecutive historical moments, a third image superimposed from the same number of frames can reflect the spatial position and form information of the moving object over a longer time span.
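Selecting target second images at a preset interval can be sketched as strided sampling over the cached frames. The function name and default interval are our own illustration:

```python
def select_target_frames(frames: list, interval: int = 2) -> list:
    """Pick every `interval`-th frame, counted back from the newest
    frame (which is always kept), so that the superimposed third image
    spans a longer time window with the same number of frames.

    `interval=2` is illustrative; the patent calls it a preset
    time interval.
    """
    # walk backwards from the newest frame, sample, then restore order
    return frames[::-1][::interval][::-1]

frames = [0, 1, 2, 3, 4, 5, 6]          # stand-ins for cached images
picked = select_target_frames(frames, 2)
# keeps every 2nd frame counted from the newest (6): [0, 2, 4, 6]
```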
Alternatively, the number of second images subjected to the superimposition processing may range from 8 to 16 frames. Within this range the superimposed third image remains recognizable, and the avatar image obtained after processing can reflect the form of the moving object. If too few second images are used, the third image may be unclear; if too many are used, the third image may become too abstract. Optionally, the number of second images is 8, 10, 12, 14 or 16 frames.
In one embodiment, superimposing the multiple frames of second images to obtain the third image may include: determining one frame among the multiple frames of second images as an initial second image, and superimposing the remaining second images onto the initial second image to obtain the third image. Optionally, the superimposition adds the gray values of corresponding pixel points across the second images.
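The per-pixel addition of gray values can be sketched as follows. Note this is an illustrative implementation, not code from the patent: the accumulation in a wider integer type and the clip back to the 8-bit range are our own assumptions, since the text only states that gray values are added.

```python
import numpy as np

def superimpose(frames: list[np.ndarray]) -> np.ndarray:
    """Superimpose multiple gray second images into the third image by
    summing per-pixel gray values.

    Accumulates in uint16 to avoid 8-bit overflow, then clips to the
    displayable range (the clip is our assumption).
    """
    acc = np.zeros(frames[0].shape, dtype=np.uint16)
    for frame in frames:
        acc += frame  # add this frame's gray values pixel by pixel
    return np.clip(acc, 0, 255).astype(np.uint8)
```

Because the moving object occupies different pixels in each frame, the summed image accumulates its trail while the static background simply brightens uniformly.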
And S300, performing differentiation processing on the third image according to the first image, and retaining image content of the third image that differs from the first image, so as to obtain the first avatar image corresponding to the moving object.
It should be noted that, as shown in fig. 2, the first avatar image 400 can reflect the form of the moving object and contains motion information of the moving object at multiple moments. Because the moving object is in motion, the image content corresponding to the moving object in the multi-frame second images 200 differs from that in the first image 100 in spatial position and/or form, while the background area is stationary and identical. Therefore, after the differentiation processing, most of the image content corresponding to the moving object in the third image 300 is content that differs from the first image 100, whereas the image content corresponding to the background area in the third image 300 is the same as in the first image 100. The first avatar image 400 obtained after the differentiation can thus represent the moving object and does not contain the image content corresponding to the background area. It can be understood that retaining the content of the third image 300 that differs from the first image 100 to obtain the first avatar image 400 covers both the case where that differing content is used directly as the first avatar image 400 and the case where the differing content is further processed to obtain the first avatar image 400; the embodiments of the present application are not limited in this respect.
In one embodiment, differentiating the third image according to the first image and retaining the image content of the third image that differs from the first image, to obtain the first avatar image corresponding to the moving object, may include: eliminating the image content of the third image that coincides with the first image, the remaining image content of the third image serving as the first avatar image corresponding to the moving object.
In another embodiment, differentiating the third image according to the first image and retaining the image content of the third image that differs from the first image, to obtain the first avatar image corresponding to the moving object, may include: selecting the image content of the third image that differs from the first image as the first avatar image corresponding to the moving object.
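Both variants above amount to masking out pixels where the third image matches the first image. A minimal sketch, with the noise tolerance parameter being our own addition (the patent speaks only of identical versus differing content):

```python
import numpy as np

def differentiate(first: np.ndarray, third: np.ndarray,
                  tol: int = 0) -> np.ndarray:
    """Keep only the content of the third image that differs from the
    first image; matching (background) pixels are zeroed out.

    `tol` lets small gray-value deviations count as 'same', to absorb
    sensor noise -- an assumption not present in the patent text.
    """
    # signed arithmetic so the subtraction cannot wrap around
    diff = np.abs(third.astype(np.int16) - first.astype(np.int16))
    return np.where(diff > tol, third, 0).astype(np.uint8)

first = np.array([[10, 10], [10, 10]], dtype=np.uint8)
third = np.array([[10, 50], [10, 10]], dtype=np.uint8)
avatar = differentiate(first, third)
# -> [[0, 50], [0, 0]]: only the differing pixel survives
```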
The embodiment of the present application provides an image digital content generation method: a first image corresponding to a target moment and multiple frames of second images continuously collected before the target moment are acquired, where the first image and the second images each contain a background area and a moving object; the multiple frames of second images are superimposed to obtain a third image; the third image is differentiated according to the first image; and the image content of the third image that differs from the first image is retained to obtain a first avatar image. Because the moving object is in motion, the image content corresponding to the moving object in the multi-frame second images differs from that in the first image in spatial position and/or form, while the background area is stationary and identical. After the differentiation processing, the first avatar image therefore contains the superimposed image content of the moving object from the multiple second images but not the content of the third image that is identical to the background area of the first image, so it can represent the moving object; the stylization processing performed on the first avatar image can thus focus on the moving object, improving the image display effect after stylization. At the same time, by obtaining the avatar image of the moving object through superimposing multiple frames, the method reduces the up-front time cost compared with acquiring the avatar image through feature points.
Referring to fig. 3, which illustrates the differentiation processing method provided in the embodiment of the present application: as shown in fig. 3, differentiating the third image according to the first image and retaining the image content of the third image that differs from the first image, to obtain the first avatar image corresponding to the moving object, may include steps S210 to S230.
And S210, performing differentiation processing on the third image according to the first image, and reserving image content, which is different from the first image, in the third image to obtain a target difference image corresponding to the target moment.
It should be noted that, for the description of performing differentiation processing on the third image according to the first image and reserving the image content of the third image that differs from the first image, reference may be made to the above embodiments, and details are not repeated herein.
S220, obtaining a cached multi-frame historical difference image, wherein the historical difference image is a difference image corresponding to a historical moment before the target moment.
It should be noted that the target moment may be understood as the current moment. For the moment previous to the current moment, the terminal device acquires a first image corresponding to that previous moment and a plurality of frames of second images continuously acquired before it, performs superposition processing on the plurality of frames of second images to obtain a corresponding third image, performs differentiation processing on the third image according to the first image corresponding to the previous moment to obtain a difference image corresponding to the previous moment, and caches that difference image. For the current moment, the terminal device likewise acquires a first image corresponding to the current moment and a plurality of frames of second images continuously acquired before the current moment, performs superposition processing on them to obtain a corresponding third image, performs differentiation processing on the third image according to the first image corresponding to the current moment to obtain a difference image corresponding to the current moment, and caches that difference image. Relative to the current moment, the difference image corresponding to the previous moment is a historical difference image, and the difference image corresponding to the current moment is the target difference image. For the moment following the current moment, that next moment is taken as the new current moment, the current moment becomes a historical moment, and the process repeats. Optionally, the difference images acquired by the terminal device may be stored in a memory of the terminal device, that is, the memory may cache a plurality of historical difference images respectively corresponding to historical moments.
In one embodiment, after performing differentiation processing on the third image according to the first image and reserving the image content that differs from the first image to obtain the target difference image corresponding to the target moment, the method may further include: caching the target difference image corresponding to the target moment. This embodiment caches the target difference image so that it can be used to obtain the first image corresponding to the moment following the target moment. It can be appreciated that this embodiment does not limit when the target difference image is cached: it may be cached after the first image is obtained, or before the first image is obtained.
In one embodiment, the terminal device may include a storage module, and caching the target difference image corresponding to the target moment includes: storing the target difference image corresponding to the target moment into the storage module. In one embodiment, the storage module is configured to store a preset number of historical difference images, and when the target difference image is stored into a full storage module, the historical difference image stored first is deleted, so as to avoid an excessive amount of data in the storage module. It should be noted that the preset number may be selected according to actual needs, so that the obtained first image can reflect the motion trail of the moving object. Optionally, the preset number may range from 50 frames to 70 frames; for example, the preset number may be 50 frames, 60 frames, or 70 frames.
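The storage module described above behaves like a fixed-capacity queue. A minimal sketch, assuming Python and picking 60 frames from the stated 50-70 range; `collections.deque` with `maxlen` gives exactly the "delete the first-stored frame when full" behavior:

```python
from collections import deque

# Preset number of cached historical difference images; 60 is an
# arbitrary pick from the 50-70 frame range stated in the text.
PRESET_NUMBER = 60

class DifferenceImageCache:
    def __init__(self, capacity=PRESET_NUMBER):
        self._frames = deque(maxlen=capacity)

    def push(self, difference_image):
        # deque(maxlen=...) silently evicts the first-stored frame
        # once the preset number is reached.
        self._frames.append(difference_image)

    def history(self):
        # Oldest first: earlier historical moments precede later ones.
        return list(self._frames)
```

At each new moment the target difference image is pushed, becoming a historical difference image for every later moment.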
S230, performing superposition processing on the multi-frame historical difference image and the target difference image to obtain a first image.
It should be noted that, as shown in fig. 4, since the first image 100 and the second images are single-frame images, the target difference image 410 can represent the specific contour of the moving object at the target moment, and each historical difference image 420, corresponding to a different historical moment, can represent the specific contour of the moving object at its corresponding moment. By performing superposition processing on the multi-frame historical difference images 420 and the target difference image 410, the motion track of the moving object (the area enclosed by the dashed line in fig. 4) is retained, so that the identifiability of the first image 400 is enhanced.
In one embodiment, performing superposition processing on the multi-frame historical difference images and the target difference image to obtain the first image may include: determining target historical difference images among the multi-frame historical difference images, and performing superposition processing on the target historical difference images and the target difference image to obtain the first image, wherein the target historical difference images include at least two frames. Optionally, determining the target historical difference images among the multi-frame historical difference images includes: selecting, according to the historical moment corresponding to each frame of historical difference image, historical difference images at a preset time interval as the target historical difference images. The preset time interval may be selected as needed; for example, if the first image needs to separately represent the motion information of the moving object at one or more particular historical moments, the historical difference images corresponding to those historical moments may be selected as the target historical difference images.
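Selecting target historical difference images at a preset time interval can be read as simple stride-based sampling of the cache (oldest first). This is a hypothetical helper, and the stride value is purely illustrative:

```python
def select_target_history(history_frames, stride=5):
    """One possible reading of 'select according to the historical
    moment': keep every stride-th cached difference image so the
    motion trail samples the movement at a preset time interval."""
    return history_frames[::stride]
```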
In one embodiment, after obtaining the cached multi-frame historical difference images, the method may further include: adjusting the transparency of each frame of historical difference image according to the historical moment corresponding to that frame, to obtain multi-frame adjusted historical difference images, wherein the transparency of a historical difference image with an earlier historical moment is smaller than that of a historical difference image with a later historical moment. Performing superposition processing on the multi-frame historical difference images and the target difference image to obtain the first image may then include: performing superposition processing on the multi-frame adjusted historical difference images and the target difference image to generate the first image.
It should be noted that, in this embodiment, the transparency of each historical difference image is adjusted so that a historical difference image with an earlier historical moment is more transparent. The first image obtained after superposition can therefore reflect the form of the moving object at the target moment as well as its forms at a plurality of historical moments before the target moment, while interference from the background is also avoided. In one embodiment, after the target difference image corresponding to the target moment is obtained, the method further includes adjusting the transparency of the target difference image, wherein the transparency of the adjusted target difference image is greater than the transparency of each frame of adjusted historical difference image.
In one embodiment, among the adjusted historical difference images, the ratio of the transparencies of the two frames corresponding to any two adjacent historical moments is a preset parameter. That is, the ratio of the transparency of the historical difference image corresponding to the i-th historical moment to the transparency of the historical difference image corresponding to the (i+1)-th historical moment is a preset parameter, and the preset parameter is smaller than 1. This manner of adjusting transparency makes the obtained first image more natural. In one embodiment, the ratio of the transparency of the adjusted historical difference image corresponding to the last (i.e. latest) historical moment to the transparency of the adjusted target difference image is also the preset parameter.
It can be appreciated that the preset parameter may be set according to practical situations. Optionally, the preset parameter may range from 0.93 to 0.97; for example, the preset parameter may be 0.93, 0.94, 0.95, 0.96 or 0.97.
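The transparency adjustment and superposition can be sketched together by weighting each frame with the preset parameter raised to its distance (in frames) from the target moment, so the target difference image keeps full weight and older frames fade geometrically. This is an assumed reading: the text does not fix the compositing operator, so a per-pixel weighted maximum is used here as one plausible choice.

```python
import numpy as np

def overlay_with_decay(history, target, p=0.95):
    """Superpose adjusted historical difference images and the target
    difference image.  Transparency is modelled as a per-frame weight:
    each step back in time multiplies the weight by the preset
    parameter p (the text suggests 0.93-0.97), so older frames fade."""
    frames = history + [target]          # oldest first, target last
    n = len(frames)
    out = np.zeros_like(target, dtype=float)
    for k, frame in enumerate(frames):
        weight = p ** (n - 1 - k)        # target frame gets weight 1.0
        out = np.maximum(out, weight * frame)
    return out
```

With the weighted maximum, the freshest contour dominates wherever frames overlap, while earlier contours remain visible (dimmer) along the rest of the trail.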
In this embodiment, the multi-frame historical difference images and the target difference image are superposed to obtain the first image; the target difference image is supplemented by the multi-frame historical difference images, so that the identifiability of the first image is enhanced.
The first image does not include image content corresponding to the background area; that is, the first image is not affected by the background area and can be used to display and focus on the moving object.
Referring to fig. 5, another image digital content generating method according to an embodiment of the present application is shown, and the image digital content generating method may include steps S100 to S410.
S100, acquiring a first image corresponding to the target moment and a plurality of frames of second images continuously acquired before the target moment.
And S200, performing superposition processing on the multi-frame second image to obtain a third image.
And S300, performing differentiation processing on the third image according to the first image, and reserving image content, which is different from the first image, in the third image so as to obtain a first image corresponding to the moving object.
S410, eliminating the image content with the gray value smaller than the preset gray value in the first image to obtain a second image.
It should be noted that, for the description of steps S100 to S300, reference may be made to the above embodiments, and details are not repeated herein. With continued reference to fig. 2, each second image 200 corresponds to a different historical moment. An interfering object (such as the circle shown in fig. 2), for example an object that briefly appears in the shooting area of the image capturing apparatus, may appear in the second images 200 corresponding to one or more historical moments. At least one frame of second image 200 may therefore contain image content corresponding to the interfering object, and since the first image 100 does not include the interfering object, the interfering object may be mistakenly treated as a moving object. It can be appreciated that, since the third image 300 is formed by superposing the plurality of frames of second images 200, the third image 300 may include image content corresponding to the interfering object. However, because the interfering object is not present in every frame of second image 200 used to superpose the third image 300, the gray value of the image content corresponding to the interfering object in the third image 300 is smaller than the gray value of the image content corresponding to the moving object. Therefore, by eliminating the image content in the first image 400 whose gray value is smaller than the preset gray value, the image content corresponding to the interfering object can be removed, so that the second image 500 does not include it and can be focused on the moving object.
Alternatively, the preset gray value may be determined according to the maximum gray value in the first image. Optionally, the difference between the maximum gray value and the preset gray value is a preset difference, and the preset difference may be determined empirically. The image content corresponding to the moving object in the first image contains the pixel with the maximum gray value, and the set of pixels whose gray values range from a-b to a can outline the form of the moving object, where a is the maximum gray value, b is the preset difference, and a-b is the preset gray value. Pixels with gray values smaller than a-b can therefore be eliminated, ensuring that the second image corresponding to the moving object is obtained while avoiding the influence of the interfering object.
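The threshold rule a - b can be applied as a single masking step. This is a sketch assuming NumPy grayscale arrays; the preset difference of 40 is illustrative, since the text only says it is determined empirically:

```python
import numpy as np

def eliminate_low_gray(object_image, preset_difference=40):
    """Remove pixels whose gray value falls below (max gray - preset
    difference).  Brief interfering objects appear in only a few of
    the superposed frames, so their gray values stay low and are cut,
    while the moving object's high-gray content survives."""
    a = object_image.max()               # maximum gray value a
    preset_gray = a - preset_difference  # preset gray value a - b
    return np.where(object_image >= preset_gray, object_image, 0.0)
```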
In one embodiment, eliminating the image content with a gray value smaller than the preset gray value in the first image to obtain the second image may include: determining the image content in the first image whose gray value is smaller than the preset gray value, matting that image content out of the first image, and taking the matted first image as the second image.
In still another embodiment, the image digital content generation method may further include: superposing the first image with the third image to obtain a fourth image. In this case, eliminating the image content with a gray value smaller than the preset gray value in the first image to obtain the second image includes: eliminating the image content in the fourth image whose gray value is smaller than the preset gray value to obtain the second image.
It should be noted that, in the fourth image obtained by superposing the first image with the third image, the image content corresponding to the moving object is supplemented relative to the first image. Although the third image includes image content corresponding to the background area, so that the superposed fourth image also includes it, the moving object in each frame of second image blocks part of the background while the first image itself contains no background content; the gray value of the image content corresponding to the moving object in the fourth image therefore remains greater than that of the background area and the interfering object. The image content corresponding to the background area and the interfering object in the fourth image can thus be eliminated to obtain the second image, and the second image can reflect the form of the moving object.
In this embodiment, by eliminating the image content in the first image whose gray value is smaller than the preset gray value, the image content corresponding to the interfering object can be removed from the first image, so as to obtain a second image focused on the moving object. When the second image is subjected to stylized processing, the stylized processing can be focused on the moving object and the stylized image can be displayed, thereby improving the image display effect.
Referring to fig. 6, another image digital content generating method according to an embodiment of the present application is shown, and the image digital content generating method may include steps S100 to S420.
S100, acquiring a first image corresponding to the target moment and a plurality of frames of second images continuously acquired before the target moment.
And S200, performing superposition processing on the multi-frame second image to obtain a third image.
And S300, performing differentiation processing on the third image according to the first image, and reserving image content, which is different from the first image, in the third image so as to obtain a first image corresponding to the moving object.
S420, performing softening processing on the first image to obtain a second image.
It should be noted that, for the description of steps S100 to S300, reference may be made to the above embodiments, and details are not repeated herein. Softening the first image 400 may include performing blurring and/or rounding processing on the first image, so as to increase the softness of relatively stiff image content in the first image and make the second image match the moving object more closely.
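The softening step can be illustrated with the simplest blur available. The text does not name a specific filter, so this sketch uses a 3x3 box blur implemented directly in NumPy; edge padding keeps the output the same size as the input:

```python
import numpy as np

def soften(image):
    """Minimal softening sketch: a 3x3 box blur, one common form of
    the blurring the text mentions.  Edges are handled by padding
    with the border values so the output keeps the input shape."""
    padded = np.pad(image, 1, mode="edge")
    out = np.zeros_like(image, dtype=float)
    # Sum the nine shifted copies of the padded image, then average.
    for dy in (0, 1, 2):
        for dx in (0, 1, 2):
            out += padded[dy:dy + image.shape[0], dx:dx + image.shape[1]]
    return out / 9.0
```

In practice a Gaussian blur or morphological rounding would serve the same purpose; the box blur is chosen here only for self-containment.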
Referring to fig. 7, another image digital content generating method according to an embodiment of the present application is shown, and the method may include steps S100 to S440.
S100, acquiring a first image corresponding to the target moment and a plurality of frames of second images continuously acquired before the target moment.
And S200, performing superposition processing on the multi-frame second image to obtain a third image.
And S300, performing differentiation processing on the third image according to the first image, and reserving image content, which is different from the first image, in the third image so as to obtain a first image corresponding to the moving object.
S430, eliminating the image content with the gray value smaller than the preset gray value in the first image.
S440, performing soft processing on the eliminated first image to obtain a second image.
It should be noted that, for the description of step S100 to step S440, please refer to the above embodiments, and the description is omitted here.
In this embodiment, a third image containing rich motion information of the moving object is obtained by performing superposition processing on the plurality of frames of second images, and differentiation processing is performed on the third image according to the first image, reserving the image content in the third image that differs from the first image, so as to exclude the image content corresponding to the background area of the third image and obtain the first image corresponding to the moving object. The image content in the first image whose gray value is smaller than the preset gray value is then eliminated, removing the interference of the interfering object with the moving object, so that the eliminated first image is more clearly focused on the moving object. Softening processing is then performed on the eliminated first image to obtain a second image with higher softness. When the second image is subjected to stylized processing, the stylized processing can be focused on the moving object, and the stylized image can be displayed, that is, the stylized moving object can be mapped to the display screen, thereby improving the image display effect.
Referring to fig. 8, fig. 8 is a schematic structural diagram of an image digital content generating apparatus according to an embodiment of the present disclosure. The apparatus can be applied to terminal devices such as personal computers, notebook computers and tablet computers, which is not particularly limited herein. As shown in fig. 8, the image digital content generating apparatus 800 may include: an acquisition module 810, a superposition module 820, and a processing module 830. The acquisition module 810 is configured to acquire a first image corresponding to a target moment and a plurality of frames of second images continuously acquired before the target moment, wherein the first image and the second images both comprise a background area and a moving object. The superposition module 820 is configured to perform superposition processing on the multiple frames of second images to obtain a third image. The processing module 830 is configured to perform differentiation processing on the third image according to the first image, and reserve the image content in the third image that differs from the first image, so as to obtain a first image corresponding to the moving object.
In one embodiment, the processing module 830 may include a difference processing unit, a cache acquiring unit, and a superposition processing unit, and the image digital content generating apparatus 800 may further include a caching module. The difference processing unit is configured to perform differentiation processing on the third image according to the first image, and reserve the image content in the third image that differs from the first image, to obtain a target difference image corresponding to the target moment. The cache acquiring unit is configured to obtain cached multi-frame historical difference images, wherein a historical difference image is a difference image corresponding to a historical moment before the target moment. The superposition processing unit is configured to perform superposition processing on the multi-frame historical difference images and the target difference image to obtain the first image. The caching module is configured to cache the target difference image corresponding to the target moment.
In one embodiment, the processing module 830 may further include an adjusting unit, and the superposition processing unit may further include a superposition processing subunit. The adjusting unit is configured to adjust the transparency of each frame of historical difference image according to the historical moment corresponding to that frame, to obtain multi-frame adjusted historical difference images, wherein the transparency of a historical difference image with an earlier historical moment is smaller than that of a historical difference image with a later historical moment. The superposition processing subunit is configured to perform superposition processing on the multi-frame adjusted historical difference images and the target difference image to generate the first image.
In one embodiment, the image digital content generation apparatus 800 may further include an eliminating module. The eliminating module is configured to eliminate the image content in the first image whose gray value is smaller than the preset gray value, to obtain a second image.
In one embodiment, the image digital content generation apparatus 800 may further include a softening module. The softening module is used for conducting softening processing on the first image to obtain a second image.
In one embodiment, the acquisition module 810 may include: the device comprises an acquisition unit, a first gray level unit and a second gray level unit. The acquisition unit is configured to acquire a first captured image captured by the image capturing apparatus at a target time and a plurality of frames of second captured images captured by the image capturing apparatus successively before the target time. The first gray scale unit is used for performing gray scale processing on the first photographed image to acquire the first image. The second gray level unit is used for performing gray level processing on each second shooting image so as to acquire each second image.
Referring to fig. 9, fig. 9 is a schematic structural diagram of a terminal device according to an embodiment of the present application.
As shown in fig. 9, the terminal device 900 may include:
a memory 910 storing executable program code;
a processor 920 coupled with the memory 910;
wherein the processor 920 invokes executable program code stored in the memory 910 to perform any of the image digital content generation methods disclosed in the embodiments of the present application.
The embodiment of the application discloses a computer readable storage medium storing a computer program, wherein the computer program, when executed by a processor, causes the processor to implement any one of the image digital content generation methods disclosed in the embodiments of the present application.
It should be appreciated that reference throughout this specification to "one embodiment" or "an embodiment" means that a particular feature, structure or characteristic described in connection with the embodiment is included in at least one embodiment of the present application. Thus, the appearances of the phrases "in one embodiment" or "in an embodiment" in various places throughout this specification are not necessarily all referring to the same embodiment. Furthermore, the particular features, structures, or characteristics may be combined in any suitable manner in one or more embodiments. Those skilled in the art will also appreciate that the embodiments described in the specification are all alternative embodiments and that the acts and modules referred to are not necessarily required in the present application.
In various embodiments of the present application, it should be understood that the size of the sequence numbers of the above processes does not mean that the execution sequence of the processes is necessarily sequential, and the execution sequence of the processes should be determined by the functions and internal logic thereof, and should not constitute any limitation on the implementation process of the embodiments of the present application.
The units described above as separate components may or may not be physically separate, and components shown as units may or may not be physical units, may be located in one place, or may be distributed over a plurality of network units. Some or all of the units may be selected according to actual needs to achieve the purpose of the embodiment.
In addition, each functional unit in the embodiments of the present application may be integrated in one processing unit, or each unit may exist alone physically, or two or more units may be integrated in one unit. The integrated units may be implemented in hardware or in software functional units.
The integrated units described above, if implemented in the form of software functional units and sold or used as stand-alone products, may be stored in a computer-accessible memory. Based on such understanding, the technical solution of the present application, or a part contributing to the prior art or all or part of the technical solution, may be embodied in the form of a software product stored in a memory, including several requests for a computer device (which may be a personal computer, a server or a network device, etc., in particular may be a processor in the computer device) to perform part or all of the steps of the above-mentioned method of the various embodiments of the present application.
Those of ordinary skill in the art will appreciate that all or part of the steps of the various methods of the above embodiments may be implemented by a program that instructs associated hardware, the program may be stored in a computer readable storage medium including Read-Only Memory (ROM), random access Memory (Random Access Memory, RAM), programmable Read-Only Memory (Programmable Read-Only Memory, PROM), erasable programmable Read-Only Memory (Erasable Programmable Read Only Memory, EPROM), one-time programmable Read-Only Memory (OTPROM), electrically erasable programmable Read-Only Memory (EEPROM), compact disc Read-Only Memory (Compact Disc Read-Only Memory, CD-ROM) or other optical disk Memory, magnetic disk Memory, tape Memory, or any other medium that can be used for carrying or storing data that is readable by a computer.
The foregoing describes in detail a method, apparatus, terminal device and storage medium for generating digital content of an image disclosed in the embodiments of the present application, and specific examples are applied to illustrate the principles and implementation of the present application, where the foregoing description of the embodiments is only used to help understand the method and core idea of the present application. Meanwhile, as those skilled in the art will have modifications in the specific embodiments and application scope in accordance with the ideas of the present application, the present description should not be construed as limiting the present application in view of the above.

Claims (10)

1. A method of generating digital content of an image, the method comprising:
acquiring a first image corresponding to a target moment and a plurality of frames of second images continuously acquired before the target moment, wherein the first image and the second image both comprise a background area and a moving object;
performing superposition processing on the multi-frame second image to obtain a third image;
and performing differentiation processing on the third image according to the first image, and reserving image content, which is different from the first image, in the third image so as to obtain a first image corresponding to the moving object.
2. The method according to claim 1, wherein the performing differentiation processing on the third image according to the first image and reserving image content, which is different from the first image, in the third image to obtain the first image corresponding to the moving object comprises:
performing differentiation processing on the third image according to the first image, and reserving image content, which is different from the first image, in the third image to obtain a target difference image corresponding to the target moment;
obtaining a cached multi-frame historical difference image, wherein the historical difference image is a difference image corresponding to a historical moment before the target moment;
superposing the multi-frame historical difference image and the target difference image to obtain the first image;
the method further comprises the steps of:
and caching the target difference image corresponding to the target moment.
3. The image digital content generation method according to claim 2, wherein after the obtaining the buffered multi-frame historical difference image, the method further comprises:
according to the historical moment corresponding to the historical difference image of each frame, the transparency of the historical difference image of each frame is adjusted, and a multi-frame adjusted historical difference image is obtained; the transparency of the history difference image with the previous history moment is smaller than that of the history difference image with the subsequent history moment;
the step of superposing the multi-frame historical difference image and the target difference image to obtain the first image comprises the following steps:
and superposing the multi-frame adjusted historical difference image and the target difference image to generate the first image.
4. The image digital content generation method according to claim 3, wherein, among the adjusted historical difference images, the ratio of the transparencies of two frames of historical difference images corresponding to any two adjacent historical moments is a preset parameter.
5. The image digital content generation method according to any one of claims 1 to 4, characterized in that the method further comprises:
and eliminating the image content of which the gray value is smaller than a preset gray value in the first image to obtain a second image.
6. The image digital content generation method according to any one of claims 1 to 4, characterized in that the method further comprises:
the first image is subjected to a softening process to obtain a second image.
7. The image digital content generation method according to any one of claims 1 to 4, wherein the acquiring the first image corresponding to the target time and the plurality of frames of the second image successively acquired before the target time includes:
acquiring a first photographed image photographed by an image capturing apparatus at the target moment and a plurality of frames of second photographed images photographed by the image capturing apparatus continuously before the target moment;
performing gray processing on the first photographed image to acquire the first image;
and performing gray processing on each second photographed image to acquire each second image.
8. An image digital content generation apparatus, comprising:
an acquisition module, configured to acquire a first image corresponding to a target moment and multiple frames of second images successively acquired before the target moment, wherein the first image and each second image both comprise a background area and a moving object;
a superposition module, configured to superpose the multiple frames of second images to obtain a third image;
and a processing module, configured to perform difference processing on the third image according to the first image, retaining the image content of the third image that differs from the first image, so as to obtain an object image corresponding to the moving object.
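The three modules of claim 8 can be sketched end to end: superpose the earlier frames into a third image, then keep only the pixels of the third image that differ noticeably from the current frame. This is an illustrative reading, assuming mean superposition and an absolute-difference threshold (the claim fixes neither; the function name and threshold are hypothetical):

```python
import numpy as np

def extract_moving_object(first, seconds, diff_threshold=20):
    """Average the earlier (second) frames into a 'third image', then
    retain the third image's content where it differs from the current
    (first) image, yielding an image of the moving object's trail."""
    third = np.mean([s.astype(np.float32) for s in seconds], axis=0)
    diff = np.abs(third - first.astype(np.float32))
    object_image = np.where(diff > diff_threshold, third, 0)
    return object_image.astype(np.uint8)
```

Pixels belonging to the static background cancel in the difference and are suppressed, while regions the moving object passed through survive.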
9. A terminal device comprising a memory and a processor, the memory storing a computer program which, when executed by the processor, causes the processor to implement the method of any one of claims 1 to 7.
10. A computer-readable storage medium on which a computer program is stored, characterized in that the computer program, when executed by a processor, implements the method according to any one of claims 1 to 7.
CN202310342011.7A 2023-03-31 2023-03-31 Image digital content generation method, device, terminal equipment and storage medium Pending CN116152114A (en)



Publications (1)

Publication Number Publication Date
CN116152114A true CN116152114A (en) 2023-05-23

Family

ID=86340912


Country Status (1)

Country Link
CN (1) CN116152114A (en)

Similar Documents

Publication Publication Date Title
CN110324664B (en) Video frame supplementing method based on neural network and training method of model thereof
US8315474B2 (en) Image processing device and method, and image sensing apparatus
KR101699919B1 (en) High dynamic range image creation apparatus of removaling ghost blur by using multi exposure fusion and method of the same
WO2018136373A1 (en) Image fusion and hdr imaging
JP2009194896A (en) Image processing device and method, and imaging apparatus
US11610461B2 (en) Video compression stream
CN109005334A (en) A kind of imaging method, device, terminal and storage medium
An et al. Single-shot high dynamic range imaging via deep convolutional neural network
JP3866957B2 (en) Image synthesizer
CN113315884A (en) Real-time video noise reduction method and device, terminal and storage medium
CN111242860A (en) Super night scene image generation method and device, electronic equipment and storage medium
US7522189B2 (en) Automatic stabilization control apparatus, automatic stabilization control method, and computer readable recording medium having automatic stabilization control program recorded thereon
CN116152114A (en) Image digital content generation method, device, terminal equipment and storage medium
CN115439386A (en) Image fusion method and device, electronic equipment and storage medium
CN116612015A (en) Model training method, image mole pattern removing method and device and electronic equipment
CN115035013A (en) Image processing method, image processing apparatus, terminal, and readable storage medium
CN114449130A (en) Multi-camera video fusion method and system
JP4598623B2 (en) Image processing device
CN112003996A (en) Video generation method, terminal and computer storage medium
JP2003230045A (en) Signal processing method and apparatus
JP2015177477A (en) image processing apparatus, heat haze suppression method and program
EP4358016A1 (en) Method for image processing
CN115147314B (en) Image processing method, device, equipment and storage medium
JP2007116332A (en) Image processing apparatus
CN116563106A (en) Image processing method and device and electronic equipment

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination