CN113963000B - Image segmentation method, device, electronic equipment and program product

Image segmentation method, device, electronic equipment and program product

Info

Publication number
CN113963000B
CN113963000B (application number CN202111225907.4A)
Authority
CN
China
Prior art keywords
image
depth
foreground
scene
target object
Prior art date
Legal status
Active
Application number
CN202111225907.4A
Other languages
Chinese (zh)
Other versions
CN113963000A (en)
Inventor
焦少慧
程京
王悦
Current Assignee
Douyin Vision Co Ltd
Original Assignee
Douyin Vision Co Ltd
Priority date
Filing date
Publication date
Application filed by Douyin Vision Co Ltd
Priority to CN202111225907.4A
Publication of CN113963000A
Application granted
Publication of CN113963000B
Status: Active


Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 Image analysis
    • G06T7/10 Segmentation; Edge detection
    • G06T7/11 Region-based segmentation
    • G06T5/80
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 Image analysis
    • G06T7/10 Segmentation; Edge detection
    • G06T7/194 Segmentation; Edge detection involving foreground-background segmentation
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/10 Image acquisition modality
    • G06T2207/10016 Video; Image sequence

Abstract

The image segmentation method, apparatus, electronic device, and program product provided by the embodiments of the present disclosure acquire a video frame image and a depth frame image of a live scene at the same moment; determine a target object segmentation result in the video frame image by using a target object segmentation algorithm; determine a foreground image and a background image of the live scene according to the video frame image and the depth frame image; and correct the target object segmentation result with the foreground image and the background image of the live scene, respectively, to obtain the image segmentation result of the live scene. Compared with the prior art, because the foreground image and the background image are each used to correct the target object segmentation result, the obtained image segmentation result contains not only the target object image but also other object images related to the target object in the live scene, which effectively improves the live broadcast effect.

Description

Image segmentation method, device, electronic equipment and program product
Technical Field
The embodiments of the present disclosure relate to the field of computers, and in particular to an image segmentation method, an image segmentation apparatus, an electronic device, and a program product.
Background
With the development of Internet technology, live broadcast scenes based on live video technology have gradually entered people's lives. To achieve a better live video effect, image segmentation technology is often used in a live scene to remove the interference of the background with the anchor person in the live scene.
In the prior art, image segmentation techniques can only segment and process the people in a live scene; other images related to a person cannot be segmented synchronously, so that, for example, accessory props on the person or a commodity held by the person are removed and not displayed, which affects the live broadcast effect.
Disclosure of Invention
In view of the above problems, embodiments of the present disclosure provide an image segmentation method, apparatus, electronic device, and program product that correct the target object segmentation result by using the foreground image and the background image of the live scene, so that the corrected image segmentation result is more accurate and the live broadcast effect is improved.
In a first aspect, an embodiment of the present disclosure provides an image segmentation method, including:
acquiring a video frame image and a depth frame image of a live scene at the same moment;
determining a target object segmentation result in the video frame image by using a target object segmentation algorithm;
determining a foreground image and a background image of the live scene according to the video frame image and the depth frame image;
and respectively correcting the target object segmentation result by utilizing the foreground image and the background image of the live broadcast scene to obtain the image segmentation result of the live broadcast scene.
In a second aspect, an embodiment of the present disclosure provides an image segmentation apparatus including:
the acquisition module is used for acquiring video frame images and depth frame images of the live broadcast scene at the same moment;
the first processing module is used for determining a target object segmentation result in the video frame image by utilizing a target object segmentation algorithm;
the second processing module is used for determining a foreground image and a background image of the live broadcast scene according to the video frame image and the depth frame image;
and the third processing module is used for respectively correcting the target object segmentation result by utilizing the foreground image and the background image of the live broadcast scene to obtain the image segmentation result of the live broadcast scene.
In a third aspect, an embodiment of the present disclosure provides an electronic device, including: at least one processor; and
a memory;
the memory stores computer-executable instructions;
the at least one processor executes the computer-executable instructions stored in the memory, causing the at least one processor to perform the method of any one of the first aspects.
In a fourth aspect, embodiments of the present disclosure provide a computer-readable storage medium having stored therein computer-executable instructions that, when executed by a processor, implement the method according to any one of the first aspects.
In a fifth aspect, embodiments of the present disclosure provide a computer program product comprising computer instructions which, when executed by a processor, implement the method of any one of the first aspects.
The image segmentation method, apparatus, electronic device, and program product provided by the embodiments of the present disclosure acquire a video frame image and a depth frame image of a live scene at the same moment; determine a target object segmentation result in the video frame image by using a target object segmentation algorithm; determine a foreground image and a background image of the live scene according to the video frame image and the depth frame image; and correct the target object segmentation result with the foreground image and the background image of the live scene, respectively, to obtain the image segmentation result of the live scene. Compared with the prior art, because the foreground image and the background image are each used to correct the target object segmentation result, the obtained image segmentation result contains not only the target object image but also other object images related to the target object in the live scene, which effectively improves the live broadcast effect.
Drawings
In order to more clearly illustrate the embodiments of the present disclosure or the solutions in the prior art, the drawings needed in the description of the embodiments or of the prior art are briefly introduced below. It is obvious that the drawings in the following description show some embodiments of the present disclosure, and that a person of ordinary skill in the art may obtain other drawings from them without inventive effort.
FIG. 1 is a schematic diagram of image segmentation in a live scene according to the prior art;
FIG. 2 is a schematic diagram of a network architecture on which the present disclosure is based;
fig. 3 is a flowchart of an image segmentation method according to an embodiment of the present disclosure;
fig. 4 is a schematic layout diagram of a shooting device in a live scene according to the present disclosure;
FIG. 5 is a first interface schematic diagram of an image segmentation method according to the present disclosure;
FIG. 6 is a second interface schematic diagram of an image segmentation method according to the present disclosure;
FIG. 7 is a third interface schematic diagram of an image segmentation method according to the present disclosure;
FIG. 8 is a fourth interface schematic diagram of an image segmentation method according to the present disclosure;
fig. 9 is a block diagram of an image segmentation apparatus according to an embodiment of the present disclosure;
Fig. 10 is a schematic diagram of a hardware structure of an electronic device according to the disclosed embodiment.
Detailed Description
For the purposes of making the objects, technical solutions, and advantages of the embodiments of the present disclosure more apparent, the technical solutions of the embodiments of the present disclosure are described clearly and completely below with reference to the accompanying drawings. It is apparent that the described embodiments are some, but not all, of the embodiments of the present disclosure. All other embodiments obtained by a person of ordinary skill in the art based on the embodiments in this disclosure without inventive effort fall within the scope of this disclosure.
With the development of Internet technology, live broadcast scenes based on live video technology have gradually entered people's lives. To achieve a better live video effect, image segmentation technology is often used in a live scene to remove the interference of the background with the anchor person in the live scene.
Fig. 1 is a schematic diagram of image segmentation in a live scene according to the prior art. As shown in the left diagram of Fig. 1, the live scene contains an anchor person and a commodity in the anchor's hand. Existing image segmentation technology is generally based on a person recognition algorithm: person recognition is performed on the live scene image to determine the position and outline of the person, the person's image is then separated from the live scene image by a "matting" technique, and the remaining image is beautified or given special effects to obtain the final live scene image, as shown in the right diagram of Fig. 1.
Obviously, for a live scene such as that in Fig. 1, the image content of the "commodity" part is missing after the existing image segmentation processing, while other people in the background of the live scene who are unrelated to the anchor are still shown.
That is, in the prior art, image segmentation technology can only segment and process the people in a live scene and cannot synchronously segment other images related to those people, so accessory props on a person or a commodity held by a person are removed and not displayed. At the same time, because the prior art only supports person recognition and matting, it easily segments and displays other people unrelated to the current live broadcast. Both problems degrade the live broadcast effect.
In view of such problems, according to the embodiments of the present disclosure, in addition to performing segmentation based on the video frame images of the live scene, depth frame images of the live scene are also acquired synchronously, and a foreground image and a background image of the live scene are determined based on the video frame images and the depth frame images; the target object segmentation result is then corrected with the foreground image and the background image, respectively, to obtain the image segmentation result of the live scene. Compared with the prior art, because the foreground image and the background image are each used to correct the target object segmentation result, the obtained image segmentation result contains not only the person image but also other images related to the person in the live scene, which effectively improves the live broadcast effect.
Referring to Fig. 2, Fig. 2 is a schematic diagram of the network architecture on which the present disclosure is based. The network architecture shown in Fig. 2 may specifically include a photographing device 1 and a server 2.
The photographing device 1 may specifically be a video capture device, an image capture device, or any hardware device that can be used to capture video frame images and/or depth frame images of a live scene.
The server 2 may be a separate server or a server cluster deployed in the cloud. The image segmentation method provided in the present disclosure is carried in the image segmentation apparatus of the server 2 shown in Fig. 2, and the photographing device 1 and the server 2 are connected through a network so that the captured video frame images and depth frame images can be uploaded to the server 2.
Based on the foregoing network architecture, in a first aspect, referring to fig. 3, fig. 3 is a flowchart of an image segmentation method according to an embodiment of the disclosure.
The image segmentation method provided by the embodiment of the disclosure comprises the following steps:
step 301, acquiring a video frame image and a depth frame image of a live scene at the same moment.
Step 302, determining a target object segmentation result in the video frame image by using a target object segmentation algorithm.
Step 303, determining a foreground image and a background image of the live scene according to the video frame image and the depth frame image.
Step 304, respectively correcting the target object segmentation result by utilizing the foreground image and the background image of the live scene to obtain the image segmentation result of the live scene.
The image segmentation method provided in this example is mainly performed by an image segmentation apparatus integrated in the server.
Specifically, the photographing devices disposed in a live scene of the present disclosure may include multiple types of cameras. Fig. 4 is a schematic layout diagram of photographing devices in a live scene based on the present disclosure. Still taking the live scene of Fig. 1 as an example, the shooting scene shown in Fig. 4 is provided with photographing devices such as a depth camera 41 and an RGB camera 42, where the depth camera 41 and the RGB camera 42 shoot the live scene from the same or similar shooting angles for subsequent processing.
Optionally, some items are also included in the venue, such as a photographic light 40 and a display device 43.
Of course, in other alternative implementations, the depth camera 41 and the RGB camera 42 may be integrated into the same device, which outputs the two different images simultaneously during live shooting.
Regardless of how the shooting equipment is laid out in the live scene, the current live scene is shot synchronously, so that the image segmentation apparatus can acquire a video frame image and a depth frame image of the live scene at the same moment. In other words, the live scene is shot synchronously by a depth camera and an RGB camera arranged in the live scene, thereby obtaining a video frame image and a depth frame image of the live scene at the same moment.
The video frame image may specifically be one RGB frame of the video shot by the photographing device, and may be represented as [image coordinates, image color values]. The depth frame image may be one depth frame shot by the photographing device, and may be represented as [image coordinates, depth coordinates], where a depth coordinate refers to the distance between the object at that image coordinate and the photographing device.
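For concreteness, these two representations can be written down as plain arrays. The following is a minimal sketch, not part of the disclosure; the array shapes and the 16-bit millimetre depth convention are assumptions made only for illustration:

```python
import numpy as np

# Video frame: an H x W x 3 array of RGB color values,
# i.e. [image coordinates -> image color values].
video_frame = np.zeros((720, 1280, 3), dtype=np.uint8)

# Depth frame: an H x W array in which each entry is the distance (assumed here
# to be in millimetres, a common depth-camera convention) between the object at
# that image coordinate and the photographing device,
# i.e. [image coordinates -> depth coordinates].
depth_frame = np.zeros((480, 640), dtype=np.uint16)

color_at = video_frame[100, 200]   # image color value at image coordinate (100, 200)
depth_at = depth_frame[100, 200]   # depth coordinate at (100, 200) of the depth frame
```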
Using time frame alignment technology, a video frame image and a depth frame image of the live scene at the same moment can be identified in the continuous streams of video frames and depth frames, which facilitates the subsequent segmentation of the foreground and background of the live scene. The image segmentation apparatus performs target object segmentation on the video frame image using a target object segmentation algorithm to obtain a target object segmentation result. The target object segmentation result may specifically represent the image position of each target object in the live scene. Furthermore, the target object in this embodiment may specifically be the anchor person in the live scene, and the corresponding target object segmentation algorithm may be a portrait segmentation algorithm.
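The disclosure does not fix a particular time frame alignment algorithm; as an illustration only, the alignment mentioned above can be realized as nearest-timestamp pairing of the two streams. The function name and the tolerance value below are assumptions made for this sketch:

```python
def align_frames(video_frames, depth_frames, max_gap=0.02):
    """Pair each video frame with the depth frame closest to it in time.

    video_frames, depth_frames: non-empty lists of (timestamp_in_seconds, frame)
    tuples. max_gap: assumed tolerance (seconds) for treating two frames as
    taken "at the same moment".
    """
    pairs = []
    for t_video, video_frame in video_frames:
        # Pick the depth frame whose timestamp is nearest to this video frame.
        t_depth, depth_frame = min(depth_frames, key=lambda item: abs(item[0] - t_video))
        if abs(t_depth - t_video) <= max_gap:
            pairs.append((video_frame, depth_frame))
    return pairs
```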
Then, the image segmentation apparatus processes the video frame image and the depth frame image of the live scene at the same moment to determine the foreground image and the background image of the live scene. The foreground image may specifically be an image of the scene objects that are closer to the photographing device; correspondingly, the background image may specifically be an image of the scene objects that are farther from the photographing device.
The foreground image and the background image are then used to correct the target object segmentation result, respectively, to obtain the image segmentation result of the live scene, so that a commodity held in the anchor's hand or ornaments worn on the anchor's body in the live scene can be displayed in the output image of the live scene.
In an alternative embodiment, there are a variety of ways to acquire the foreground and background images, which in this example will be achieved using background modeling processing techniques. Step 303 may specifically include:
step 3031, background modeling processing is performed on the video frame image and the depth frame image, so as to obtain an interface of a foreground and a background in the live broadcast scene.
Step 3032, performing foreground and background segmentation processing on the depth frame image according to the interface, to obtain a foreground image and a background image of the live scene.
To make the processing result more accurate, depth enhancement may first be performed on the depth frame image. Since a depth frame image represents image depth values at different image coordinates, the depth frame image may be upsampled based on the resolution of the video frame image to complete the aligned depth enhancement, which further improves the accuracy of the result. Background modeling is then performed using the enhanced image.
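A minimal sketch of this aligned depth enhancement step using OpenCV follows; the use of nearest-neighbour interpolation is an assumption (it avoids inventing intermediate depth values across object boundaries), not a choice made by the disclosure:

```python
import cv2
import numpy as np

def enhance_depth(depth_frame: np.ndarray, video_shape: tuple) -> np.ndarray:
    """Upsample a depth frame to the resolution of the video frame.

    depth_frame: H_d x W_d depth map.
    video_shape: shape tuple of the RGB video frame, e.g. (H_v, W_v, 3).
    """
    h, w = video_shape[:2]
    # cv2.resize expects (width, height); nearest-neighbour keeps depth edges crisp.
    return cv2.resize(depth_frame, (w, h), interpolation=cv2.INTER_NEAREST)
```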
In the background modeling process of step 3031, the video frame image and the depth frame image may be first used to generate point cloud data of the live scene, where the point cloud data of the live scene includes image coordinates and depth coordinates of each object in the live scene.
Specifically, the image segmentation apparatus establishes an image coordinate mapping relationship between the two images based on the image positions of the same object in the video frame image and in the depth frame image, and based on this mapping constructs point cloud data of [image coordinates, image color values, depth coordinates] for each object in the live scene.
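Assuming the image coordinate mapping between the two images has been reduced to the identity (for example by the upsampling step sketched above), the [image coordinates, image color values, depth coordinates] records can be assembled directly. A sketch under that assumption:

```python
import numpy as np

def build_point_cloud(video_frame: np.ndarray, depth_aligned: np.ndarray) -> np.ndarray:
    """Build per-pixel records [u, v, R, G, B, depth] for the live scene.

    Assumes video_frame (H x W x 3) and depth_aligned (H x W) are pixel-aligned.
    Returns an (H*W) x 6 array of point cloud records.
    """
    h, w = depth_aligned.shape
    vs, us = np.mgrid[0:h, 0:w]               # image coordinates of every pixel
    return np.column_stack([
        us.ravel(), vs.ravel(),               # image coordinates
        video_frame.reshape(-1, 3),           # image color values
        depth_aligned.ravel(),                # depth coordinates
    ])
```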
Then, the point cloud data of the live scene is modeled by using a background modeling algorithm to obtain the image coordinates and depth coordinates of the interface between the foreground and the background in the live scene.
The background modeling algorithm may specifically include, but is not limited to, the Marching Cubes algorithm. Marching Cubes is a grid-based algorithm for extracting an iso-surface; using it, a grid iso-surface can be determined in space. The interface between the foreground and the background may be a grid iso-surface obtained with the Marching Cubes algorithm, and the image depth of the interface may refer to the image depth of the anchor person in the live scene or to the image depth of a background plate in the live scene.
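As an illustration only, the sketch below extracts such a grid iso-surface with scikit-image's Marching Cubes implementation; building a binary occupancy volume from the point cloud, and the grid resolution chosen, are assumptions made for this example and are not prescribed by the disclosure:

```python
import numpy as np
from skimage import measure

def interface_mesh(points: np.ndarray, grid_shape=(64, 64, 64)):
    """Extract a grid iso-surface from point cloud records [u, v, R, G, B, depth].

    points: N x 6 array as produced by the build_point_cloud sketch above.
    grid_shape: assumed resolution of the voxel grid.
    """
    # Normalise (u, v, depth) into voxel indices and mark occupied voxels.
    coords = points[:, [0, 1, 5]].astype(np.float64)
    mins, maxs = coords.min(axis=0), coords.max(axis=0)
    idx = ((coords - mins) / (maxs - mins + 1e-9) * (np.array(grid_shape) - 1)).astype(int)
    volume = np.zeros(grid_shape, dtype=np.float64)
    volume[idx[:, 0], idx[:, 1], idx[:, 2]] = 1.0
    # Marching Cubes yields the vertices and faces of the 0.5 iso-surface, i.e.
    # the image coordinates and depth coordinates of the foreground/background interface.
    verts, faces, normals, values = measure.marching_cubes(volume, level=0.5)
    return verts, faces
```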
For example, Fig. 5 is a first interface schematic diagram of an image segmentation method according to the present disclosure. Fig. 5 shows a schematic view of an interface 501 obtained based on the image depth of a background plate; the live scene contains an object A (photographing lamp) 502 near the photographing device, a commodity 504 to be displayed in the hand of person A 503, an object B (display device) 505 located behind person A 503, and a person B 506 shown on object B (the display device) 505.
Further, the depth coordinates of object B (display device) 505 may be determined using the point cloud data, and the grid iso-surface consistent with those depth coordinates is taken as the interface 501 (Fig. 5) between the foreground and the background, thereby determining the image coordinates and depth coordinates of the interface in the live scene; the interface is used to distinguish the foreground image from the background image.
In step 3032, the image segmentation apparatus performs foreground and background segmentation processing on the depth frame image according to the interface to obtain the foreground image and the background image of the live scene.
In an alternative embodiment, the image segmentation apparatus compares the image depth of each object in the depth frame image with the image depth of the interface, and determines the foreground image and the background image according to the depth comparison result for each object in the live scene. The foreground image contains the foreground objects whose image depth is smaller than the image depth of the interface; the background image contains the background objects whose image depth is greater than or equal to the image depth of the interface.
Specifically, since the interface between the foreground and the background has been determined, when the foreground image and the background image are determined, the comparison can be performed according to the image depth of each object in the live scene and the image depth of the interface.
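In the simplest case, this comparison reduces to a per-pixel threshold against the interface depth. The sketch below assumes a constant-depth interface (as in the background-plate example above) rather than a full mesh, and treats zero depth as a missing measurement; both are assumptions made for illustration:

```python
import numpy as np

def split_by_interface(depth_aligned: np.ndarray, interface_depth: float):
    """Split a depth frame into foreground and background masks.

    Foreground: image depth strictly smaller than the interface depth.
    Background: image depth greater than or equal to the interface depth.
    """
    valid = depth_aligned > 0                       # assumed: 0 marks missing depth
    foreground_mask = valid & (depth_aligned < interface_depth)
    background_mask = valid & (depth_aligned >= interface_depth)
    return foreground_mask, background_mask
```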
In other optional embodiments, the image segmentation apparatus further corrects the person segmentation result by using the obtained foreground image and the obtained background image, so that the output image segmentation result is more accurate.
Taking the scenario shown in Fig. 5 as an example, based on the interface (taking the grid where the display device 505 is located as the interface), object A 502, the commodity 504, and person A 503 in Fig. 5 are all foreground objects in the foreground image, whereas person B 506 (the person shown in the display device) and object B 505 (the display device) are background objects in the background image.
For the foreground image, a result-increase correction is performed on the target object segmentation result: according to the image depth of each foreground object in the foreground image and the image depth of each target object in the target object segmentation result, matching foreground objects are added to the target object segmentation result to obtain the image segmentation result of the live scene.
Fig. 6 is a second interface schematic diagram of an image segmentation method according to the present disclosure. As shown in Fig. 6, on the basis of Fig. 5, the target object segmentation result (containing person A and person B) is corrected using the foreground image. Since the foreground image contains object A, the commodity, and person A, and the image depth of the commodity is consistent with that of person A, the commodity is added to the image segmentation result for output and display.
That is, the image segmentation apparatus determines whether, among the foreground objects of the foreground image, there is a target foreground object whose image depth is consistent with that of any target object in the target object segmentation result; if so, the target foreground object is added to the image segmentation result of the live scene.
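A sketch of this result-increase correction follows. Representing each object as a (mask, mean depth) pair and using a fixed depth tolerance are assumptions made for illustration; the disclosure only requires that the image depths be "the same":

```python
import numpy as np

def increase_correction(segmentation_mask, foreground_objects, target_depths, tol=100.0):
    """Add foreground objects whose image depth matches any target object.

    segmentation_mask: H x W bool mask of the target object segmentation result.
    foreground_objects: list of (H x W bool mask, mean_depth) pairs.
    target_depths: mean depths of the segmented target objects.
    tol: assumed tolerance (e.g. millimetres) for "the same image depth".
    """
    result = segmentation_mask.copy()
    for obj_mask, obj_depth in foreground_objects:
        if any(abs(obj_depth - t) <= tol for t in target_depths):
            result |= obj_mask   # e.g. the commodity in person A's hand is added
    return result
```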
The image segmentation apparatus also corrects the target object segmentation result using the background image, that is, performs a result-decrease correction on the target object segmentation result according to the image depth of each background object in the background image and the image depth of each target object in the target object segmentation result, to obtain the image segmentation result of the live scene.
Fig. 7 is a third interface schematic diagram of an image segmentation method according to the present disclosure. As shown in Fig. 7, on the basis of Fig. 5, the target object segmentation result (containing person A and person B) is corrected using the background image. Since the background image contains object B and person B, and person B in the target object segmentation result thus belongs to the background, person B is removed from the result and only person A is kept as output.
That is, the image segmentation apparatus determines whether, among the target object segmentation results, there is a target object result whose image depth is consistent with that of any background object of the background image; if so, that target object result is removed from the target object segmentation result to obtain the image segmentation result of the live scene.
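Under the same illustrative object representation as the previous sketch, the result-decrease correction can be written as follows (again an assumption-laden sketch, not the disclosure's implementation):

```python
import numpy as np

def decrease_correction(target_objects, background_depths, tol=100.0):
    """Remove target object results whose image depth matches a background object.

    target_objects: non-empty list of (H x W bool mask, mean_depth) pairs,
    one per segmented target object.
    background_depths: mean depths of the background objects.
    tol: assumed tolerance for "consistent image depth".
    """
    result = np.zeros_like(target_objects[0][0], dtype=bool)
    for mask, depth in target_objects:
        if not any(abs(depth - b) <= tol for b in background_depths):
            result |= mask   # person A is kept; person B on the display is dropped
    return result
```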
Finally, Fig. 8 is a fourth interface schematic diagram of an image segmentation method according to the present disclosure. As shown in Fig. 8, the results obtained in Fig. 6 and Fig. 7 are superimposed, and the final image segmentation result includes person A and the commodity.
When the same live scene is processed using the prior art, the images of both person A and person B are output as the image segmentation result. Clearly, person B is located inside the display device and should not be output, while the commodity to be displayed is not recognized or output at all. After the image of the live scene is processed using the image segmentation method provided by the present disclosure, as shown in Fig. 8, person A and the commodity are output.
Compared with the prior art, because the foreground image and the background image are each used to correct the target object segmentation result, the obtained image segmentation result contains not only the target object image, such as the person image, but also other object images related to the target object in the live scene, such as the image of the object held by the person, which effectively improves the live broadcast effect.
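To summarize the flow of this embodiment, the steps above can be composed end to end. The sketch below reuses the illustrative helper functions from the earlier sketches; segment_targets and extract_objects are hypothetical callables standing in for algorithms the disclosure leaves open (for example a portrait segmentation algorithm and a connected-component object extractor):

```python
def segment_live_scene(video_frame, depth_frame, interface_depth,
                       segment_targets, extract_objects):
    """End-to-end sketch combining the illustrative helpers above.

    segment_targets(video_frame) -> [(mask, mean_depth), ...] target objects.
    extract_objects(mask, depth) -> [(mask, mean_depth), ...] per-object split
    of a foreground or background mask.
    """
    depth_aligned = enhance_depth(depth_frame, video_frame.shape)
    fg_mask, bg_mask = split_by_interface(depth_aligned, interface_depth)

    targets = segment_targets(video_frame)            # e.g. portrait segmentation
    fg_objects = extract_objects(fg_mask, depth_aligned)
    bg_objects = extract_objects(bg_mask, depth_aligned)

    # Result-decrease correction: drop targets at background depth (person B).
    kept = decrease_correction(targets, [d for _, d in bg_objects])
    # Result-increase correction: add related foreground objects (the commodity).
    return increase_correction(kept, fg_objects, [d for _, d in targets])
```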
In a second aspect, corresponding to the image segmentation method of the above embodiment, fig. 9 is a block diagram of the image segmentation apparatus provided in the embodiment of the present disclosure. For ease of illustration, only portions relevant to embodiments of the present disclosure are shown. Referring to fig. 9, the image segmentation apparatus includes:
an acquiring module 910, configured to acquire a video frame image and a depth frame image of a live scene at the same moment;
a first processing module 920, configured to determine a target object segmentation result in the video frame image by using a target object segmentation algorithm;
a second processing module 930, configured to determine a foreground image and a background image of the live scene according to the video frame image and the depth frame image;
and a third processing module 940, configured to perform correction processing on the target object segmentation result by using the foreground image and the background image of the live scene, to obtain an image segmentation result of the live scene.
In an alternative embodiment, the second processing module 930 is specifically configured to:
performing background modeling processing on the video frame image and the depth frame image to obtain the interface between the foreground and the background in the live scene; and performing foreground and background segmentation processing on the depth frame image according to the interface to obtain the foreground image and the background image of the live scene.
In an alternative embodiment, the second processing module 930 is specifically configured to:
generating point cloud data of the live broadcast scene by utilizing the video frame image and the depth frame image, wherein the point cloud data of the live broadcast scene comprises image coordinates and depth coordinates of objects in the live broadcast scene; and modeling the point cloud data of the live broadcast scene by using a background modeling algorithm to obtain an image coordinate and a depth coordinate of an interface of the foreground and the background in the live broadcast scene.
In an alternative embodiment, the background modeling algorithm includes the Marching Cubes algorithm.
In an alternative embodiment, the second processing module 930 is specifically configured to:
comparing the image depth of each object in the depth frame image with the image depth of the interface; determining the foreground image and the background image according to the depth comparison result for each object in the live scene; the foreground image comprises the foreground objects whose image depth is smaller than that of the interface; the background image comprises the background objects whose image depth is greater than or equal to the image depth of the interface.
In an alternative embodiment, the third processing module 940 is specifically configured to:
and performing a result-increase correction on the target object segmentation result according to the image depth of each foreground object in the foreground image and the image depth of each target object in the target object segmentation result, to obtain the image segmentation result of the live scene.
In an alternative embodiment, the third processing module 940 is specifically configured to:
and determining whether a target foreground object with the same image depth as any target object in the target object segmentation result exists in each foreground object of the foreground image, and if so, adding the target foreground object into the image segmentation result of the live scene.
In an alternative embodiment, the third processing module 940 is specifically configured to:
and performing a result-decrease correction on the target object segmentation result according to the image depth of each background object in the background image and the image depth of each target object in the target object segmentation result, to obtain the image segmentation result of the live scene.
In an alternative embodiment, the third processing module 940 is specifically configured to:
determining whether there is a target object result in the target object segmentation result whose image depth is consistent with that of any background object of the background image; and if so, removing the target object result from the target object segmentation result to obtain the image segmentation result of the live scene.
In an alternative embodiment, the obtaining module 910 is specifically configured to:
and synchronously shooting the live scene by using a depth camera and an RGB camera which are arranged in the live scene so as to acquire a video frame image and a depth frame image of the live scene at the same moment.
The image segmentation apparatus provided by the embodiment of the present disclosure acquires a video frame image and a depth frame image of a live scene at the same moment; determines a target object segmentation result in the video frame image by using a target object segmentation algorithm; determines a foreground image and a background image of the live scene according to the video frame image and the depth frame image; and corrects the target object segmentation result with the foreground image and the background image of the live scene, respectively, to obtain the image segmentation result of the live scene. Compared with the prior art, because the foreground image and the background image are each used to correct the target object segmentation result, the obtained image segmentation result contains not only the person image but also other images related to the person in the live scene, which effectively improves the live broadcast effect.
The electronic device provided in this embodiment may be used to execute the technical solution of the foregoing method embodiment, and its implementation principle and technical effects are similar, and this embodiment will not be described herein again.
Referring to Fig. 10, there is shown a schematic diagram of an electronic device suitable for implementing embodiments of the present disclosure, which may be a terminal device or a server. The terminal device may include, but is not limited to, a mobile terminal such as a mobile phone, a notebook computer, a digital broadcast receiver, a personal digital assistant (Personal Digital Assistant, PDA for short), a tablet (Portable Android Device, PAD for short), a portable multimedia player (Portable Media Player, PMP for short), or an in-vehicle terminal (e.g., an in-vehicle navigation terminal), and a fixed terminal such as a digital TV or a desktop computer. The electronic device shown in Fig. 10 is merely an example and should not limit the functionality and scope of use of the disclosed embodiments.
As shown in fig. 10, the electronic apparatus may include a processing device (e.g., a central processor, a graphics processor, or the like) 1001 that may perform various appropriate actions and processes according to a program stored in a Read Only Memory (ROM) 1002 or a program loaded from a storage device 1008 into a random access Memory (Random Access Memory, RAM) 1003. In the RAM 1003, various programs and data necessary for the operation of the electronic apparatus are also stored. The processing device 1001, the ROM 1002, and the RAM 1003 are connected to each other by a bus 1004. An input/output (I/O) interface 1005 is also connected to bus 1004.
In general, the following devices may be connected to the I/O interface 1005: input devices 1006 including, for example, a touch screen, touchpad, keyboard, mouse, camera, microphone, accelerometer, gyroscope, and the like; an output device 1007 including, for example, a liquid crystal display (Liquid Crystal Display, LCD for short), a speaker, a vibrator, and the like; storage 1008 including, for example, magnetic tape, hard disk, etc.; and communication means 1009. The communication means 1009 may allow the electronic device to communicate with other devices wirelessly or by wire to exchange data. While fig. 10 shows an electronic device having various means, it is to be understood that not all of the illustrated means are required to be implemented or provided. More or fewer devices may be implemented or provided instead.
In particular, according to embodiments of the present disclosure, the processes described above with reference to flowcharts may be implemented as computer software programs. For example, embodiments of the present disclosure include a computer program product comprising a computer program embodied on a computer readable medium, the computer program comprising program code for performing the method shown in the flowcharts. In such an embodiment, the computer program may be downloaded and installed from a network via the communication device 1009, or installed from the storage device 1008, or installed from the ROM 1002. The above-described functions defined in the method of the embodiment of the present disclosure are performed when the computer program is executed by the processing device 1001.
It should be noted that the computer readable medium described in the present disclosure may be a computer readable signal medium or a computer readable storage medium, or any combination of the two. The computer readable storage medium can be, for example, but not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or a combination of any of the foregoing. More specific examples of the computer-readable storage medium may include, but are not limited to: an electrical connection having one or more wires, a portable computer diskette, a hard disk, a Random Access Memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing. In the context of this disclosure, a computer-readable storage medium may be any tangible medium that can contain, or store a program for use by or in connection with an instruction execution system, apparatus, or device. In the present disclosure, however, the computer-readable signal medium may include a data signal propagated in baseband or as part of a carrier wave, with the computer-readable program code embodied therein. Such a propagated data signal may take any of a variety of forms, including, but not limited to, electro-magnetic, optical, or any suitable combination of the foregoing. A computer readable signal medium may also be any computer readable medium that is not a computer readable storage medium and that can communicate, propagate, or transport a program for use by or in connection with an instruction execution system, apparatus, or device. Program code embodied on a computer readable medium may be transmitted using any appropriate medium, including but not limited to: electrical wires, fiber optic cables, RF (radio frequency), and the like, or any suitable combination of the foregoing.
The computer readable medium may be contained in the electronic device; or may exist alone without being incorporated into the electronic device.
The computer-readable medium carries one or more programs which, when executed by the electronic device, cause the electronic device to perform the methods shown in the above-described embodiments.
Computer program code for carrying out operations of the present disclosure may be written in one or more programming languages, including object-oriented programming languages such as Java, Smalltalk, and C++, and conventional procedural programming languages such as the "C" programming language or similar programming languages. The program code may execute entirely on the user's computer, partly on the user's computer, as a stand-alone software package, partly on the user's computer and partly on a remote computer, or entirely on the remote computer or server. In the case of a remote computer, the remote computer may be connected to the user's computer through any kind of network, including a local area network (Local Area Network, LAN for short) or a wide area network (Wide Area Network, WAN for short), or it may be connected to an external computer (e.g., through the Internet using an Internet service provider).
The computer program product provided in this embodiment includes computer instructions that, when executed by a processor, perform the method described in any of the foregoing embodiments; its implementation principle and technical effects are similar and are not described here again.
The flowcharts and block diagrams in the figures illustrate the architecture, functionality, and operation of possible implementations of systems, methods and computer program products according to various embodiments of the present disclosure. In this regard, each block in the flowchart or block diagrams may represent a module, segment, or portion of code, which comprises one or more executable instructions for implementing the specified logical function(s). It should also be noted that, in some alternative implementations, the functions noted in the block may occur out of the order noted in the figures. For example, two blocks shown in succession may, in fact, be executed substantially concurrently, or the blocks may sometimes be executed in the reverse order, depending upon the functionality involved. It will also be noted that each block of the block diagrams and/or flowchart illustration, and combinations of blocks in the block diagrams and/or flowchart illustration, can be implemented by special purpose hardware-based systems which perform the specified functions or acts, or combinations of special purpose hardware and computer instructions.
The units involved in the embodiments of the present disclosure may be implemented by means of software, or may be implemented by means of hardware. The name of the unit does not in any way constitute a limitation of the unit itself, for example the first acquisition unit may also be described as "unit acquiring at least two internet protocol addresses".
The functions described above herein may be performed, at least in part, by one or more hardware logic components. For example, without limitation, exemplary types of hardware logic components that may be used include: a Field Programmable Gate Array (FPGA), an Application Specific Integrated Circuit (ASIC), an Application Specific Standard Product (ASSP), a system on a chip (SOC), a Complex Programmable Logic Device (CPLD), and the like.
In the context of this disclosure, a machine-readable medium may be a tangible medium that can contain, or store a program for use by or in connection with an instruction execution system, apparatus, or device. The machine-readable medium may be a machine-readable signal medium or a machine-readable storage medium. The machine-readable medium may include, but is not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any suitable combination of the foregoing. More specific examples of a machine-readable storage medium would include an electrical connection based on one or more wires, a portable computer diskette, a hard disk, a Random Access Memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing.
The following are some embodiments of the present disclosure.
In a first aspect, according to one or more embodiments of the present disclosure, an image segmentation method includes:
acquiring a video frame image and a depth frame image of a live scene at the same moment;
determining a target object segmentation result in the video frame image by using a target object segmentation algorithm;
determining a foreground image and a background image of the live scene according to the video frame image and the depth frame image;
and respectively correcting the target object segmentation result by utilizing the foreground image and the background image of the live broadcast scene to obtain the image segmentation result of the live broadcast scene.
In an optional implementation manner, the determining the foreground image and the background image of the live scene according to the video frame image and the depth frame image includes:
performing background modeling processing on the video frame image and the depth frame image to obtain an interface of a foreground and a background in a live scene;
and performing foreground and background segmentation processing on the depth frame image according to the interface to obtain a foreground image and a background image of the live scene.
In an optional embodiment, the performing background modeling processing on the video frame image and the depth frame image to obtain an interface between a foreground and a background in the live scene includes:
generating point cloud data of the live broadcast scene by utilizing the video frame image and the depth frame image, wherein the point cloud data of the live broadcast scene comprises image coordinates and depth coordinates of objects in the live broadcast scene;
and modeling the point cloud data of the live broadcast scene by using a background modeling algorithm to obtain an image coordinate and a depth coordinate of an interface of the foreground and the background in the live broadcast scene.
In an alternative embodiment, the background modeling algorithm includes the Marching Cubes algorithm.
In an optional implementation manner, the performing foreground and background segmentation processing on the depth frame image according to the interface to obtain a foreground image and a background image of the live scene includes:
comparing the image depth of each object in the depth frame image with the image depth of the interface;
determining the foreground image and the background image according to the depth comparison result for each object in the live scene;
the foreground image comprises a foreground object with image depth smaller than that of the interface; the background image comprises a background object with the image depth being more than or equal to the image depth of the interface.
In an optional embodiment, the correcting the target object segmentation result by using the foreground image of the live scene includes:
and performing a result-increase correction on the target object segmentation result according to the image depth of each foreground object in the foreground image and the image depth of each target object in the target object segmentation result, to obtain the image segmentation result of the live scene.
In an optional embodiment, performing a result-increase correction on the target object segmentation result to obtain the image segmentation result of the live scene includes:
and determining whether a target foreground object with the same image depth as any target object in the target object segmentation result exists in each foreground object of the foreground image, and if so, adding the target foreground object into the image segmentation result of the live scene.
In an optional embodiment, the correcting the target object segmentation result by using the background image of the live scene includes:
and performing a result-decrease correction on the target object segmentation result according to the image depth of each background object in the background image and the image depth of each target object in the target object segmentation result, to obtain the image segmentation result of the live scene.
In an optional implementation manner, performing a result-decrease correction on the target object segmentation result to obtain the image segmentation result of the live scene includes:
determining whether there is a target object result in the target object segmentation result whose image depth is consistent with that of any background object of the background image; and if so, removing the target object result from the target object segmentation result to obtain the image segmentation result of the live scene.
In an optional embodiment, the acquiring a video frame image and a depth frame image of a live scene at the same moment includes:
and synchronously shooting the live scene by using a depth camera and an RGB camera which are arranged in the live scene so as to acquire a video frame image and a depth frame image of the live scene at the same moment.
In a second aspect, according to one or more embodiments of the present disclosure, an image segmentation apparatus includes:
the acquisition module is used for acquiring video frame images and depth frame images of the live broadcast scene at the same moment;
the first processing module is used for determining a target object segmentation result in the video frame image by utilizing a target object segmentation algorithm;
the second processing module is used for determining a foreground image and a background image of the live broadcast scene according to the video frame image and the depth frame image;
and the third processing module is used for respectively correcting the target object segmentation result by utilizing the foreground image and the background image of the live broadcast scene to obtain the image segmentation result of the live broadcast scene.
In an alternative embodiment, the second processing module is specifically configured to:
performing background modeling processing on the video frame image and the depth frame image to obtain the interface between the foreground and the background in the live scene; and performing foreground and background segmentation processing on the depth frame image according to the interface to obtain the foreground image and the background image of the live scene.
In an alternative embodiment, the second processing module is specifically configured to:
generating point cloud data of the live broadcast scene by utilizing the video frame image and the depth frame image, wherein the point cloud data of the live broadcast scene comprises image coordinates and depth coordinates of objects in the live broadcast scene; and modeling the point cloud data of the live broadcast scene by using a background modeling algorithm to obtain an image coordinate and a depth coordinate of an interface of the foreground and the background in the live broadcast scene.
In an alternative embodiment, the background modeling algorithm includes the Marching Cubes algorithm.
In an alternative embodiment, the second processing module is specifically configured to:
comparing the image depth of each object in the depth frame image with the image depth of the interface; determining the foreground image and the background image according to the depth comparison result for each object in the live scene; the foreground image comprises the foreground objects whose image depth is smaller than that of the interface; the background image comprises the background objects whose image depth is greater than or equal to the image depth of the interface.
In an alternative embodiment, the third processing module is specifically configured to:
and performing a result-increase correction on the target object segmentation result according to the image depth of each foreground object in the foreground image and the image depth of each target object in the target object segmentation result, to obtain the image segmentation result of the live scene.
In an alternative embodiment, the third processing module is specifically configured to:
and determining whether a target foreground object with the same image depth as any target object in the target object segmentation result exists in each foreground object of the foreground image, and if so, adding the target foreground object into the image segmentation result of the live scene.
In an alternative embodiment, the third processing module is specifically configured to:
and performing a result-decrease correction on the target object segmentation result according to the image depth of each background object in the background image and the image depth of each target object in the target object segmentation result, to obtain the image segmentation result of the live scene.
In an alternative embodiment, the third processing module is specifically configured to:
determining whether there is a target object result in the target object segmentation result whose image depth is consistent with that of any background object of the background image; and if so, removing the target object result from the target object segmentation result to obtain the image segmentation result of the live scene.
In an alternative embodiment, the obtaining module is specifically configured to:
and synchronously shooting the live scene by using a depth camera and an RGB camera which are arranged in the live scene so as to acquire a video frame image and a depth frame image of the live scene at the same moment.
In a third aspect, according to one or more embodiments of the present disclosure, an electronic device includes: at least one processor and memory;
the memory stores computer-executable instructions;
the at least one processor executing computer-executable instructions stored in the memory causes the at least one processor to perform the method of any one of the preceding claims.
In a fourth aspect, according to one or more embodiments of the present disclosure, a computer-readable storage medium having stored therein computer-executable instructions that, when executed by a processor, implement a method as described in any preceding claim.
In a fifth aspect, according to one or more embodiments of the present disclosure, a computer program product comprises computer instructions for execution by a processor of the method of any one of the preceding claims.
The foregoing description covers only the preferred embodiments of the present disclosure and an explanation of the technical principles employed. Persons skilled in the art will appreciate that the scope of the disclosure is not limited to technical solutions formed by the specific combinations of the above technical features, and also covers other technical solutions formed by any combination of the above technical features or their equivalents without departing from the concept of the disclosure, for example, technical solutions formed by replacing the above features with (but not limited to) technical features having similar functions disclosed in the present disclosure.
Moreover, although operations are depicted in a particular order, this should not be understood as requiring that such operations be performed in the particular order shown or in sequential order. In certain circumstances, multitasking and parallel processing may be advantageous. Likewise, while several specific implementation details are included in the above discussion, these should not be construed as limiting the scope of the present disclosure. Certain features that are described in the context of separate embodiments can also be implemented in combination in a single embodiment. Conversely, various features that are described in the context of a single embodiment can also be implemented in multiple embodiments separately or in any suitable subcombination.
Although the subject matter has been described in language specific to structural features and/or methodological acts, it is to be understood that the subject matter defined in the appended claims is not necessarily limited to the specific features or acts described above. Rather, the specific features and acts described above are example forms of implementing the claims.

Claims (13)

1. An image segmentation method, comprising:
acquiring a video frame image and a depth frame image of a live scene at the same moment;
determining a target object segmentation result in the video frame image by using a target object segmentation algorithm;
determining a foreground image and a background image of the live scene according to the video frame image and the depth frame image;
respectively correcting the target object segmentation result by utilizing a foreground image and a background image of the live broadcast scene to obtain an image segmentation result of the live broadcast scene;
wherein the determining the foreground image and the background image of the live scene according to the video frame image and the depth frame image comprises:
performing background modeling processing on the video frame image and the depth frame image to obtain an interface between the foreground and the background in the live scene;
and performing foreground-background segmentation processing on the depth frame image according to the interface to obtain the foreground image and the background image of the live scene.
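By way of illustration only, the following Python sketch wires the steps recited in claim 1 together with numpy. The brightness-threshold stand-in for the target object segmentation algorithm and the scalar interface depth are simplifying assumptions made for the demo; in practice a trained segmentation network and the modeled interface of claim 2 would take their place, and depth_tol is an assumed matching tolerance.

import numpy as np

def segment_live_scene(video_frame, depth_frame, interface_depth, depth_tol=50):
    # Target object segmentation on the video frame (stand-in: bright pixels).
    target_mask = video_frame.mean(axis=2) > 128

    # Foreground/background split of the depth frame at the interface.
    fg_mask = depth_frame < interface_depth
    bg_mask = ~fg_mask

    # Result-addition correction: pull in foreground pixels whose depth
    # matches the depth range of the segmented target.
    target_depths = depth_frame[target_mask & fg_mask]
    if target_depths.size:
        lo = int(target_depths.min()) - depth_tol
        hi = int(target_depths.max()) + depth_tol
        target_mask |= fg_mask & (depth_frame >= lo) & (depth_frame <= hi)

    # Result-reduction correction: drop pixels that sit at background depth.
    target_mask &= ~bg_mask
    return target_mask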
2. The image segmentation method according to claim 1, wherein the performing background modeling processing on the video frame image and the depth frame image to obtain an interface between the foreground and the background in the live scene comprises:
generating point cloud data of the live broadcast scene by utilizing the video frame image and the depth frame image, wherein the point cloud data of the live broadcast scene comprises image coordinates and depth coordinates of objects in the live broadcast scene;
and modeling the point cloud data of the live broadcast scene by using a background modeling algorithm to obtain the image coordinates and depth coordinates of the interface between the foreground and the background in the live broadcast scene.
3. The image segmentation method according to claim 2, wherein the background modeling algorithm comprises a Marching Cubes algorithm.
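As a hedged sketch of claims 2 and 3: depth observations accumulated over several frames can be binned into an (image x, image y, depth) occupancy volume, and the Marching Cubes implementation in scikit-image (skimage.measure.marching_cubes) then yields interface vertices carrying exactly the image coordinates and depth coordinates the claims describe. The grid size, bin count, and 0.5 iso-level below are values assumed for the demo, not parameters taken from this disclosure.

import numpy as np
from skimage import measure

def interface_from_depth_frames(depth_frames, depth_bins=64):
    # Accumulate point cloud samples (x, y, depth) into a voxel occupancy grid.
    h, w = depth_frames[0].shape
    d_max = max(int(f.max()) for f in depth_frames) + 1
    ys, xs = np.mgrid[0:h, 0:w]
    volume = np.zeros((h, w, depth_bins), dtype=np.float32)
    for frame in depth_frames:
        zi = frame.astype(np.int64) * (depth_bins - 1) // d_max
        volume[ys, xs, zi] += 1.0 / len(depth_frames)

    # Extract the iso-surface where occupancy crosses 0.5; each vertex holds
    # image coordinates (v[0], v[1]) and a binned depth coordinate (v[2]).
    verts, faces, _normals, _values = measure.marching_cubes(volume, level=0.5)
    return verts, faces

# Demo: a static wall at roughly 3 m observed over 8 noisy frames.
base = np.full((48, 64), 3000, dtype=np.uint16)
frames = [base + np.random.randint(0, 50, base.shape).astype(np.uint16) for _ in range(8)]
verts, faces = interface_from_depth_frames(frames)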
4. The image segmentation method according to claim 1, wherein the performing foreground-background segmentation processing on the depth frame image according to the interface to obtain the foreground image and the background image of the live scene comprises:
comparing the image depth of each object in the depth frame image with the image depth of the interface;
determining the foreground image and the background image according to the depth comparison result of each object in the live scene;
wherein the foreground image comprises foreground objects whose image depth is smaller than the image depth of the interface, and the background image comprises background objects whose image depth is greater than or equal to the image depth of the interface.
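A minimal sketch of the comparison in claim 4, assuming per-pixel depths in the depth frame image; interface_depth may be a scalar or a per-pixel depth map sampled from the modeled interface:

import numpy as np

def split_foreground_background(depth_frame, interface_depth):
    # Foreground: image depth strictly smaller than the interface depth.
    fg_mask = depth_frame < interface_depth
    # Background: image depth greater than or equal to the interface depth.
    bg_mask = ~fg_mask
    return fg_mask, bg_mask

depth = np.array([[800, 2400], [1200, 3100]], dtype=np.uint16)
fg, bg = split_foreground_background(depth, interface_depth=2000)
# fg == [[True, False], [True, False]]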
5. The image segmentation method according to claim 1, wherein the correcting the target object segmentation result using the foreground image of the live scene includes:
performing result-addition correction processing on the target object segmentation result according to the image depth of each foreground object in the foreground image and the image depth of each target object in the target object segmentation result, to obtain the image segmentation result of the live scene.
6. The image segmentation method according to claim 5, wherein the performing result-addition correction processing on the target object segmentation result to obtain the image segmentation result of the live scene comprises:
determining whether there is, among the foreground objects of the foreground image, a target foreground object having the same image depth as any target object in the target object segmentation result, and if so, adding the target foreground object to the image segmentation result of the live scene.
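One plausible reading of claim 6 in code, using SciPy connected-component labeling to treat each foreground blob as an object and comparing median depths; depth_tol is an assumed tolerance, since the claim's "same image depth" leaves the exact matching rule open:

import numpy as np
from scipy import ndimage

def add_matching_foreground(target_mask, fg_mask, depth_frame, depth_tol=30):
    # Merge foreground objects whose depth matches the target's depth,
    # e.g. a product held by the anchor in a live-streaming scene.
    if not target_mask.any():
        return target_mask.copy()
    result = target_mask.copy()
    target_depth = np.median(depth_frame[target_mask])
    labels, n = ndimage.label(fg_mask & ~target_mask)
    for obj in range(1, n + 1):
        obj_mask = labels == obj
        if abs(np.median(depth_frame[obj_mask]) - target_depth) <= depth_tol:
            result |= obj_mask
    return result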
7. The image segmentation method according to claim 1, wherein the correcting the target object segmentation result using the background image of the live scene comprises:
performing result-reduction correction processing on the target object segmentation result according to the image depth of each background object in the background image and the image depth of each target object in the target object segmentation result, to obtain the image segmentation result of the live scene.
8. The image segmentation method according to claim 7, wherein the performing result-reduction correction processing on the target object segmentation result to obtain the image segmentation result of the live scene comprises:
determining whether the target object segmentation result contains a target object result whose image depth is consistent with the image depth of any background object of the background image; and if so, removing the target object result from the target object segmentation result to obtain the image segmentation result of the live scene.
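Symmetrically, a hedged sketch of claim 8: connected components of the two-dimensional segmentation whose depth coincides with background depth (typical false positives, such as a poster of a person on the back wall) are removed; depth_tol is again an assumed tolerance for "consistent" image depths:

import numpy as np
from scipy import ndimage

def remove_background_results(target_mask, bg_mask, depth_frame, depth_tol=30):
    # Drop each segmented component whose median depth lies at background depth.
    result = target_mask.copy()
    bg_depths = depth_frame[bg_mask].astype(np.float64)
    labels, n = ndimage.label(target_mask)
    for obj in range(1, n + 1):
        obj_mask = labels == obj
        obj_depth = np.median(depth_frame[obj_mask])
        if bg_depths.size and np.min(np.abs(bg_depths - obj_depth)) <= depth_tol:
            result &= ~obj_mask
    return result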
9. The image segmentation method according to any one of claims 1-8, wherein the acquiring video frame images and depth frame images of a live scene at the same time includes:
and synchronously shooting the live scene by using a depth camera and an RGB camera which are arranged in the live scene so as to acquire a video frame image and a depth frame image of the live scene at the same moment.
10. An image segmentation apparatus, comprising:
the acquisition module is used for acquiring video frame images and depth frame images of the live broadcast scene at the same moment;
the first processing module is used for determining a target object segmentation result in the video frame image by utilizing a target object segmentation algorithm;
the second processing module is used for determining a foreground image and a background image of the live broadcast scene according to the video frame image and the depth frame image;
the third processing module is used for respectively correcting the target object segmentation result by utilizing the foreground image and the background image of the live broadcast scene to obtain an image segmentation result of the live broadcast scene;
the second processing module is specifically configured to:
performing background modeling processing on the video frame image and the depth frame image to obtain an interface between the foreground and the background in the live scene;
and performing foreground-background segmentation processing on the depth frame image according to the interface to obtain the foreground image and the background image of the live scene.
11. An electronic device, comprising:
at least one processor; and
a memory;
the memory stores computer-executable instructions;
the at least one processor executing computer-executable instructions stored in the memory causes the at least one processor to perform the method of any one of claims 1-9.
12. A computer readable storage medium having stored therein computer executable instructions which, when executed by a processor, implement the method of any of claims 1-9.
13. A computer program product comprising computer instructions which, when executed by a processor, implement the method of any one of claims 1 to 9.
CN202111225907.4A 2021-10-21 2021-10-21 Image segmentation method, device, electronic equipment and program product Active CN113963000B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202111225907.4A CN113963000B (en) 2021-10-21 2021-10-21 Image segmentation method, device, electronic equipment and program product

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202111225907.4A CN113963000B (en) 2021-10-21 2021-10-21 Image segmentation method, device, electronic equipment and program product

Publications (2)

Publication Number Publication Date
CN113963000A (en) 2022-01-21
CN113963000B (en) 2024-03-15

Family

ID=79465819

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202111225907.4A Active CN113963000B (en) 2021-10-21 2021-10-21 Image segmentation method, device, electronic equipment and program product

Country Status (1)

Country Link
CN (1) CN113963000B (en)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114550186A (en) * 2022-04-21 2022-05-27 北京世纪好未来教育科技有限公司 Method and device for correcting document image, electronic equipment and storage medium

Citations (11)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107945201A (en) * 2017-12-19 2018-04-20 北京奇虎科技有限公司 Video landscape processing method and processing device based on adaptive threshold fuzziness
CN108010038A (en) * 2017-12-19 2018-05-08 北京奇虎科技有限公司 Live dress ornament based on adaptive threshold fuzziness is dressed up method and device
CN110610496A (en) * 2019-04-24 2019-12-24 广东工业大学 Fluorescent glue defect segmentation method robust to illumination change
CN110809171A (en) * 2019-11-12 2020-02-18 腾讯科技(深圳)有限公司 Video processing method and related equipment
CN112258527A (en) * 2020-11-02 2021-01-22 Oppo广东移动通信有限公司 Image processing method, image processing device, electronic equipment and computer readable storage medium
CN112365604A (en) * 2020-11-05 2021-02-12 深圳市中科先见医疗科技有限公司 AR equipment depth of field information application method based on semantic segmentation and SLAM
CN112804516A (en) * 2021-04-08 2021-05-14 北京世纪好未来教育科技有限公司 Video playing method and device, readable storage medium and electronic equipment
CN112954198A (en) * 2021-01-27 2021-06-11 北京有竹居网络技术有限公司 Image processing method and device and electronic equipment
CN113298735A (en) * 2021-06-22 2021-08-24 Oppo广东移动通信有限公司 Image processing method, image processing device, electronic equipment and storage medium
CN113362365A (en) * 2021-06-17 2021-09-07 云从科技集团股份有限公司 Video processing method, system, device and medium
CN113379922A (en) * 2021-06-22 2021-09-10 北醒(北京)光子科技有限公司 Foreground extraction method, device, storage medium and equipment


Also Published As

Publication number Publication date
CN113963000A (en) 2022-01-21

Similar Documents

Publication Publication Date Title
CN111242881B (en) Method, device, storage medium and electronic equipment for displaying special effects
CN112101305B (en) Multi-path image processing method and device and electronic equipment
CN111210485B (en) Image processing method and device, readable medium and electronic equipment
CN110070063B (en) Target object motion recognition method and device and electronic equipment
CN111414879B (en) Face shielding degree identification method and device, electronic equipment and readable storage medium
CN110062157B (en) Method and device for rendering image, electronic equipment and computer readable storage medium
CN110796664B (en) Image processing method, device, electronic equipment and computer readable storage medium
CN110781823A (en) Screen recording detection method and device, readable medium and electronic equipment
CN116310036A (en) Scene rendering method, device, equipment, computer readable storage medium and product
WO2022247630A1 (en) Image processing method and apparatus, electronic device and storage medium
US11494961B2 (en) Sticker generating method and apparatus, and medium and electronic device
CN113963000B (en) Image segmentation method, device, electronic equipment and program product
CN112085733B (en) Image processing method, image processing device, electronic equipment and computer readable medium
CN111368668B (en) Three-dimensional hand recognition method and device, electronic equipment and storage medium
CN110047126B (en) Method, apparatus, electronic device, and computer-readable storage medium for rendering image
CN109816791B (en) Method and apparatus for generating information
WO2022237435A1 (en) Method and device for changing background in picture, and storage medium and program product
WO2022227996A1 (en) Image processing method and apparatus, electronic device, and readable storage medium
CN112037227B (en) Video shooting method, device, equipment and storage medium
CN116527993A (en) Video processing method, apparatus, electronic device, storage medium and program product
CN114596383A (en) Line special effect processing method and device, electronic equipment, storage medium and product
CN110807728B (en) Object display method and device, electronic equipment and computer-readable storage medium
CN115937010B (en) Image processing method, device, equipment and medium
CN112465692A (en) Image processing method, device, equipment and storage medium
CN112308809A (en) Image synthesis method and device, computer equipment and storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
CB02 Change of applicant information

Address after: 100041 B-0035, 2 floor, 3 building, 30 Shixing street, Shijingshan District, Beijing.
Applicant after: Douyin Vision Co.,Ltd.
Address before: 100041 B-0035, 2 floor, 3 building, 30 Shixing street, Shijingshan District, Beijing.
Applicant before: Tiktok vision (Beijing) Co.,Ltd.

Address after: 100041 B-0035, 2 floor, 3 building, 30 Shixing street, Shijingshan District, Beijing.
Applicant after: Tiktok vision (Beijing) Co.,Ltd.
Address before: 100041 B-0035, 2 floor, 3 building, 30 Shixing street, Shijingshan District, Beijing.
Applicant before: BEIJING BYTEDANCE NETWORK TECHNOLOGY Co.,Ltd.

GR01 Patent grant