CN111242881B - Method, device, storage medium and electronic equipment for displaying special effects

Info

Publication number: CN111242881B
Authority: CN (China)
Prior art keywords: target, area, preset, template, determining
Legal status: Active
Application number: CN202010014709.2A
Other languages: Chinese (zh)
Other versions: CN111242881A
Inventors: 诸葛晶晶, 倪光耀, 吕烨华
Current Assignee: Beijing ByteDance Network Technology Co Ltd
Original Assignee: Beijing ByteDance Network Technology Co Ltd
Filing and publication events:
Application filed by Beijing ByteDance Network Technology Co Ltd
Priority to CN202010014709.2A
Publication of CN111242881A
Priority to PCT/CN2020/129296 (WO2021139408A1)
Application granted
Publication of CN111242881B

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 5/00 Image enhancement or restoration
    • G06T 5/50 Image enhancement or restoration by the use of more than one image, e.g. averaging, subtraction
    • G06T 11/00 2D [Two Dimensional] image generation
    • G06T 11/40 Filling a planar surface by adding surface attributes, e.g. colour or texture
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 40/00 Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V 40/10 Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V 40/16 Human faces, e.g. facial parts, sketches or expressions
    • G06T 2207/00 Indexing scheme for image analysis or image enhancement
    • G06T 2207/20 Special algorithmic details
    • G06T 2207/20212 Image combination
    • G06T 2207/20221 Image fusion; Image merging

Abstract

The present disclosure relates to a method, an apparatus, a storage medium, and an electronic device for displaying a special effect. The method determines position information of a special effect region, where the special effect region includes a touch region of a user on a target object displayed on a screen; determines a template region on a preset object template according to the position information; determines a target region on the target object according to the template region; and renders the target region according to a preset special effect.

Description

Method, device, storage medium and electronic equipment for displaying special effects
Technical Field
The present disclosure relates to the field of special effect generation, and in particular, to a method, an apparatus, a storage medium, and an electronic device for displaying a special effect.
Background
With the continuous development of computer technology, the functions of intelligent terminals keep improving; for example, a picture or a video can be captured with an intelligent terminal. At present, when taking a picture or shooting a video with an intelligent terminal, not only conventional picture or video effects can be achieved, but also effects with additional functions can be achieved through related application programs, for example, applying special effects (such as a makeup effect or a transparent effect) to a picture or a video.
Existing special effect implementations only record the special effect position set by the user at an absolute position on the terminal screen, switching between the special effect area and the non-special effect area at that fixed screen position. After the target object displayed on the screen (such as a human face) moves, the area displaying the special effect still stays at the initial position the user set on the screen, and the special effect cannot be displayed in real time in the user-set area on the moved target object. As a result, real-time interactive special effects with the user cannot be achieved, which degrades the user experience.
Disclosure of Invention
This summary is provided to introduce a selection of concepts in a simplified form that are further described below in the detailed description. This summary is not intended to identify key features or essential features of the claimed subject matter, nor is it intended to be used to limit the scope of the claimed subject matter.
In a first aspect, the present disclosure provides a method of displaying a special effect, the method comprising: determining position information of a special effect area, where the special effect area includes a touch area of a user on a target object displayed on a screen; determining a template area on a preset object template according to the position information; determining a target area on the target object according to the template area; and rendering the target area according to a preset special effect.
In a second aspect, there is provided an apparatus for displaying a special effect, the apparatus comprising: a first determining module, configured to determine position information of a special effect area, where the special effect area includes a touch area of a user on a target object displayed on a screen; a second determining module, configured to determine a template area on a preset object template according to the position information; a third determining module, configured to determine a target area on the target object according to the template area; and a special effect rendering module, configured to render the target area according to a preset special effect.
In a third aspect, a computer readable medium is provided, on which a computer program is stored, which program, when being executed by a processing device, carries out the steps of the method according to the first aspect of the disclosure.
In a fourth aspect, an electronic device is provided, comprising: a storage device having a computer program stored thereon; processing means for executing the computer program in the storage means to implement the steps of the method of the first aspect of the present disclosure.
Through the above technical solution, the position information of the special effect area can be determined, where the special effect area includes a touch area of a user on a target object displayed on a screen; a template area is determined on a preset object template according to the position information; a target area is determined on the target object according to the template area; and the target area is rendered according to the preset special effect. In this way, the special effect position set by the user on the target object can be recorded through the preset object template, so that after the target object moves, the special effect area can follow the target object according to the recorded position. Real-time interactive special effects with the user can thus be achieved, improving the user experience.
Additional features and advantages of the disclosure will be set forth in the detailed description which follows.
Drawings
The above and other features, advantages and aspects of various embodiments of the present disclosure will become more apparent by referring to the following detailed description when taken in conjunction with the accompanying drawings. Throughout the drawings, the same or similar reference numbers refer to the same or similar elements. It should be understood that the drawings are schematic and that elements and features are not necessarily drawn to scale. In the drawings:
FIG. 1 is a flow diagram illustrating a first method of displaying effects in accordance with one illustrative embodiment;
FIG. 2 is a flow diagram illustrating a second method of displaying special effects in accordance with an exemplary embodiment;
FIG. 3 is a schematic diagram illustrating a face template according to an exemplary embodiment;
FIG. 4 is a diagram illustrating triangulation of key points of a face in accordance with an exemplary embodiment;
FIG. 5 is a schematic diagram illustrating a face texture mapping in accordance with an exemplary embodiment;
FIG. 6 is a block diagram illustrating a first apparatus for displaying special effects, according to an example embodiment;
FIG. 7 is a block diagram illustrating a second apparatus for displaying special effects in accordance with an illustrative embodiment;
FIG. 8 is a schematic structural diagram of an electronic device according to an exemplary embodiment.
Detailed Description
Embodiments of the present disclosure will be described in more detail below with reference to the accompanying drawings. While certain embodiments of the present disclosure are shown in the drawings, it is to be understood that the present disclosure may be embodied in various forms and should not be construed as limited to the embodiments set forth herein, but rather are provided for a more thorough and complete understanding of the present disclosure. It should be understood that the drawings and embodiments of the disclosure are for illustration purposes only and are not intended to limit the scope of the disclosure.
It should be understood that the various steps recited in the method embodiments of the present disclosure may be performed in a different order, and/or performed in parallel. Moreover, method embodiments may include additional steps and/or omit performing the illustrated steps. The scope of the present disclosure is not limited in this respect.
The term "include" and variations thereof as used herein are open-ended, i.e., "including but not limited to". The term "based on" is "based, at least in part, on". The term "one embodiment" means "at least one embodiment"; the term "another embodiment" means "at least one additional embodiment"; the term "some embodiments" means "at least some embodiments". Relevant definitions for other terms will be given in the following description.
It should be noted that the terms "first", "second", and the like in the present disclosure are only used for distinguishing different devices, modules or units, and are not used for limiting the order or interdependence relationship of the functions performed by the devices, modules or units.
It should be noted that references to "a", "an", and "the" in this disclosure are illustrative rather than limiting, and those skilled in the art will understand that they mean "one or more" unless the context clearly indicates otherwise.
The names of messages or information exchanged between devices in the embodiments of the present disclosure are for illustrative purposes only, and are not intended to limit the scope of the messages or information.
FIG. 1 is a flowchart illustrating a method of displaying a special effect according to an exemplary embodiment. As shown in FIG. 1, the method comprises the following steps:
In step 101, position information of a special effect region is determined, the special effect region including a touch region of the user on a target object displayed on a screen.
The screen may be the screen of a terminal device with a special effect generation function. The target object may include an object on which a preset special effect needs to be implemented, such as a human face, a gesture, or a pet in a picture or video being displayed on the terminal screen. The preset special effect may include a special effect selected by the user in advance, such as a makeup special effect (e.g., adding an eye shadow) or a wipe-to-transparent special effect. The position information may include the screen coordinates of the special effect area on the screen.
In an actual application scenario, a user uses a special-effect application program on the terminal (such as Douyin) to set a special effect on a target object that is displayed on the terminal screen and currently captured by the terminal camera. The user may select the special effect area in which the special effect should be displayed by touching the screen (for example, smearing or wiping with a finger), either in one continuous smear or in several intermittent smears (that is, touching the screen multiple times). The terminal may then take the smearing area formed by a continuous smear as the special effect area, or take the union of the multiple smearing areas formed by intermittent smears as the special effect area.
In step 102, a template region is determined on the preset object template according to the position information.
The preset object template is a template corresponding to the target object. For example, if the target object is a human face, the preset object template is a preset human face template; if the target object is a pet face, the preset object template is a preset pet face template. The preset object template includes a plurality of triangular preset regions composed of preset key points of the target object, and the template area is the region on the preset object template that corresponds to the special effect area selected by the user on the target object displayed on the screen.
Before this step is executed, key point detection may be performed on the target object to obtain a plurality of key points, and the target object may be triangulated according to the key points to obtain a plurality of triangular areas. A target triangular area corresponding to the position information can then be determined among the triangular areas, and among the triangular preset regions on the preset object template, the region corresponding to the target triangular area is determined as the template area.
In step 103, a target area on the target object is determined according to the template area.
In this step, target preset key points corresponding to the template area may be obtained; target key points corresponding to the target preset key points are determined among the plurality of key points of the moved target object; and the triangular area formed by the target key points is determined as the target area.
It should be noted that, after the position information of the special effect area selected by the user on the target object displayed on the screen has been recorded through the preset object template, the special effect area can follow the target object when it moves, according to the recorded special effect position. Taking a video of the target object displayed on the screen as an example, the target area may be determined, according to the position of the special effect area recorded by the preset object template, on the target object in one or more image frames of the video (consecutive or non-consecutive). Specifically, an image frame including the target object may be acquired first, and the target area is then determined on the target object included in that image frame according to the template area, so that the special effect area moves along with the target object.
In the process of determining the target area on the target object included in the image frame according to the template area, the target preset key points corresponding to the template area may be obtained, the target key points corresponding to the target preset key points may be determined in the one or more image frames including the target object, and the triangular area formed by the target key points may be determined as the target area.
In step 104, the target area is rendered according to a preset special effect.
In this step, a preset background image may be obtained, the target object may be rendered to the preset background image, and then the target area may be rendered according to the preset background image.
The preset background image may include a special effect background that is selected by a user in advance according to a preference of the user.
By adopting this method, the position of the special effect area set by the user on the target object can be recorded through the preset object template, so that after the target object moves, the special effect area can follow the target object according to the recorded position. Real-time interactive special effects with the user can thus be achieved, further improving the user experience.
FIG. 2 is a flowchart illustrating a method of displaying a special effect according to an exemplary embodiment. The method may be applied to a terminal device with a special effect generation function, such as a smartphone or a tablet computer, and includes the following steps:
In step 201, position information of a special effect region is determined, the special effect region including a touch region of the user on a target object displayed on a screen.

The screen may be the screen of a terminal device with a special effect generation function. The target object may include an object on which a preset special effect needs to be implemented, such as a human face, a gesture, or a pet in a picture or video being displayed on the terminal screen. The preset special effect may include a special effect selected by the user in advance, such as a makeup special effect (e.g., adding an eye shadow) or a wipe-to-transparent special effect. The position information may include the screen coordinates of the special effect area on the screen.

In an actual application scenario, a user uses a special-effect application program on the terminal (such as Douyin) to set a special effect on a target object that is displayed on the terminal screen and currently captured by the terminal camera. The user may select the special effect area in which the special effect should be displayed by touching the screen (for example, smearing or wiping with a finger), either in one continuous smear or in several intermittent smears (that is, touching the screen multiple times). The terminal may then take the smearing area formed by a continuous smear as the special effect area, or take the union of the multiple smearing areas formed by intermittent smears as the special effect area.
For example, taking the target object as a human face, and assuming the user currently wants to add an eye shadow to the eye area of the face displayed on the screen, the user may smear an area over the eyes of the displayed face with a finger; the smeared area is the special effect area, and the terminal may then add the eye shadow to it.
In a possible implementation manner, after the terminal detects the event of the user touching the screen, the screen coordinate point returned when the screen is pressed may be recorded (for example, as X_touch and Y_touch). A circle is then drawn around that coordinate point with a preset radius (which may be preset according to the display scale of the screen). As the user's finger keeps smearing, the successive dots form a continuous smearing region; that is, the position information of the special effect area on the screen may include one or more screen coordinate points.
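As a non-limiting sketch of this recording step, the touch points can be accumulated into a binary smear mask by stamping a filled circle of the preset radius around each returned coordinate point; the screen size, radius, and variable names below are assumptions made for the illustration, not details from the disclosure:

import numpy as np

def add_touch_to_mask(mask, x_touch, y_touch, radius):
    # Stamp a filled circle of the preset radius around the touch point; the
    # union of the stamps from a smear stroke forms the special effect area.
    h, w = mask.shape
    ys, xs = np.ogrid[:h, :w]
    mask[(xs - x_touch) ** 2 + (ys - y_touch) ** 2 <= radius ** 2] = 1
    return mask

# Illustrative 1280x720 screen; the radius would be preset per display scale.
screen_mask = np.zeros((1280, 720), dtype=np.uint8)
for x_touch, y_touch in [(100, 200), (104, 203), (109, 207)]:  # one smear stroke
    add_touch_to_mask(screen_mask, x_touch, y_touch, radius=12)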
In order that, when the target object displayed on the screen moves, the preset special effect the user set in the special effect area before the movement can move along with it, the present disclosure records the user's smearing trace on the screen through a preset object template (such as the face template shown in FIG. 3). The preset object template includes a plurality of triangular preset regions composed of preset key points of the target object (the preset key points may be obtained in advance by performing key point detection on a preset object template corresponding to the target object, and may be used to record the standard positions of each key area of the target object). That is, the position information of the special effect area selected on the screen by the user before the target object moves is mapped onto the preset object template. Specifically, after the position information is obtained, the template area corresponding to the position information on the preset object template is determined by performing steps 202 to 205.
In step 202, keypoint detection is performed on the target object to obtain a plurality of keypoints.
As shown in FIG. 3, if the target object is a face displayed on the screen, the key points are face key points that can locate the key areas of the face (such as eyebrows, eyes, nose, mouth, and face contour); the key areas of a face are usually located with 106 or 280 face key points. It should be noted that specific implementations of obtaining the key points through key point detection in this step may refer to the related literature, which is not limited by this disclosure.
In step 203, the target object is triangulated according to the key points to obtain a plurality of triangular regions.
In a possible implementation manner of this step, a triangle may be formed by taking any three mutually adjacent key points, obtained by the key point detection, as vertices, so that a plurality of triangular regions are obtained.
For example, FIG. 4 is a schematic diagram of a plurality of triangular regions obtained after triangulation based on a plurality of face key points. In the process of detecting the face key points, each key point may be numbered; for example, if 106 face key points are used to locate the key areas of a face, they may be numbered No. 0 through No. 105. Assuming that face key points No. 4, No. 5, and No. 82 are one group of mutually adjacent key points, and face key points No. 5, No. 6, and No. 82 are another group of mutually adjacent key points, then during triangulation the three face key points No. 4, No. 5, and No. 82 may be used as vertices to form one triangular region, and the three face key points No. 5, No. 6, and No. 82 may be used as vertices to form another triangular region.
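The disclosure forms triangles from groups of mutually adjacent key points without prescribing a particular algorithm; as one non-limiting illustration, a Delaunay triangulation over the detected key points produces such a set of triangular regions (the five-point layout below is invented for the example):

import numpy as np
from scipy.spatial import Delaunay

# Illustrative 2D key points; the row index plays the role of the key point
# number (No. 0 ... No. 4).
keypoints = np.array([
    [0.0, 0.0],  # No. 0
    [1.0, 0.0],  # No. 1
    [1.0, 1.0],  # No. 2
    [0.0, 1.0],  # No. 3
    [0.5, 0.5],  # No. 4
])

tri = Delaunay(keypoints)
# Each row of tri.simplices is a triple of key point numbers forming one
# triangular region, analogous to the (4, 5, 82) and (5, 6, 82) groups above.
print(tri.simplices)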
In step 204, a target triangle area corresponding to the position information is determined among the triangle areas.
In one possible implementation manner, the target triangular region may be determined according to the screen coordinate points included in the position information; specifically, the triangular region in which a screen coordinate point is located may be determined as the target triangular region.
Illustratively, FIG. 5 is a schematic diagram of a face texture mapping according to an exemplary embodiment, where diagram a represents the position of the special effect region (the white dot region shown in a) selected by the user on the face displayed on the screen, and diagram b represents the position of the special effect region recorded on the face template after the special effect region is mapped to it. As shown in diagram a, suppose the position information of the special effect region on the screen includes coordinate point 1, coordinate point 2, and coordinate point 3; coordinate point 1 lies in the triangular region with the three face key points No. 4, No. 5, and No. 82 as vertices (denoted, for example, triangular region A), and coordinate points 2 and 3 lie in the triangular region with the three face key points No. 5, No. 6, and No. 82 as vertices (denoted, for example, triangular region B). Then the target triangular regions corresponding to the position information of the special effect region are triangular region A and triangular region B. The above example is only illustrative, and the disclosure does not limit this.
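Determining which triangular region contains a given screen coordinate point can be done with a standard same-side (cross product) test; a minimal sketch, in which the key point coordinates, triangle table, and touch point are invented for the example:

def point_in_triangle(p, a, b, c):
    # The point is inside (or on an edge of) the triangle if the cross
    # products against all three edges share a sign.
    def cross(o, u, v):
        return (u[0] - o[0]) * (v[1] - o[1]) - (u[1] - o[1]) * (v[0] - o[0])
    d1, d2, d3 = cross(a, b, p), cross(b, c, p), cross(c, a, p)
    has_neg = d1 < 0 or d2 < 0 or d3 < 0
    has_pos = d1 > 0 or d2 > 0 or d3 > 0
    return not (has_neg and has_pos)

def find_target_triangle(point, keypoints, triangles):
    # triangles: triples of key point numbers, e.g. [(4, 5, 82), (5, 6, 82)].
    for tri in triangles:
        a, b, c = (keypoints[i] for i in tri)
        if point_in_triangle(point, a, b, c):
            return tri
    return None

keypoints = {4: (10, 10), 5: (30, 10), 6: (50, 10), 82: (30, 40)}
print(find_target_triangle((28, 20), keypoints, [(4, 5, 82), (5, 6, 82)]))  # -> (4, 5, 82)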
In step 205, among the triangular preset regions on the preset object template, the region corresponding to the target triangular region is determined as the template area.
A triangular preset region is a region formed by any three mutually adjacent preset key points on the preset object template. The preset key points correspond one-to-one to the key points obtained in step 202: how many key points are used in step 202 to represent the key areas of the target object depends on the number of preset key points in the preset object template of the target object, so the triangular regions correspond one-to-one to the triangular preset regions.
In a possible implementation manner of this step, the target key points corresponding to the target triangular region may be determined first, and the target preset key points corresponding to those target key points are then found on the preset object template, so that the triangular preset regions composed of the target preset key points on the preset object template can be determined as the template area.
By way of example, continuing with FIG. 5, assume that the target triangular regions are triangular region A and triangular region B. As shown in FIG. 5, triangular region A has the three face key points No. 4, No. 5, and No. 82 as vertices, and triangular region B has the three face key points No. 5, No. 6, and No. 82 as vertices, so the target key points corresponding to the target triangular regions are the face key points No. 4, No. 5, No. 6, and No. 82. The corresponding preset face key points No. 4, No. 5, No. 6, and No. 82 on the face template shown in diagram b are then the target preset key points, and it can further be determined that, on the face template, the triangular preset region with the three preset face key points No. 4, No. 5, and No. 82 as vertices and the triangular preset region with the three preset face key points No. 5, No. 6, and No. 82 as vertices form the template area. The foregoing example is illustrative only, and the disclosure is not limited thereto.
In this way, the special effect area selected by the user on the target object displayed on the screen is mapped to the corresponding template area on the preset object template, so that the user's smearing area on the displayed target object (i.e., the template area, which may also be called the smear MASK) is recorded on the preset object template. When the target object moves, the smear MASK recorded on the preset object template can then be remapped, as a UV map, onto the moved target object displayed on the screen, so that the special effect area initially set by the user moves along with the target object.
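Recording the smear MASK on the template amounts to expressing each screen point in barycentric coordinates of its target triangular region and re-evaluating those coordinates at the corresponding preset key points; the following sketch assumes this barycentric formulation, with coordinates invented for the example:

def barycentric(p, a, b, c):
    # Barycentric weights (wa, wb, wc) of point p in triangle (a, b, c).
    det = (b[1] - c[1]) * (a[0] - c[0]) + (c[0] - b[0]) * (a[1] - c[1])
    wa = ((b[1] - c[1]) * (p[0] - c[0]) + (c[0] - b[0]) * (p[1] - c[1])) / det
    wb = ((c[1] - a[1]) * (p[0] - c[0]) + (a[0] - c[0]) * (p[1] - c[1])) / det
    return wa, wb, 1.0 - wa - wb

def screen_to_template(p, tri, screen_kps, template_kps):
    # tri: key point numbers of the target triangular region, e.g. (4, 5, 82).
    wa, wb, wc = barycentric(p, *(screen_kps[i] for i in tri))
    ta, tb, tc = (template_kps[i] for i in tri)
    return (wa * ta[0] + wb * tb[0] + wc * tc[0],
            wa * ta[1] + wb * tb[1] + wc * tc[1])

screen_kps = {4: (10, 10), 5: (30, 10), 82: (30, 40)}          # detected on screen
template_kps = {4: (0.1, 0.1), 5: (0.3, 0.1), 82: (0.3, 0.4)}  # preset key points
print(screen_to_template((28, 20), (4, 5, 82), screen_kps, template_kps))

The reverse remapping (the UV map back onto the moved target object) evaluates the same weights at the moved key points instead of the preset ones.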
It can be understood that, when the target object moves, this may be regarded as a plurality of continuous image frames including the target object being generated on the screen. After the special effect area selected by the user has been mapped to the corresponding template area on the preset object template, the smear MASK recorded in the template area may be remapped, as a UV map, back onto the target object in those image frames during the movement; that is, the target area is determined on the target object included in each image frame according to the template area, so that the special effect area moves along with the target object. In this embodiment, the smear MASK recorded in the preset object template is remapped as a UV map back onto the moved target object by performing steps 206 to 208. If the image displayed on the screen before the target object moves is regarded as the first image frame, then the moved target object displayed on the screen belongs to any later image frame displayed during the movement.
In step 206, a target preset key point corresponding to the template region is obtained.
The target preset key points may include the vertices of the triangular preset regions in which the template area lies; for example, the preset face key points No. 4, No. 5, No. 6, and No. 82 on the face template shown in diagram b of the example in step 205 are the target preset key points.
In step 207, a target keypoint corresponding to the target preset keypoint is determined among the keypoints.
In this step, the plurality of key points are the key points after the target object has moved on the screen. It can be understood that, as the target object moves, the positions of its key points on the screen also move, but their relative positions on the target object do not change. Taking a video of the target object as an example, across consecutive image frames, the screen position of a key point marking an eye shifts from one frame to the next, but its relative position on the face does not change.
In a possible implementation manner, the target key points may be determined according to the identification information (e.g., serial numbers) of the preset key points and the key points. Since the preset key points and the key points correspond one to one, the key points whose identification information matches that of each target preset key point can be determined as the target key points. For example, if the target preset key points are preset key points No. 1 and No. 2, the target key points are key points No. 1 and No. 2 on the moved target object. This is only an example, and the disclosure does not limit it.
In step 208, a triangular region composed of the target key points is determined as a target region corresponding to the special effect region on the moved target object.
At this point, the smear MASK recorded on the preset object template has been remapped, as a UV map, to the target area on the moved target object displayed on the screen, so that the preset special effect (such as a transparent effect or an eye shadow effect) can be rendered on that target area, achieving the effect of the special effect moving along with the displayed target object. In this embodiment, the preset special effect is rendered on the target area of the moved target object by performing steps 209 to 210.
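Because key point numbering is stable across frames, following the moved object reduces to looking up the recorded key point numbers in each new frame; a non-limiting sketch with invented positions:

def target_region_on_frame(template_triangles, frame_keypoints):
    # template_triangles: key point number triples recorded on the template,
    # e.g. [(4, 5, 82), (5, 6, 82)]; frame_keypoints: number -> position
    # detected on the current frame. The region follows the object because
    # the numbers stay fixed while the positions move.
    return [tuple(frame_keypoints[i] for i in tri) for tri in template_triangles]

frame1 = {4: (10, 10), 5: (30, 10), 6: (50, 10), 82: (30, 40)}
frame2 = {k: (x + 15, y + 5) for k, (x, y) in frame1.items()}  # face shifted
smear_mask_triangles = [(4, 5, 82), (5, 6, 82)]
print(target_region_on_frame(smear_mask_triangles, frame1))
print(target_region_on_frame(smear_mask_triangles, frame2))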
In step 209, a preset background image is obtained, and the target object is rendered to the preset background image.
The preset background image is a background image selected by the user, according to his or her preference, from a plurality of preset background images on the terminal, and the terminal can acquire the preset background image in response to the user's trigger operation on it.
In the process of rendering the target object to the preset background image, an initial image including the target object (an image of the target object and of the environment it is in) may be collected by the terminal camera, and matting is then performed on the initial image (for example, with a matting algorithm) to obtain a foreground image of the target object (an image containing only the target object region). When rendering the target object to the preset background image, the foreground image may then be superimposed onto the preset background image, for example using a mix function.
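A non-limiting sketch of this superimposition with NumPy; the matte is stubbed out here, whereas in practice it would come from the matting algorithm mentioned above:

import numpy as np

def superimpose(foreground, alpha, background):
    # mix-style blend: keep the background where alpha is 0 and the matted
    # target object where alpha is 1.
    return background * (1.0 - alpha) + foreground * alpha

h, w = 4, 4                                        # toy size; real frames are larger
initial = np.full((h, w, 3), 0.7)                  # camera image: object + environment
alpha = np.zeros((h, w, 1)); alpha[1:3, :] = 1.0   # stub matte for the object region
background = np.full((h, w, 3), 0.1)               # preset background image
frame = superimpose(initial, alpha, background)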
In step 210, the target area is rendered according to the preset background image.
In this step, after the target object is rendered to the preset background image, the target area may be further rendered according to the preset background image by using a mix function.
In addition, when the target area is rendered according to the preset special effect, the preset special effect can be rendered in the target area according to a preset background image, a foreground image of the target object and the target area.
In a possible implementation manner, if the preset special effect is a transparent effect in the target area, formula (1) and formula (2) may be used to achieve the transparent effect according to the preset background image, the foreground image, the contour image of the target object (the contour image may be obtained by processing the foreground image with a deep learning algorithm, e.g., a matting algorithm), and the target area:

color = mix(foreground image, preset background image, smear MASK)    (1)

gl_FragColor = mix(color, preset background image, 1.0 - contour image)    (2)

where mix(x, y, a) = x * (1 - a) + y * a; the smear MASK represents the target area; color represents the gray value of each pixel obtained by fusing the three textures of the foreground image, the preset background image, and the smear MASK; and gl_FragColor represents the final gray value of each pixel obtained by fusing the four textures of the foreground image, the preset background image, the smear MASK, and the contour image.
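As a non-limiting illustration, formulas (1) and (2) can be reproduced with NumPy as follows; the array names, sizes, and the use of single-channel float masks are assumptions made for this sketch rather than details from the disclosure:

import numpy as np

def mix(x, y, a):
    # GLSL-style linear interpolation: mix(x, y, a) = x * (1 - a) + y * a
    return x * (1.0 - a) + y * a

# Toy 4x4 RGB textures with values in [0, 1]; purely illustrative.
h, w = 4, 4
foreground = np.full((h, w, 3), 0.8)                          # matted target object
background = np.full((h, w, 3), 0.2)                          # preset background image
smear_mask = np.zeros((h, w, 1)); smear_mask[1:3, 1:3] = 1.0  # target area (smear MASK)
contour = np.zeros((h, w, 1)); contour[0:3, 0:3] = 1.0        # target object contour

# Formula (1): inside the smeared target area the background shows through,
# which is what produces the wipe-to-transparent effect.
color = mix(foreground, background, smear_mask)

# Formula (2): outside the object contour, fall back to the background.
gl_frag_color = mix(color, background, 1.0 - contour)
print(gl_frag_color[:, :, 0])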
It should be further noted that the preset special effect may be any special effect, provided that it can move along with the target object displayed on the screen by using the method provided by the present disclosure.
By adopting this method, the special effect position set by the user on the target object can be recorded through the preset object template, which ensures that, after the target object moves, the special effect region follows it according to the recorded position. Real-time interactive special effects with the user can thus be achieved, improving the user experience. In addition, on the premise that the preset special effect moves along with the displayed target object, a transparent effect can be realized in the target region of the target object by superimposing multiple textures.
FIG. 6 is a block diagram illustrating an apparatus for displaying special effects according to an exemplary embodiment. As shown in FIG. 6, the apparatus includes:
a first determining module 601, configured to determine position information of a special effect region, where the special effect region includes a touch region of a user on a target object displayed on a screen;
a second determining module 602, configured to determine a template area on a preset object template according to the position information;
a third determining module 603, configured to determine a target area on the target object according to the template area;
a special effect rendering module 604, configured to render the target area according to a preset special effect.
Optionally, the third determining module 603 is configured to acquire an image frame, where the image frame includes the target object, and to determine the target area on the target object included in the image frame according to the template area.
Optionally, FIG. 7 is a block diagram of an apparatus for displaying special effects according to the embodiment shown in FIG. 6. The preset object template includes a plurality of triangular preset regions composed of preset key points of the target object, and as shown in FIG. 7, the apparatus further includes:
a detection module 605, configured to perform keypoint detection on the target object to obtain multiple keypoints;
a triangulation module 606, configured to triangulate the target object according to the key points to obtain a plurality of triangular regions;
The second determining module 602 is configured to determine a target triangular region corresponding to the position information in the plurality of triangular regions, and to determine the region corresponding to the target triangular region as the template area in the plurality of triangular preset regions.
Optionally, the third determining module 603 is configured to obtain a target preset key point corresponding to the template region; determining a target key point corresponding to the target preset key point in the plurality of key points; and determining a triangular area formed by the target key points as the target area.
Optionally, the special effect rendering module 604 is configured to obtain a preset background image; rendering the target object to the preset background image; and rendering the target area according to the preset background image.
By adopting this apparatus, the special effect position set by the user on the target object can be recorded through the preset object template, so that after the target object moves, the special effect region can follow the target object according to the recorded position. Real-time interactive special effects with the user can thus be achieved, further improving the user experience.
Referring now to FIG. 8, shown is a schematic diagram of an electronic device 800 suitable for use in implementing embodiments of the present disclosure. The terminal device in the embodiments of the present disclosure may include, but is not limited to, a mobile terminal such as a mobile phone, a notebook computer, a digital broadcast receiver, a PDA (personal digital assistant), a PAD (tablet computer), a PMP (portable multimedia player), a vehicle terminal (e.g., a car navigation terminal), and the like, and a stationary terminal such as a digital TV, a desktop computer, and the like. The electronic device shown in fig. 8 is only an example, and should not bring any limitation to the functions and the scope of use of the embodiments of the present disclosure.
As shown in fig. 8, an electronic device 800 may include a processing means (e.g., central processing unit, graphics processor, etc.) 801 that may perform various appropriate actions and processes in accordance with a program stored in a Read Only Memory (ROM)802 or a program loaded from a storage means 808 into a Random Access Memory (RAM) 803. In the RAM 803, various programs and data necessary for the operation of the electronic apparatus 800 are also stored. The processing apparatus 801, the ROM 802, and the RAM 803 are connected to each other by a bus 804. An input/output (I/O) interface 805 is also connected to bus 804.
Generally, the following devices may be connected to the I/O interface 805: input devices 806 including, for example, a touch screen, touch pad, keyboard, mouse, camera, microphone, accelerometer, gyroscope, etc.; output devices 807 including, for example, a Liquid Crystal Display (LCD), speakers, vibrators, and the like; storage 808 including, for example, magnetic tape, hard disk, etc.; and a communication device 809. The communication means 809 may allow the electronic device 800 to communicate wirelessly or by wire with other devices to exchange data. While fig. 8 illustrates an electronic device 800 having various means, it is to be understood that not all illustrated means are required to be implemented or provided. More or fewer devices may alternatively be implemented or provided.
In particular, according to an embodiment of the present disclosure, the processes described above with reference to the flowcharts may be implemented as computer software programs. For example, embodiments of the present disclosure include a computer program product comprising a computer program carried on a non-transitory computer readable medium, the computer program containing program code for performing the method illustrated by the flow chart. In such an embodiment, the computer program may be downloaded and installed from a network through the communication means 809, or installed from the storage means 808, or installed from the ROM 802. The computer program, when executed by the processing apparatus 801, performs the above-described functions defined in the methods of the embodiments of the present disclosure.
It should be noted that the computer readable medium in the present disclosure can be a computer readable signal medium or a computer readable storage medium or any combination of the two. A computer readable storage medium may be, for example, but not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any combination of the foregoing. More specific examples of the computer readable storage medium may include, but are not limited to: an electrical connection having one or more wires, a portable computer diskette, a hard disk, a Random Access Memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing. In the present disclosure, a computer readable storage medium may be any tangible medium that can contain, or store a program for use by or in connection with an instruction execution system, apparatus, or device. In contrast, in the present disclosure, a computer readable signal medium may comprise a propagated data signal with computer readable program code embodied therein, either in baseband or as part of a carrier wave. Such a propagated data signal may take many forms, including, but not limited to, electro-magnetic, optical, or any suitable combination thereof. A computer readable signal medium may also be any computer readable medium that is not a computer readable storage medium and that can communicate, propagate, or transport a program for use by or in connection with an instruction execution system, apparatus, or device. Program code embodied on a computer readable medium may be transmitted using any appropriate medium, including but not limited to: electrical wires, optical cables, RF (radio frequency), etc., or any suitable combination of the foregoing.
In some implementations, the clients may communicate using any currently known or future developed network protocol, such as HTTP (HyperText Transfer Protocol), and may interconnect with any form or medium of digital data communication (e.g., a communications network). Examples of communication networks include a local area network ("LAN"), a wide area network ("WAN"), an internetwork (e.g., the Internet), and peer-to-peer networks (e.g., ad hoc peer-to-peer networks), as well as any currently known or future developed network.
The computer readable medium may be embodied in the electronic device; or may exist separately without being assembled into the electronic device.
The computer readable medium carries one or more programs which, when executed by the electronic device, cause the electronic device to: determine position information of a special effect area, wherein the special effect area comprises a touch area of a user on a target object displayed on a screen; determine a template area on a preset object template according to the position information; determine a target area on the target object according to the template area; and render the target area according to a preset special effect.
Computer program code for carrying out operations for the present disclosure may be written in any combination of one or more programming languages, including but not limited to an object oriented programming language such as Java, Smalltalk, C++, and conventional procedural programming languages, such as the "C" programming language or similar programming languages. The program code may execute entirely on the user's computer, partly on the user's computer, as a stand-alone software package, partly on the user's computer and partly on a remote computer, or entirely on the remote computer or server. In the case of a remote computer, the remote computer may be connected to the user's computer through any type of network, including a Local Area Network (LAN) or a Wide Area Network (WAN), or the connection may be made to an external computer (for example, through the Internet using an Internet service provider).
The flowchart and block diagrams in the figures illustrate the architecture, functionality, and operation of possible implementations of systems, methods and computer program products according to various embodiments of the present disclosure. In this regard, each block in the flowchart or block diagrams may represent a module, segment, or portion of code, which comprises one or more executable instructions for implementing the specified logical function(s). It should also be noted that, in some alternative implementations, the functions noted in the block may occur out of the order noted in the figures. For example, two blocks shown in succession may, in fact, be executed substantially concurrently, or the blocks may sometimes be executed in the reverse order, depending upon the functionality involved. It will also be noted that each block of the block diagrams and/or flowchart illustration, and combinations of blocks in the block diagrams and/or flowchart illustration, can be implemented by special purpose hardware-based systems which perform the specified functions or acts, or combinations of special purpose hardware and computer instructions.
The modules described in the embodiments of the present disclosure may be implemented by software or hardware. The name of a module does not, in some cases, constitute a limitation on the module itself.
The functions described herein above may be performed, at least in part, by one or more hardware logic components. For example, without limitation, exemplary types of hardware logic components that may be used include: field Programmable Gate Arrays (FPGAs), Application Specific Integrated Circuits (ASICs), Application Specific Standard Products (ASSPs), systems on a chip (SOCs), Complex Programmable Logic Devices (CPLDs), and the like.
In the context of this disclosure, a machine-readable medium may be a tangible medium that can contain, or store a program for use by or in connection with an instruction execution system, apparatus, or device. The machine-readable medium may be a machine-readable signal medium or a machine-readable storage medium. A machine-readable medium may include, but is not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any suitable combination of the foregoing. More specific examples of a machine-readable storage medium would include an electrical connection based on one or more wires, a portable computer diskette, a hard disk, a Random Access Memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing.
Example 1 provides a method of displaying a special effect, including: determining position information of a special effect region, the special effect region including a touch region of a user on a target object displayed on a screen; determining a template area on a preset object template according to the position information; determining a target area on the target object according to the template area; and rendering the target area according to a preset special effect.
Example 2 provides the method of example 1, the determining a target region on the target object from the template region comprising: acquiring an image frame, the image frame including the target object; determining the target area on the target object included in the image frame according to the template area.
The preset object template comprises a plurality of triangular preset regions composed of preset key points of the target object, and before determining the template region on the preset object template according to the position information, the method further comprises: performing key point detection on the target object to obtain a plurality of key points; triangulating the target object according to the key points to obtain a plurality of triangular areas; the determining a template region on a preset object template according to the position information includes: determining a target triangular area corresponding to the position information in the plurality of triangular areas; and determining the area corresponding to the target triangular area as the template area in the plurality of triangular preset areas.
The determining a target region on the target object according to the template region comprises: acquiring a target preset key point corresponding to the template area; determining a target key point corresponding to the target preset key point in the plurality of key points; and determining a triangular area formed by the target key points as the target area.
The rendering the target area according to the preset special effect comprises the following steps: acquiring a preset background image; rendering the target object to the preset background image; and rendering the target area according to the preset background image.
Example 3 provides, in accordance with one or more embodiments of the present disclosure, an apparatus for displaying a special effect, comprising: a first determining module, configured to determine position information of a special effect area, where the special effect area includes a touch area of a user on a target object displayed on a screen; a second determining module, configured to determine a template area on a preset object template according to the position information; a third determining module, configured to determine a target area on the target object according to the template area; and a special effect rendering module, configured to render the target area according to a preset special effect.
The third determining module is configured to acquire an image frame, where the image frame includes the target object; determining the target area on the target object included in the image frame according to the template area.
Example 4 provides the apparatus of example 3, the preset object template including a plurality of triangular preset regions composed of preset key points of the target object, the apparatus further including: the detection module is used for detecting key points of the target object to obtain a plurality of key points; the triangulation module is used for triangulating the target object according to the key points to obtain a plurality of triangular areas; the second determining module is configured to determine a target triangular region corresponding to the position information in the plurality of triangular regions; and determining the area corresponding to the target triangular area as the template area in the plurality of triangular preset areas.
The third determining module is configured to obtain a target preset key point corresponding to the template region; determining a target key point corresponding to the target preset key point in the plurality of key points; and determining a triangular area formed by the target key points as the target area.
The special effect rendering module is used for acquiring a preset background image; rendering the target object to the preset background image; and rendering the target area according to the preset background image.
The foregoing description is only exemplary of the preferred embodiments of the disclosure and is illustrative of the principles of the technology employed. It will be appreciated by those skilled in the art that the scope of the disclosure herein is not limited to the particular combination of features described above, but also encompasses other embodiments in which any combination of the features described above or their equivalents does not depart from the spirit of the disclosure. For example, a technical solution may be formed by replacing the above features with (but not limited to) features having similar functions disclosed in this disclosure.
Further, while operations are depicted in a particular order, this should not be understood as requiring that such operations be performed in the particular order shown or in sequential order. Under certain circumstances, multitasking and parallel processing may be advantageous. Likewise, while several specific implementation details are included in the above discussion, these should not be construed as limitations on the scope of the disclosure. Certain features that are described in the context of separate embodiments can also be implemented in combination in a single embodiment. Conversely, various features that are described in the context of a single embodiment can also be implemented in multiple embodiments separately or in any suitable subcombination.
Although the subject matter has been described in language specific to structural features and/or methodological acts, it is to be understood that the subject matter defined in the appended claims is not necessarily limited to the specific features or acts described above. Rather, the specific features and acts described above are disclosed as example forms of implementing the claims. With regard to the apparatus in the above-described embodiment, the specific manner in which each module performs the operation has been described in detail in the embodiment related to the method, and will not be elaborated here.

Claims (10)

1. A method of displaying a special effect, the method comprising:
determining position information of a special effect area, wherein the special effect area comprises a touch area of a user on a target object displayed on a screen;
determining a template area on a preset object template according to the position information;
determining a target area on the target object according to the template area;
rendering the target area according to a preset special effect;
wherein the preset object template includes a plurality of triangular preset regions composed of preset key points of the target object, and before determining the template region on the preset object template according to the position information, the method further includes:
performing key point detection on the target object to obtain a plurality of key points;
triangulating the target object according to the key points to obtain a plurality of triangular areas;
the determining a template region on a preset object template according to the position information includes:
determining, among the plurality of triangular areas, a target triangular area corresponding to the position information; and
determining, among the plurality of triangular preset regions, the area corresponding to the target triangular area as the template area.
2. The method of claim 1, wherein the determining a target area on the target object according to the template area comprises:
acquiring an image frame, the image frame including the target object;
determining the target area on the target object included in the image frame according to the template area.
3. The method of claim 2, wherein the determining the target area on the target object included in the image frame according to the template area comprises:
acquiring a target preset key point corresponding to the template area;
determining, among the plurality of key points, a target key point corresponding to the target preset key point;
and determining a triangular area formed by the target key points as the target area.
4. The method according to any one of claims 1 to 3, wherein the rendering the target area according to a preset special effect comprises:
acquiring a preset background image; rendering the target object onto the preset background image;
and rendering the target area according to the preset background image.
5. An apparatus for displaying a special effect, the apparatus comprising:
a first determining module configured to determine position information of a special effect area, wherein the special effect area comprises a touch area of a user on a target object displayed on a screen;
a second determining module configured to determine a template area on a preset object template according to the position information;
a third determining module configured to determine a target area on the target object according to the template area; and
a special effect rendering module configured to render the target area according to a preset special effect;
wherein the preset object template includes a plurality of triangular preset regions composed of preset key points of the target object, and the apparatus further comprises:
a detection module configured to perform key point detection on the target object to obtain a plurality of key points; and
a triangulation module configured to triangulate the target object according to the key points to obtain a plurality of triangular areas;
wherein the second determining module is configured to determine, among the plurality of triangular areas, a target triangular area corresponding to the position information, and to determine, among the plurality of triangular preset regions, the area corresponding to the target triangular area as the template area.
6. The apparatus of claim 5, wherein the third determining module is configured to acquire an image frame, the image frame including the target object, and determine the target area on the target object included in the image frame according to the template area.
7. The apparatus according to claim 6, wherein the third determining module is configured to acquire a target preset key point corresponding to the template area, determine, among the plurality of key points, a target key point corresponding to the target preset key point, and determine a triangular area formed by the target key points as the target area.
8. The apparatus according to any one of claims 5 to 7, wherein the special effect rendering module is configured to acquire a preset background image, render the target object onto the preset background image, and render the target area according to the preset background image.
9. A computer-readable medium having a computer program stored thereon, wherein the program, when executed by a processing device, implements the steps of the method according to any one of claims 1 to 4.
10. An electronic device, comprising:
a storage device having a computer program stored thereon;
a processing device for executing the computer program in the storage device to carry out the steps of the method according to any one of claims 1 to 4.
CN202010014709.2A 2020-01-07 2020-01-07 Method, device, storage medium and electronic equipment for displaying special effects Active CN111242881B (en)

Priority Applications (2)

Application Number Priority Date Filing Date Title
CN202010014709.2A CN111242881B (en) 2020-01-07 2020-01-07 Method, device, storage medium and electronic equipment for displaying special effects
PCT/CN2020/129296 WO2021139408A1 (en) 2020-01-07 2020-11-17 Method and apparatus for displaying special effect, and storage medium and electronic device

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202010014709.2A CN111242881B (en) 2020-01-07 2020-01-07 Method, device, storage medium and electronic equipment for displaying special effects

Publications (2)

Publication Number Publication Date
CN111242881A (en) 2020-06-05
CN111242881B (en) 2021-01-12

Family

ID=70879891

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010014709.2A Active CN111242881B (en) 2020-01-07 2020-01-07 Method, device, storage medium and electronic equipment for displaying special effects

Country Status (2)

Country Link
CN (1) CN111242881B (en)
WO (1) WO2021139408A1 (en)

Families Citing this family (16)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111242881B (en) * 2020-01-07 2021-01-12 北京字节跳动网络技术有限公司 Method, device, storage medium and electronic equipment for displaying special effects
CN111899192B (en) * 2020-07-23 2022-02-01 北京字节跳动网络技术有限公司 Interaction method, interaction device, electronic equipment and computer-readable storage medium
CN112037339B (en) * 2020-09-01 2024-01-19 抖音视界有限公司 Image processing method, apparatus and storage medium
CN112188058A (en) * 2020-09-29 2021-01-05 努比亚技术有限公司 Video shooting method, mobile terminal and computer storage medium
CN112954206B (en) * 2021-02-05 2022-08-16 上海富彭展览展示服务有限公司 Virtual makeup display method, system, device and storage medium thereof
CN113038228B (en) * 2021-02-25 2023-05-30 广州方硅信息技术有限公司 Virtual gift transmission and request method, device, equipment and medium thereof
CN113160031A (en) * 2021-03-26 2021-07-23 北京达佳互联信息技术有限公司 Image processing method, image processing device, electronic equipment and storage medium
CN113068072A (en) * 2021-03-30 2021-07-02 北京达佳互联信息技术有限公司 Video playing method, device and equipment
CN113744414B (en) * 2021-09-06 2022-06-28 北京百度网讯科技有限公司 Image processing method, device, equipment and storage medium
CN114222191B (en) * 2021-11-24 2023-09-08 星际数科科技股份有限公司 Chat reloading video playing system
CN114331887A (en) * 2021-12-23 2022-04-12 北京达佳互联信息技术有限公司 Video special effect processing method and device, electronic equipment and storage medium
CN114531553B (en) * 2022-02-11 2024-02-09 北京字跳网络技术有限公司 Method, device, electronic equipment and storage medium for generating special effect video
CN114598823A (en) * 2022-03-11 2022-06-07 北京字跳网络技术有限公司 Special effect video generation method and device, electronic equipment and storage medium
CN114895831A (en) * 2022-04-28 2022-08-12 北京达佳互联信息技术有限公司 Virtual resource display method and device, electronic equipment and storage medium
CN115239575B (en) * 2022-06-06 2023-10-27 荣耀终端有限公司 Beautifying method and device
CN115937010B (en) * 2022-08-17 2023-10-27 北京字跳网络技术有限公司 Image processing method, device, equipment and medium

Family Cites Families (21)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
KR101319667B1 (en) * 2012-05-24 2013-10-17 삼성에스디에스 주식회사 System and method for providing augmented reality and method for tracking region of interest used in the same
EP3101027B1 (en) * 2013-12-09 2018-11-07 Olymvax Biopharmaceuticals Inc. Staphylococcus aureus spa5 mutant, composition comprising mutant and preparation method and use thereof
WO2016151888A1 (en) * 2015-03-26 2016-09-29 オリンパス株式会社 Image processing device
CN104966318B (en) * 2015-06-18 2017-09-22 清华大学 Augmented reality method with imaging importing and image special effect function
CN105049911B (en) * 2015-07-10 2017-12-29 西安理工大学 A kind of special video effect processing method based on recognition of face
CN105657538B (en) * 2015-12-31 2019-01-08 杭州雅乐互动科技有限公司 One kind carrying out synthetic method and device to video file by mobile terminal
CN106875431B (en) * 2017-02-10 2020-03-17 成都弥知科技有限公司 Image tracking method with movement prediction and augmented reality implementation method
CN108961361B (en) * 2017-05-27 2023-06-27 天津方正手迹数字技术有限公司 Method and system for generating special effect text image and computer equipment
CN107909628A (en) * 2017-10-27 2018-04-13 广西小草信息产业有限责任公司 A kind of word processor and method
CN108965740B (en) * 2018-07-11 2020-10-30 深圳超多维科技有限公司 Real-time video face changing method, device, equipment and storage medium
CN108958610A (en) * 2018-07-27 2018-12-07 北京微播视界科技有限公司 Special efficacy generation method, device and electronic equipment based on face
CN109147023A (en) * 2018-07-27 2019-01-04 北京微播视界科技有限公司 Three-dimensional special efficacy generation method, device and electronic equipment based on face
CN109672830B (en) * 2018-12-24 2020-09-04 北京达佳互联信息技术有限公司 Image processing method, image processing device, electronic equipment and storage medium
CN109710255B (en) * 2018-12-24 2022-07-12 网易(杭州)网络有限公司 Special effect processing method, special effect processing device, electronic device and storage medium
CN109859102B (en) * 2019-02-01 2021-07-23 北京达佳互联信息技术有限公司 Special effect display method, device, terminal and storage medium
RU2707369C1 (en) * 2019-02-27 2019-11-26 Федеральное государственное бюджетное образовательное учреждение высшего образования "Самарский государственный медицинский университет" Министерства здравоохранения Российской Федерации Method for preparing and performing a surgical operation using augmented reality and a complex of equipment for its implementation
CN110047030B (en) * 2019-04-10 2023-05-16 网易(杭州)网络有限公司 Periodic special effect generation method and device, electronic equipment and storage medium
CN110012352B (en) * 2019-04-17 2020-07-24 广州华多网络科技有限公司 Image special effect processing method and device and video live broadcast terminal
CN110084204B (en) * 2019-04-29 2020-11-24 北京字节跳动网络技术有限公司 Image processing method and device based on target object posture and electronic equipment
CN110503703B (en) * 2019-08-27 2023-10-13 北京百度网讯科技有限公司 Method and apparatus for generating image
CN111242881B (en) * 2020-01-07 2021-01-12 北京字节跳动网络技术有限公司 Method, device, storage medium and electronic equipment for displaying special effects

Also Published As

Publication number Publication date
CN111242881A (en) 2020-06-05
WO2021139408A1 (en) 2021-07-15

Similar Documents

Publication Publication Date Title
CN111242881B (en) Method, device, storage medium and electronic equipment for displaying special effects
CN110058685B (en) Virtual object display method and device, electronic equipment and computer-readable storage medium
CN112929582A (en) Special effect display method, device, equipment and medium
CN111243049B (en) Face image processing method and device, readable medium and electronic equipment
CN110062157B (en) Method and device for rendering image, electronic equipment and computer readable storage medium
CN110796664A (en) Image processing method, image processing device, electronic equipment and computer readable storage medium
CN111210485A (en) Image processing method and device, readable medium and electronic equipment
CN116310036A (en) Scene rendering method, device, equipment, computer readable storage medium and product
WO2022247630A1 (en) Image processing method and apparatus, electronic device and storage medium
CN109981989B (en) Method and device for rendering image, electronic equipment and computer readable storage medium
CN114842120A (en) Image rendering processing method, device, equipment and medium
US11494961B2 (en) Sticker generating method and apparatus, and medium and electronic device
CN114742856A (en) Video processing method, device, equipment and medium
CN111583329B (en) Augmented reality glasses display method and device, electronic equipment and storage medium
CN110047126B (en) Method, apparatus, electronic device, and computer-readable storage medium for rendering image
CN111833459A (en) Image processing method and device, electronic equipment and storage medium
CN113963000B (en) Image segmentation method, device, electronic equipment and program product
CN115358919A (en) Image processing method, device, equipment and storage medium
CN114723600A (en) Method, device, equipment, storage medium and program product for generating cosmetic special effect
CN116527993A (en) Video processing method, apparatus, electronic device, storage medium and program product
CN110807728B (en) Object display method and device, electronic equipment and computer-readable storage medium
KR102534449B1 (en) Image processing method, device, electronic device and computer readable storage medium
CN115937010B (en) Image processing method, device, equipment and medium
CN117746274A (en) Information processing method and device
CN117152385A (en) Image processing method, device, electronic equipment and storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant