CN115718551A - Special effect adding method and device, computing equipment and computer storage medium - Google Patents


Info

Publication number
CN115718551A
Authority
CN
China
Prior art keywords
special effect, position information, key point, target, screen
Legal status
Pending
Application number
CN202211473070.XA
Other languages
Chinese (zh)
Inventor
张鸣鸣
刘蓬
Current Assignee
Shanghai Bilibili Technology Co Ltd
Original Assignee
Shanghai Bilibili Technology Co Ltd
Application filed by Shanghai Bilibili Technology Co Ltd
Priority to CN202211473070.XA
Publication of CN115718551A

Landscapes

  • Processing Or Creating Images (AREA)

Abstract

The application discloses a special effect adding method and apparatus, a computing device, and a computer storage medium. The method comprises the following steps: monitoring a trigger operation by which a user adds a target special effect to a target key point of a to-be-processed entity object displayed on a terminal screen; in response to the trigger operation, determining the screen position information corresponding to the target key point; and adding the target special effect to the target key point of the entity object according to the screen position information, and dynamically displaying the target special effect according to the depth-of-field change parameter corresponding to the entity object in the terminal screen. With this scheme, special effects are added intelligently. Moreover, because the target special effect is displayed dynamically according to the depth-of-field change parameter instead of being presented statically, the user can intuitively see the target special effect change. This improves user interest, attracts more users to add special effects to entity objects, and thereby draws traffic.

Description

Special effect adding method and device, computing equipment and computer storage medium
Technical Field
The application relates to the technical field of computers, in particular to a special effect adding method, a special effect adding device, computing equipment and a computer storage medium.
Background
With the development of communication technology and terminal devices, mobile phones, tablet computers, and other terminals have become an indispensable part of people's work and life, and as terminal devices grow ever more popular, video interaction has become a main channel for communication and entertainment.
Take a user who purchases a figurine as an example. After the purchase, the user may shoot a video of the figurine and share it in the product's review area. While shooting, the user may wish to add corresponding special effects to the figurine; however, because the added special effects are presented statically, they lack interest, so the user's willingness to add them is weak and users churn.
Disclosure of Invention
The application aims to provide a special effect adding method and apparatus, a computing device, and a computer storage medium, so as to solve the problems in the prior art that a statically presented special effect lacks interest, which weakens users' willingness to add special effects and leads to user churn.
According to an aspect of an embodiment of the present application, there is provided a special effect adding method including:
monitoring a trigger operation by which a user adds a target special effect to a target key point of a to-be-processed entity object displayed on a terminal screen;
in response to the target special effect adding trigger operation, determining screen position information corresponding to the target key point;
and adding the target special effect to the target key point of the entity object to be processed according to the screen position information, and dynamically displaying the target special effect according to the depth-of-field change parameter corresponding to the entity object in the terminal screen.
Further, dynamically displaying the target special effect according to the depth-of-field change parameter corresponding to the entity object to be processed in the terminal screen further comprises:
if the depth-of-field change parameter corresponding to the entity object to be processed in the terminal screen changes from near to far, dynamically displaying the target special effect with its size shrinking from large to small;
and if the depth-of-field change parameter corresponding to the entity object to be processed in the terminal screen changes from far to near, dynamically displaying the target special effect with its size growing from small to large.
Further, before dynamically displaying the target special effect according to the depth-of-field variation parameter corresponding to the entity object to be processed in the terminal screen, the method further comprises the following steps:
dynamically calculating the coordinate distance between two preset reference key points according to the screen position information corresponding to the two preset reference key points, and dynamically determining the special effect size corresponding to the target special effect according to the coordinate distance;
the dynamic display of the target special effect according to the depth of field change parameters corresponding to the entity object to be processed in the terminal screen further comprises the following steps:
and dynamically displaying the target special effect according to the determined special effect size according to the corresponding depth of field change parameter of the entity object to be processed in the terminal screen.
Further, the method further comprises: and carrying out key point identification processing on the entity object to be processed to obtain the screen position information of each key point on the terminal screen.
Further, the step of performing key point identification processing on the entity object to be processed to obtain the screen position information of each key point on the terminal screen further comprises the following steps:
carrying out key point identification processing on the entity object to be processed to obtain the relative position information of each key point relative to the terminal screen;
and calculating the screen position information of each key point on the terminal screen according to the relative position information and the terminal screen size.
Further, performing key point identification processing on the entity object to be processed to obtain relative position information of each key point relative to the terminal screen, and calculating the screen position information of each key point on the terminal screen according to the relative position information and the terminal screen size further includes:
the frame layer carries out key point identification processing on the entity object to be processed to obtain the relative position information of each key point relative to the terminal screen, and transmits the relative position information corresponding to each key point to the page layer;
and the page layer calculates the screen position information of each key point on the terminal screen according to the relative position information and the terminal screen size.
Further, the relative position information includes screen width relative position information and screen height relative position information; the screen position information includes: screen width position information and screen height position information;
calculating the screen position information of each key point on the terminal screen according to the relative position information and the terminal screen size further comprises:
multiplying the relative position information of the screen width by the width value of the terminal screen to obtain the screen width position information of each key point on the terminal screen;
and multiplying the relative position information of the screen height by the height value of the terminal screen to obtain the screen height position information of each key point on the terminal screen.
Further, the monitoring of the target special effect addition triggering operation of the user on the target key point of the entity object to be processed displayed on the terminal screen further includes:
and monitoring the selection operation of the user on the special effect displayed by the terminal screen and the key point of the entity object to be processed, and determining a target key point and a target special effect according to the selection operation.
Further, monitoring the trigger operation by which the user adds the target special effect to the target key point of the entity object to be processed displayed on the terminal screen further comprises the following steps:
judging whether any key point in the entity object to be processed displayed on the terminal screen has a preset action or not;
if yes, determining the key point with the preset action as a target key point, and determining the special effect associated with the preset action as a target special effect corresponding to the target key point.
Further, the method further comprises: and responding to the special effect video recording request, and recording the entity object video to be processed added with the target special effect.
Further, the method further comprises: and responding to a video one-key issuing request triggered by a user, and issuing the recorded entity object video to be processed added with the target special effect in a target page.
According to another aspect of the embodiments of the present application, there is provided a special effect adding apparatus, including:
the monitoring module, adapted to monitor a trigger operation by which a user adds a target special effect to a target key point of an entity object to be processed displayed on a terminal screen;
the determining module is suitable for responding to the target special effect adding triggering operation and determining the screen position information corresponding to the target key point;
the setting module is suitable for adding the target special effect to a target key point of the entity object to be processed according to the screen position information;
and the dynamic display module is suitable for dynamically displaying the target special effect according to the corresponding depth of field change parameters of the entity object to be processed in the terminal screen.
According to yet another aspect of embodiments herein, there is provided a computing device comprising: a processor, a memory, a communication interface, and a communication bus, wherein the processor, the memory, and the communication interface communicate with one another through the communication bus;
the memory is used for storing at least one executable instruction, and the executable instruction enables the processor to execute the operation corresponding to the special effect adding method.
According to another aspect of the embodiments of the present application, a computer storage medium is provided, where at least one executable instruction is stored in the computer storage medium, and the executable instruction causes a processor to perform an operation corresponding to the above special effect adding method.
According to the scheme provided by the embodiments of the application, special effects are added intelligently. In addition, the target special effect is displayed dynamically according to the depth-of-field change parameter instead of being presented statically, so the user can intuitively see the target special effect change. This improves user interest, attracts more users to add special effects to entity objects to be processed, and thereby draws traffic.
The foregoing description is only an overview of the technical solutions of the present application. In order that the technical means of the present application may be more clearly understood and implemented according to the content of the specification, and in order to make the above and other objects, features, and advantages of the present application more readily apparent, detailed embodiments of the present application are set forth below.
Drawings
Various other advantages and benefits will become apparent to those of ordinary skill in the art upon reading the following detailed description of the preferred embodiments. The drawings are only for purposes of illustrating the preferred embodiments and are not to be construed as limiting the application. Also, like reference numerals are used to refer to like parts throughout the drawings. In the drawings:
FIG. 1 shows a flow diagram of a special effects addition method according to one embodiment of the present application;
FIG. 2A shows a schematic flow diagram of a special effects addition method according to another embodiment of the present application;
FIG. 2B is a schematic diagram of a humanoid figurine displayed on the terminal screen;
FIG. 2C is a diagram of a special effects panel displayed on the terminal screen;
FIG. 2D is a schematic view of an interface for recording a video;
FIG. 2E is a schematic illustration of video saving;
FIG. 3 illustrates a schematic structural diagram of a special effects addition apparatus according to one embodiment of the present application;
FIG. 4 shows a schematic structural diagram of a computing device according to an embodiment in the present application.
Detailed Description
Exemplary embodiments of the present application will be described in more detail below with reference to the accompanying drawings. While exemplary embodiments of the present application are shown in the drawings, it should be understood that the present application may be embodied in various forms and should not be limited to the embodiments set forth herein. Rather, these embodiments are provided so that this disclosure will be thorough and complete, and will fully convey the scope of the disclosure to those skilled in the art.
First, the noun terms to which one or more embodiments of the present application relate are explained.
MediaPipe: a multimedia machine learning model application framework developed and open-sourced by Google Research. It is a graph-based, cross-platform framework for building machine learning pipelines for multimodal (video, audio, and sensor) applications.
Cocos Creator: the cross-platform game engine is a light-weight, efficient and free-source-opening cross-platform game engine, is a real-time 3D content creation platform, supports 2D and 3D game development, and can provide a set of complete industry solutions in the fields of HMI, ioT, XR, virtual figures and the like.
Fig. 1 shows a schematic flow diagram of a special effect addition method according to an embodiment of the present application.
As shown in fig. 1, the method comprises the steps of:
step S101, monitoring a target special effect adding triggering operation of a user on a target key point of an entity object to be processed displayed on a terminal screen.
Specifically, the entity object to be processed refers to an entity object to which a special effect is to be added, for example a humanoid figurine, a pet or monster figurine, a static character, a dynamic character, and the like. A key point is a key part of the entity object to be processed; taking a humanoid figurine as an example, the key points may be body parts such as the facial features, hands, feet, and shoulders of the figurine, mainly covering 33 key body parts, which are not listed one by one here.
In general, the entity object to be processed is displayed through a terminal device, and the user can intuitively view it on the terminal screen. The target key point is the key point to which the user wants to add a special effect, and the target special effect is the specific special effect the user wants to add to that key point, for example a light-ball special effect or a flame special effect. In order to add the special effect in time, the trigger operation by which the user adds the special effect to the target key point of the entity object displayed on the terminal screen needs to be monitored.
And step S102, responding to the target special effect adding triggering operation, and determining the screen position information corresponding to the target key point.
In order that the target special effect can be accurately displayed at the user's target key point, in response to the target special effect adding trigger operation, the screen position information corresponding to the target key point needs to be determined. The screen position information reflects the actual position of the target key point on the terminal screen; by determining it, the special effect can be added accurately according to the screen position information, avoiding the problem of an inaccurate target special effect placement.
And step S103, adding the target special effect to a target key point of the entity object to be processed according to the screen position information, and dynamically displaying the target special effect according to the corresponding depth of field change parameter of the entity object to be processed in the terminal screen.
The target special effect is the special effect the user wants to add to the target key point. After the screen position information of the target key point is determined, the target special effect can be added to the target key point of the entity object to be processed according to that information. For example, if the target key point is the right hand of a humanoid figurine and the target special effect is a light-ball special effect, the light-ball special effect is added to the figurine's right hand; in terms of visual effect, the light ball appears to be emitted from the figurine's right hand.
The depth-of-field change parameter corresponding to the entity object to be processed in the terminal screen intuitively reflects how far the entity object, as seen by the user through the terminal screen, is from the screen. The target special effect added to the target key point can be displayed dynamically according to this parameter; that is, the target special effect changes dynamically as the depth-of-field change parameter changes.
According to the scheme provided by this embodiment, special effects are added intelligently. In addition, the target special effect is displayed dynamically according to the depth-of-field change parameter instead of being presented statically, so the user can intuitively see the target special effect change. This improves user interest, attracts more users to add special effects to entity objects to be processed, and thereby draws traffic.
Fig. 2A shows a schematic flow diagram of a special effect addition method according to another embodiment of the present application. As shown in fig. 2A, the method includes the steps of:
step S201, performing key point identification processing on the entity object to be processed to obtain the screen position information of each key point on the terminal screen.
In order to add special effects in a targeted manner, key point identification processing needs to be performed on the entity object to be processed displayed on the terminal screen; for example, the entity object may be processed with the MediaPipe framework. The identification processing yields the screen position information of each key point on the terminal screen, which reflects the actual position of that key point on the screen.
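As a rough illustration of this step (not part of the patent; the legacy @mediapipe/pose JavaScript API and CDN path below are assumptions on our part), the following TypeScript sketch obtains the 33 pose landmarks from a camera frame. Note that MediaPipe returns normalized coordinates, i.e. the "relative position information" variant described next:

import { Pose, Results } from '@mediapipe/pose';

// Sketch: run MediaPipe Pose on a video frame; each of the 33 returned
// landmarks carries x/y as fractions of the image size.
const pose = new Pose({
  locateFile: (file) => `https://cdn.jsdelivr.net/npm/@mediapipe/pose/${file}`,
});
pose.setOptions({ modelComplexity: 1, smoothLandmarks: true });
pose.onResults((results: Results) => {
  const landmarks = results.poseLandmarks ?? []; // 33 keypoints, x/y in [0, 1]
  console.log('nose relative position:', landmarks[0]?.x, landmarks[0]?.y);
});

async function identifyKeypoints(video: HTMLVideoElement): Promise<void> {
  await pose.send({ image: video });
}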
In a specific implementation, the screen position information of each key point on the terminal screen can be further determined by the following method: and carrying out key point identification processing on the entity object to be processed to obtain the relative position information of each key point relative to the terminal screen, and calculating the screen position information of each key point on the terminal screen according to the relative position information and the terminal screen size.
The relative position information of a key point with respect to the terminal screen is the key point's position expressed relative to the screen, and comprises screen-width relative position information and screen-height relative position information. Specifically, each component is expressed as a value between 0 and 1: for the width, 0 means the key point is at the leftmost edge of the screen and 1 means it is at the rightmost edge; for the height, 0 means the key point is at the bottom of the screen and 1 means it is at the top. Taking the nose as an example, relative position information of {x: 0.5, y: 0.5} means the nose's horizontal coordinate is 0.5 times the screen width and its vertical coordinate is 0.5 times the screen height; for another example, the relative position information may be {x: 0.4, y: 0.5}. This is by way of illustration only and not by way of limitation. The relative position information is determined from the entity object's current position on the terminal screen; when that position changes, the relative position information changes accordingly.
The terminal screen sizes of terminal devices of different brands and models may differ slightly. In order to add the target special effect accurately, after the relative position information is determined, the screen position information of each key point on the terminal screen is calculated from the relative position information and the terminal screen size. The calculated screen position information is the actual position of each key point of the entity object on the terminal screen, and includes screen width position information and screen height position information. For example, the screen-width relative position information may be multiplied by the terminal screen's width value to obtain the screen width position information of each key point, and the screen-height relative position information may be multiplied by the terminal screen's height value to obtain the screen height position information of each key point.
For example, when calculating the screen position information of the nose, multiply 0.4 by the terminal screen's width value to obtain the nose's screen width position, and multiply 0.5 by the terminal screen's height value to obtain its screen height position. If, say, the terminal screen is 8.09 cm wide and 14.39 cm high, the calculated result is the nose's screen position information on that screen.
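A minimal sketch of this multiplication, with illustrative names and a hypothetical 390 × 844 px screen (not from the patent):

interface Pt { x: number; y: number }

// Multiply the 0-to-1 relative position by the terminal screen size to get
// the key point's actual position on the screen.
function toScreenPosition(rel: Pt, screenWidth: number, screenHeight: number): Pt {
  return { x: rel.x * screenWidth, y: rel.y * screenHeight };
}

// The nose example from the text: relative position {x: 0.4, y: 0.5}.
const nose = toScreenPosition({ x: 0.4, y: 0.5 }, 390, 844);
console.log(nose); // { x: 156, y: 422 }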
The execution subject of the special effect adding method provided by this embodiment is a client. The client may be a page with a special effect adding function, which can be opened through a browser to add special effects. When a special effect is added, the screen position information of each key point on the terminal screen is calculated jointly by a framework layer and a page layer: the framework layer performs key point identification processing on the entity object to be processed to obtain the relative position information of each key point with respect to the terminal screen, and transmits the relative position information of each key point to the page layer; the page layer then calculates the screen position information of each key point on the terminal screen from the relative position information and the terminal screen size.
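The patent does not specify the transport between the two layers; as one hedged possibility, a framework layer embedding the page could push each frame's key points to the page layer via a message event, along these lines:

// Page layer (hypothetical transport): receive the relative positions
// pushed by the framework layer and scale them to screen positions.
interface Pt { x: number; y: number }

window.addEventListener('message', (event: MessageEvent<{ keypoints: Pt[] }>) => {
  const relative = event.data.keypoints; // 33 relative positions in [0, 1]
  const screenPositions = relative.map((k) => ({
    x: k.x * window.innerWidth,
    y: k.y * window.innerHeight,
  }));
  // ...use screenPositions to anchor and resize special effects...
});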
Step S202, monitoring the target special effect adding triggering operation of the target key point of the entity object to be processed displayed on the terminal screen by the user.
Specifically, this can be implemented as follows: the terminal screen may display a special effect panel showing all addable special effects and the key point identifiers of all key points of the entity object to be processed. The user can select, in the special effect panel, the special effect to add and the key point to add it to; after the user's selection of a displayed special effect and a key point of the entity object is monitored, the target key point and the target special effect can be determined from that selection operation.
Alternatively, it is determined whether any key point of the entity object to be processed displayed on the terminal screen performs a preset action; if so, the key point performing the preset action is determined as the target key point, and the special effect associated with that preset action is determined as the target special effect for the target key point. Specifically, the association between preset actions and special effects is configured in advance and can be set flexibly as needed. For example, if making a fist is associated with the flame special effect and extending a palm is associated with the light-ball special effect, then when some key point of the entity object makes a fist, that key point is determined as the target key point and the flame special effect associated with the fist-making action is determined as the target special effect.
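A sketch of such a preset association table (the action names, detection hook, and attachEffect helper are hypothetical):

type PresetAction = 'makeFist' | 'extendPalm';
type EffectName = 'flame' | 'lightBall';

// Preset association from the example above: a fist triggers the flame
// effect, an extended palm triggers the light-ball effect.
const actionToEffect: Record<PresetAction, EffectName> = {
  makeFist: 'flame',
  extendPalm: 'lightBall',
};

// When a preset action is detected at a key point, that key point becomes
// the target key point and the associated effect the target special effect.
declare function attachEffect(keypointIndex: number, effect: EffectName): void;

function onActionDetected(keypointIndex: number, action: PresetAction): void {
  attachEffect(keypointIndex, actionToEffect[action]);
}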
Step S203, responding to the target special effect adding triggering operation, and determining the screen position information corresponding to the target key point.
In order that the target special effect can be accurately displayed at the user's target key point, in response to the target special effect adding trigger operation, the screen position information corresponding to the target key point needs to be determined. Since the screen position information of each key point of the entity object to be processed was already determined in step S201, once the target key point for special effect addition is determined, its screen position information can be obtained directly. The screen position information reflects the actual position of the target key point on the terminal screen; by determining it, the special effect can be added accurately, avoiding the problem of an inaccurate target special effect placement.
Step S204, dynamically calculating the coordinate distance between the two reference key points according to the screen position information corresponding to the two preset reference key points, and dynamically determining the special effect size corresponding to the target special effect according to the coordinate distance.
The reference key points are selected mainly with stability in mind: the two reference key points should not undergo bending, displacement, or similar deformation relative to the body. Taking a humanoid figurine as an example, the two selected reference key points are the right shoulder and the left hip.
The screen position information of the two reference key points was determined in the preceding step. When the entity object to be processed is displaced in depth relative to the terminal screen, the coordinate distance between the two preset reference key points is dynamically calculated from their screen position information. For example, if the coordinates of the right shoulder (point A) and the left hip (point B) are {x1, y1} and {x2, y2} respectively, the distance between the two points can be calculated with the following formula:
distance(A, B) = √((x2 − x1)² + (y2 − y1)²)
The distance between the two reference key points also reflects how far the entity object to be processed is from the terminal screen: the farther the entity object is from the screen, the smaller the distance between the two points; the closer it is, the larger the distance. Therefore, the depth-of-field change parameter corresponding to the entity object in the terminal screen can likewise be determined from the coordinate distance between the two reference key points. If the coordinate distance decreases from large to small, the depth-of-field change parameter changes from near to far; if the coordinate distance increases from small to large, the parameter changes from far to near.
In order to make the special effect size of the added target special effect match the size of the entity object currently displayed on the terminal screen, the special effect size corresponding to the target special effect is determined dynamically from the coordinate distance. For example, the special effect size may equal the product of the coordinate distance between the two reference key points and a preset value, where the preset value can be set flexibly according to practical experience, for example 5.
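Putting the formula and the preset multiplier together, a minimal sketch (names illustrative):

interface Pt { x: number; y: number }

// Euclidean distance between the two reference key points, e.g. right
// shoulder (A) and left hip (B), recomputed as their positions change.
function referenceDistance(a: Pt, b: Pt): number {
  return Math.hypot(b.x - a.x, b.y - a.y);
}

const PRESET_FACTOR = 5; // the preset value from the example above

// Special effect size = coordinate distance × preset value.
function targetEffectSize(rightShoulder: Pt, leftHip: Pt): number {
  return referenceDistance(rightShoulder, leftHip) * PRESET_FACTOR;
}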
And step S205, adding the target special effect to the target key point of the entity object to be processed according to the screen position information.
The target special effect is the special effect the user wants to add to the target key point. After the screen position information of the target key point is determined, the actual position of the target key point on the terminal screen can be determined from that information, and the target special effect is then added at the corresponding position. In terms of visual effect, the target special effect is added to the target key point of the entity object; for example, if the target key point is the right hand of a humanoid figurine and the target special effect is a light-ball special effect, the light-ball effect is added to the figurine's right hand. The target special effect may include a sequence-frame effect and a particle effect.
Step S206: dynamically displaying the target special effect at the determined special effect size, according to the depth-of-field change parameter corresponding to the entity object to be processed in the terminal screen.
In terms of visual effect, the target special effect changes with the distance between the entity object to be processed and the terminal screen. That distance can be described by the depth-of-field change parameter: if the parameter changes from near to far, the target special effect is displayed dynamically with its size shrinking from large to small; if the parameter changes from far to near, it is displayed with its size growing from small to large. In other words, the closer the entity object is to the terminal screen, the larger the special effect size, and the farther away it is, the smaller the size. It should be noted that the entity object's distance from the screen is also reflected in its displayed size: the closer the entity object is, the larger it appears. Here, the special effect size of the whole target special effect changes together with the size of the entity object, rather than remaining static and unchanging as target special effects do in the prior art.
In this embodiment, the screen position information of the target key point can be recalculated periodically, so that the displayed position of the target special effect is updated accordingly. The coordinate distance between the two reference key points likewise needs to be recalculated periodically, so that the special effect size of the target special effect is updated accordingly.
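A sketch of such a periodic update using Cocos Creator 3.x node calls (setPosition and setScale are real Cocos Creator APIs; the base-size constant and the surrounding names are assumptions):

import { Node } from 'cc'; // Cocos Creator 3.x

interface Pt { x: number; y: number }

const PRESET_FACTOR = 5;
const BASE_EFFECT_SIZE = 100; // assumed size at which the effect has scale 1

// Run on a timer or every frame: move the effect to the target key point's
// latest screen position, then rescale it from the reference-key-point
// distance so it tracks the figurine's depth-of-field changes.
function updateTargetEffect(effect: Node, target: Pt, rightShoulder: Pt, leftHip: Pt): void {
  effect.setPosition(target.x, target.y);
  const size = Math.hypot(leftHip.x - rightShoulder.x, leftHip.y - rightShoulder.y) * PRESET_FACTOR;
  const scale = size / BASE_EFFECT_SIZE;
  effect.setScale(scale, scale);
}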
In an alternative embodiment of the present application, the method further comprises: and responding to the special effect video recording request, and recording the entity object video to be processed added with the target special effect.
Specifically, a video recording button may be displayed on the terminal screen. After the user clicks the button, this is regarded as a special effect video recording request, and the entity object video with the target special effect added is recorded in response to that request. After recording finishes, the recorded video can be saved automatically on the terminal device.
In an alternative embodiment of the present application, the method further comprises: and responding to a video one-key issuing request triggered by a user, and issuing the recorded entity object video to be processed added with the target special effect in a target page.
Specifically, the video recording function may be built in the client, and the client may integrate a one-key publishing capability, for example, a publishing button is provided, and a user may directly publish the to-be-processed entity object video added with the target special effect in the target page by clicking the publishing button, thereby reducing user operation steps. For example, the recorded video of the entity object to be processed with the target special effect added thereto may be directly published in the review area of the commodity.
In an optional embodiment of the present application, the target special effect may change not only in size but also in form. It may be determined, from the screen position information corresponding to the target key point, whether the entity object to be processed exhibits a preset behavior within a preset time; if so, the special effect form of the target special effect at the target key point is updated. Specifically, the movement distance of the target key point within the preset time is calculated from its screen position information, and whether a preset behavior (for example, a pushing behavior) exists is determined based on that movement distance.
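One hedged way to implement the movement-distance check (the sampling scheme and threshold are assumptions):

interface Pt { x: number; y: number }

// Sample the target key point's screen position over the preset time
// window; if the distance travelled exceeds a threshold, treat it as a
// preset behavior (e.g. a push) and update the effect's form.
function hasPresetBehavior(samples: Pt[], thresholdPx: number): boolean {
  if (samples.length < 2) return false;
  const first = samples[0];
  const last = samples[samples.length - 1];
  return Math.hypot(last.x - first.x, last.y - first.y) >= thresholdPx;
}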
In addition, it can be determined whether the action corresponding to the target key point changes from a first action to a second action; if so, the special effect form of the target special effect at the target key point is updated according to the second action. For example, if the target key point is the right hand of a humanoid figurine, the right hand changes from an extended palm to a fist, and the target special effect is a diamond, the diamond may change from an intact form to a shattered form when the right hand makes the fist.
The following description, with reference to fig. 2B to fig. 2E, takes the recording of a figurine video with special effects added as an example:
specifically, when a user wants to record a video added with a special human-shaped handheld, a corresponding client can be opened, for example, the client is a game page, and the game page comprises a framework layer and an H5 game layer.
When the user triggers the operation of opening the game, the framework layer opens the game and the H5 game layer loads the game engine and starts the game. After the game starts, the framework layer is notified to open the camera, and it does so upon receiving the notification; the figurine can then be displayed on the terminal screen through the camera, as shown in fig. 2B. The camera captures the figurine's data, and the framework layer analyzes the captured data with MediaPipe; after packaging the results, it obtains the relative position information of the 33 human-body key points with respect to the terminal screen and transmits it to the H5 game layer, which determines the screen position information of each key point from the relative position information. The terminal screen displays a special effect panel, as shown in fig. 2C, in which the user selects the desired special effect; here the user selects a light-ball special effect for the right hand. The H5 game layer creates the corresponding light-ball effect and places it at the right-hand position, where the effect size of the light-ball effect is determined from the coordinate distance between the reference key points, namely the right shoulder and the left hip. When the user clicks the video recording button, the UI is hidden automatically and screen recording starts, as shown in fig. 2D: the H5 game layer calls the framework's video recording interface, the framework layer records the video and notifies the H5 game layer when recording finishes, and the H5 game layer obtains the recording result, calls the album saving interface, and saves the recorded video to the album, as shown in fig. 2E. Changes in the figurine's depth of field in the terminal screen affect the special effect size of the light-ball effect.
In addition, the special effect adding method provided by the embodiment can also add a special effect to the recorded video.
According to the scheme provided by this embodiment, special effects are added intelligently. In addition, the target special effect is displayed dynamically according to the depth-of-field change parameter instead of being presented statically, so the user can intuitively see the target special effect change; this improves user interest, attracts more users to add special effects to entity objects to be processed, and thereby draws traffic. Because a one-key publishing function is integrated, the user can conveniently publish videos with special effects added directly to the target page, which reduces the user's operation steps and effectively increases the user's willingness to share recorded special effect videos.
Fig. 3 shows a schematic structural diagram of an effect adding apparatus according to an embodiment of the present application.
As shown in fig. 3, the apparatus includes:
the monitoring module 301 is suitable for monitoring the target special effect addition triggering operation of a user on the target key point of the entity object to be processed displayed on the terminal screen;
the determining module 302 is adapted to determine screen position information corresponding to a target key point in response to a target special effect addition triggering operation;
the setting module 303 is adapted to add the target special effect to a target key point of the entity object to be processed according to the screen position information;
and the dynamic display module 304 is adapted to dynamically display the target special effect according to the depth of field change parameter corresponding to the entity object to be processed in the terminal screen.
Optionally, the dynamic presentation module is further adapted to: if the depth of field change parameter corresponding to the entity object to be processed in the terminal screen is from near to far, dynamically displaying the target special effect according to the mode that the size of the special effect is from large to small;
and if the depth of field change parameters corresponding to the entity object to be processed in the terminal screen are from far to near, dynamically displaying the target special effect according to the mode that the size of the special effect is from small to large.
Optionally, the apparatus further comprises: the calculation module is suitable for dynamically calculating the coordinate distance between two reference key points according to the screen position information corresponding to the two preset reference key points and dynamically determining the special effect size corresponding to the target special effect according to the coordinate distance;
the dynamic presentation module is further adapted to: and dynamically displaying the target special effect according to the determined special effect size according to the corresponding depth of field change parameter of the entity object to be processed in the terminal screen.
Optionally, the apparatus further comprises: and the identification module is suitable for identifying key points of the entity object to be processed to obtain the screen position information of each key point on the terminal screen.
Optionally, the identification module is further adapted to: carrying out key point identification processing on the entity object to be processed to obtain the relative position information of each key point relative to the terminal screen;
and calculating the screen position information of each key point on the terminal screen according to the relative position information and the terminal screen size.
Optionally, the identification module is further adapted to: the frame layer carries out key point identification processing on the entity object to be processed to obtain the relative position information of each key point relative to the terminal screen, and transmits the relative position information corresponding to each key point to the page layer;
and the page layer calculates the screen position information of each key point on the terminal screen according to the relative position information and the terminal screen size.
Optionally, the relative position information includes screen width relative position information and screen height relative position information; the screen position information includes: screen width position information and screen height position information;
the identification module is further adapted to: multiplying the relative position information of the screen width by the width value of the terminal screen to obtain the screen width position information of each key point on the terminal screen;
and multiplying the relative position information of the screen height by the height value of the terminal screen to obtain the screen height position information of each key point on the terminal screen.
Optionally, the monitoring module is further adapted to: and monitoring the selection operation of the user on the special effect displayed by the terminal screen and the key point of the entity object to be processed, and determining a target key point and a target special effect according to the selection operation.
Optionally, the monitoring module is further adapted to: judging whether any key point in the entity object to be processed displayed on the terminal screen has a preset action or not;
if yes, determining the key point with the preset action as a target key point, and determining the special effect associated with the preset action as a target special effect corresponding to the target key point.
Optionally, the apparatus further comprises: and the recording module is suitable for responding to the special effect video recording request and recording the entity object video to be processed added with the target special effect.
Optionally, the apparatus further comprises: and the publishing module is suitable for responding to a video one-key publishing request triggered by a user and publishing the recorded entity object video to be processed added with the target special effect in the target page.
According to the scheme provided by this embodiment, special effects are added intelligently. In addition, the target special effect is displayed dynamically according to the depth-of-field change parameter instead of being presented statically, so the user can intuitively see the target special effect change. This improves user interest, attracts more users to add special effects to entity objects to be processed, and thereby draws traffic.
The embodiment of the application also provides a nonvolatile computer storage medium, wherein the computer storage medium stores at least one executable instruction, and the computer executable instruction can execute the special effect adding method in any method embodiment.
Fig. 4 is a schematic structural diagram of a computing device according to an embodiment of the present application, and the specific embodiment of the present application does not limit the specific implementation of the computing device.
As shown in fig. 4, the computing device may include: a processor (processor) 402, a Communications Interface 404, a memory 406, and a Communications bus 408.
Wherein: the processor 402, communication interface 404, and memory 406 communicate with each other via a communication bus 408.
A communication interface 404 for communicating with network elements of other devices, such as clients or other servers.
The processor 402 is configured to execute the program 410, and may specifically perform relevant steps in the above-described special effect addition method embodiment.
In particular, program 410 may include program code comprising computer operating instructions.
The processor 402 may be a central processing unit (CPU), an application-specific integrated circuit (ASIC), or one or more integrated circuits configured to implement embodiments of the present application. The computing device includes one or more processors, which may be of the same type, such as one or more CPUs, or of different types, such as one or more CPUs and one or more ASICs.
And a memory 406 for storing a program 410. Memory 406 may comprise high-speed RAM memory, and may also include non-volatile memory (non-volatile memory), such as at least one disk memory.
The program 410 may specifically be configured to cause the processor 402 to execute the special effect adding method in any of the above-described method embodiments. For specific implementation of each step in the program 410, reference may be made to corresponding steps and corresponding descriptions in units in the above special effect addition embodiments, which are not described herein again. It can be clearly understood by those skilled in the art that, for convenience and simplicity of description, the specific working processes of the above-described devices and modules may refer to the corresponding process descriptions in the foregoing method embodiments, and are not described herein again.
The algorithms or displays presented herein are not inherently related to any particular computer, virtual system, or other apparatus. Various general purpose systems may also be used with the teachings herein. The required structure for constructing such a system will be apparent from the description above. In addition, embodiments of the present application are not directed to any particular programming language. It will be appreciated that a variety of programming languages may be used to implement the teachings of the present application as described herein, and any descriptions of specific languages are provided above to disclose the best modes of the present application.
In the description provided herein, numerous specific details are set forth. However, it is understood that embodiments of the application may be practiced without these specific details. In some instances, well-known methods, structures and techniques have not been shown in detail in order not to obscure an understanding of this description.
Similarly, it should be appreciated that in the foregoing description of exemplary embodiments of the application, various features of the embodiments are sometimes grouped together in a single embodiment, figure, or description thereof for the purpose of streamlining the application and aiding in the understanding of one or more of the various inventive aspects. However, this method of disclosure is not to be interpreted as reflecting an intention that the claimed application requires more features than are expressly recited in each claim. Rather, as the following claims reflect, inventive aspects lie in less than all features of a single foregoing disclosed embodiment. Thus, the claims following the detailed description are hereby expressly incorporated into this detailed description, with each claim standing on its own as a separate embodiment of this application.
Those skilled in the art will appreciate that the modules in the devices in an embodiment may be adaptively changed and arranged in one or more devices different from the embodiment. The modules or units or components of the embodiments may be combined into one module or unit or component, and furthermore they may be divided into a plurality of sub-modules or sub-units or sub-components. All of the features disclosed in this specification (including any accompanying claims, abstract and drawings), and all of the processes or elements of any method or apparatus so disclosed, may be combined in any combination, except combinations where at least some of such features and/or processes or elements are mutually exclusive. Each feature disclosed in this specification (including any accompanying claims, abstract and drawings) may be replaced by alternative features serving the same, equivalent or similar purpose, unless expressly stated otherwise.
Moreover, those skilled in the art will appreciate that while some embodiments herein include some features included in other embodiments, rather than other features, combinations of features of different embodiments are meant to be within the scope of the application and form different embodiments. For example, in the following claims, any of the claimed embodiments may be used in any combination.
The various component embodiments of the present application may be implemented in hardware, or in software modules running on one or more processors, or in a combination thereof. Those skilled in the art will appreciate that a microprocessor or Digital Signal Processor (DSP) may be used in practice to implement some or all of the functionality of some or all of the components according to embodiments of the present application. The present application may also be embodied as apparatus or device programs (e.g., computer programs and computer program products) for performing a portion or all of the methods described herein. Such programs implementing the present application may be stored on a computer readable medium or may be in the form of one or more signals. Such a signal may be downloaded from an internet website or provided on a carrier signal or in any other form.
It should be noted that the above-mentioned embodiments illustrate rather than limit the application, and that those skilled in the art will be able to design alternative embodiments without departing from the scope of the appended claims. In the claims, any reference signs placed between parentheses shall not be construed as limiting the claim. The word "comprising" does not exclude the presence of elements or steps not listed in a claim. The word "a" or "an" preceding an element does not exclude the presence of a plurality of such elements. The application may be implemented by means of hardware comprising several distinct elements, and by means of a suitably programmed computer. In claims enumerating several means, several of these means may be embodied by one and the same item of hardware. The use of the words first, second, third, and so on does not indicate any ordering; these words may be interpreted as names. The steps in the above embodiments should not be construed as limited to the order of execution described, unless otherwise specified.

Claims (14)

1. A special effect addition method, comprising:
monitoring target special effect adding triggering operation of a user on a target key point of an entity object to be processed displayed on a terminal screen;
responding to the target special effect adding triggering operation, and determining screen position information corresponding to the target key point;
and adding the target special effect to a target key point of the entity object to be processed according to the screen position information, and dynamically displaying the target special effect according to a depth-of-field change parameter corresponding to the entity object to be processed in a terminal screen.
2. The method according to claim 1, wherein dynamically displaying the target special effect according to the depth-of-field change parameter corresponding to the entity object to be processed in the terminal screen further comprises:
if the depth-of-field change parameter corresponding to the entity object to be processed in the terminal screen indicates a change from near to far, dynamically displaying the target special effect with its size decreasing from large to small;
and if the depth-of-field change parameter corresponding to the entity object to be processed in the terminal screen indicates a change from far to near, dynamically displaying the target special effect with its size increasing from small to large.
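A sketch of claim 2's two branches, assuming depth is sampled per frame; the linear step and the `sensitivity` constant are illustrative, since the claim fixes only the direction of the size change.

```typescript
// Shrink the effect as the object recedes (near -> far) and grow it as
// the object approaches (far -> near).
function nextEffectScale(prevDepth: number, currDepth: number, prevScale: number): number {
  const sensitivity = 0.05; // illustrative constant, not from the patent
  const delta = sensitivity * Math.abs(currDepth - prevDepth);
  return currDepth > prevDepth
    ? Math.max(prevScale - delta, 0.1) // near -> far: large to small
    : prevScale + delta;               // far -> near: small to large
}
```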
3. The method according to claim 1 or 2, wherein before dynamically displaying the target special effect according to the depth-of-field change parameter corresponding to the entity object to be processed in the terminal screen, the method further comprises:
dynamically calculating a coordinate distance between two preset reference key points according to the screen position information corresponding to the two preset reference key points, and dynamically determining a special effect size corresponding to the target special effect according to the coordinate distance;
and dynamically displaying the target special effect according to the depth-of-field change parameter corresponding to the entity object to be processed in the terminal screen further comprises:
dynamically displaying the target special effect at the determined special effect size according to the depth-of-field change parameter corresponding to the entity object to be processed in the terminal screen.
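Claim 3's sizing step reduces to a Euclidean distance plus a proportional mapping. A minimal sketch, assuming a `sizePerPixel` constant that the patent does not specify:

```typescript
// The effect size tracks the on-screen distance between two preset
// reference key points (e.g. the two eyes for a face effect).
interface ScreenPos { x: number; y: number }

function keyPointDistance(a: ScreenPos, b: ScreenPos): number {
  return Math.hypot(a.x - b.x, a.y - b.y); // Euclidean coordinate distance
}

function effectSizeFromReference(a: ScreenPos, b: ScreenPos, sizePerPixel = 0.5): number {
  return keyPointDistance(a, b) * sizePerPixel; // illustrative proportionality
}
```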
4. The method according to any one of claims 1-3, further comprising: performing key point identification processing on the entity object to be processed to obtain the screen position information of each key point on the terminal screen.
5. The method according to claim 4, wherein performing the key point identification processing on the entity object to be processed to obtain the screen position information of each key point on the terminal screen further comprises:
performing key point identification processing on the entity object to be processed to obtain relative position information of each key point with respect to the terminal screen;
and calculating the screen position information of each key point on the terminal screen according to the relative position information and the terminal screen size.
6. The method according to claim 5, wherein performing the key point identification processing on the entity object to be processed to obtain the relative position information of each key point with respect to the terminal screen, and calculating the screen position information of each key point on the terminal screen according to the relative position information and the terminal screen size, further comprises:
a frame layer performing the key point identification processing on the entity object to be processed to obtain the relative position information of each key point with respect to the terminal screen, and transmitting the relative position information corresponding to each key point to a page layer;
and the page layer calculating the screen position information of each key point on the terminal screen according to the relative position information and the terminal screen size.
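Claim 6 splits the work between a native frame layer and a web page layer. A sketch of the page-layer side, assuming the frame layer delivers recognition results via a generic `postMessage`-style bridge; the message shape is hypothetical:

```typescript
// Page-layer side: receive per-key-point relative positions (fractions of
// the screen size) from the frame layer and convert them to pixels.
interface RelativePos { relX: number; relY: number } // fractions in [0, 1]
type KeyPointMsg = { keyPointId: string; pos: RelativePos };

window.addEventListener("message", (ev: MessageEvent<KeyPointMsg[]>) => {
  for (const kp of ev.data) {
    const x = kp.pos.relX * window.innerWidth;  // screen width position
    const y = kp.pos.relY * window.innerHeight; // screen height position
    // ...position the effect element for kp.keyPointId at (x, y)
  }
});
```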
7. The method according to claim 5 or 6, wherein the relative position information comprises screen-width relative position information and screen-height relative position information, and the screen position information comprises screen-width position information and screen-height position information;
and calculating the screen position information of each key point on the terminal screen according to the relative position information and the terminal screen size further comprises:
multiplying the screen-width relative position information by the width value of the terminal screen to obtain the screen-width position information of each key point on the terminal screen;
and multiplying the screen-height relative position information by the height value of the terminal screen to obtain the screen-height position information of each key point on the terminal screen.
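The claim-7 conversion is a pair of multiplications; a pure-function sketch with illustrative names:

```typescript
// Relative position is a fraction of the screen's width/height, so
// multiplying by the screen dimensions yields absolute pixel coordinates.
interface RelativePos { relWidth: number; relHeight: number }
interface ScreenPos { x: number; y: number }

function toScreenPos(rel: RelativePos, screenW: number, screenH: number): ScreenPos {
  return {
    x: rel.relWidth * screenW,  // screen width position
    y: rel.relHeight * screenH, // screen height position
  };
}

// e.g. a key point at (0.5, 0.25) on a 1080x2400 screen maps to (540, 600).
```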
8. The method according to any one of claims 1-7, wherein monitoring the target special effect addition trigger operation of the user on the target key point of the entity object to be processed displayed on the terminal screen further comprises:
monitoring a selection operation performed by the user on the special effects displayed on the terminal screen and on the key points of the entity object to be processed, and determining the target key point and the target special effect according to the selection operation.
9. The method according to any one of claims 1-7, wherein monitoring the target special effect addition trigger operation of the user on the target key point of the entity object to be processed displayed on the terminal screen further comprises:
determining whether a preset action occurs at any key point of the entity object to be processed displayed on the terminal screen;
and if so, determining the key point at which the preset action occurs as the target key point, and determining the special effect associated with the preset action as the target special effect corresponding to the target key point.
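A sketch of claim 9's action-driven selection, assuming an upstream detector has already labelled key points with preset actions; the action names and the action-to-effect table are invented for illustration:

```typescript
// When a preset action is detected at a key point, that key point becomes
// the target and the effect associated with the action becomes the target
// special effect.
type Action = "blink" | "mouthOpen" | "nod"; // illustrative preset actions

const actionToEffect: Record<Action, string> = {
  blink: "sparkle",
  mouthOpen: "bubble",
  nod: "halo",
};

interface KeyPoint { id: string; action?: Action } // action set by a detector

function pickTarget(keyPoints: KeyPoint[]): { keyPointId: string; effectId: string } | null {
  for (const kp of keyPoints) {
    if (kp.action !== undefined) {
      return { keyPointId: kp.id, effectId: actionToEffect[kp.action] };
    }
  }
  return null; // no preset action detected
}
```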
10. The method according to any one of claims 1-9, further comprising: in response to a special effect video recording request, recording a video of the entity object to be processed to which the target special effect has been added.
11. The method according to claim 10, further comprising:
in response to a one-click video publishing request triggered by the user, publishing the recorded video of the entity object to be processed, to which the target special effect has been added, on a target page.
12. A special effect adding apparatus, comprising:
a monitoring module adapted to monitor a target special effect addition trigger operation performed by a user on a target key point of an entity object to be processed displayed on a terminal screen;
a determination module adapted to determine, in response to the target special effect addition trigger operation, screen position information corresponding to the target key point;
a setting module adapted to add the target special effect to the target key point of the entity object to be processed according to the screen position information;
and a dynamic display module adapted to dynamically display the target special effect according to the depth-of-field change parameter corresponding to the entity object to be processed in the terminal screen.
13. A computing device, comprising: a processor, a memory, a communication interface, and a communication bus, wherein the processor, the memory, and the communication interface communicate with one another via the communication bus;
the memory is configured to store at least one executable instruction, and the executable instruction causes the processor to perform operations corresponding to the special effect addition method according to any one of claims 1-11.
14. A computer storage medium having stored therein at least one executable instruction that causes a processor to perform operations corresponding to the special effect addition method according to any one of claims 1-11.
CN202211473070.XA 2022-11-21 2022-11-21 Special effect adding method and device, computing equipment and computer storage medium Pending CN115718551A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202211473070.XA CN115718551A (en) 2022-11-21 2022-11-21 Special effect adding method and device, computing equipment and computer storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202211473070.XA CN115718551A (en) 2022-11-21 2022-11-21 Special effect adding method and device, computing equipment and computer storage medium

Publications (1)

Publication Number Publication Date
CN115718551A true CN115718551A (en) 2023-02-28

Family

ID=85256060

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202211473070.XA Pending CN115718551A (en) 2022-11-21 2022-11-21 Special effect adding method and device, computing equipment and computer storage medium

Country Status (1)

Country Link
CN (1) CN115718551A (en)

Similar Documents

Publication Publication Date Title
CN111815755B (en) Method and device for determining blocked area of virtual object and terminal equipment
US9679416B2 (en) Content creation tool
EP3129871B1 (en) Generating a screenshot
KR101732839B1 (en) Segmentation of content delivery
AU2017334312B2 (en) Objective based advertisement placement platform
CN111950056B (en) BIM display method and related equipment for building informatization model
CN105512187B (en) Information display method and information display device based on display picture
US11575626B2 (en) Bidirectional bridge for web view
US10147240B2 (en) Product image processing method, and apparatus and system thereof
CN113891140A (en) Material editing method, device, equipment and storage medium
CN114116086A (en) Page editing method, device, equipment and storage medium
CN114387400A (en) Three-dimensional scene display method, display device, electronic equipment and server
CN111931106A (en) Data processing method and related device
CN108874141B (en) Somatosensory browsing method and device
CN115718551A (en) Special effect adding method and device, computing equipment and computer storage medium
CN111638819B (en) Comment display method, device, readable storage medium and system
CN109816485B (en) Page display method and device
CN111783470A (en) Display method of dynamic materials in reading page, computing equipment and storage medium
CN113112613B (en) Model display method and device, electronic equipment and storage medium
CN114153539B (en) Front-end application interface generation method and device, electronic equipment and storage medium
CN117648510B (en) Information display method, information display device, computer equipment and storage medium
CN113867872A (en) Project editing method, device, equipment and storage medium
CN116228514A (en) Rendering data processing method, rendering data processing device, computer equipment, rendering data processing medium and rendering data processing program product
CN114253646A (en) Digital sand table display and generation method, equipment and storage medium
CN115933929A (en) Online interaction method, device, equipment and storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination