CN111210486B - Method and device for realizing streamer effect - Google Patents

Method and device for realizing streamer effect

Info

Publication number
CN111210486B
Authority
CN
China
Prior art keywords
streamer
object model
rendering
user
map
Prior art date
Legal status
Active
Application number
CN202010014440.8A
Other languages
Chinese (zh)
Other versions
CN111210486A (en
Inventor
陈瑽
寇京博
李嘉乐
田吉亮
庄涛
杨凯允
陈嘉伟
殷宏亮
张峰
姚逸宁
徐丹
Current Assignee
Beijing Perfect Chijin Technology Co ltd
Original Assignee
Beijing Perfect Chijin Technology Co ltd
Priority date
Filing date
Publication date
Application filed by Beijing Perfect Chijin Technology Co ltd filed Critical Beijing Perfect Chijin Technology Co ltd
Priority to CN202010014440.8A priority Critical patent/CN111210486B/en
Publication of CN111210486A publication Critical patent/CN111210486A/en
Application granted granted Critical
Publication of CN111210486B publication Critical patent/CN111210486B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T11/002D [Two Dimensional] image generation
    • G06T11/001Texturing; Colouring; Generation of texture or colour

Abstract

The application provides a method and a device for realizing a streamer effect. The method comprises: in response to a load trigger operation for loading an object model, obtaining secondary texture map coordinate information of the object model, which stores texture map coordinate position information for streamer use; in response to a streamer rendering trigger operation on the object model, obtaining a sweep map and a background map for streamer rendering of the object model; and performing streamer rendering on the object model according to the secondary texture map coordinate information, the sweep map, and the background map. The scheme of the application achieves an efficient and natural streamer effect and supports user customization, so it can meet users' real-time and diversified personalization needs, improve user satisfaction, and greatly enhance enjoyment.

Description

Method and device for realizing streamer effect
Technical Field
The application relates to the technical field of computers, and in particular to a technical scheme for realizing a streamer effect.
Background
In the prior art, a common streamer technique is to make a streamer texture map, place it over the original map, and continuously update its position during rendering to produce a moving-texture streamer. However, the existing streamer technique has the following drawbacks: 1) in the existing technique, how finely the sweep map is made determines the final streamer effect; the periphery of the sweep map is usually blurred and feathered, so the edge of the streamer can never be sharp, and since the sweep map is made by artists in advance, no change is supported once art production is finished; 2) for complex models, for efficiency all surfaces of the model are usually unwrapped onto one plane by UV (texture map coordinates), so that on a single UV map the model's surfaces are packed together; for example, a shoulder part, a hand, and part of a foot may lie in the same row. Because this map serves as the base map of the sweep, the light does not flow over the model from top to bottom during the sweep as intended, and may appear at other positions.
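The conventional approach described above can be sketched as follows. This is a minimal illustration, not code from the patent: a small "sweep" texture scrolls across a fixed base map by offsetting its UV coordinate every frame, and the two are composited additively. All names are illustrative.

```python
# Conventional UV-scrolling streamer: the sweep texture's UV is offset by
# time * speed each frame while the base map stays fixed.

def sample(tex, u, v):
    """Nearest-neighbour sample of a grayscale texture (list of rows),
    wrapping UVs into [0, 1) so the sweep repeats as it scrolls."""
    h, w = len(tex), len(tex[0])
    x = int((u % 1.0) * w)
    y = int((v % 1.0) * h)
    return tex[y][x]

def shade(base, sweep, u, v, time, speed=0.5):
    """Composite the scrolling sweep highlight additively over the base map."""
    base_color = sample(base, u, v)
    # Only the sweep's UV is animated; the base map's UV is untouched.
    highlight = sample(sweep, u - time * speed, v)
    return min(1.0, base_color + highlight)

base = [[0.2] * 8 for _ in range(8)]                                    # dark base texture
sweep = [[1.0 if x == 0 else 0.0 for x in range(8)] for _ in range(8)]  # one bright column

# At t=0 the bright column sits over u=0; by t=0.25 it has scrolled away.
print(shade(base, sweep, 0.0, 0.5, time=0.0))   # 1.0 (sweep over this texel)
print(shade(base, sweep, 0.0, 0.5, time=0.25))  # 0.2 (sweep has moved on)
```

Because the sweep's position is driven only by a UV offset on the packed base map, the highlight lands wherever the unwrapped faces happen to sit, which is exactly the drawback 2) above.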
Disclosure of Invention
The purpose of the application is to provide a technical scheme for realizing a more efficient and natural streamer effect.
According to one embodiment of the present application, there is provided a method for achieving a streamer effect, wherein the method comprises:
in response to a load trigger operation for loading an object model, obtaining secondary texture map coordinate information of the object model, which stores texture map coordinate position information for streamer use;
in response to a streamer rendering trigger operation on the object model, obtaining a sweep map and a background map for streamer rendering of the object model;
and performing streamer rendering on the object model according to the secondary texture map coordinate information, the sweep map, and the background map.
There is also provided, in accordance with another embodiment of the present application, an apparatus for achieving a streamer effect, wherein the apparatus comprises:
means for obtaining, in response to a load trigger operation for loading an object model, secondary texture map coordinate information of the object model which stores texture map coordinate position information for streamer use;
means for obtaining, in response to a streamer rendering trigger operation on the object model, a sweep map and a background map for streamer rendering of the object model;
and means for performing streamer rendering on the object model according to the secondary texture map coordinate information, the sweep map, and the background map.
According to another embodiment of the present application, there is also provided a computer apparatus, wherein the computer apparatus comprises: a memory for storing one or more programs; and one or more processors coupled to the memory, wherein the one or more programs, when executed by the one or more processors, cause the one or more processors to perform operations comprising:
in response to a load trigger operation for loading an object model, obtaining secondary texture map coordinate information of the object model, which stores texture map coordinate position information for streamer use;
in response to a streamer rendering trigger operation on the object model, obtaining a sweep map and a background map for streamer rendering of the object model;
and performing streamer rendering on the object model according to the secondary texture map coordinate information, the sweep map, and the background map.
According to another embodiment of the present application, there is also provided a computer-readable storage medium having stored thereon a computer program executable by a processor to perform operations comprising:
in response to a load trigger operation for loading an object model, obtaining secondary texture map coordinate information of the object model, which stores texture map coordinate position information for streamer use;
in response to a streamer rendering trigger operation on the object model, obtaining a sweep map and a background map for streamer rendering of the object model;
and performing streamer rendering on the object model according to the secondary texture map coordinate information, the sweep map, and the background map.
Compared with the prior art, the application has the following advantages: 1) by adding secondary texture map coordinate information that stores texture map coordinate position information for streamer use, that is, by adding a secondary UV space when the object model is made, an efficient streamer effect can be realized according to the secondary texture map coordinate information, and the streamer appears naturally at the specified positions and nowhere else; 2) the efficient streamer effect is achieved while good performance is maintained; 3) user-defined sweep maps and/or background maps are supported, so the streamer effect the user expects can be realized flexibly from user-defined content, meeting users' real-time and diversified personalization needs, improving user satisfaction, and greatly enhancing enjoyment.
Drawings
Other features, objects and advantages of the present application will become more apparent upon reading of the detailed description of non-limiting embodiments, made with reference to the following drawings, in which:
FIG. 1 shows a flow diagram of a method for implementing streamer effects according to one embodiment of the present application;
FIG. 2 shows a schematic structural diagram of an apparatus for achieving streamer effects according to one embodiment of the present application;
FIG. 3 illustrates an exemplary system that may be used to implement various embodiments described herein.
The same or similar reference numbers in the drawings refer to the same or similar parts.
Detailed Description
Before discussing exemplary embodiments in more detail, it should be mentioned that some exemplary embodiments are described as processes or methods depicted as flowcharts. Although a flowchart depicts operations as a sequential process, many of the operations can be performed in parallel, concurrently, or at the same time. Furthermore, the order of the operations may be rearranged. The process may be terminated when its operations are completed, but may have additional steps not included in the figures. The processes may correspond to methods, functions, procedures, subroutines, and the like.
In this context, the term "device" refers to an intelligent electronic device that can execute a predetermined process such as numerical computation and/or logic computation by executing a predetermined program or instruction, and may include a processor and a memory, where the predetermined process is executed by the processor executing a program instruction pre-stored in the memory, or the predetermined process is executed by hardware such as an Application Specific Integrated Circuit (ASIC), a Field Programmable Gate Array (FPGA), a Digital Signal Processor (DSP), or a combination of the two.
The technical scheme of the application is mainly implemented by computer equipment, which includes network devices and user devices. Network devices include, but are not limited to, a single network server, a server group of multiple network servers, or a cloud of numerous computers or network servers based on cloud computing, where cloud computing is a form of distributed computing in which a super virtual computer is composed of a group of loosely coupled computers. User devices include, but are not limited to, PCs, tablet computers, smartphones, IPTVs, PDAs, wearable devices, and the like. The computer equipment may run independently to implement the application, or may access a network and implement the application through interaction with other computer devices in the network. The network in which the computer equipment resides includes, but is not limited to, the Internet, a wide area network, a metropolitan area network, a local area network, a VPN, a wireless ad hoc network, and the like.
It should be noted that the above-mentioned computer device is only an example, and other computer devices that may be present in the present application or may appear in the future are also included in the scope of the present application and are incorporated herein by reference.
The methods discussed later herein (some of which are illustrated by flowcharts) may be implemented by hardware, software, firmware, middleware, microcode, hardware description languages, or any combination thereof. When implemented in software, firmware, middleware or microcode, the program code or code segments to perform the necessary tasks may be stored in a machine or computer readable medium such as a storage medium. The processor(s) may perform the necessary tasks.
Specific structural and functional details disclosed herein are merely representative and are for purposes of describing example embodiments of the present application. This application may be embodied in many alternate forms and should not be construed as limited to only the embodiments set forth herein.
It will be understood that, although the terms "first," "second," etc. may be used herein to describe various elements, these elements should not be limited by these terms. These terms are only used to distinguish one element from another element. For example, a first element could be termed a second element, and, similarly, a second element could be termed a first element, without departing from the scope of example embodiments. The term "and/or" as used herein includes any and all combinations of one or more of the associated listed items.
The terminology used herein is for the purpose of describing particular embodiments only and is not intended to be limiting of example embodiments. As used herein, the singular forms "a", "an" and "the" are intended to include the plural forms as well, unless the context clearly indicates otherwise. It will be further understood that the terms "comprises" and/or "comprising," when used herein, specify the presence of stated features, integers, steps, operations, elements, and/or components, but do not preclude the presence or addition of one or more other features, integers, steps, operations, elements, components, and/or groups thereof.
It should also be noted that, in some alternative implementations, the functions/acts noted may occur out of the order noted in the figures. For example, two steps shown in succession may in fact be executed substantially concurrently, or may sometimes be executed in the reverse order, depending upon the functionality/acts involved.
The present application is described in further detail below with reference to the accompanying drawings.
It should be noted that the technical solution for realizing the streamer effect of the present application can be applied to any object model in an application scene that may use streamer, such as a character model in a game scene, a vehicle model, and the like, which is not limited in this application.
Prior to executing the solution of the present application for achieving the streamer effect, art pre-production is required to support it. When a model map is made, especially for a complex model, the UV (texture map coordinate) mapping must be unwrapped: the different surfaces of the object model are placed on the same planar map and, in order to fill a rectangular planar map, the surfaces are finely adjusted, scaled, rotated, and fitted into blank positions so that the UV map is as small as possible. The art pre-production process required by the present application is exemplarily described below:
1) When an object model is created, if it needs or supports a streamer effect, secondary texture map coordinate information (hereinafter also called secondary UV information) is added to it; the secondary UV information may be named, for example, UV2. In the secondary UV information, only the surfaces that need or support the streamer effect are unwrapped, none of the surfaces is rotated, surfaces that should be swept at the same time are placed on the same row, and no map needs to be drawn or placed on this UV; that is, it stores only texture map coordinate position information (hereinafter also called UV position information) for streamer use. The object model is then saved and exported. A model exported this way has two UV spaces: one corresponds to the primary UV information (hereinafter also called primary texture map coordinate information), containing the UV positions and the various maps placed on the UV; the other corresponds to the secondary UV information described above for streamer use, which contains only UV position information.
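The two-UV-space layout described above can be illustrated with a toy mesh structure. This is not the patent's file format; the field and face names are assumptions chosen to mirror the description: UV1 holds the full textured unwrap, while UV2 stores only coordinate positions for the streamer-enabled faces, row-aligned and unrotated.

```python
# Illustrative mesh carrying two UV sets: UV1 for texturing, UV2 for streamer.
from dataclasses import dataclass, field

@dataclass
class Mesh:
    name: str
    uv1: dict = field(default_factory=dict)  # face -> (u, v): full unwrap, textured
    uv2: dict = field(default_factory=dict)  # face -> (u, v): streamer faces only, no map

    def supports_streamer(self) -> bool:
        # A model opts into streamer simply by carrying a secondary UV set.
        return bool(self.uv2)

model = Mesh("character_A")
model.uv1 = {"head": (0.1, 0.9), "clothing": (0.4, 0.5), "boots": (0.7, 0.2)}
# Faces that should be swept together share a row (same v) in UV2;
# faces without streamer, like "head", simply do not appear here.
model.uv2 = {"clothing": (0.0, 0.5), "boots": (0.5, 0.5)}

print(model.supports_streamer())  # True
```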
2) One or more selectable sweep maps are made and named for use in the streamer effect, using substantially the same method as conventional streamer production. The selectable sweep maps may have any shape and style; for example, selectable sweep maps of various shapes such as snowflake, diamond, circle, and star can be prefabricated, and fixed or selectable styles are defined for each shape, e.g. the snowflake sweep map supports several selectable colors and sweep directions. Each selectable sweep map may also be regarded as a sweep map template that the user can customize.
3) One or more selectable background maps are made and used as textures for the secondary UV information, so a selectable background map may also be called a secondary UV map. Each selectable background map may also be regarded as a background map template that the user can customize.
4) Some parameter information corresponding to the sweep map and the background map is set. In some embodiments, this information is edited in the loader: in a scene where the object model needs streamer, parameters such as "flow mode" and "flow color" are set and specified in the loader of the object model. "Flow mode" must be set to the name of the secondary UV information, such as UV2, and the streamer cannot be used if it is changed; the other streamer parameters may be given default values in advance, changed at runtime, or customized by the user. In some embodiments, the streamer parameters include, but are not limited to: sweep speed, sweep position offset, sweep rotation angle and scaling multiple, background map movement speed, background map scaling multiple, streamer start time, streamer end time, streamer interval time, and so on.
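The parameter set listed above might be bundled roughly as follows. This is a hedged sketch: the field names are illustrative, not the patent's identifiers; the one constraint taken from the text is that the flow mode must name the secondary UV set.

```python
# Illustrative per-model streamer parameters; only flow_mode is mandatory
# and must equal the name of the secondary UV set (e.g. "UV2").
from dataclasses import dataclass

@dataclass
class StreamerParams:
    flow_mode: str                  # must name the secondary UV set
    flow_color: str = "#FFFFFF"
    sweep_speed: float = 1.0        # sweep-map scroll speed
    sweep_offset: float = 0.0       # positional offset of the sweep
    sweep_rotation: float = 0.0     # rotation angle of the sweep, degrees
    sweep_scale: float = 1.0        # scaling multiple of the sweep map
    background_speed: float = 0.0   # background-map movement speed
    background_scale: float = 1.0   # scaling multiple of the background map
    start_time: float = 0.0
    end_time: float = float("inf")
    interval: float = 0.0           # pause between streamer passes

    def validate(self, uv_set_names):
        # Mirrors the constraint above: without a matching UV set name the
        # streamer cannot run, while all other fields keep editable defaults.
        if self.flow_mode not in uv_set_names:
            raise ValueError(f"flow_mode {self.flow_mode!r} names no UV set")

params = StreamerParams(flow_mode="UV2", sweep_speed=2.0)
params.validate({"UV1", "UV2"})  # passes; would raise if "UV2" were absent
```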
After this art pre-production process is complete, the solution of the present application can be used to achieve the streamer effect.
Fig. 1 shows a flow diagram of a method for implementing a streamer effect according to an embodiment of the present application. The method of this embodiment includes steps S11, S12, and S13. In step S11, the computer device, in response to a load trigger operation for loading an object model, obtains secondary texture map coordinate information of the object model that stores texture map coordinate position information for streamer use; in step S12, the computer device, in response to a streamer rendering trigger operation on the object model, obtains a sweep map and a background map for streamer rendering of the object model; in step S13, the computer device performs streamer rendering on the object model according to the secondary texture map coordinate information, the sweep map, and the background map.
In step S11, the computer device, in response to a load trigger operation for loading an object model, obtains secondary texture map coordinate information of the object model that stores texture map coordinate position information for streamer use. In some embodiments, the object model is a 3D (three-dimensional) model that needs or supports streamer and includes at least one component that needs or supports streamer; the components may be completely separate or may have a physical or visual connection. Optionally, the at least one component is located at the same level of the object model; preferably, at an outer level of the object model. In some embodiments, the secondary UV information is stored in a model loader. In some embodiments, the UV position information corresponds to one or more streamer-enabled components of the object model; for example, if the streamer-enabled components of an object model include clothing, boots, and a sword, the secondary UV position information includes the texture map coordinate positions corresponding to the clothing, boots, and sword.
The load trigger operation is any operation used to trigger loading of the object model. In some embodiments, the load trigger operation is triggered by predetermined program logic executing in the computer device, such as program logic in a gaming application; in other embodiments, it is performed by a user, such as a predetermined operation (e.g. a click, voice, or gesture operation) performed by the user in the gaming application to load the object model. In some embodiments, the computer device, in response to a load trigger operation for loading an object model, loads the object model and obtains its secondary texture map coordinate information storing texture map coordinate position information for streamer use. As one example, a user performs a click operation to load virtual character A in a game application; in response, the computer device loads the model of character A and obtains the secondary UV information of that model.
In step S12, the computer device, in response to a streamer rendering trigger operation on the object model, obtains a sweep map and a background map for streamer rendering of the object model. The streamer rendering trigger operation is any operation used to trigger streamer rendering of the object model. In some embodiments, it is triggered by predetermined program logic executing in the computer device, such as program logic in a gaming application; in other embodiments, it is performed by a user, such as a predetermined operation (e.g. a click, voice, or gesture operation) performed by the user in a gaming application to trigger streamer rendering of the object model. In some embodiments, the streamer rendering trigger operation is directed at the entire object model, i.e. it triggers streamer rendering of all streamer-enabled components in the object model; in other embodiments, it is directed at one or more specific target components, i.e. it triggers streamer rendering of only those components among all streamer-enabled components of the object model.
The sweep map is a map that achieves the light-sweeping effect by moving while the streamer runs, and the background map is the background texture map of the sweep map; that is, the sweep map moves over the background texture map during the streamer. Optionally, the sweep map and/or the background map may be fully customized by the user, partially customized on the basis of prefabricated selectable sweep maps and background maps (e.g. text content added to a prefabricated selectable background map to generate a background map), selected directly by the user from the prefabricated selectable sweep maps and background maps, or defaulted by the system (i.e. without user participation); the present application does not limit how the sweep map and background map are obtained.
In step S13, the computer device performs streamer rendering on the object model according to the secondary texture map coordinate information, the sweep map, and the background map. In some embodiments, the computer device uses the sweep map together with the background map, the secondary UV information, and a predetermined algorithm to effect the streamer rendering of the object model. In some embodiments, the computer device streamer-renders all streamer-enabled components of the object model according to the secondary texture map coordinate information, the sweep map, and the background map. In some embodiments, the computer device streamer-renders one or more target components among all streamer-enabled components of the object model; the target components may be determined in step S13 or before it, e.g. in the aforementioned step S12 based on the streamer rendering trigger operation. In some embodiments, the computer device may determine whether the streamer applies to the entire object model or to one or more target components based on the current scene or the streamer rendering trigger operation; for example, if the current scene is a reload (outfit-change) scene, only the virtual character's clothing is streamer-rendered. In some embodiments, the computer device may determine this based on a user indication, e.g. determining from the user's indication to streamer-render only certain pieces of the virtual character's equipment.
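A minimal sketch of step S13 follows. This is not the patent's algorithm verbatim; the names are assumptions. The key idea it illustrates is that the sweep and background maps are sampled in secondary-UV space, and only faces that have a secondary-UV entry can light up, so the streamer never appears outside the specified positions.

```python
# Streamer shading in secondary-UV space: faces without a UV2 entry keep
# their base color; faces with one get background * scrolling sweep added.

def streamer_shade(base_color, uv2, sweep, background, time, speed=1.0):
    """uv2 is the face's secondary-UV coordinate, or None if the face
    does not support streamer."""
    if uv2 is None:
        return base_color
    u, v = uv2
    bg = background((u, v))                      # background texture in UV2 space
    glow = sweep(((u - time * speed) % 1.0, v))  # sweep map scrolls over it
    return min(1.0, base_color + bg * glow)

# Toy textures as functions: a faint background and a sweep bright near u=0.
background = lambda uv: 0.5
sweep = lambda uv: 1.0 if uv[0] < 0.25 else 0.0

print(streamer_shade(0.3, None, sweep, background, time=0.0))        # 0.3 (no UV2)
print(streamer_shade(0.3, (0.1, 0.5), sweep, background, time=0.0))  # 0.8 (lit)
```

Compared with the conventional approach in the Background section, the animation is driven entirely by UV2 positions, not by the packed primary UV layout.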
In some embodiments, step S11 comprises: in response to a load trigger operation for loading the object model, detecting whether secondary texture map coordinate information exists in the object model; if so, obtaining the primary texture map coordinate information of the object model and the secondary texture map coordinate information storing texture map coordinate position information for streamer use; otherwise, obtaining only the primary texture map coordinate information of the object model. In some embodiments, the computer device, in response to the load trigger operation, first detects whether the secondary texture map coordinate information exists in the object model, and if so, judges whether the current scene needs or supports streamer; if the current scene needs or supports streamer, it obtains both the primary and secondary texture map coordinate information of the object model, otherwise it obtains only the primary texture map coordinate information and does not obtain the secondary UV information. It should be noted that the streamer-enabled components of the object model may differ between scenes.
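The loading branch described above can be sketched as a small function (names are assumptions, not the patent's API): the primary UV info is always fetched, while the secondary UV info is fetched only when the model carries it and the current scene supports streamer.

```python
# Conditional fetch of UV sets on a load trigger.

def load_uv_info(model, scene_supports_streamer: bool):
    """Return the UV sets to load for this model in the current scene."""
    uv_sets = {"primary": model["uv1"]}
    # Secondary UV is optional: load it only when present AND usable.
    if "uv2" in model and scene_supports_streamer:
        uv_sets["secondary"] = model["uv2"]
    return uv_sets

model = {"uv1": {"head": (0.1, 0.9)}, "uv2": {"clothing": (0.0, 0.5)}}

print(sorted(load_uv_info(model, scene_supports_streamer=True)))   # both sets
print(sorted(load_uv_info(model, scene_supports_streamer=False)))  # primary only
```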
In some embodiments, step S12 comprises: in response to a streamer rendering trigger operation on the object model performed by a user, obtaining a sweep map and a background map for streamer rendering of the object model. As one example, the streamer rendering trigger operation performed by the user includes a reload operation: when the user performs a reload operation in the gaming application on the virtual character model he or she uses, the computer device, in response, obtains a sweep map and a background map for streamer rendering of that virtual character model.
In some embodiments, obtaining, in response to a streamer rendering trigger operation performed by a user on the object model, a sweep map and a background map for streamer rendering of the object model comprises: in response to the user's streamer rendering trigger operation on the object model, presenting a streamer effect configuration interface corresponding to the object model; in response to a sweep map configuration operation performed by the user on the streamer effect configuration interface, obtaining a sweep map for streamer rendering of the object model; and/or, in response to a background map configuration operation performed by the user on the streamer effect configuration interface, obtaining a background map for streamer rendering of the object model. In some embodiments, the streamer effect configuration interface is used to perform configuration operations related to the streamer effect, including but not limited to: sweep map configuration operations, background map configuration operations, configuration operations on streamer-effect parameters, streamer-related selection operations, and so on. The sweep map configuration operation includes any configuration operation related to a sweep map; in some embodiments this includes, but is not limited to, a selection operation for selecting a sweep map, and a sweep map parameter selection or configuration operation for configuring sweep map parameter information.
The background map configuration operation includes any configuration operation associated with a background map; in some embodiments this includes, but is not limited to, a selection operation for selecting a background map, a background map parameter selection or configuration operation for configuring background map parameter information, and a content input operation for customizing text or picture content.
In some embodiments, the background map configuration operation includes a content input operation and a background map parameter selection operation, and obtaining, in response to the background map configuration operation performed by the user on the streamer effect configuration interface, a background map for streamer rendering of the object model comprises: in response to a content input operation performed by the user on the streamer effect configuration interface, obtaining the user-defined content information input by the user; in response to a background map parameter selection operation performed by the user on the streamer effect configuration interface, obtaining the background map parameter information specified by the user; and generating a background map for streamer rendering of the object model according to the user-defined content information and the background map parameter information. The user-defined content information includes any content information customized by the user for the background map; in some embodiments, it includes custom text content and/or picture content. As an example, the computer device, in response to a content input operation performed by the user on the streamer effect configuration interface, obtains the custom text "no double under the sun" and a texture map M input by the user, and, in response to a background map parameter selection operation performed by the user on the interface, obtains user-specified background map parameter information such as streamer direction, tiling mode, and color; the computer device then generates a background map for streamer rendering of the object model according to the custom text "no double under the sun", the texture map M, and the user-specified background map parameter information.
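The customization flow above can be sketched as assembling a background-map specification from the user's inputs. This is illustrative only: the function, parameter names, and the set of tiling modes are assumptions, not the patent's API; a real renderer would rasterize such a spec into a texture.

```python
# Bundle user-defined content and user-specified parameters into a
# background-map spec for later rasterization.

def build_background_spec(text, texture_name, direction="left-to-right",
                          tiling="repeat", color="#FFD700"):
    allowed_tiling = {"repeat", "mirror", "clamp"}  # assumed option set
    if tiling not in allowed_tiling:
        raise ValueError(f"unknown tiling mode: {tiling}")
    return {
        "text": text,             # custom content drawn onto the map
        "texture": texture_name,  # base texture map, e.g. the user's map M
        "direction": direction,   # streamer direction over the background
        "tiling": tiling,
        "color": color,
    }

spec = build_background_spec("no double under the sun", "texture_M",
                             direction="top-to-bottom", tiling="repeat")
print(spec["direction"])  # top-to-bottom
```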
In some embodiments, the obtaining, in response to a background map configuration operation performed by the user at the streamer effect configuration interface, a background map for streamer rendering of the object model includes: obtaining, in response to the background map configuration operation performed by the user at the streamer effect configuration interface, a background map selected by the user from a plurality of selectable background maps for streamer rendering of the object model. On this basis, the user can directly select one of a plurality of pre-made selectable background maps as the background map for streamer rendering of the object model. In some embodiments, the computer device obtains a background map selected by the user from a plurality of selectable background maps in response to the background map configuration operation performed by the user at the streamer effect configuration interface, obtains the custom text content input by the user in response to a content input operation performed by the user at the interface, obtains the background map parameter information specified by the user in response to a background map parameter selection operation performed by the user at the interface, and generates a background map for streamer rendering of the object model from the custom text content, the selected background map and the background map parameter information.
In some embodiments, the obtaining, in response to a sweep map configuration operation performed by the user at the streamer effect configuration interface, a sweep map for streamer rendering of the object model includes: obtaining, in response to the sweep map configuration operation performed by the user at the streamer effect configuration interface, a sweep map selected by the user from a plurality of selectable sweep maps for streamer rendering of the object model. On this basis, the user can directly select one of a plurality of pre-made selectable sweep maps as the sweep map for streamer rendering of the object model; in some embodiments, the shape and style of the streamer are determined by which sweep map the user selects.
In some embodiments, the obtaining, in response to a streamer rendering trigger operation performed by the user on the object model, a sweep map and a background map for streamer rendering of the object model includes: obtaining, in response to the streamer rendering trigger operation performed by the user on the object model, one or more target components, and obtaining a sweep map and a background map for streamer rendering of the one or more target components. In this case, the step S13 includes: extracting target texture map coordinate position information corresponding to the one or more target components from the texture map coordinate position information; and performing streamer rendering on the one or more target components according to the target texture map coordinate position information, the sweep map and the background map. In some embodiments, streamer rendering trigger operations performed by the user in different scenes correspond to different target components; for example, the target component of a trigger operation performed in a clothes-changing scene is the changed clothes, the target component of a trigger operation performed in an equipment-changing scene is the changed equipment, and the target components of a trigger operation performed in a flight scene are all components that support streamer.
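The extraction step above can be sketched as follows; the per-component dictionary layout for the texture map coordinate position information is an assumption made only for illustration.

```python
# Illustrative sketch: extract the target texture map coordinate position
# information for the selected target components, so streamer rendering can
# be restricted to those components. Component names are hypothetical.
def extract_target_uv_info(uv_position_info, target_components):
    """Keep only the UV position entries belonging to the target components."""
    return {name: uv_position_info[name]
            for name in target_components if name in uv_position_info}

# Secondary-UV position info recorded when the object model was authored.
uv_info = {
    "wings": [(0.0, 0.0), (0.5, 0.5)],
    "armor": [(0.5, 0.0), (1.0, 0.5)],
    "boots": [(0.0, 0.5), (0.5, 1.0)],
}
targets = extract_target_uv_info(uv_info, ["wings", "armor"])
print(sorted(targets))  # prints ['armor', 'wings']
```

Only the extracted entries are then handed to the rendering pass together with the sweep map and background map, so streamer never touches components outside the target set.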
In some embodiments, the user may directly perform a predetermined operation on a certain component of the object model to trigger streamer rendering on that component; that is, the predetermined operation serves as the streamer rendering trigger operation, and the target component is the component at which the predetermined operation is directed. For example, when the user performs a double-click operation on a wing of a virtual character model used in a game, it may be determined in response to the double-click operation that the target component requiring streamer is the wing of the virtual character model. In some embodiments, where there are multiple target components that need streamer, the multiple target components may use the same sweep map and background map, or different sweep maps and background maps; for example, target components that have a joint relationship may use the same sweep map and background map, while two target components that are separated from each other may use different ones, in which case the streamer effect configuration interface supports a separate streamer configuration for each target component.
In some embodiments, the obtaining one or more target components in response to a streamer rendering trigger operation performed by the user on the object model includes: presenting, in response to the streamer rendering trigger operation performed by the user on the object model, a streamer selection interface comprising at least one streamer-supporting component of the object model; and obtaining, in response to a selection operation performed by the user in the streamer selection interface, one or more target components selected by the user from the at least one component. The streamer selection interface enables the user to select, from the at least one component, the target components that require streamer. On this basis, the user can choose the one or more target components of the object model that need streamer, so the streamer effect is realized more flexibly and meets the personalized requirements of the user.
In some embodiments, the obtaining one or more target components in response to a streamer rendering trigger operation performed by the user on the object model includes: obtaining, in response to the streamer rendering trigger operation performed by the user on the object model, one or more target components corresponding to the operation according to the operation information of the operation. In some embodiments, the operation information includes any information related to the streamer rendering operation, such as the area to which the operation corresponds and the type of the operation. As one example, the object model is divided into a plurality of areas, and in response to a streamer rendering trigger operation performed on the object model by the user, the streamer-supporting components in the area corresponding to the operation are used as the one or more target components.
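The area-based example above might be sketched as below; the region names, component names and the division of the model into two areas are all hypothetical stand-ins for this illustration.

```python
# Illustrative sketch: resolve target components from operation information.
# The model is divided into areas; streamer-supporting components inside the
# operated area become the target components.
REGION_COMPONENTS = {
    "upper": ["wings", "shoulder_armor"],
    "lower": ["boots", "leg_armor"],
}
STREAMER_SUPPORTED = {"wings", "boots"}  # components authored with streamer UV

def resolve_target_components(operation_region):
    """Return the streamer-supporting components inside the operated area."""
    return [c for c in REGION_COMPONENTS.get(operation_region, [])
            if c in STREAMER_SUPPORTED]

print(resolve_target_components("upper"))  # prints ['wings']
```

A trigger operation landing in an area with no streamer-supporting components simply yields an empty target set, so no streamer rendering is performed.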
In some embodiments, the step S13 includes: correcting the sweep map and optimizing its display according to the auxiliary texture map coordinate information and the sweep map parameter information corresponding to the sweep map; calculating streamer duration information and streamer speed information according to the time parameter information; modifying the streamer effect according to the background map parameter information corresponding to the background map; calculating the final streamer color; and performing streamer rendering on the object model according to the streamer color. In some embodiments, the computer device corrects the sweep map on the basis of the auxiliary texture map coordinate information according to the scaling, offset and rotation parameters of the sweep map, applies fade-in and fade-out effects according to the sweep rotation parameter to optimize display, calculates the start-end interval time and the end-end interval speed from each time parameter, modifies the effect using the background map parameter information, calculates the final streamer color (the sum of the sweep color and the background map color), and performs streamer rendering on the object model according to that streamer color.
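The arithmetic of this step can be sketched numerically. The order in which scale, offset and rotation are applied, and the per-channel clamping to [0, 1], are assumptions of this sketch rather than details prescribed by the scheme; the additive blend of sweep color and background color is as stated above.

```python
# Numeric sketch of step S13: correct a secondary-UV coordinate with the
# sweep map's scale/offset/rotation parameters, then compute the final
# streamer color as the (clamped) sum of sweep color and background color.
import math

def correct_uv(uv, scale=(1.0, 1.0), offset=(0.0, 0.0), rotation=0.0):
    """Apply the sweep map's scaling, offset and rotation parameters to a
    secondary texture map coordinate (assumed order: scale, offset, rotate)."""
    u = uv[0] * scale[0] + offset[0]
    v = uv[1] * scale[1] + offset[1]
    c, s = math.cos(rotation), math.sin(rotation)
    return (u * c - v * s, u * s + v * c)

def streamer_color(sweep_color, background_color):
    """Final streamer color = sweep color + background map color, with each
    channel clamped to [0, 1] (clamping is an assumption of this sketch)."""
    return tuple(min(1.0, a + b) for a, b in zip(sweep_color, background_color))

corrected = correct_uv((0.5, 0.5), scale=(2.0, 2.0), offset=(0.1, 0.0))
print(streamer_color((0.25, 0.25, 0.25), (0.25, 0.25, 0.25)))  # prints (0.5, 0.5, 0.5)
```

In a real shader, the corrected UV would be used to sample the sweep map, and the duration and speed computed from the time parameters would animate the offset each frame.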
According to the scheme of the application, by adding auxiliary texture map coordinate information for storing the texture map coordinate position information used by streamer, that is, by adding an auxiliary UV space when the object model is made, the streamer effect can be realized efficiently from the auxiliary texture map coordinate information, with the streamer appearing naturally at the specified position and never outside it. Moreover, this efficient streamer effect can be achieved while good performance is ensured. In addition, custom sweep maps and/or background maps are supported, so the streamer effect expected by the user can be realized flexibly from user-defined content, meeting the user's real-time and diversified personalized requirements, improving user satisfaction and greatly enhancing interest.
Fig. 2 shows a schematic structural diagram of an apparatus for realizing streamer effects according to an embodiment of the present application. The apparatus for realizing a streamer effect (hereinafter simply "streamer apparatus") includes: means for acquiring, in response to a loading trigger operation for loading an object model, auxiliary texture map coordinate information of the object model for storing texture map coordinate position information for streamer use (hereinafter simply "first device 11"); means for obtaining, in response to a streamer rendering trigger operation on the object model, a sweep map and a background map for streamer rendering of the object model (hereinafter simply "second device 12"); and means for performing streamer rendering on the object model according to the auxiliary texture map coordinate information, the sweep map and the background map (hereinafter simply "third device 13"). The operations performed by the first device 11, the second device 12 and the third device 13 are the same as or similar to those of the embodiment shown in fig. 1, and are therefore not described in detail here but are incorporated herein by reference.
In some embodiments, the first device 11 is configured to: in response to a loading trigger operation for loading the object model, detect whether auxiliary texture map coordinate information exists in the object model; if so, acquire the main texture map coordinate information of the object model together with the auxiliary texture map coordinate information for storing texture map coordinate position information for streamer use; otherwise, acquire only the main texture map coordinate information of the object model. The relevant operations are the same as or similar to those of the embodiment shown in fig. 1, and are therefore not described in detail here but are incorporated herein by reference.
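The load-time detection performed by the first device 11 might be sketched as below; the dictionary-based mesh stand-in and the channel names `uv0`/`uv1` are illustrative assumptions, not an engine API.

```python
# Illustrative sketch: on load, detect whether the object model carries a
# secondary texture-coordinate channel (the extra UV set reserved for
# streamer use). If present, load both channels; otherwise only the main one.
def load_uv_channels(mesh):
    """Return (primary_uv, secondary_uv); secondary_uv is None when the
    model was authored without the streamer UV channel."""
    primary = mesh["uv0"]          # main texture map coordinate information
    secondary = mesh.get("uv1")    # auxiliary channel may be absent
    return primary, secondary

with_streamer = {"uv0": [(0.0, 0.0)], "uv1": [(0.2, 0.3)]}
without_streamer = {"uv0": [(0.0, 0.0)]}
print(load_uv_channels(with_streamer)[1])  # prints [(0.2, 0.3)]
```

Models without the auxiliary channel simply fall back to ordinary rendering, since there is no streamer UV information to drive the effect.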
In some embodiments, the second device 12 is configured to: obtain, in response to a streamer rendering trigger operation performed by the user on the object model, a sweep map and a background map for streamer rendering of the object model. The relevant operations are the same as or similar to those of the embodiment shown in fig. 1, and are therefore not described in detail here but are incorporated herein by reference.
In some embodiments, the obtaining, in response to a streamer rendering trigger operation performed by the user on the object model, a sweep map and a background map for streamer rendering of the object model includes: presenting, in response to the streamer rendering trigger operation performed by the user on the object model, a streamer effect configuration interface corresponding to the object model; obtaining, in response to a sweep map configuration operation performed by the user at the streamer effect configuration interface, a sweep map for streamer rendering of the object model; and/or obtaining, in response to a background map configuration operation performed by the user at the streamer effect configuration interface, a background map for streamer rendering of the object model. The relevant operations are the same as or similar to those of the embodiment shown in fig. 1, and are therefore not described in detail here but are incorporated herein by reference.
In some embodiments, the background map configuration operation includes a content input operation and a background map parameter selection operation, and the obtaining, in response to the background map configuration operation performed by the user at the streamer effect configuration interface, a background map for streamer rendering of the object model includes: obtaining, in response to a content input operation performed by the user at the streamer effect configuration interface, the custom content information input by the user; obtaining, in response to a background map parameter selection operation performed by the user at the streamer effect configuration interface, the background map parameter information specified by the user; and generating, from the custom content information and the background map parameter information, a background map for streamer rendering of the object model. The relevant operations are the same as or similar to those of the embodiment shown in fig. 1, and are therefore not described in detail here but are incorporated herein by reference.
In some embodiments, the obtaining, in response to a background map configuration operation performed by the user at the streamer effect configuration interface, a background map for streamer rendering of the object model includes: obtaining, in response to the background map configuration operation performed by the user at the streamer effect configuration interface, a background map selected by the user from a plurality of selectable background maps for streamer rendering of the object model. The relevant operations are the same as or similar to those of the embodiment shown in fig. 1, and are therefore not described in detail here but are incorporated herein by reference.
In some embodiments, the obtaining, in response to a sweep map configuration operation performed by the user at the streamer effect configuration interface, a sweep map for streamer rendering of the object model includes: obtaining, in response to the sweep map configuration operation performed by the user at the streamer effect configuration interface, a sweep map selected by the user from a plurality of selectable sweep maps for streamer rendering of the object model. The relevant operations are the same as or similar to those of the embodiment shown in fig. 1, and are therefore not described in detail here but are incorporated herein by reference.
In some embodiments, the obtaining, in response to a streamer rendering trigger operation performed by the user on the object model, a sweep map and a background map for streamer rendering of the object model includes: obtaining, in response to the streamer rendering trigger operation performed by the user on the object model, one or more target components, and obtaining a sweep map and a background map for streamer rendering of the one or more target components; wherein the third device 13 is configured to: extract target texture map coordinate position information corresponding to the one or more target components from the texture map coordinate position information; and perform streamer rendering on the one or more target components according to the target texture map coordinate position information, the sweep map and the background map. The relevant operations are the same as or similar to those of the embodiment shown in fig. 1, and are therefore not described in detail here but are incorporated herein by reference.
In some embodiments, the obtaining one or more target components in response to a streamer rendering trigger operation performed by the user on the object model includes: presenting, in response to the streamer rendering trigger operation performed by the user on the object model, a streamer selection interface comprising at least one streamer-supporting component of the object model; and obtaining, in response to a selection operation performed by the user in the streamer selection interface, one or more target components selected by the user from the at least one component. The relevant operations are the same as or similar to those of the embodiment shown in fig. 1, and are therefore not described in detail here but are incorporated herein by reference.
In some embodiments, the obtaining one or more target components in response to a streamer rendering trigger operation performed by the user on the object model includes: obtaining, in response to the streamer rendering trigger operation performed by the user on the object model, one or more target components corresponding to the operation according to the operation information of the operation. The relevant operations are the same as or similar to those of the embodiment shown in fig. 1, and are therefore not described in detail here but are incorporated herein by reference.
In some embodiments, the third device 13 is configured to: correct the sweep map and optimize its display according to the auxiliary texture map coordinate information and the sweep map parameter information corresponding to the sweep map; calculate streamer duration information and streamer speed information according to the time parameter information; modify the streamer effect according to the background map parameter information corresponding to the background map; calculate the final streamer color; and perform streamer rendering on the object model according to the streamer color. The relevant operations are the same as or similar to those of the embodiment shown in fig. 1, and are therefore not described in detail here but are incorporated herein by reference.
The application also provides a computer device, wherein the computer device comprises: a memory for storing one or more programs; and one or more processors coupled to the memory, wherein the one or more programs, when executed by the one or more processors, cause the one or more processors to perform the method for realizing the streamer effect described herein.
The present application also provides a computer readable storage medium having stored thereon a computer program executable by a processor for performing the method for achieving streamer effects described herein.
The present application also provides a computer program product which, when executed by a device, causes the device to perform the method for achieving streamer effects described herein.
FIG. 3 illustrates an exemplary system that may be used to implement various embodiments described herein.
In some embodiments, system 1000 can be implemented as any of the processing devices of the embodiments of the present application. In some embodiments, system 1000 can include one or more computer-readable media (e.g., system memory or NVM/storage 1020) having instructions and one or more processors (e.g., processor(s) 1005) coupled with the one or more computer-readable media and configured to execute the instructions to implement the modules to perform the actions described herein.
For one embodiment, the system control module 1010 may include any suitable interface controller to provide any suitable interface to at least one of the processor(s) 1005 and/or any suitable device or component in communication with the system control module 1010.
The system control module 1010 may include a memory controller module 1030 to provide an interface to the system memory 1015. The memory controller module 1030 may be a hardware module, a software module, and/or a firmware module.
System memory 1015 may be used, for example, to load and store data and/or instructions for system 1000. For one embodiment, system memory 1015 may comprise any suitable volatile memory, such as suitable DRAM. In some embodiments, the system memory 1015 may comprise double data rate type four synchronous dynamic random access memory (DDR4 SDRAM).
For one embodiment, the system control module 1010 may include one or more input/output (I/O) controllers to provide an interface to NVM/storage 1020 and communication interface(s) 1025.
For example, NVM/storage 1020 may be used to store data and/or instructions. NVM/storage 1020 may include any suitable nonvolatile memory (e.g., flash memory) and/or may include any suitable nonvolatile storage device(s) (e.g., one or more Hard Disk Drives (HDDs), one or more Compact Disc (CD) drives, and/or one or more Digital Versatile Disc (DVD) drives).
NVM/storage 1020 may include storage resources that are physically part of the device on which system 1000 is installed or which may be accessed by the device without being part of the device. For example, NVM/storage 1020 may be accessed over a network via communication interface(s) 1025.
Communication interface(s) 1025 may provide an interface for system 1000 to communicate over one or more networks and/or with any other suitable device. The system 1000 may wirelessly communicate with one or more components of a wireless network in accordance with any of one or more wireless network standards and/or protocols.
For one embodiment, at least one of the processor(s) 1005 may be packaged together with logic of one or more controllers (e.g., memory controller module 1030) of the system control module 1010. For one embodiment, at least one of the processor(s) 1005 may be packaged together with logic of one or more controllers of the system control module 1010 to form a System In Package (SiP). For one embodiment, at least one of the processor(s) 1005 may be integrated on the same die with logic of one or more controllers of the system control module 1010. For one embodiment, at least one of the processor(s) 1005 may be integrated on the same die with logic of one or more controllers of the system control module 1010 to form a system on chip (SoC).
In various embodiments, system 1000 may be, but is not limited to being: a server, workstation, desktop computing device, or mobile computing device (e.g., laptop computing device, handheld computing device, tablet, netbook, etc.). In various embodiments, system 1000 may have more or fewer components and/or different architectures. For example, in some embodiments, system 1000 includes one or more cameras, keyboards, liquid crystal display (LCD) screens (including touch screen displays), non-volatile memory ports, multiple antennas, graphics chips, application-specific integrated circuits (ASICs), and speakers.
It will be evident to those skilled in the art that the present application is not limited to the details of the foregoing illustrative embodiments, and that the present application may be embodied in other specific forms without departing from its spirit or essential characteristics. The present embodiments are therefore to be considered in all respects as illustrative and not restrictive, the scope of the application being indicated by the appended claims rather than by the foregoing description; all changes which come within the meaning and range of equivalency of the claims are therefore intended to be embraced therein. Any reference sign in a claim should not be construed as limiting the claim concerned. Furthermore, the word "comprising" does not exclude other elements or steps, and the singular does not exclude the plural. A plurality of units or means recited in the system claims may also be implemented, in software or hardware, by a single unit or means. The terms first, second, etc. denote names and do not imply any particular order.
While the foregoing particularly illustrates and describes exemplary embodiments, it will be understood by those skilled in the art that various changes in form and details may be made therein without departing from the spirit and scope of the claims. The protection sought herein is as set forth in the claims below. These and other aspects of the various embodiments are specified in the following numbered clauses:
1. A method for achieving a streamer effect, wherein the method comprises:
in response to a loading trigger operation for loading an object model, acquiring auxiliary texture map coordinate information of the object model for storing texture map coordinate position information for streamer use;
in response to a streamer rendering trigger operation on the object model, obtaining a sweep map and a background map for streamer rendering of the object model;
and performing streamer rendering on the object model according to the auxiliary texture map coordinate information, the sweep map and the background map.
2. The method of clause 1, wherein the acquiring, in response to a loading trigger operation for loading the object model, auxiliary texture map coordinate information of the object model for storing texture map coordinate position information for streamer use comprises:
in response to a loading trigger operation for loading the object model, detecting whether auxiliary texture map coordinate information exists in the object model; if so, acquiring the main texture map coordinate information of the object model and the auxiliary texture map coordinate information for storing texture map coordinate position information for streamer use; otherwise, acquiring only the main texture map coordinate information of the object model.
3. The method of clause 1, wherein the obtaining, in response to a streamer rendering trigger operation on the object model, a sweep map and a background map for streamer rendering of the object model comprises:
in response to a streamer rendering trigger operation performed by a user on the object model, obtaining a sweep map and a background map for streamer rendering of the object model.
4. The method of clause 3, wherein the obtaining, in response to a streamer rendering trigger operation performed by a user on the object model, a sweep map and a background map for streamer rendering of the object model comprises:
in response to the streamer rendering trigger operation performed by the user on the object model, presenting a streamer effect configuration interface corresponding to the object model; and
in response to a sweep map configuration operation performed by the user at the streamer effect configuration interface, obtaining a sweep map for streamer rendering of the object model; and/or, in response to a background map configuration operation performed by the user at the streamer effect configuration interface, obtaining a background map for streamer rendering of the object model.
5. The method of clause 4, wherein the background map configuration operation includes a content input operation and a background map parameter selection operation, and the obtaining, in response to a background map configuration operation performed by the user at the streamer effect configuration interface, a background map for streamer rendering of the object model comprises:
in response to a content input operation performed by the user at the streamer effect configuration interface, obtaining custom content information input by the user;
in response to a background map parameter selection operation performed by the user at the streamer effect configuration interface, obtaining the background map parameter information specified by the user;
and generating, from the custom content information and the background map parameter information, a background map for streamer rendering of the object model.
6. The method of clause 4, wherein the obtaining, in response to a background map configuration operation performed by the user at the streamer effect configuration interface, a background map for streamer rendering of the object model comprises:
in response to the background map configuration operation performed by the user at the streamer effect configuration interface, obtaining a background map selected by the user from a plurality of selectable background maps for streamer rendering of the object model.
7. The method of clause 4, wherein the obtaining, in response to a sweep map configuration operation performed by the user at the streamer effect configuration interface, a sweep map for streamer rendering of the object model comprises:
in response to the sweep map configuration operation performed by the user at the streamer effect configuration interface, obtaining a sweep map selected by the user from a plurality of selectable sweep maps for streamer rendering of the object model.
8. The method of clause 3, wherein the obtaining, in response to a streamer rendering trigger operation performed by a user on the object model, a sweep map and a background map for streamer rendering of the object model comprises:
in response to a streamer rendering trigger operation performed by the user on the object model, obtaining one or more target components, and obtaining a sweep map and a background map for streamer rendering of the one or more target components;
wherein the performing streamer rendering on the object model according to the auxiliary texture map coordinate information, the sweep map and the background map comprises:
extracting target texture map coordinate position information corresponding to the one or more target components from the texture map coordinate position information;
and performing streamer rendering on the one or more target components according to the target texture map coordinate position information, the sweep map and the background map.
9. The method of clause 8, wherein the obtaining one or more target components in response to a streamer rendering trigger operation performed by a user on the object model comprises:
in response to a streamer rendering trigger operation performed by the user on the object model, presenting a streamer selection interface comprising at least one streamer-supporting component of the object model;
in response to a selection operation performed by the user in the streamer selection interface, obtaining one or more target components selected by the user from the at least one component.
10. The method of clause 8, wherein the obtaining one or more target components in response to a streamer rendering trigger operation performed by a user on the object model comprises:
in response to a streamer rendering trigger operation performed by the user on the object model, obtaining one or more target components corresponding to the streamer rendering operation according to operation information corresponding to the streamer rendering operation.
11. The method of any of clauses 1 to 10, wherein the performing streamer rendering on the object model according to the secondary texture map coordinate information, the sweep map and the background map comprises:
correcting the sweep map and optimizing its display according to the secondary texture map coordinate information and sweep map parameter information corresponding to the sweep map;
calculating streamer duration information and streamer speed information according to time parameter information;
modifying the streamer effect according to background map parameter information corresponding to the background map;
calculating a final streamer color;
and performing streamer rendering on the object model according to the streamer color.
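The five steps of clause 11 can loosely be read as a per-pixel computation. The sketch below is one hypothetical Python rendering of it; every name (`sweep_sample`, `bg_tint`, `intensity`) is an assumption of this sketch, and a real implementation would live in a fragment shader rather than CPU-side Python:

```python
def streamer_color(base, sweep_sample, bg_tint, u, time, duration, speed, intensity=1.0):
    """Blend a moving sweep highlight, tinted by background map parameters,
    over the base color of one pixel."""
    # Steps 1-2: offset the sweep-map lookup along U by elapsed time,
    # looping every `duration` seconds and scaled by `speed`.
    phase = (u - (time % duration) / duration * speed) % 1.0
    highlight = sweep_sample(phase) * intensity
    # Step 3: the background map parameters tint the highlight;
    # steps 4-5: final streamer color = base plus tinted highlight, clamped.
    return tuple(min(1.0, c + highlight * t) for c, t in zip(base, bg_tint))

# A narrow bright band centred at phase 0.5 stands in for sampling a sweep map.
sweep = lambda p: max(0.0, 1.0 - abs(p - 0.5) * 10.0)
color = streamer_color(base=(0.2, 0.2, 0.2), sweep_sample=sweep,
                       bg_tint=(1.0, 0.8, 0.2), u=0.5, time=0.0,
                       duration=2.0, speed=1.0, intensity=0.5)
assert all(abs(a - b) < 1e-9 for a, b in zip(color, (0.7, 0.6, 0.3)))
```

As `time` advances, the phase shifts and the bright band travels across the model's streamer-use texture coordinates, which is the flowing-light effect the clauses describe.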
12. An apparatus for achieving a streamer effect, wherein the apparatus comprises:
means for obtaining, in response to a load trigger operation for loading an object model, secondary texture map coordinate information of the object model, the secondary texture map coordinate information storing texture map coordinate position information for streamer use;
means for obtaining, in response to a streamer rendering trigger operation on the object model, a sweep map and a background map for streamer rendering of the object model;
and means for performing streamer rendering on the object model according to the secondary texture map coordinate information, the sweep map and the background map.
13. The apparatus of clause 12, wherein the means for obtaining the secondary texture map coordinate information of the object model in response to a load trigger operation for loading the object model is configured to:
in response to a load trigger operation for loading the object model, detect whether secondary texture map coordinate information exists in the object model; if so, obtain primary texture map coordinate information of the object model and the secondary texture map coordinate information storing texture map coordinate position information for streamer use; otherwise, obtain only the primary texture map coordinate information of the object model.
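The load-time check of clause 13 amounts to testing whether the loaded model carries a secondary texture coordinate set at all. A minimal sketch, assuming a simple in-memory mesh whose `uv0`/`uv1` field names are illustrative rather than from the patent:

```python
from dataclasses import dataclass, field

@dataclass
class Mesh:
    uv0: list                                 # primary coordinates, one (u, v) per vertex
    uv1: list = field(default_factory=list)   # optional secondary set for streamer use

def load_texture_coords(mesh):
    """Return (primary_uv, secondary_uv); secondary_uv is None when the
    model carries no secondary texture coordinate set, i.e. the model
    does not support the streamer effect."""
    if mesh.uv1:
        return mesh.uv0, mesh.uv1
    return mesh.uv0, None

# Usage: one model authored with streamer coordinates, one without.
with_streamer = Mesh(uv0=[(0.0, 0.0), (1.0, 0.0)], uv1=[(0.1, 0.2), (0.9, 0.2)])
plain = Mesh(uv0=[(0.0, 0.0), (1.0, 0.0)])
assert load_texture_coords(with_streamer)[1] is not None
assert load_texture_coords(plain)[1] is None
```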
14. The apparatus of clause 12, wherein the means for obtaining a sweep map and a background map for streamer rendering of the object model in response to a streamer rendering trigger operation on the object model is configured to:
in response to a streamer rendering trigger operation performed by a user on the object model, obtain a sweep map and a background map for streamer rendering of the object model.
15. The apparatus of clause 14, wherein the obtaining, in response to the streamer rendering trigger operation performed by the user on the object model, a sweep map and a background map for streamer rendering of the object model comprises:
in response to the streamer rendering trigger operation performed by the user on the object model, presenting a streamer effect configuration interface corresponding to the object model; and
in response to a sweep map configuration operation performed by the user in the streamer effect configuration interface, obtaining a sweep map for streamer rendering of the object model; and/or, in response to a background map configuration operation performed by the user in the streamer effect configuration interface, obtaining a background map for streamer rendering of the object model.
16. The apparatus of clause 15, wherein the background map configuration operation comprises a content input operation and a background map parameter selection operation, and the obtaining a background map for streamer rendering of the object model in response to a background map configuration operation performed by the user in the streamer effect configuration interface comprises:
in response to a content input operation performed by the user in the streamer effect configuration interface, obtaining custom content information input by the user;
in response to a background map parameter selection operation performed by the user in the streamer effect configuration interface, obtaining background map parameter information specified by the user;
and generating a background map for streamer rendering of the object model according to the custom content information and the background map parameter information.
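The background map generation of clause 16 combines user-entered content with user-selected parameters. A hedged sketch that reduces it to building a background-map specification; the dict fields and defaults are assumptions of this sketch, and the actual rasterization of the content into a texture is elided:

```python
def build_background_map(custom_text, params):
    """Combine user-defined content information with background map
    parameter information into a background-map spec."""
    return {
        "content": custom_text,                       # user-defined content information
        "tint": params.get("tint", (1.0, 1.0, 1.0)),  # color applied to the effect
        "tiling": params.get("tiling", 1),            # repeats across the model's UVs
        "opacity": params.get("opacity", 1.0),
    }

# Usage: hypothetical text content plus two user-specified parameters.
bg = build_background_map("HAPPY 2020", {"tint": (1.0, 0.85, 0.3), "tiling": 4})
assert bg["content"] == "HAPPY 2020" and bg["tiling"] == 4 and bg["opacity"] == 1.0
```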
17. The apparatus of clause 15, wherein the obtaining a background map for streamer rendering of the object model in response to a background map configuration operation performed by the user in the streamer effect configuration interface comprises:
in response to a background map configuration operation performed by the user in the streamer effect configuration interface, obtaining a background map, selected by the user from a plurality of selectable background maps, for streamer rendering of the object model.
18. The apparatus of clause 15, wherein the obtaining a sweep map for streamer rendering of the object model in response to a sweep map configuration operation performed by the user in the streamer effect configuration interface comprises:
in response to a sweep map configuration operation performed by the user in the streamer effect configuration interface, obtaining a sweep map, selected by the user from a plurality of selectable sweep maps, for streamer rendering of the object model.
19. The apparatus of clause 14, wherein the obtaining, in response to the streamer rendering trigger operation performed by the user on the object model, a sweep map and a background map for streamer rendering of the object model comprises:
in response to a streamer rendering trigger operation performed by a user on the object model, obtaining one or more target components, and obtaining a sweep map and a background map for streamer rendering of the one or more target components;
wherein the means for performing streamer rendering on the object model according to the secondary texture map coordinate information, the sweep map and the background map is configured to:
extract target texture map coordinate position information corresponding to the one or more target components from the texture map coordinate position information;
and perform streamer rendering on the one or more target components according to the target texture map coordinate position information, the sweep map and the background map.
20. The apparatus of clause 19, wherein the obtaining one or more target components in response to a streamer rendering trigger operation performed by a user on the object model comprises:
in response to a streamer rendering trigger operation performed by a user on the object model, presenting a streamer selection interface comprising at least one streamer-supporting component of the object model;
in response to a selection operation performed by the user in the streamer selection interface, obtaining one or more target components selected by the user from the at least one component.
21. The apparatus of clause 19, wherein the obtaining one or more target components in response to a streamer rendering trigger operation performed by a user on the object model comprises:
in response to a streamer rendering trigger operation performed by a user on the object model, obtaining one or more target components corresponding to the streamer rendering operation according to operation information corresponding to the streamer rendering operation.
22. The apparatus of any of clauses 12 to 21, wherein the means for performing streamer rendering on the object model according to the secondary texture map coordinate information, the sweep map and the background map is configured to:
correct the sweep map and optimize its display according to the secondary texture map coordinate information and sweep map parameter information corresponding to the sweep map;
calculate streamer duration information and streamer speed information according to time parameter information;
modify the streamer effect according to background map parameter information corresponding to the background map;
calculate a final streamer color;
and perform streamer rendering on the object model according to the streamer color.
23. An apparatus, wherein the apparatus comprises:
a memory for storing one or more programs;
one or more processors coupled to the memory,
the one or more programs, when executed by the one or more processors, causing the one or more processors to perform the method of any of clauses 1 to 11.
24. A computer readable storage medium having stored thereon a computer program executable by a processor to perform the method of any of clauses 1 to 11.
25. A computer program product which, when executed by an apparatus, causes the apparatus to perform the method of any one of clauses 1 to 11.

Claims (21)

1. A method for achieving a streamer effect, wherein the method comprises:
in response to a load trigger operation for loading an object model, detecting whether secondary texture map coordinate information exists in the object model; if so, obtaining primary texture map coordinate information of the object model and the secondary texture map coordinate information storing texture map coordinate position information for streamer use; otherwise, obtaining only the primary texture map coordinate information of the object model;
in response to a streamer rendering trigger operation performed by a user on the object model, obtaining a sweep map and a background map for streamer rendering of the object model;
and performing streamer rendering on the object model according to the secondary texture map coordinate information, the sweep map and the background map.
2. The method of claim 1, wherein the obtaining, in response to a streamer rendering trigger operation performed by a user on the object model, a sweep map and a background map for streamer rendering of the object model comprises:
in response to the streamer rendering trigger operation performed by the user on the object model, presenting a streamer effect configuration interface corresponding to the object model; and
in response to a sweep map configuration operation performed by the user in the streamer effect configuration interface, obtaining a sweep map for streamer rendering of the object model; and/or, in response to a background map configuration operation performed by the user in the streamer effect configuration interface, obtaining a background map for streamer rendering of the object model.
3. The method of claim 2, wherein the background map configuration operation comprises a content input operation and a background map parameter selection operation, and the obtaining a background map for streamer rendering of the object model in response to a background map configuration operation performed by the user in the streamer effect configuration interface comprises:
in response to a content input operation performed by the user in the streamer effect configuration interface, obtaining custom content information input by the user;
in response to a background map parameter selection operation performed by the user in the streamer effect configuration interface, obtaining background map parameter information specified by the user;
and generating a background map for streamer rendering of the object model according to the custom content information and the background map parameter information.
4. The method of claim 2, wherein the obtaining a background map for streamer rendering of the object model in response to a background map configuration operation performed by the user in the streamer effect configuration interface comprises:
in response to a background map configuration operation performed by the user in the streamer effect configuration interface, obtaining a background map, selected by the user from a plurality of selectable background maps, for streamer rendering of the object model.
5. The method of claim 2, wherein the obtaining a sweep map for streamer rendering of the object model in response to a sweep map configuration operation performed by the user in the streamer effect configuration interface comprises:
in response to a sweep map configuration operation performed by the user in the streamer effect configuration interface, obtaining a sweep map, selected by the user from a plurality of selectable sweep maps, for streamer rendering of the object model.
6. The method of claim 1, wherein the obtaining, in response to a streamer rendering trigger operation performed by a user on the object model, a sweep map and a background map for streamer rendering of the object model comprises:
in response to a streamer rendering trigger operation performed by a user on the object model, obtaining one or more target components, and obtaining a sweep map and a background map for streamer rendering of the one or more target components;
wherein the performing streamer rendering on the object model according to the secondary texture map coordinate information, the sweep map and the background map comprises:
extracting target texture map coordinate position information corresponding to the one or more target components from the texture map coordinate position information;
and performing streamer rendering on the one or more target components according to the target texture map coordinate position information, the sweep map and the background map.
7. The method of claim 6, wherein the obtaining one or more target components in response to a streamer rendering trigger operation performed by a user on the object model comprises:
in response to a streamer rendering trigger operation performed by a user on the object model, presenting a streamer selection interface comprising at least one streamer-supporting component of the object model;
in response to a selection operation performed by the user in the streamer selection interface, obtaining one or more target components selected by the user from the at least one component.
8. The method of claim 6, wherein the obtaining one or more target components in response to a streamer rendering trigger operation performed by a user on the object model comprises:
in response to a streamer rendering trigger operation performed by a user on the object model, obtaining one or more target components corresponding to the streamer rendering operation according to operation information corresponding to the streamer rendering operation.
9. The method of any of claims 1 to 8, wherein the performing streamer rendering on the object model according to the secondary texture map coordinate information, the sweep map and the background map comprises:
correcting the sweep map and optimizing its display according to the secondary texture map coordinate information and sweep map parameter information corresponding to the sweep map;
calculating streamer duration information and streamer speed information according to time parameter information;
modifying the streamer effect according to background map parameter information corresponding to the background map;
calculating a final streamer color;
and performing streamer rendering on the object model according to the streamer color.
10. An apparatus for achieving a streamer effect, wherein the apparatus comprises:
means for, in response to a load trigger operation for loading an object model, detecting whether secondary texture map coordinate information exists in the object model and, if so, obtaining primary texture map coordinate information of the object model and the secondary texture map coordinate information storing texture map coordinate position information for streamer use, and otherwise obtaining only the primary texture map coordinate information of the object model;
means for obtaining, in response to a streamer rendering trigger operation performed by a user on the object model, a sweep map and a background map for streamer rendering of the object model;
and means for performing streamer rendering on the object model according to the secondary texture map coordinate information, the sweep map and the background map.
11. The apparatus of claim 10, wherein the obtaining, in response to a streamer rendering trigger operation performed by a user on the object model, a sweep map and a background map for streamer rendering of the object model comprises:
in response to the streamer rendering trigger operation performed by the user on the object model, presenting a streamer effect configuration interface corresponding to the object model; and
in response to a sweep map configuration operation performed by the user in the streamer effect configuration interface, obtaining a sweep map for streamer rendering of the object model; and/or, in response to a background map configuration operation performed by the user in the streamer effect configuration interface, obtaining a background map for streamer rendering of the object model.
12. The apparatus of claim 11, wherein the background map configuration operation comprises a content input operation and a background map parameter selection operation, and the obtaining a background map for streamer rendering of the object model in response to a background map configuration operation performed by the user in the streamer effect configuration interface comprises:
in response to a content input operation performed by the user in the streamer effect configuration interface, obtaining custom content information input by the user;
in response to a background map parameter selection operation performed by the user in the streamer effect configuration interface, obtaining background map parameter information specified by the user;
and generating a background map for streamer rendering of the object model according to the custom content information and the background map parameter information.
13. The apparatus of claim 11, wherein the obtaining a background map for streamer rendering of the object model in response to a background map configuration operation performed by the user in the streamer effect configuration interface comprises:
in response to a background map configuration operation performed by the user in the streamer effect configuration interface, obtaining a background map, selected by the user from a plurality of selectable background maps, for streamer rendering of the object model.
14. The apparatus of claim 11, wherein the obtaining a sweep map for streamer rendering of the object model in response to a sweep map configuration operation performed by the user in the streamer effect configuration interface comprises:
in response to a sweep map configuration operation performed by the user in the streamer effect configuration interface, obtaining a sweep map, selected by the user from a plurality of selectable sweep maps, for streamer rendering of the object model.
15. The apparatus of claim 10, wherein the obtaining, in response to a streamer rendering trigger operation performed by a user on the object model, a sweep map and a background map for streamer rendering of the object model comprises:
in response to a streamer rendering trigger operation performed by a user on the object model, obtaining one or more target components, and obtaining a sweep map and a background map for streamer rendering of the one or more target components;
wherein the means for performing streamer rendering on the object model according to the secondary texture map coordinate information, the sweep map and the background map is configured to:
extract target texture map coordinate position information corresponding to the one or more target components from the texture map coordinate position information;
and perform streamer rendering on the one or more target components according to the target texture map coordinate position information, the sweep map and the background map.
16. The apparatus of claim 15, wherein the obtaining one or more target components in response to a streamer rendering trigger operation performed by a user on the object model comprises:
in response to a streamer rendering trigger operation performed by a user on the object model, presenting a streamer selection interface comprising at least one streamer-supporting component of the object model;
in response to a selection operation performed by the user in the streamer selection interface, obtaining one or more target components selected by the user from the at least one component.
17. The apparatus of claim 15, wherein the obtaining one or more target components in response to a streamer rendering trigger operation performed by a user on the object model comprises:
in response to a streamer rendering trigger operation performed by a user on the object model, obtaining one or more target components corresponding to the streamer rendering operation according to operation information corresponding to the streamer rendering operation.
18. The apparatus of any of claims 10 to 17, wherein the means for performing streamer rendering on the object model according to the secondary texture map coordinate information, the sweep map and the background map is configured to:
correct the sweep map and optimize its display according to the secondary texture map coordinate information and sweep map parameter information corresponding to the sweep map;
calculate streamer duration information and streamer speed information according to time parameter information;
modify the streamer effect according to background map parameter information corresponding to the background map;
calculate a final streamer color;
and perform streamer rendering on the object model according to the streamer color.
19. An apparatus, wherein the apparatus comprises:
a memory for storing one or more programs;
one or more processors, coupled to the memory,
the one or more programs, when executed by the one or more processors, cause the one or more processors to perform the method of any of claims 1-9.
20. A computer readable storage medium having stored thereon a computer program executable by a processor to perform the method of any of claims 1 to 9.
21. A computer program product which, when executed by an apparatus, causes the apparatus to perform the method of any one of claims 1 to 9.
CN202010014440.8A 2020-01-07 2020-01-07 Method and device for realizing streamer effect Active CN111210486B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010014440.8A CN111210486B (en) 2020-01-07 2020-01-07 Method and device for realizing streamer effect


Publications (2)

Publication Number Publication Date
CN111210486A CN111210486A (en) 2020-05-29
CN111210486B true CN111210486B (en) 2024-01-05

Family

ID=70786000

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010014440.8A Active CN111210486B (en) 2020-01-07 2020-01-07 Method and device for realizing streamer effect

Country Status (1)

Country Link
CN (1) CN111210486B (en)

Families Citing this family (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112053424B (en) * 2020-09-29 2024-03-22 北京完美赤金科技有限公司 Rendering method and device of 3D model
CN112528596A (en) * 2020-12-01 2021-03-19 北京达佳互联信息技术有限公司 Rendering method and device for special effect of characters, electronic equipment and storage medium

Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108765542A (en) * 2018-05-31 2018-11-06 Oppo广东移动通信有限公司 Image rendering method, electronic equipment and computer readable storage medium
CN109978968A (en) * 2019-04-10 2019-07-05 广州虎牙信息科技有限公司 Video rendering method, apparatus, equipment and the storage medium of Moving Objects

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US10311548B2 (en) * 2017-09-05 2019-06-04 Microsoft Technology Licensing, Llc Scaling render targets to a higher rendering resolution to display higher quality video frames


Also Published As

Publication number Publication date
CN111210486A (en) 2020-05-29

Similar Documents

Publication Publication Date Title
US11217015B2 (en) Method and apparatus for rendering game image
US9682321B2 (en) Multiple frame distributed rendering of interactive content
CN103530018B (en) The method for building up and mobile terminal at widget interface in Android operation system
US20220249949A1 (en) Method and apparatus for displaying virtual scene, device, and storage medium
KR20190133716A (en) Selective application of reprojection processing to layer subregions to optimize late reprojection power
US9639976B2 (en) Efficient computation of shadows for circular light sources
CN107248193A (en) The method, system and device that two dimensional surface is switched over virtual reality scenario
KR102433857B1 (en) Device and method for creating dynamic virtual content in mixed reality
CN111210486B (en) Method and device for realizing streamer effect
CN103970518A (en) 3D rendering method and device for logic window
CN106658139B (en) Focus control method and device
CN113112579A (en) Rendering method, rendering device, electronic equipment and computer-readable storage medium
CN111583379B (en) Virtual model rendering method and device, storage medium and electronic equipment
CN106780659A (en) A kind of two-dimension situation map generalization method and electronic equipment
CN109712226A (en) The see-through model rendering method and device of virtual reality
KR20160050295A (en) Method for Simulating Digital Watercolor Image and Electronic Device Using the same
US9483873B2 (en) Easy selection threshold
CN110047120A (en) A kind of animated show method and device
CN109189537A (en) The dynamic display method of page info calculates equipment and computer storage medium
CN114367113A (en) Method, apparatus, medium, and computer program product for editing virtual scene
AU2016230943B2 (en) Virtual trying-on experience
CN109729285B (en) Fuse grid special effect generation method and device, electronic equipment and storage medium
JP2006171760A (en) Memory controller with graphic processing function
WO2023216771A1 (en) Virtual weather interaction method and apparatus, and electronic device, computer-readable storage medium and computer program product
TWI410890B (en) Method and apparatus emulating branch structure

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
CB02 Change of applicant information

Address after: 701-25, floor 7, building 5, yard 1, Shangdi East Road, Haidian District, Beijing 100085

Applicant after: Beijing perfect Chijin Technology Co.,Ltd.

Address before: 701-25, floor 7, building 5, yard 1, Shangdi East Road, Haidian District, Beijing 100085

Applicant before: Beijing chijinzhi Entertainment Technology Co.,Ltd.

GR01 Patent grant