CN111210486A - Method and device for realizing streamer effect - Google Patents


Info

Publication number
CN111210486A
Authority
CN
China
Prior art keywords
streamer
object model
rendering
map
user
Prior art date
Legal status
Granted
Application number
CN202010014440.8A
Other languages
Chinese (zh)
Other versions
CN111210486B (en)
Inventor
陈瑽
寇京博
李嘉乐
田吉亮
庄涛
杨凯允
陈嘉伟
殷宏亮
张峰
姚逸宁
徐丹
Current Assignee
Beijing Chijinzhi Entertainment Technology Co Ltd
Original Assignee
Beijing Chijinzhi Entertainment Technology Co Ltd
Priority date
Filing date
Publication date
Application filed by Beijing Chijinzhi Entertainment Technology Co Ltd
Priority to CN202010014440.8A
Publication of CN111210486A
Application granted
Publication of CN111210486B
Legal status: Active
Anticipated expiration

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T11/00 2D [Two Dimensional] image generation
    • G06T11/001 Texturing; Colouring; Generation of texture or colour

Landscapes

  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Processing Or Creating Images (AREA)
  • Image Generation (AREA)

Abstract

The application provides a method and a device for realizing a streamer effect, wherein the method comprises the following steps: obtaining, in response to a loading trigger operation for loading an object model, secondary texture map coordinate information of the object model, which stores the texture map coordinate position information used for the streamer; obtaining, in response to a streamer rendering trigger operation on the object model, a sweep map and a background map for streamer-rendering the object model; and performing streamer rendering on the object model according to the secondary texture map coordinate information, the sweep map, and the background map. With this scheme, an efficient and natural streamer effect can be achieved and user customization is supported, so that the real-time and diversified personalized requirements of users can be met, user satisfaction can be improved, and interest is greatly enhanced.

Description

Method and device for realizing streamer effect
Technical Field
The application relates to the technical field of computers, in particular to a technical scheme for realizing a streamer effect.
Background
In the prior art, a common streamer solution is to make a sweep texture map, place it over the original map, and continuously update its position during rendering, so that the moving texture produces the streamer. However, the existing streamer technology has the following defects: 1) the final streamer effect is determined by how finely the sweep map is made; the edges of the sweep map are usually blurred and feathered so that the outline of the streamer is not too sharp, and the sweep map must be produced by artists in advance and cannot be changed in any way after production; 2) for complex models, for efficiency reasons, all faces of the model are usually unwrapped onto one plane by unwrapping the UV (texture map coordinates), so that on the UV map different surfaces of the model are packed together; on a human body, for example, parts of the shoulder, hand, and foot may lie in the same row. Because the sweep map simply slides underneath this UV map, the light does not flow from top to bottom over the model when it sweeps, and may appear at other, unintended positions.
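The conventional technique criticized above can be sketched as follows. This is an illustrative Python model, not code from the patent; the function names and the additive blending are assumptions made for the sketch.

```python
# Illustrative sketch of the conventional streamer technique: a sweep texture
# is placed over the base map and its UV offset is advanced every frame so the
# texture appears to flow across the model.

def sweep_uv_offset(elapsed_time: float, speed: float, period: float = 1.0) -> float:
    """Return the vertical UV offset of the sweep map at a given time.

    The offset wraps around after one full `period`, so the sweep repeats.
    """
    return (elapsed_time * speed) % period

def sample_with_sweep(base_color, sweep_alpha_at, u, v, offset):
    """Additively blend the sweep map (shifted by `offset`) over the base map."""
    sweep = sweep_alpha_at(u, (v + offset) % 1.0)  # shifted sweep-map sample
    return tuple(min(1.0, c + sweep) for c in base_color)
```

Because the sweep slides under the main UV map, faces that happen to be packed side by side in UV space light up together, which is exactly defect 2) described above.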
Disclosure of Invention
The application aims to provide a technical scheme for realizing a more efficient and natural streamer effect.
According to an embodiment of the present application, there is provided a method for achieving a streamer effect, wherein the method comprises:
obtaining, in response to a loading trigger operation for loading an object model, secondary texture map coordinate information of the object model, which stores the texture map coordinate position information used for the streamer;
obtaining, in response to a streamer rendering trigger operation on the object model, a sweep map and a background map for streamer-rendering the object model;
and performing streamer rendering on the object model according to the secondary texture map coordinate information, the sweep map, and the background map.
According to another embodiment of the present application, there is also provided a device for realizing a streamer effect, wherein the device comprises:
means for obtaining, in response to a loading trigger operation for loading an object model, secondary texture map coordinate information of the object model that stores the texture map coordinate position information used for the streamer;
means for obtaining, in response to a streamer rendering trigger operation on the object model, a sweep map and a background map for streamer-rendering the object model;
and means for performing streamer rendering on the object model according to the secondary texture map coordinate information, the sweep map, and the background map.
There is also provided, in accordance with another embodiment of the present application, a computer apparatus, wherein the computer apparatus includes: a memory for storing one or more programs; one or more processors coupled with the memory, the one or more programs, when executed by the one or more processors, causing the one or more processors to perform operations comprising:
obtaining, in response to a loading trigger operation for loading an object model, secondary texture map coordinate information of the object model, which stores the texture map coordinate position information used for the streamer;
obtaining, in response to a streamer rendering trigger operation on the object model, a sweep map and a background map for streamer-rendering the object model;
and performing streamer rendering on the object model according to the secondary texture map coordinate information, the sweep map, and the background map.
According to another embodiment of the present application, there is also provided a computer-readable storage medium having a computer program stored thereon, the computer program being executable by a processor to:
obtaining, in response to a loading trigger operation for loading an object model, secondary texture map coordinate information of the object model, which stores the texture map coordinate position information used for the streamer;
obtaining, in response to a streamer rendering trigger operation on the object model, a sweep map and a background map for streamer-rendering the object model;
and performing streamer rendering on the object model according to the secondary texture map coordinate information, the sweep map, and the background map.
Compared with the prior art, the present application has the following advantages: 1) by adding, when the object model is made, secondary texture map coordinate information that stores the texture map coordinate position information used by the streamer (that is, by adding a secondary UV space), an efficient streamer effect can be realized according to the secondary texture map coordinate information, and the streamer appears naturally at the designated positions and cannot appear beyond them; 2) the streamer effect is realized efficiently, ensuring good performance; 3) the user is supported in customizing the sweep map and/or the background map, so that the streamer effect expected by the user can be flexibly realized from the user-customized content; the real-time and diversified personalized requirements of users can thus be met, user satisfaction can be improved, and interest is greatly enhanced.
Drawings
Other features, objects and advantages of the present application will become more apparent upon reading of the following detailed description of non-limiting embodiments thereof, made with reference to the accompanying drawings in which:
FIG. 1 shows a flowchart of a method for realizing a streamer effect according to an embodiment of the present application;
FIG. 2 shows a schematic diagram of a device for realizing a streamer effect according to an embodiment of the present application;
FIG. 3 illustrates an exemplary system that can be used to implement the various embodiments described in the present application.
The same or similar reference numbers in the drawings identify the same or similar elements.
Detailed Description
Before discussing exemplary embodiments in more detail, it should be noted that some exemplary embodiments are described as processes or methods depicted as flowcharts. Although a flowchart may describe the operations as a sequential process, many of the operations can be performed in parallel, concurrently, or simultaneously. In addition, the order of the operations may be re-arranged. The process may be terminated when its operations are completed, but may have additional steps not included in the figure. The processes may correspond to methods, functions, procedures, subroutines, and the like.
The term "device" in this context refers to an intelligent electronic device that can perform predetermined processes such as numerical calculations and/or logic calculations by executing predetermined programs or instructions, and may include a processor and a memory, wherein the predetermined processes are performed by the processor executing program instructions prestored in the memory, or performed by hardware such as an Application Specific Integrated Circuit (ASIC), a Field Programmable Gate Array (FPGA), a Digital Signal Processor (DSP), or performed by a combination of the above two.
The technical scheme of the application is mainly implemented by computer devices, wherein the computer devices include network devices and user devices. Network devices include, but are not limited to, a single network server, a server group consisting of multiple network servers, or a cloud, based on cloud computing, consisting of a large number of computers or network servers; cloud computing is a form of distributed computing in which a super virtual computer is composed of a group of loosely coupled computers. User devices include, but are not limited to, PCs, tablets, smartphones, IPTV, PDAs, wearable devices, and the like. A computer device can operate independently to implement the application, or can access a network and implement the application by interacting with other computer devices in the network. The network in which the computer device is located includes, but is not limited to, the Internet, a wide area network, a metropolitan area network, a local area network, a VPN network, a wireless ad hoc network, and the like.
It should be noted that the above-mentioned computer devices are only examples, and other computer devices that are currently available or that may come into existence in the future, such as may be applicable to the present application, are also included within the scope of the present application and are incorporated herein by reference.
The methodologies discussed hereinafter, some of which are illustrated by flow diagrams, may be implemented by hardware, software, firmware, middleware, microcode, hardware description languages, or any combination thereof. When implemented in software, firmware, middleware or microcode, the program code or code segments to perform the necessary tasks may be stored in a machine or computer readable medium such as a storage medium. The processor(s) may perform the necessary tasks.
Specific structural and functional details disclosed herein are merely representative and are provided for purposes of describing example embodiments of the present application. This application may, however, be embodied in many alternate forms and should not be construed as limited to only the embodiments set forth herein.
It will be understood that, although the terms first, second, etc. may be used herein to describe various elements, these elements should not be limited by these terms. These terms are only used to distinguish one element from another. For example, a first element may be termed a second element, and, similarly, a second element may be termed a first element, without departing from the scope of example embodiments. As used herein, the term "and/or" includes any and all combinations of one or more of the associated listed items.
The terminology used herein is for the purpose of describing particular embodiments only and is not intended to be limiting of example embodiments. As used herein, the singular forms "a", "an" and "the" are intended to include the plural forms as well, unless the context clearly indicates otherwise. It will be further understood that the terms "comprises" and/or "comprising," when used herein, specify the presence of stated features, integers, steps, operations, elements, and/or components, but do not preclude the presence or addition of one or more other features, integers, steps, operations, elements, components, and/or groups thereof.
It should also be noted that, in some alternative implementations, the functions/acts noted may occur out of the order noted in the figures. For example, two figures shown in succession may, in fact, be executed substantially concurrently, or the figures may sometimes be executed in the reverse order, depending upon the functionality/acts involved.
The present application is described in further detail below with reference to the attached figures.
It should be noted that the technical scheme for realizing the streamer effect in the present application can be applied to any object model in any application scene that may use the streamer, such as a character model or a vehicle model in a game scene; the present application does not limit this.
Before the scheme for realizing the streamer effect is executed, art assets need to be produced in advance to support the scheme. When artists make model maps, especially for complex models, the UV needs to be unwrapped: different surfaces of the object model are laid out on the same plane image, and in order to fill the rectangular image, the surfaces are finely scaled, rotated, and packed into blank areas so that the UV map is as small as possible. The following exemplarily describes the advance art production process performed in the present application:
1) When an object model is created, if the object model needs to have, or needs to support, a streamer effect, secondary texture map coordinate information (hereinafter also referred to as secondary UV information) is added to the object model; the secondary UV information may be named, for example, UV2. In the secondary UV information, only the surfaces of the parts that need or support the streamer effect are unwrapped, none of the surfaces is rotated, surfaces that should be swept simultaneously are placed in the same row, and no map needs to be drawn or placed on this UV; that is, only the texture map coordinate position information (hereinafter also referred to as UV position information) used by the streamer is stored. The object model is then saved and exported. The exported object model thus has two UV spaces: one corresponds to the main UV information (also referred to herein as main texture map coordinate information), which contains the UV positions and the various maps placed on the UV; the other corresponds to the secondary UV information described above for use by the streamer, which contains only the UV position information.
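The two UV spaces described in step 1) can be sketched with a minimal data structure. The structure, field names, and example coordinates below are assumptions for illustration; the patent does not specify a concrete file format.

```python
# A hypothetical sketch of an exported model carrying two UV spaces: the main
# UV with its maps, and the secondary UV (e.g. named "UV2") that stores only
# the positions used by the streamer, with co-swept faces placed in one row.
from dataclasses import dataclass, field
from typing import Optional

@dataclass
class UVSet:
    name: str
    # face id -> list of (u, v) coordinates for that face
    face_coords: dict = field(default_factory=dict)
    maps: list = field(default_factory=list)  # main UV carries maps; UV2 carries none

@dataclass
class ObjectModel:
    name: str
    main_uv: UVSet
    secondary_uv: Optional[UVSet] = None  # present only if the model supports streamer

    def supports_streamer(self) -> bool:
        return self.secondary_uv is not None

model = ObjectModel(
    name="character_a",
    main_uv=UVSet("UV1", {"shoulder": [(0.1, 0.9)], "boot": [(0.1, 0.8)]}, maps=["diffuse"]),
    # in UV2, shoulder and boot share the same v row, so they are swept together
    secondary_uv=UVSet("UV2", {"shoulder": [(0.0, 0.5)], "boot": [(0.5, 0.5)]}),
)
```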
2) One or more selectable sweep maps are made and named for use in the streamer effect, using substantially the same method as conventional streamer production. The selectable sweep maps can have any shape and style; for example, snowflake-shaped, rhombus, circular, star-shaped, and other selectable sweep maps can be pre-made, and fixed or selectable styles can be defined for each shape; a snowflake-shaped sweep map may, for instance, support several selectable colors and several sweep directions. Each selectable sweep map can also be viewed as a sweep-map template that the user is allowed to customize.
3) One or more selectable background maps are made and used as textures for the secondary UV information, so the selectable background maps may also be referred to as secondary UV maps. Each selectable background map can likewise be regarded as a background-map template that the user is allowed to customize.
4) Some parameter information corresponding to the sweep map and the background map is set. In some embodiments, this information is edited in the shader: in a Unity scene in which the object model needs the streamer, parameters such as "flow mode" and "flow color" are set and specified in the shader of the object model. "flow mode" must be set to the name of the secondary UV information, such as UV2; if this parameter is changed, the streamer cannot be used. The streamer parameters other than "flow mode" may be written in advance as default values, changed at runtime, or customized by the user. In some embodiments, the streamer parameters include, but are not limited to: the sweep speed, the positional offset of the sweep, the rotation angle and zoom factor of the sweep, the moving speed of the background map, the zoom factor of the background map, the streamer start time, the streamer end time, the streamer interval time, and the like.
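The parameter set in step 4) can be sketched as follows. The parameter names and default values here are illustrative assumptions; the patent only fixes that "flow mode" must equal the secondary-UV name (e.g. "UV2") for the streamer to work.

```python
# A hedged sketch of the streamer parameter set described in step 4).
DEFAULT_STREAMER_PARAMS = {
    "flow_mode": "UV2",          # must match the secondary UV name
    "flow_color": (1.0, 1.0, 1.0),
    "sweep_speed": 1.0,          # speed of the sweep
    "sweep_offset": (0.0, 0.0),  # positional offset of the sweep
    "sweep_rotation": 0.0,       # rotation angle of the sweep
    "sweep_zoom": 1.0,           # zoom factor of the sweep
    "background_speed": 0.0,     # moving speed of the background map
    "background_zoom": 1.0,      # zoom factor of the background map
    "start_time": 0.0,
    "end_time": float("inf"),
    "interval": 2.0,             # streamer interval time
}

def validate_params(params: dict, secondary_uv_name: str) -> bool:
    """The streamer is usable only when flow_mode names the secondary UV set."""
    return params.get("flow_mode") == secondary_uv_name
```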
After the above art production process is complete, the scheme of the present application can be used to realize the streamer effect.
Fig. 1 shows a flowchart of a method for realizing a streamer effect according to an embodiment of the present application. The method of this embodiment comprises step S11, step S12, and step S13. In step S11, in response to a loading trigger operation for loading an object model, the computer device obtains secondary texture map coordinate information of the object model that stores the texture map coordinate position information used for the streamer; in step S12, in response to a streamer rendering trigger operation on the object model, the computer device obtains a sweep map and a background map for streamer-rendering the object model; in step S13, the computer device performs streamer rendering on the object model according to the secondary texture map coordinate information, the sweep map, and the background map.
In step S11, in response to a loading trigger operation for loading an object model, the computer device obtains secondary texture map coordinate information of the object model that stores the texture map coordinate position information used for the streamer. In some embodiments, the object model is a 3D (three-dimensional) model that needs or supports the streamer; the object model includes at least one component that needs or supports the streamer, and these components may be completely separate or may be actually or visually connected; optionally, the at least one component is located at the same level of the object model; preferably, the at least one component is located on an outer face of the object model. In some embodiments, the secondary UV information is stored in the model loader. In some embodiments, the UV position information corresponds to one or more components of the object model that support the streamer; for example, if the streamer-enabled components of an object model include clothing, boots, and a sword, the secondary UV position information includes the texture map coordinate positions corresponding to the clothing, boots, and sword.
The loading trigger operation is used to trigger loading of the object model and includes any operation that triggers loading of the object model. In some embodiments, the loading trigger operation is triggered by predetermined program logic executed in the computer device, such as a loading trigger operation executed according to the predetermined program logic of a game application; in other embodiments, the loading trigger operation is performed by a user, such as a predetermined operation (e.g., a click operation, a voice operation, a gesture operation, etc.) performed by the user in the game application to load the object model. In some embodiments, in response to a loading trigger operation for loading the object model, the computer device loads the object model and obtains its secondary texture map coordinate information, which stores the texture map coordinate position information used by the streamer. As one example, a user performs a click operation for loading a virtual character A in a game application; in response to the click operation, the computer device loads the model of the virtual character A and obtains the secondary UV information of that model.
In step S12, in response to a streamer rendering trigger operation on the object model, the computer device obtains a sweep map and a background map for streamer-rendering the object model. The streamer rendering trigger operation is used to trigger streamer rendering of the object model and includes any operation that triggers streamer rendering of the object model. In some embodiments, the streamer rendering trigger operation is triggered by predetermined program logic executed in the computer device, such as a streamer rendering trigger operation executed according to the predetermined program logic of a game application; in other embodiments, the streamer rendering trigger operation is performed by a user, such as a predetermined operation (e.g., a click operation, a voice operation, a gesture operation, etc.) performed by the user in the game application to trigger streamer rendering of the object model. In some embodiments, the streamer rendering trigger operation is directed at the entire object model, i.e., it triggers streamer rendering of all components of the object model that support the streamer; in other embodiments, the streamer rendering trigger operation is directed at specific target components, i.e., it triggers streamer rendering of specific target components among all the streamer-enabled components of the object model.
The background map is the background texture map of the sweep map; that is, the sweep map moves over the background texture map while the streamer runs. Optionally, the sweep map and/or the background map may be completely customized by the user, partially customized by the user on the basis of pre-made selectable sweep maps and selectable background maps (for example, text is added to a certain pre-made selectable background map to generate the background map), selected by the user directly from the pre-made selectable sweep maps and selectable background maps, or defaulted by the system (i.e., without user involvement); the present application does not limit the manner in which the sweep map and the background map are obtained.
In step S13, the computer device performs streamer rendering on the object model according to the secondary texture map coordinate information, the sweep map, and the background map. In some embodiments, the computer device uses the sweep map, the background map, and the secondary UV information together with a predetermined algorithm to realize the streamer rendering of the object model. In some embodiments, the computer device performs streamer rendering on all streamer-enabled components of the object model according to the secondary texture map coordinate information, the sweep map, and the background map. In some embodiments, the computer device performs streamer rendering on one or more target components among all the streamer-enabled components of the object model according to the secondary texture map coordinate information, the sweep map, and the background map; the one or more target components are determined either in step S13 or before step S13, for example based on the streamer rendering trigger operation of the aforementioned step S12. In some embodiments, the computer device may determine, based on the current scene or the streamer rendering trigger operation, whether the streamer applies to the entire object model or to one or more target components of the object model; for example, if the current scene is an outfit-change scene, only the clothing of the virtual character is given the streamer. In some embodiments, the computer device may determine, based on a user indication, whether the streamer applies to the entire object model or to one or more target components, such as determining, based on the user indication, that only certain pieces of equipment of the virtual character are given the streamer.
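The per-fragment combination in step S13 can be sketched as follows. This is a hypothetical Python model, not the patent's actual shader; the additive blending, the sweep direction, and all names are assumptions. It shows the key property described above: only fragments covered by the secondary UV light up, so the streamer cannot appear outside the designated parts.

```python
# Illustrative sketch of step S13: for each streamer-enabled fragment, the
# sweep map is sampled in the secondary UV space, modulated by the scrolling
# background map, and added to the base color.

def streamer_fragment(base_color, uv2, sweep_at, background_at, t,
                      sweep_speed=1.0, bg_speed=0.25):
    """Compute one fragment color; uv2 is None for parts without secondary UV."""
    if uv2 is None:                                   # part does not support streamer
        return base_color
    u, v = uv2
    a = sweep_at(u, (v + t * sweep_speed) % 1.0)      # sweep intensity, 0..1
    bg = background_at((u + t * bg_speed) % 1.0, v)   # scrolling background texel, 0..1
    glow = a * bg
    return tuple(min(1.0, c + glow) for c in base_color)
```

A fragment with `uv2=None` is returned unchanged, which models a component that does not support the streamer.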
In some embodiments, step S11 comprises: in response to a loading trigger operation for loading an object model, detecting whether secondary texture map coordinate information exists in the object model; if so, obtaining the main texture map coordinate information of the object model and the secondary texture map coordinate information that stores the texture map coordinate position information used for the streamer; otherwise, obtaining only the main texture map coordinate information of the object model. In some embodiments, in response to a loading trigger operation for loading the object model, the computer device first detects whether secondary texture map coordinate information exists in the object model, and if so, further determines whether the current scene needs or supports the streamer; if the current scene needs or supports the streamer, the computer device obtains both the main texture map coordinate information and the secondary texture map coordinate information of the object model; otherwise, it obtains only the main texture map coordinate information, without obtaining the secondary UV information. It should be noted that in different scenes, the components of the object model that support the streamer may differ.
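The detection logic of this embodiment can be sketched as follows; the function name and the dictionary-based model representation are assumptions for illustration.

```python
# Sketch of the load-time check: secondary UV information is fetched only when
# the model has it AND the current scene needs or supports the streamer.

def acquire_uv_info(model, scene_supports_streamer: bool):
    """Return (main_uv, secondary_uv_or_None) following the load-time check."""
    main_uv = model["main_uv"]
    secondary = model.get("secondary_uv")
    if secondary is not None and scene_supports_streamer:
        return main_uv, secondary
    return main_uv, None

model = {"main_uv": "UV1", "secondary_uv": "UV2"}
```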
In some embodiments, step S12 comprises: in response to a streamer rendering trigger operation on the object model performed by a user, obtaining a sweep map and a background map for streamer-rendering the object model. As one example, the streamer rendering trigger operation performed by the user includes a reloading (outfit-change) operation: when the user performs, in the game application, a reloading operation on the virtual character model the user is using, the computer device, in response to that operation, obtains a sweep map and a background map for streamer-rendering the virtual character model.
In some embodiments, the obtaining, in response to a streamer rendering trigger operation on the object model performed by a user, a sweep map and a background map for streamer-rendering the object model comprises: in response to the user's streamer rendering trigger operation on the object model, presenting a streamer effect configuration interface corresponding to the object model; in response to a sweep map configuration operation performed by the user on the streamer effect configuration interface, obtaining a sweep map for streamer-rendering the object model; and/or in response to a background map configuration operation performed by the user on the streamer effect configuration interface, obtaining a background map for streamer-rendering the object model. In some embodiments, the streamer effect configuration interface is used to perform configuration operations related to the streamer effect, including but not limited to: various sweep map configuration operations related to configuring the sweep map, various background map configuration operations related to configuring the background map, parameter configuration operations related to the streamer effect, selection operations related to the streamer parts, and the like. The sweep map configuration operation includes any configuration operation related to a sweep map; in some embodiments, the sweep map configuration operation includes, but is not limited to, a selection operation for selecting a sweep map, and a sweep map parameter selection operation or a sweep map parameter configuration operation for configuring sweep map parameter information, and the like.
The background map configuration operation includes any configuration operation related to a background map; in some embodiments, the background map configuration operation includes, but is not limited to, a selection operation for selecting a background map, a background map parameter selection operation or a background map parameter configuration operation for configuring background map parameter information, a content input operation for customizing text content or picture content, and the like.
In some embodiments, the background map configuration operation includes a content input operation and a background map parameter selection operation, and the obtaining, in response to the background map configuration operation performed by the user on the streamer effect configuration interface, a background map for streamer-rendering the object model comprises: in response to a content input operation performed by the user on the streamer effect configuration interface, obtaining custom content information input by the user; in response to a background map parameter selection operation performed by the user on the streamer effect configuration interface, obtaining background map parameter information specified by the user; and generating, according to the custom content information and the background map parameter information, a background map for streamer-rendering the object model. The custom content information includes any content information customized by the user for the background map; in some embodiments, the custom content information includes custom text content and/or picture content. As an example, in response to a content input operation performed by the user on the streamer effect configuration interface, the computer device obtains the custom text content "Unrivaled Under Heaven" and a texture map M input by the user, and in response to a background map parameter selection operation performed by the user on the interface, obtains background map parameter information specified by the user, such as the flow direction, the tiling mode, and the color; the computer device then generates a background map for streamer-rendering the object model according to the custom text content "Unrivaled Under Heaven", the texture map M, and the background map parameter information specified by the user.
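The combination of user content and user-chosen parameters into a background map can be sketched as follows. All structures, names, and the example strings here are illustrative assumptions; the patent does not specify a concrete format.

```python
# A hedged sketch of assembling a background-map description from user input.

def build_background_map(custom_text=None, custom_texture=None,
                         flow_direction="horizontal", tiling="repeat",
                         color=(1.0, 1.0, 1.0)):
    """Assemble the description used to generate the background map."""
    spec = {"flow_direction": flow_direction, "tiling": tiling, "color": color,
            "layers": []}
    if custom_texture is not None:
        spec["layers"].append(("texture", custom_texture))  # user-supplied picture content
    if custom_text is not None:
        spec["layers"].append(("text", custom_text))        # user-entered text content
    return spec

spec = build_background_map(custom_text="Unrivaled Under Heaven",
                            custom_texture="texture_M", color=(1.0, 0.8, 0.0))
```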
In some embodiments, the obtaining a background map for performing streamer rendering on the object model in response to a background map configuration operation performed by a user at the streamer effect configuration interface includes: responding to a background map configuration operation performed by the user on the streamer effect configuration interface, and obtaining a background map selected by the user from a plurality of selectable background maps for performing streamer rendering on the object model. Based on this, the user can directly select one background map from a plurality of pre-made selectable background maps as the background map for performing streamer rendering on the object model. In some embodiments, the computer device obtains a background map selected by the user from a plurality of selectable background maps in response to a background map configuration operation performed by the user on the streamer effect configuration interface; obtains custom text content input by the user in response to a content input operation performed by the user on the streamer effect configuration interface; obtains background map parameter information specified by the user in response to a background map parameter selection operation performed by the user on the streamer effect configuration interface; and generates a background map for performing streamer rendering on the object model according to the custom text content, the selected background map, and the background map parameter information.
In some embodiments, the obtaining a light scanning map for performing streamer rendering on the object model in response to a light scanning map configuration operation performed by a user at the streamer effect configuration interface includes: responding to a light scanning map configuration operation performed by the user on the streamer effect configuration interface, and obtaining a light scanning map selected by the user from a plurality of selectable light scanning maps for performing streamer rendering on the object model. Based on this, the user may directly select one of a plurality of pre-made selectable light scanning maps as the light scanning map for performing streamer rendering on the object model; in some embodiments, the light scanning map the user selects determines the shape and style of the streamer, and the shape and style of the light scanning map may be fixed or may be selectable.
In some embodiments, the obtaining a light scanning map and a background map for performing streamer rendering on the object model in response to a streamer rendering trigger operation performed on the object model by a user includes: responding to a streamer rendering trigger operation performed on the object model by the user, obtaining one or more target components, and obtaining a light scanning map and a background map for performing streamer rendering on the one or more target components; wherein the step S13 includes: extracting target texture map coordinate position information corresponding to the one or more target components from the texture map coordinate position information; and performing streamer rendering on the one or more target components according to the target texture map coordinate position information, the light scanning map, and the background map. In some embodiments, the streamer rendering trigger operations performed by the user in different scenes correspond to different target components; for example, the streamer rendering trigger operation performed by the user in a clothes-changing scene corresponds to the changed clothes, the streamer rendering trigger operation performed in an equipment-changing scene corresponds to the changed equipment, and the streamer rendering trigger operation performed in a flying-equipment scene corresponds to all components supporting streamer.
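The extraction-then-render flow above can be sketched in a few lines; this is only a hypothetical illustration (the table layout, the function names, and the callable-based "shader" are all assumptions made here, not the patent's implementation):

```python
def streamer_render_targets(uv_table, target_components, render_fragment):
    """Extract the secondary-UV entries for the target components, then run
    a per-fragment streamer routine only over those coordinates.
    uv_table: component name -> list of (u, v) in the secondary UV space;
    render_fragment: callable applied to each (u, v) coordinate."""
    target_uv = {name: uv_table[name]
                 for name in target_components if name in uv_table}
    return {name: [render_fragment(uv) for uv in coords]
            for name, coords in target_uv.items()}
```

Components without an entry in the table (i.e., components that do not support streamer) are simply skipped, so the streamer never appears outside the specified positions.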
In some embodiments, a user may directly perform a predetermined operation on a certain component in the object model to trigger streamer rendering of that component; that is, the predetermined operation serves as the streamer rendering trigger operation, and the target component is the component to which the predetermined operation is directed. For example, when the user performs a double-click operation on the wings of a virtual character model used in a game, the target component requiring streamer may be determined, in response to the double-click operation, to be the wings of the virtual character model. In some embodiments, when there are a plurality of target components requiring streamer, the plurality of target components may use the same light scanning map and background map, or may use different ones; for example, target components having a joint relationship may use the same light scanning map and background map, while two target components separated from each other may use different ones. Optionally, the streamer effect configuration interface described above supports configuring the streamer effect for each target component separately.
In some embodiments, said obtaining one or more target components in response to a streamer rendering trigger operation performed on said object model by a user comprises: presenting a streamer component selection interface in response to a streamer rendering trigger operation performed on the object model by the user, the streamer component selection interface comprising at least one component of the object model that supports streamer; and responding to a selection operation performed by the user in the streamer component selection interface, and obtaining one or more target components selected by the user from the at least one component. The streamer component selection interface is configured to enable the user to select, from the at least one component, the target components that require streamer. Based on this, the user can select one or more target components in the object model that require streamer, so that the streamer effect is realized more flexibly and better meets the personalized requirements of the user.
In some embodiments, said obtaining one or more target components in response to a streamer rendering trigger operation performed on said object model by a user comprises: responding to the streamer rendering trigger operation performed on the object model by the user, and acquiring one or more target components corresponding to the streamer rendering operation according to the operation information corresponding to the streamer rendering operation. In some embodiments, the operation information includes any information related to the streamer rendering operation, such as the region corresponding to the streamer rendering operation and the type of the streamer rendering operation. As an example, the object model is divided into a plurality of regions, and in response to a streamer rendering trigger operation performed on the object model by the user, the components supporting streamer in the region corresponding to the streamer rendering operation are used as the one or more target components corresponding to that operation.
In some embodiments, the step S13 includes: correcting the light scanning map and optimizing its display according to the secondary texture map coordinate information and the light scanning map parameter information corresponding to the light scanning map; calculating streamer duration information and streamer speed information according to the time parameter information; modifying the streamer effect according to the background map parameter information corresponding to the background map; calculating the final streamer color; and performing streamer rendering on the object model according to the streamer color. In some embodiments, the computer device corrects the light scanning map according to its scaling, offset, and rotation parameters on the basis of the secondary texture map coordinate information; applies fade-in and fade-out effects according to the rotation parameter of the light scanning map to optimize the display; calculates the start and end time interval and the speed from the time parameters; modifies the effect using the background map parameter information; calculates the final streamer color (the sum of the light scanning color and the background map color); and performs streamer rendering on the object model according to the streamer color.
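As a rough per-fragment sketch of the computation described above (not the patent's actual shader): the UV correction by scale, offset, and rotation, the time window, and the additive color combination follow the text, while the particular fade curve, the sweep direction along u, and all parameter names are assumptions made here for illustration:

```python
import math

def streamer_color(uv2, time, scan_sample, bg_sample,
                   scale=(1.0, 1.0), offset=(0.0, 0.0), rotation=0.0,
                   start=0.0, duration=1.0):
    """Per-fragment streamer color in the secondary UV space.
    scan_sample / bg_sample map a (u, v) pair to an (r, g, b) tuple in [0, 1]."""
    # correct the light scanning map: scale and offset, then rotate the UV
    u = uv2[0] * scale[0] + offset[0]
    v = uv2[1] * scale[1] + offset[1]
    c, s = math.cos(rotation), math.sin(rotation)
    u, v = u * c - v * s, u * s + v * c
    # sweep progress from the time parameters; outside the interval the
    # fragment shows only the background color
    t = (time - start) / duration
    if not 0.0 <= t <= 1.0:
        return bg_sample(uv2)
    # simple fade-in / fade-out near the ends of the sweep (assumed curve)
    fade = min(1.0, 4.0 * t, 4.0 * (1.0 - t))
    # slide the light scanning map across the model along u as t advances
    scan = scan_sample(((u - t) % 1.0, v % 1.0))
    bg = bg_sample(uv2)
    # final streamer color = light scanning color + background color, clamped
    return tuple(min(1.0, fade * sc + bc) for sc, bc in zip(scan, bg))
```

Because the sampling uses the secondary UV coordinates, fragments with no secondary UV assignment never receive the sweep, which is what keeps the streamer from appearing beyond the specified position.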
According to the scheme of the present application, secondary texture map coordinate information storing the texture map coordinate position information used by the streamer is added when the object model is created, i.e., a secondary UV space is added, so that an efficient streamer effect can be realized according to the secondary texture map coordinate information, and the streamer appears naturally at the specified position without extending beyond it; moreover, the efficient streamer effect is realized while good performance is ensured. In addition, the scheme supports user-defined light scanning maps and/or background maps, so that the streamer effect expected by the user can be flexibly realized according to the user-defined content, meeting the real-time and diversified personalized requirements of users, improving user satisfaction, and greatly enhancing interest.
Fig. 2 is a schematic structural diagram of an apparatus for implementing a streamer effect according to an embodiment of the present application. The apparatus for implementing a streamer effect (hereinafter, the "streamer apparatus") includes means (hereinafter, the "first device 11") for acquiring, in response to a load trigger operation for loading the object model, secondary texture map coordinate information of the object model that stores texture map coordinate position information used by the streamer; means (hereinafter, the "second device 12") for obtaining, in response to a streamer rendering trigger operation on the object model, a light scanning map and a background map for performing streamer rendering on the object model; and means (hereinafter, the "third device 13") for performing streamer rendering on the object model according to the secondary texture map coordinate information, the light scanning map, and the background map. Here, the operations performed by the first device 11, the second device 12, and the third device 13 are the same as or similar to those in the embodiment shown in Fig. 1, and are therefore not described again but are incorporated herein by reference.
In some embodiments, the first device 11 is configured to: respond to a load trigger operation for loading an object model and detect whether secondary texture map coordinate information exists in the object model; if so, acquire the primary texture map coordinate information of the object model and the secondary texture map coordinate information storing the texture map coordinate position information used by the streamer; otherwise, acquire only the primary texture map coordinate information of the object model. Here, the related operations are the same as or similar to those in the embodiment shown in Fig. 1, and are therefore not described again but are incorporated herein by reference.
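The detection step above might be sketched as follows; the dict-based model layout and the function name are hypothetical, chosen only to show the branch between models that do and do not carry a secondary UV set:

```python
def load_uv_sets(model):
    """On a load trigger, detect whether the model carries a secondary UV
    set; if so return both the primary and secondary sets, otherwise only
    the primary. `model` is a hypothetical dict-based representation with
    a "uv_sets" list of per-vertex (u, v) coordinate lists."""
    uv_sets = model.get("uv_sets", [])
    if not uv_sets:
        raise ValueError("model has no texture map coordinates")
    primary = uv_sets[0]
    secondary = uv_sets[1] if len(uv_sets) > 1 else None
    return primary, secondary
```

A `None` secondary set signals that the model does not support streamer, so the renderer can fall back to ordinary shading without the sweep pass.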
In some embodiments, the second device 12 is configured to: respond to a streamer rendering trigger operation performed on the object model by a user, and obtain a light scanning map and a background map for performing streamer rendering on the object model. Here, the related operations are the same as or similar to those in the embodiment shown in Fig. 1, and are therefore not described again but are incorporated herein by reference.
In some embodiments, the obtaining a light scanning map and a background map for performing streamer rendering on the object model in response to a streamer rendering trigger operation performed on the object model by a user includes: responding to the streamer rendering trigger operation performed on the object model by the user, and presenting a streamer effect configuration interface corresponding to the object model; responding to a light scanning map configuration operation performed by the user on the streamer effect configuration interface, and obtaining a light scanning map for performing streamer rendering on the object model; and/or obtaining a background map for performing streamer rendering on the object model in response to a background map configuration operation performed by the user on the streamer effect configuration interface. Here, the related operations are the same as or similar to those in the embodiment shown in Fig. 1, and are therefore not described again but are incorporated herein by reference.
In some embodiments, the background map configuration operation includes a content input operation and a background map parameter selection operation, and the obtaining a background map for performing streamer rendering on the object model in response to the background map configuration operation performed by a user on the streamer effect configuration interface includes: responding to a content input operation performed by the user on the streamer effect configuration interface, and obtaining custom content information input by the user; responding to a background map parameter selection operation performed by the user on the streamer effect configuration interface, and obtaining background map parameter information specified by the user; and generating a background map for performing streamer rendering on the object model according to the custom content information and the background map parameter information. Here, the related operations are the same as or similar to those in the embodiment shown in Fig. 1, and are therefore not described again but are incorporated herein by reference.
In some embodiments, the obtaining a background map for performing streamer rendering on the object model in response to a background map configuration operation performed by a user at the streamer effect configuration interface includes: responding to a background map configuration operation performed by the user on the streamer effect configuration interface, and obtaining a background map selected by the user from a plurality of selectable background maps for performing streamer rendering on the object model. Here, the related operations are the same as or similar to those in the embodiment shown in Fig. 1, and are therefore not described again but are incorporated herein by reference.
In some embodiments, the obtaining a light scanning map for performing streamer rendering on the object model in response to a light scanning map configuration operation performed by a user at the streamer effect configuration interface includes: responding to a light scanning map configuration operation performed by the user on the streamer effect configuration interface, and obtaining a light scanning map selected by the user from a plurality of selectable light scanning maps for performing streamer rendering on the object model. Here, the related operations are the same as or similar to those in the embodiment shown in Fig. 1, and are therefore not described again but are incorporated herein by reference.
In some embodiments, the obtaining a light scanning map and a background map for performing streamer rendering on the object model in response to a streamer rendering trigger operation performed on the object model by a user includes: responding to a streamer rendering trigger operation performed on the object model by the user, obtaining one or more target components, and obtaining a light scanning map and a background map for performing streamer rendering on the one or more target components; wherein the third device 13 is configured to: extract target texture map coordinate position information corresponding to the one or more target components from the texture map coordinate position information; and perform streamer rendering on the one or more target components according to the target texture map coordinate position information, the light scanning map, and the background map. Here, the related operations are the same as or similar to those in the embodiment shown in Fig. 1, and are therefore not described again but are incorporated herein by reference.
In some embodiments, said obtaining one or more target components in response to a streamer rendering trigger operation performed on said object model by a user comprises: presenting a streamer component selection interface in response to a streamer rendering trigger operation performed on the object model by the user, the streamer component selection interface comprising at least one component of the object model that supports streamer; and responding to a selection operation performed by the user in the streamer component selection interface, and obtaining one or more target components selected by the user from the at least one component. Here, the related operations are the same as or similar to those in the embodiment shown in Fig. 1, and are therefore not described again but are incorporated herein by reference.
In some embodiments, said obtaining one or more target components in response to a streamer rendering trigger operation performed on said object model by a user comprises: responding to the streamer rendering trigger operation performed on the object model by the user, and acquiring one or more target components corresponding to the streamer rendering operation according to the operation information corresponding to the streamer rendering operation. Here, the related operations are the same as or similar to those in the embodiment shown in Fig. 1, and are therefore not described again but are incorporated herein by reference.
In some embodiments, the third device 13 is configured to: correct the light scanning map and optimize its display according to the secondary texture map coordinate information and the light scanning map parameter information corresponding to the light scanning map; calculate streamer duration information and streamer speed information according to the time parameter information; modify the streamer effect according to the background map parameter information corresponding to the background map; calculate the final streamer color; and perform streamer rendering on the object model according to the streamer color. Here, the related operations are the same as or similar to those in the embodiment shown in Fig. 1, and are therefore not described again but are incorporated herein by reference.
The present application further provides a computer device, wherein the computer device includes: a memory for storing one or more programs; and one or more processors coupled to the memory, wherein the one or more programs, when executed by the one or more processors, cause the one or more processors to perform the method for implementing a streamer effect described herein.
The present application also provides a computer-readable storage medium having stored thereon a computer program which can be executed by a processor to implement the method for implementing a streamer effect described herein.
The present application also provides a computer program product which, when executed by an apparatus, causes the apparatus to perform the method for realizing streamer effects described herein.
FIG. 3 illustrates an exemplary system that can be used to implement the various embodiments described in this application.
In some embodiments, system 1000 can be implemented as any of the processing devices in the embodiments of the present application. In some embodiments, system 1000 may include one or more computer-readable media (e.g., system memory or NVM/storage 1020) having instructions and one or more processors (e.g., processor(s) 1005) coupled with the one or more computer-readable media and configured to execute the instructions to implement modules to perform the actions described herein.
For one embodiment, system control module 1010 may include any suitable interface controllers to provide any suitable interface to at least one of the processor(s) 1005 and/or to any suitable device or component in communication with system control module 1010.
The system control module 1010 may include a memory controller module 1030 to provide an interface to the system memory 1015. Memory controller module 1030 may be a hardware module, a software module, and/or a firmware module.
System memory 1015 may be used to load and store data and/or instructions, for example, for system 1000. For one embodiment, system memory 1015 may include any suitable volatile memory, such as suitable DRAM. In some embodiments, the system memory 1015 may include a double data rate type four synchronous dynamic random access memory (DDR4 SDRAM).
For one embodiment, system control module 1010 may include one or more input/output (I/O) controllers to provide an interface to NVM/storage 1020 and communication interface(s) 1025.
For example, NVM/storage 1020 may be used to store data and/or instructions. NVM/storage 1020 may include any suitable non-volatile memory (e.g., flash memory) and/or may include any suitable non-volatile storage device(s) (e.g., one or more hard disk drive(s) (HDD (s)), one or more Compact Disc (CD) drive(s), and/or one or more Digital Versatile Disc (DVD) drive (s)).
NVM/storage 1020 may include storage resources that are physically part of a device on which system 1000 is installed or may be accessed by the device and not necessarily part of the device. For example, NVM/storage 1020 may be accessed over a network via communication interface(s) 1025.
Communication interface(s) 1025 may provide an interface for system 1000 to communicate over one or more networks and/or with any other suitable device. System 1000 may communicate wirelessly with one or more components of a wireless network according to any of one or more wireless network standards and/or protocols.
For one embodiment, at least one of the processor(s) 1005 may be packaged together with logic for one or more controller(s) of the system control module 1010, e.g., memory controller module 1030. For one embodiment, at least one of the processor(s) 1005 may be packaged together with logic for one or more controller(s) of the system control module 1010 to form a System In Package (SiP). For one embodiment, at least one of the processor(s) 1005 may be integrated on the same die with logic for one or more controller(s) of the system control module 1010. For one embodiment, at least one of the processor(s) 1005 may be integrated on the same die with logic of one or more controllers of the system control module 1010 to form a system on a chip (SoC).
In various embodiments, system 1000 may be, but is not limited to being: a server, a workstation, a desktop computing device, or a mobile computing device (e.g., a laptop computing device, a handheld computing device, a tablet, a netbook, etc.). In various embodiments, system 1000 may have more or fewer components and/or different architectures. For example, in some embodiments, system 1000 includes one or more cameras, a keyboard, a Liquid Crystal Display (LCD) screen (including a touch screen display), a non-volatile memory port, multiple antennas, a graphics chip, an Application Specific Integrated Circuit (ASIC), and speakers.
It will be evident to those skilled in the art that the present application is not limited to the details of the foregoing illustrative embodiments, and that the present application may be embodied in other specific forms without departing from the spirit or essential attributes thereof. The present embodiments are therefore to be considered in all respects as illustrative and not restrictive, the scope of the application being indicated by the appended claims rather than by the foregoing description, and all changes which come within the meaning and range of equivalency of the claims are therefore intended to be embraced therein. Any reference sign in a claim should not be construed as limiting the claim concerned. Furthermore, it is obvious that the word "comprising" does not exclude other elements or steps, and the singular does not exclude the plural. A plurality of units or means recited in the system claims may also be implemented by one unit or means in software or hardware. The terms first, second, etc. are used to denote names, but not any particular order.
While exemplary embodiments have been particularly shown and described, it will be understood by those skilled in the art that various changes in form and details may be made therein without departing from the spirit and scope of the claims. The protection sought herein is as set forth in the claims below. These and other aspects of the various embodiments are specified in the following numbered clauses:
1. a method for achieving a streamer effect, wherein the method comprises:
obtaining secondary texture map coordinate information of the object model, which stores texture map coordinate position information used for streamer, in response to a loading trigger operation for loading the object model;
responding to a streamer rendering trigger operation on the object model, and obtaining a light scanning map and a background map for performing streamer rendering on the object model;
and performing streamer rendering on the object model according to the secondary texture map coordinate information, the light scanning map and the background map.
2. The method of clause 1, wherein the obtaining secondary texture map coordinate information of the object model for storing texture map coordinate location information for use in streamer in response to a load trigger operation to load the object model comprises:
and responding to a loading trigger operation for loading an object model, detecting whether secondary texture map coordinate information exists in the object model; if so, acquiring primary texture map coordinate information of the object model and secondary texture map coordinate information for storing texture map coordinate position information used for streamer; otherwise, acquiring only primary texture map coordinate information of the object model.
3. The method of clause 1, wherein the obtaining a light scanning map and a background map for performing streamer rendering on the object model in response to a streamer rendering trigger operation on the object model comprises:
and responding to a streamer rendering trigger operation performed on the object model by a user, and obtaining a light scanning map and a background map for performing streamer rendering on the object model.
4. The method according to clause 3, wherein the obtaining of the light scanning map and background map for performing streamer rendering on the object model in response to a streamer rendering trigger operation performed on the object model by a user comprises:
responding to the streamer rendering triggering operation of the user on the object model, and presenting a streamer effect configuration interface corresponding to the object model; and
responding to a light scanning map configuration operation executed by a user on the streamer effect configuration interface, and obtaining a light scanning map for carrying out streamer rendering on the object model; and/or obtaining a background image for performing streamer rendering on the object model in response to a background image configuration operation executed by a user on the streamer effect configuration interface.
5. The method according to clause 4, wherein the background map configuration operation includes a content input operation and a background map parameter selection operation, and the obtaining a background map for performing streamer rendering on the object model in response to the background map configuration operation performed by a user at the streamer effect configuration interface includes:
responding to content input operation executed by a user on the streamer effect configuration interface, and obtaining user-defined content information input by the user;
responding to a background map parameter selection operation performed by the user on the streamer effect configuration interface, and obtaining background map parameter information specified by the user;
and generating a background map for performing streamer rendering on the object model according to the user-defined content information and the background map parameter information.
6. The method of clause 4, wherein the obtaining a background map for performing streamer rendering on the object model in response to a background map configuration operation performed by a user at the streamer effect configuration interface comprises:
and responding to a background image configuration operation executed by a user on the streamer effect configuration interface, and obtaining a background image selected by the user from a plurality of selectable background images and used for streamer rendering of the object model.
7. The method according to clause 4, wherein the obtaining a light scanning map for performing streamer rendering on the object model in response to a light scanning map configuration operation performed by a user at the streamer effect configuration interface comprises:
and responding to a light scanning map configuration operation executed by a user on the streamer effect configuration interface, and obtaining a light scanning map selected by the user from a plurality of selectable light scanning maps and used for streamer rendering of the object model.
8. The method according to clause 3, wherein the obtaining of the light scanning map and background map for performing streamer rendering on the object model in response to a streamer rendering trigger operation performed on the object model by a user comprises:
responding to a streamer rendering trigger operation performed on the object model by a user, obtaining one or more target components, and obtaining a light scanning map and a background map for performing streamer rendering on the one or more target components;
wherein the performing streamer rendering on the object model according to the secondary texture map coordinate information, the light scanning map, and the background map includes:
extracting target texture map coordinate position information corresponding to the one or more target components from the texture map coordinate position information;
and performing streamer rendering on the one or more target components according to the target texture map coordinate position information, the light scanning map and the background map.
9. The method of clause 8, wherein the obtaining one or more target components in response to a streamer rendering trigger operation performed on the object model by a user comprises:
presenting a streamer component selection interface in response to a streamer rendering trigger operation performed on the object model by a user, the streamer component selection interface comprising at least one component of the object model that supports streamer;
and responding to the selection operation performed by the user in the streamer component selection interface, and obtaining one or more target components selected from the at least one component by the user.
10. The method of clause 8, wherein the obtaining one or more target components in response to a streamer rendering trigger operation performed on the object model by a user comprises:
and responding to the streamer rendering triggering operation of the object model executed by the user, and acquiring one or more target components corresponding to the streamer rendering operation according to the operation information corresponding to the streamer rendering operation.
11. The method of any of clauses 1-10, wherein the performing streamer rendering on the object model according to the secondary texture map coordinate information, the light scanning map, and the background map comprises:
correcting the light scanning map and optimizing its display according to the secondary texture map coordinate information and the light scanning map parameter information corresponding to the light scanning map;
calculating streamer duration information and streamer speed information according to the time parameter information;
modifying the streamer effect according to the background image parameter information corresponding to the background image;
calculating the final streamer color;
and performing streamer rendering on the object model according to the streamer color.
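Stripped of the claim language, the color computation enumerated in clause 11 might look like the sketch below. Everything here — the sinusoidal fade, the multiplicative background tint, and every parameter name — is an assumed illustration of one way to combine the listed inputs, not the patent's actual shader.

```python
import math

def streamer_color(base_color, sweep_sample, bg_sample,
                   elapsed, duration, speed, bg_intensity):
    """Sketch of the final streamer color computation.

    base_color   -- (r, g, b) diffuse color of the model fragment
    sweep_sample -- (r, g, b) sample from the light scanning map at the
                    secondary texture coordinates
    bg_sample    -- (r, g, b) sample from the background map
    elapsed      -- seconds since the streamer effect was triggered
    duration     -- streamer duration derived from the time parameters
    speed        -- streamer speed, used here to phase the sweep
    bg_intensity -- weight of the background map's contribution
    """
    # The sweep fades in and out over its duration.
    phase = (elapsed * speed) % duration / duration
    fade = math.sin(phase * math.pi)          # 0 -> 1 -> 0

    # The background map modifies (tints) the streamer contribution.
    streamer = tuple(s * b * bg_intensity
                     for s, b in zip(sweep_sample, bg_sample))

    # Final color: base color plus the faded streamer highlight, clamped.
    return tuple(min(1.0, c + fade * s) for c, s in zip(base_color, streamer))
```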
12. An apparatus for implementing a streamer effect, wherein the apparatus comprises:
means for obtaining secondary texture map coordinate information of an object model, which is used for storing texture map coordinate position information used for streamer rendering, in response to a load trigger operation for loading the object model;
means for obtaining a light scanning map and a background map for performing streamer rendering on the object model in response to a streamer rendering trigger operation on the object model;
and means for performing streamer rendering on the object model according to the secondary texture map coordinate information, the light scanning map and the background map.
13. The apparatus of clause 12, wherein the means for obtaining secondary texture map coordinate information of the object model in response to a load trigger operation for loading the object model is configured to:
in response to a load trigger operation for loading an object model, detect whether secondary texture map coordinate information exists in the object model; if so, obtain both the primary texture map coordinate information of the object model and the secondary texture map coordinate information used for storing the texture map coordinate position information used for streamer rendering; otherwise, obtain only the primary texture map coordinate information of the object model.
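The load-time check described in this clause — use the secondary UV set when the model carries one, otherwise fall back to the primary set alone — can be sketched as below. The `uv_sets` dict and the `"uv0"`/`"uv1"` set names are assumptions for illustration, not the patent's model format.

```python
def load_model_uvs(uv_sets):
    """On model load, return (primary, secondary) texture coordinates.

    uv_sets -- dict mapping UV-set name -> list of (u, v) pairs as read
               from the model file; "uv1", when present, stores the
               coordinates reserved for streamer rendering.
    """
    primary = uv_sets["uv0"]          # every model carries a primary set
    secondary = uv_sets.get("uv1")    # may be absent: no streamer support
    if secondary is not None:
        return primary, secondary
    return primary, None
```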
14. The apparatus of clause 12, wherein the means for obtaining a light scanning map and a background map for performing streamer rendering on the object model in response to a streamer rendering trigger operation on the object model is configured to:
in response to a streamer rendering trigger operation performed by a user on the object model, obtain a light scanning map and a background map for performing streamer rendering on the object model.
15. The apparatus according to clause 14, wherein the obtaining of the light scanning map and the background map for performing streamer rendering on the object model in response to a streamer rendering trigger operation performed on the object model by a user comprises:
in response to the streamer rendering trigger operation performed by the user on the object model, presenting a streamer effect configuration interface corresponding to the object model; and
in response to a light scanning map configuration operation performed by the user on the streamer effect configuration interface, obtaining a light scanning map for performing streamer rendering on the object model; and/or, in response to a background map configuration operation performed by the user on the streamer effect configuration interface, obtaining a background map for performing streamer rendering on the object model.
16. The apparatus according to clause 15, wherein the background map configuration operation comprises a content input operation and a background map parameter selection operation, and the obtaining a background map for performing streamer rendering on the object model in response to the background map configuration operation performed by a user on the streamer effect configuration interface comprises:
in response to a content input operation performed by the user on the streamer effect configuration interface, obtaining custom content information input by the user;
in response to a background map parameter selection operation performed by the user on the streamer effect configuration interface, obtaining background map parameter information specified by the user;
and generating a background map for performing streamer rendering on the object model according to the custom content information and the background map parameter information.
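Clause 16 combines two user inputs — the custom content and the selected parameters — into one background map. A toy sketch follows; the tinted-pixel "raster" and the `color`/`alpha` parameter names are assumptions for illustration only.

```python
def build_background_map(custom_text, params, width=8, height=4):
    """Generate a small background map from user content plus user-selected
    parameters.

    The raster is a list of rows of (r, g, b) pixels uniformly tinted by
    the chosen color and alpha; the custom text is kept alongside for
    later compositing onto the map.
    """
    r, g, b = params.get("color", (1.0, 1.0, 1.0))
    alpha = params.get("alpha", 1.0)
    row = [(r * alpha, g * alpha, b * alpha)] * width
    pixels = [list(row) for _ in range(height)]   # height rows of width pixels
    return {"pixels": pixels, "content": custom_text, "size": (width, height)}
```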
17. The apparatus of clause 15, wherein the obtaining a background map for performing streamer rendering on the object model in response to a background map configuration operation performed by a user on the streamer effect configuration interface comprises:
in response to a background map configuration operation performed by the user on the streamer effect configuration interface, obtaining a background map, selected by the user from a plurality of selectable background maps, for performing streamer rendering on the object model.
18. The apparatus of clause 15, wherein the obtaining a light scanning map for performing streamer rendering on the object model in response to a light scanning map configuration operation performed by a user on the streamer effect configuration interface comprises:
in response to a light scanning map configuration operation performed by the user on the streamer effect configuration interface, obtaining a light scanning map, selected by the user from a plurality of selectable light scanning maps, for performing streamer rendering on the object model.
19. The apparatus according to clause 14, wherein the obtaining of the light scanning map and the background map for performing streamer rendering on the object model in response to a streamer rendering trigger operation performed on the object model by a user comprises:
in response to a streamer rendering trigger operation performed by the user on the object model, obtaining one or more target components, and obtaining a light scanning map and a background map for performing streamer rendering on the one or more target components;
wherein the means for performing streamer rendering on the object model according to the secondary texture map coordinate information, the light scanning map and the background map is configured to:
extract target texture map coordinate position information corresponding to the one or more target components from the texture map coordinate position information;
and perform streamer rendering on the one or more target components according to the target texture map coordinate position information, the light scanning map and the background map.
20. The apparatus of clause 19, wherein the obtaining one or more target components in response to a streamer rendering trigger operation performed on the object model by a user comprises:
presenting a streamer component selection interface in response to a streamer rendering trigger operation performed on the object model by the user, the streamer component selection interface comprising at least one component of the object model that supports streamer rendering;
and in response to a selection operation performed by the user in the streamer component selection interface, obtaining one or more target components selected by the user from the at least one component.
21. The apparatus of clause 19, wherein the obtaining one or more target components in response to a streamer rendering trigger operation performed on the object model by a user comprises:
in response to the streamer rendering trigger operation performed by the user on the object model, obtaining one or more target components corresponding to the streamer rendering operation according to the operation information corresponding to the streamer rendering operation.
22. The apparatus of any of clauses 12 to 21, wherein the means for performing streamer rendering on the object model according to the secondary texture map coordinate information, the light scanning map and the background map is configured to:
correct and optimize the display of the light scanning map according to the secondary texture map coordinate information and the light scanning map parameter information corresponding to the light scanning map;
calculate streamer duration information and streamer speed information according to the time parameter information;
modify the streamer effect according to the background map parameter information corresponding to the background map;
calculate the final streamer color;
and perform streamer rendering on the object model according to the streamer color.
23. An apparatus, wherein the apparatus comprises:
a memory for storing one or more programs;
one or more processors coupled to the memory,
the one or more programs, when executed by the one or more processors, cause the one or more processors to perform the method of any of clauses 1 to 11.
24. A computer-readable storage medium having stored thereon a computer program executable by a processor to perform the method of any of clauses 1 to 11.
25. A computer program product which, when executed by an apparatus, causes the apparatus to perform the method of any of clauses 1 to 11.

Claims (10)

1. A method for achieving a streamer effect, wherein the method comprises:
obtaining secondary texture map coordinate information of the object model, which is used for storing texture map coordinate position information used for streamer rendering, in response to a load trigger operation for loading the object model;
in response to a streamer rendering trigger operation on the object model, obtaining a light scanning map and a background map for performing streamer rendering on the object model;
and performing streamer rendering on the object model according to the secondary texture map coordinate information, the light scanning map and the background map.
2. The method of claim 1, wherein the obtaining secondary texture map coordinate information of the object model for storing texture map coordinate position information used for streamer rendering in response to a load trigger operation for loading the object model comprises:
in response to a load trigger operation for loading an object model, detecting whether secondary texture map coordinate information exists in the object model; if so, obtaining both the primary texture map coordinate information of the object model and the secondary texture map coordinate information used for storing the texture map coordinate position information used for streamer rendering; otherwise, obtaining only the primary texture map coordinate information of the object model.
3. The method of claim 1, wherein the obtaining of the light scanning map and the background map for performing streamer rendering on the object model in response to a streamer rendering trigger operation on the object model comprises:
in response to a streamer rendering trigger operation performed by a user on the object model, obtaining a light scanning map and a background map for performing streamer rendering on the object model.
4. The method of claim 3, wherein the obtaining of the light scanning map and the background map for performing streamer rendering on the object model in response to a streamer rendering trigger operation performed on the object model by a user comprises:
in response to the streamer rendering trigger operation performed by the user on the object model, presenting a streamer effect configuration interface corresponding to the object model; and
in response to a light scanning map configuration operation performed by the user on the streamer effect configuration interface, obtaining a light scanning map for performing streamer rendering on the object model; and/or, in response to a background map configuration operation performed by the user on the streamer effect configuration interface, obtaining a background map for performing streamer rendering on the object model.
5. The method of claim 4, wherein the background map configuration operation comprises a content input operation and a background map parameter selection operation, and the obtaining a background map for performing streamer rendering on the object model in response to the background map configuration operation performed by a user on the streamer effect configuration interface comprises:
in response to a content input operation performed by the user on the streamer effect configuration interface, obtaining custom content information input by the user;
in response to a background map parameter selection operation performed by the user on the streamer effect configuration interface, obtaining background map parameter information specified by the user;
and generating a background map for performing streamer rendering on the object model according to the custom content information and the background map parameter information.
6. The method of claim 3, wherein the obtaining of the light scanning map and the background map for performing streamer rendering on the object model in response to a streamer rendering trigger operation performed on the object model by a user comprises:
in response to a streamer rendering trigger operation performed by the user on the object model, obtaining one or more target components, and obtaining a light scanning map and a background map for performing streamer rendering on the one or more target components;
wherein the performing streamer rendering on the object model according to the secondary texture map coordinate information, the light scanning map and the background map comprises:
extracting target texture map coordinate position information corresponding to the one or more target components from the texture map coordinate position information;
and performing streamer rendering on the one or more target components according to the target texture map coordinate position information, the light scanning map and the background map.
7. The method of claim 6, wherein the obtaining one or more target components in response to a streamer rendering trigger operation performed on the object model by a user comprises:
presenting a streamer component selection interface in response to a streamer rendering trigger operation performed on the object model by the user, the streamer component selection interface comprising at least one component of the object model that supports streamer rendering;
and in response to a selection operation performed by the user in the streamer component selection interface, obtaining one or more target components selected by the user from the at least one component.
8. An apparatus for implementing a streamer effect, wherein the apparatus comprises:
means for obtaining secondary texture map coordinate information of an object model, which is used for storing texture map coordinate position information used for streamer rendering, in response to a load trigger operation for loading the object model;
means for obtaining a light scanning map and a background map for performing streamer rendering on the object model in response to a streamer rendering trigger operation on the object model;
and means for performing streamer rendering on the object model according to the secondary texture map coordinate information, the light scanning map and the background map.
9. An apparatus, wherein the apparatus comprises:
a memory for storing one or more programs;
one or more processors coupled to the memory,
the one or more programs, when executed by the one or more processors, cause the one or more processors to perform the method of any one of claims 1 to 7.
10. A computer-readable storage medium, on which a computer program is stored, which computer program can be executed by a processor to perform the method according to any one of claims 1 to 7.
CN202010014440.8A 2020-01-07 2020-01-07 Method and device for realizing streamer effect Active CN111210486B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010014440.8A CN111210486B (en) 2020-01-07 2020-01-07 Method and device for realizing streamer effect


Publications (2)

Publication Number Publication Date
CN111210486A true CN111210486A (en) 2020-05-29
CN111210486B CN111210486B (en) 2024-01-05

Family

ID=70786000

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010014440.8A Active CN111210486B (en) 2020-01-07 2020-01-07 Method and device for realizing streamer effect

Country Status (1)

Country Link
CN (1) CN111210486B (en)


Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108765542A (en) * 2018-05-31 2018-11-06 Oppo广东移动通信有限公司 Image rendering method, electronic equipment and computer readable storage medium
US20190073747A1 (en) * 2017-09-05 2019-03-07 Microsoft Technology Licensing, Llc Scaling render targets to a higher rendering resolution to display higher quality video frames
CN109978968A (en) * 2019-04-10 2019-07-05 广州虎牙信息科技有限公司 Video rendering method, apparatus, equipment and the storage medium of Moving Objects


Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112053424A (en) * 2020-09-29 2020-12-08 北京完美赤金科技有限公司 Rendering method and device of 3D model
CN112053424B (en) * 2020-09-29 2024-03-22 北京完美赤金科技有限公司 Rendering method and device of 3D model
CN112528596A (en) * 2020-12-01 2021-03-19 北京达佳互联信息技术有限公司 Rendering method and device for special effect of characters, electronic equipment and storage medium

Also Published As

Publication number Publication date
CN111210486B (en) 2024-01-05

Similar Documents

Publication Publication Date Title
US11217015B2 (en) Method and apparatus for rendering game image
US20230053462A1 (en) Image rendering method and apparatus, device, medium, and computer program product
EP4070865A1 (en) Method and apparatus for displaying virtual scene, and device and storage medium
KR101497172B1 (en) Altering the appearance of a digital image using a shape
US9275493B2 (en) Rendering vector maps in a geographic information system
US8456467B1 (en) Embeddable three-dimensional (3D) image viewer
KR102433857B1 (en) Device and method for creating dynamic virtual content in mixed reality
US20080246760A1 (en) Method and apparatus for mapping texture onto 3-dimensional object model
CN111583379B (en) Virtual model rendering method and device, storage medium and electronic equipment
CN103970518A (en) 3D rendering method and device for logic window
CN113112579A (en) Rendering method, rendering device, electronic equipment and computer-readable storage medium
CN111210486B (en) Method and device for realizing streamer effect
CN106780659A (en) A kind of two-dimension situation map generalization method and electronic equipment
KR20160050295A (en) Method for Simulating Digital Watercolor Image and Electronic Device Using the same
Murru et al. Practical augmented visualization on handheld devices for cultural heritage
CN114367113A (en) Method, apparatus, medium, and computer program product for editing virtual scene
CN109729285B (en) Fuse grid special effect generation method and device, electronic equipment and storage medium
CN108171784B (en) Rendering method and terminal
JP2006171760A (en) Memory controller with graphic processing function
KR20160010780A (en) 3D image providing system and providing method thereof
KR101360592B1 (en) Method and apparatus for generating animation
CN113487708B (en) Flow animation implementation method based on graphics, storage medium and terminal equipment
CN108805964B (en) OpenGL ES-based VR set top box starting animation production method and system
US20230123658A1 (en) Generating shadows for digital objects within digital images utilizing a height map
Ugwitz et al. Rendering a series of 3D dynamic visualizations in (geographic) experimental tasks

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
CB02 Change of applicant information

Address after: 701-25, floor 7, building 5, yard 1, Shangdi East Road, Haidian District, Beijing 100085

Applicant after: Beijing perfect Chijin Technology Co.,Ltd.

Address before: 701-25, floor 7, building 5, yard 1, Shangdi East Road, Haidian District, Beijing 100085

Applicant before: Beijing chijinzhi Entertainment Technology Co.,Ltd.

GR01 Patent grant