WO2017143812A1 - Method and apparatus for distinguishing objects - Google Patents

Method and apparatus for distinguishing objects

Info

Publication number
WO2017143812A1
Authority
WO
WIPO (PCT)
Prior art keywords
target object
pixel
target
pixels
different
Prior art date
Application number
PCT/CN2016/107119
Other languages
English (en)
French (fr)
Inventor
周锦源
王利
Original Assignee
腾讯科技(深圳)有限公司
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 腾讯科技(深圳)有限公司
Priority to EP16891260.8A (EP3343514A4)
Priority to JP2018515075A (JP6526328B2)
Priority to KR1020187007925A (KR102082766B1)
Publication of WO2017143812A1
Priority to US16/000,550 (US10957092B2)

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T15/00 3D [Three Dimensional] image rendering
    • G06T15/04 Texture mapping
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T15/00 3D [Three Dimensional] image rendering
    • A HUMAN NECESSITIES
    • A63 SPORTS; GAMES; AMUSEMENTS
    • A63F CARD, BOARD, OR ROULETTE GAMES; INDOOR GAMES USING SMALL MOVING PLAYING BODIES; VIDEO GAMES; GAMES NOT OTHERWISE PROVIDED FOR
    • A63F13/00 Video games, i.e. games using an electronically generated display having two or more dimensions
    • A63F13/50 Controlling the output signals based on the game progress
    • A63F13/52 Controlling the output signals based on the game progress involving aspects of the displayed game scene
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T11/00 2D [Two Dimensional] image generation
    • G06T11/001 Texturing; Colouring; Generation of texture or colour
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T11/00 2D [Two Dimensional] image generation
    • G06T11/40 Filling a planar surface by adding surface attributes, e.g. colour or texture
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T15/00 3D [Three Dimensional] image rendering
    • G06T15/10 Geometric effects
    • G06T15/20 Perspective computation
    • G06T15/205 Image-based rendering
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T15/00 3D [Three Dimensional] image rendering
    • G06T15/50 Lighting effects
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T5/00 Image enhancement or restoration
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 Image analysis
    • G06T7/70 Determining position or orientation of objects or cameras
    • G06T7/73 Determining position or orientation of objects or cameras using feature-based methods
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 Image analysis
    • G06T7/90 Determination of colour characteristics

Definitions

  • the present invention relates to the field of image processing, and in particular to a method and apparatus for distinguishing objects.
  • a commonly used approach is to apply different resource configurations to different object groups so that objects in different object groups have distinct display characteristics that the user can recognize. For example, in the game Counter-Strike, the user needs to determine, while shooting, whether a target object shown in the display area is a teammate or an enemy. After the user chooses an identity, terrorist or counter-terrorist, the game prompts the user to select the corresponding clothing, or provides the user with clothing matching the chosen identity. Thus, during the game, players of the two identities wear different clothing; the user judges whether an object appearing in the display area is a teammate or an enemy by its clothing, and then decides whether to fire at it.
  • in the prior art, because the user distinguishes target objects by their resources, objects in different object groups cannot use the same set of resources, so the reusability of the resources is low; and if objects in multiple object groups all used the same set of resources, it would be difficult for the user to tell them apart.
  • the embodiments of the invention provide a method and an apparatus for distinguishing objects, so as to at least solve the technical problem in the prior art that, in order to distinguish different target objects in an image, different resources must be configured for different target objects, resulting in a low resource reuse rate.
  • a method for distinguishing objects includes: acquiring a plurality of object groups displayed in an image, where each object group includes at least one target object and the target objects in any of the plurality of object groups are allowed to be configured with the same resources; setting different tag values for the plurality of object groups, where the target objects contained in each object group have the same tag value; and performing pixel correction on the pixels of the target objects contained in each object group according to that group's tag value, where the pixels of target objects having different tag values are corrected to have different display attributes.
  • an apparatus for distinguishing objects includes: a first acquiring module, configured to acquire a plurality of object groups displayed in an image, where each object group includes at least one target object and the target objects in any of the plurality of object groups are allowed to be configured with the same resources; a setting module, configured to set different tag values for the plurality of object groups, where the target objects included in each object group have the same tag value; and a correction module, configured to perform pixel correction on the pixels of the target objects included in each object group according to that group's tag value, where the pixels of target objects having different tag values are corrected to have different display attributes.
  • a plurality of object groups displayed in an image are acquired, where each object group includes at least one target object and the target objects in any of the object groups are allowed to be configured with the same resources; different tag values are set for the object groups, where the target objects in each group share the same tag value; and the pixels of the target objects in each group are corrected according to that group's tag value. This achieves the technical purpose of distinguishing objects even when multiple different object groups use the same resources, thereby achieving the technical effect of improving the resource reuse rate and solving the prior-art technical problem that different resources must be configured for different target objects in order to distinguish them in an image, which results in a low resource reuse rate.
  • FIG. 1 is a block diagram of the hardware structure of a computer terminal for a method of distinguishing objects according to an embodiment of the present application.
  • FIG. 2 is a flow chart of a method of distinguishing objects according to an embodiment of the present application.
  • FIG. 3(a) is a histogram of the processing time of super post-processing according to the prior art.
  • FIG. 3(b) is a histogram of the processing time of super post-processing according to an embodiment of the present application.
  • FIG. 4(a) is a process flow diagram of a post-processing stage according to the prior art.
  • FIG. 4(b) is a process flow diagram of an optional post-processing stage capable of distinguishing objects according to an embodiment of the present application.
  • FIG. 5 is a schematic structural diagram of an apparatus for distinguishing objects according to an embodiment of the present application.
  • FIG. 6 is a schematic structural diagram of an optional object distinguishing device according to an embodiment of the present application.
  • FIG. 7 is a schematic structural diagram of an apparatus for distinguishing objects according to an embodiment of the present application.
  • FIG. 8 is a schematic structural diagram of an optional object distinguishing device according to an embodiment of the present application.
  • FIG. 9 is a schematic structural diagram of an apparatus for distinguishing objects according to an embodiment of the present application.
  • FIG. 10 is a schematic structural diagram of an apparatus for distinguishing objects according to an embodiment of the present application.
  • FIG. 11 is a schematic structural diagram of an optional object distinguishing device according to an embodiment of the present application.
  • FIG. 12 is a schematic structural diagram of an optional object distinguishing device according to an embodiment of the present application.
  • FIG. 13 is a schematic structural diagram of an apparatus for distinguishing objects according to an embodiment of the present application.
  • FIG. 14 is a schematic diagram of a terminal that distinguishes an object according to an embodiment of the present application.
  • Post-processing: a part of the computer graphics pipeline; the processing applied to the image output after a 3D scene has been rendered is called post-processing.
  • Pipeline: the term pipeline describes a process that involves two or more distinct stages or steps.
  • Render target texture: in the 3D computer graphics domain, a render target texture is a feature of the graphics processing unit (GPU) that allows a 3D scene to be rendered into an intermediate storage buffer.
  • GPU: graphics processing unit.
  • Color cast: a difference between a displayed color and the true color caused by one or more color components being too weak or too strong. Color casts are common on liquid crystal displays and in instruments such as cameras and printers.
  • Channel: a grayscale image that stores one type of information in a digital image.
  • An image can have up to dozens of channels.
  • Common RGB and Lab images have three channels by default, while CMYK images have four channels by default.
  • Alpha channel: an 8-bit grayscale channel that uses 256 levels of gray to record transparency information in the image, defining transparent, opaque, and translucent areas, where black is transparent, white is opaque, and gray is translucent.
  • Highlight (bloom): a computer graphics effect used in video games, presentation animations, and high-dynamic-range lighting rendering to mimic the imaging behavior of real cameras. The effect produces streaks or feathery light around high-brightness objects, blurring image detail.
  • an embodiment of a method for distinguishing objects is provided. It should be noted that the steps illustrated in the flowcharts of the figures may be performed in a computer system, such as by a set of computer-executable instructions, and although a logical order is shown in the flowcharts, in some cases the steps shown or described may be performed in an order different from the one described herein.
  • FIG. 1 is a hardware structural block diagram of a computer terminal for distinguishing objects according to an embodiment of the present application.
  • computer terminal 10 may include one or more processors 102 (only one is shown; processor 102 may include, but is not limited to, a processing device such as a microprocessor (MCU) or a programmable logic device (FPGA)), a memory 104 for storing data, and a transmission device 106 for communication functions.
  • computer terminal 10 may also include more or fewer components than shown in FIG. 1, or have a configuration different from that shown in FIG. 1.
  • the memory 104 can be used to store software programs and modules of application software, such as the program instructions/modules corresponding to the method for distinguishing objects in the embodiments of the present invention; by executing the software programs and modules stored in the memory 104, the processor 102 carries out the method described above.
  • Memory 104 may include high speed random access memory, and may also include non-volatile memory such as one or more magnetic storage devices, flash memory, or other non-volatile solid state memory.
  • memory 104 may further include memory remotely located relative to processor 102, which may be coupled to computer terminal 10 via a network. Examples of such networks include, but are not limited to, the Internet, intranets, local area networks, mobile communication networks, and combinations thereof.
  • Transmission device 106 is for receiving or transmitting data via a network.
  • specific examples of the network described above may include a wireless network provided by a communication provider of the computer terminal 10.
  • the transmission device 106 includes a Network Interface Controller (NIC) that can be connected to other network devices through a base station to communicate with the Internet.
  • the transmission device 106 can be a Radio Frequency (RF) module for communicating with the Internet wirelessly.
  • the method includes:
  • Step S202: Acquire a plurality of object groups displayed in the image, where each object group includes at least one target object and the target objects in any of the plurality of object groups are allowed to be configured with the same resources.
  • in a common shooting game, the user needs to select a group to determine who his teammates and enemies are. For example, in the game Counter-Strike, the user can choose to play as the terrorists or the counter-terrorists; in the game CrossFire, the user can choose to play as a lurker or a defender.
  • users who select the same identity are objects belonging to the same object group, and an object group contains at least one object.
  • in the prior art, the lurkers and defenders in CrossFire cannot wear the same clothing, because when playing CrossFire the user judges whether other objects in the game are teammates or enemies by the different dress of the lurkers and the defenders.
  • with the present method, different groups in the game can select the same set of clothing without affecting the user's ability to distinguish friend from foe.
  • Step S204: Set different tag values for the plurality of object groups, where the target objects included in each object group have the same tag value.
  • again take the lurkers and defenders in CrossFire as the target objects.
  • the characters in the game are marked according to the side they belong to.
  • suppose there are three object groups: the friendly side, the enemy side, and the user himself.
  • the friendly roles can then be marked as 1 and the enemy roles as 2, while the user himself is marked as 0.
  • Step S206: Perform pixel correction on the pixels of the target objects included in each object group according to the tag value of that group, where the pixels of target objects having different tag values are corrected to have different display attributes.
  • any of the above object groups includes at least one target object, and different object groups have different tag values, but objects in the same object group have the same tag value.
  • the above method can be used to achieve the technical purpose of distinguishing multiple target objects when the configuration resources of multiple target objects are the same.
  • the object groups distinguished by the method provided in the foregoing embodiment may be two or more; the number of object groups is not limited, and any number of object groups can be distinguished by the method provided in the above steps.
  • FIG. 3(a) is a histogram of the processing time of super post-processing according to the prior art.
  • FIG. 3(b) is a histogram of processing time of super post processing according to an embodiment of the present application.
  • compared with the prior art, the present application adds pixel correction of the target object's pixels; as shown in FIG. 3(a) and FIG. 3(b), the rendering time after pixel correction increases slightly, but the increase is not significant.
  • in FIG. 3(a), Erg311 indicates that this frame of the image includes 311 objects, and its rendering time is 148.0 µs.
  • in FIG. 3(b), Erg312 indicates that this frame of the image includes 312 objects, and its rendering time is 155.0 µs, only 4.7% longer than the prior-art rendering time. Therefore, the solution provided by the present application maintains the original rendering efficiency while achieving the goal of distinguishing objects.
  • FIG. 4(a) is a process flow diagram of a post-processing stage according to the prior art
  • FIG. 4(b) is a process flow diagram of an optional post-processing stage capable of distinguishing objects according to an embodiment of the present application. Comparing FIG. 4(a) and FIG. 4(b), it can be seen that the present application adds the pixel correction step in the super post-processing stage.
  • in the embodiment of the present application, a plurality of object groups displayed in an image are acquired, where each object group includes at least one target object and the target objects in any of the object groups are allowed to be configured with the same resources; different tag values are set for the object groups, where the target objects in each group share the same tag value; and the pixels of the target objects in each group are corrected according to that group's tag value. This achieves the technical purpose of distinguishing objects when multiple different object groups use the same resources, realizes the technical effect of improving the resource reuse rate, and thereby solves the prior-art technical problem that different resources must be configured for different target objects in order to distinguish them in an image, which results in a low resource reuse rate.
  • in an optional embodiment, step S204 of setting different tag values for the plurality of object groups includes:
  • Step S2041: Construct a mapping relationship between the plurality of object groups and a plurality of different tag values.
  • the tag value corresponding to each object group is not limited to any particular range of values; it is only necessary to ensure that the tag values of the multiple object groups differ from one another.
  • again taking the lurkers and defenders in CrossFire as the target objects, suppose there are three object groups, namely the user himself, the user's teammates, and the user's enemies; a mapping relationship between the object groups and the tag values is therefore constructed for these three groups. Since the mapping is used only to distinguish different object groups, the specific tag values are not limited; it is only required that different object groups map to different values. For example, the tag values corresponding to the user, the user's teammates, and the user's enemies may be 0, 1, and 2, respectively.
  • Step S2043: Use the mapping relationship to set a corresponding tag value for each of the plurality of object groups, where the tag value of each object is set to the tag value corresponding to the object group to which the object belongs.
  • Step S2045: Mark the plurality of pixels included in each object with that object's tag value.
  • again taking the lurkers and defenders in CrossFire as the target objects: every target object in an object group has a tag value. Since the purpose of setting tag values is to distinguish the target objects of different object groups, and the distinguishing method adopted in the present application is to perform pixel correction on the target objects' pixels, each pixel of a target object needs to carry the same tag value as the target object itself, so that different objects can be distinguished when pixel correction is performed.
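The tagging steps S2041–S2045 described above can be sketched as follows. This is a minimal illustration, not the patent's implementation: the group names and the tag values (0 for the user, 1 for teammates, 2 for enemies) follow the example above, while the object and pixel representations are hypothetical.

```python
# Illustrative sketch of steps S2041-S2045: build a group-to-tag mapping,
# tag each object with its group's value, and propagate that tag to every
# pixel of the object. Group names and data layout are hypothetical.
TAG_VALUES = {"self": 0, "teammate": 1, "enemy": 2}  # step S2041

def tag_object(group, pixels):
    """Return (tag, tagged_pixels): the object's tag value (step S2043)
    and the same tag attached to each of its pixels (step S2045)."""
    tag = TAG_VALUES[group]
    return tag, [(px, tag) for px in pixels]
```

In a real renderer the per-pixel tag would be written into a texture channel rather than a Python list, but the grouping logic is the same.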
  • in an optional embodiment, after step S204 of setting different tag values for the plurality of object groups, the method further includes:
  • Step S2047: Render the target object into a first render target texture, where the render target texture has multiple channels.
  • again taking the lurkers and defenders in CrossFire as the target objects: the target object, that is, the character object, is rendered into the multiple channels of the first render target texture RT0.
  • the first render target texture may include the three RGB channels, or may include the four CMYK channels, but is not limited thereto.
  • Step S2049: Normalize the tag values of the target object's pixels to obtain standard tag values.
  • after normalization, the obtained standard tag values are not the same as the original tag values, but they still differ from one another and can therefore still distinguish the objects.
  • again taking the lurkers and defenders in CrossFire as the target objects, the tag values 0, 1, and 2 can be normalized to the standard tag values 0, 0.5, and 1, respectively.
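The normalization in step S2049 can be sketched as dividing each tag value by the largest one, which reproduces the 0 / 0.5 / 1 example above. The patent does not fix a particular normalization, so this is one plausible choice, not the definitive scheme.

```python
def normalize_tags(tags):
    """Map integer tag values into [0, 1] so they fit in a single
    texture channel; distinct tags remain distinct after scaling."""
    peak = max(tags)
    return [t / peak if peak else 0.0 for t in tags]
```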
  • Step S2051: Input the standard tag values obtained by normalization into a second render target texture, where the second render target texture has multiple channels and different standard tag values are written as different channel values into the channels of the second render target texture.
  • the standard tag value corresponding to a target object is input as an Alpha value into the second render target texture RT1, where the second render target texture still has multiple channels and the standard tag value of a target object occupies only one channel.
  • the channel values written for different standard tag values are different.
  • the larger the channel value, the whiter the expressed color; the smaller the channel value, the grayer the color. The second render target texture therefore yields the attribute of the target object, that is, which object group the target object belongs to.
  • the channels of the second render target texture contain only the standard tag values of the target objects; therefore, the second render target texture outputs the contour of each target object in a certain gray level and does not include the target object itself.
  • in an optional embodiment, step S206 of performing pixel correction on the pixels of the target objects included in each object group according to the group's tag value includes correcting the pixels of target objects having different tag values to different colors, where correcting the pixels of target objects having different tag values to different colors includes:
  • Step S2061: Obtain the standard tag value corresponding to a pixel of the target object.
  • Step S2063: Adjust the display intensity of the plurality of primary colors composing each pixel's color in the target object according to the standard tag value corresponding to the pixel, so as to correct the color of each pixel in the target object, where the pixels of target objects having the same tag value are corrected to corresponding colors.
  • the color of the pixels of the target object is composed of the colors of the three RGB channels; therefore, when the intensity of the three RGB channel colors changes, the display color of the target object's pixels changes, changing the display color of the target. For example, if the RGB value of one pixel of the target object is (58, 110, 165), the pixel is displayed as blue; when the RGB value of this pixel is modified to (248, 24, 237), the pixel's display color is modified to red.
  • note that the colors of target objects with the same tag value are not all identical, and the colors of the pixels within one target object also differ. Therefore, when performing pixel correction on the target object's pixels, the pixels are not all corrected to the same RGB value; instead, the RGB values of all pixels sharing the same standard tag value are adjusted uniformly, with the same adjustment strength. To ensure that pixels with the same standard tag value receive the same adjustment strength during correction, an adjustment constant is introduced.
  • in step S2063, the display intensity of the plurality of primary colors composing each pixel's color in the target object is adjusted according to the standard tag value corresponding to the pixel, as follows:
  • Step S20631: Calculate the corrected pixel color of each pixel in the target object by the formula
  • Color_dest = Color_scr * Color_trans
  • where Color_dest represents the corrected pixel color of the target object's pixel, Color_scr represents the original pixel color of the target object's pixel, and Color_trans represents the correction constant, which is used to adjust the color composition of the target object's pixels.
  • the above correction constant characterizes the correction range of the corrected color. It should be noted that pixels with the same standard tag value share the same correction constant during adjustment, while pixels with different tag values have different correction constants; it is precisely this difference in correction constants that makes the objects of different object groups present different effects.
  • the correction constant can be a one-dimensional matrix, each element of which takes a value in (0, 1].
  • again taking the pixel colors of the target object as an example: suppose the standard tag value of a pixel is 0.5, its RGB value is (58, 110, 165), and the correction constant is Color_trans = (1, 0.6, 0.6); the corrected pixel color is then (58, 66, 99).
  • the above method is not limited to the case of three RGB channels; it is also applicable to the case of four CMYK channels or other multi-channel cases.
  • the result of the above scheme is that objects of different object groups exhibit different color casts, so that the user can easily recognize the different object groups.
  • in an optional embodiment, if the objects in an object group do not need to be corrected, the pixels of those objects can be marked with a special tag value; during pixel correction, the special tag value is ignored and no correction is performed.
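Steps S2061–S20631 and the special-value rule above can be sketched as a per-channel multiply. The correction constants per standard tag value below are illustrative, and tag 0 is treated here as the special value that skips correction; the patent fixes neither choice.

```python
def correct_pixel(color_scr, color_trans):
    """Color_dest = Color_scr * Color_trans, applied channel by channel."""
    return tuple(round(c * k) for c, k in zip(color_scr, color_trans))

# Hypothetical correction constants keyed by standard tag value.
# None marks the special tag value that is left uncorrected.
CORRECTIONS = {0.0: None, 0.5: (1.0, 0.6, 0.6), 1.0: (0.6, 0.6, 1.0)}

def correct_target(pixels, std_tag):
    """Apply the group's correction constant uniformly to all pixels."""
    trans = CORRECTIONS[std_tag]
    if trans is None:  # special mark value: no correction performed
        return list(pixels)
    return [correct_pixel(p, trans) for p in pixels]
```

With the example values from the text, the pixel (58, 110, 165) with tag 0.5 becomes (58, 66, 99): a warm color cast, since only the green and blue channels are attenuated.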
  • in an optional embodiment, step S206 of performing pixel correction on the pixels of the target objects included in each object group according to the group's tag value includes:
  • Step S2261: Perform illumination processing on the edge pixels of the target objects included in each object group, where the edge pixels of target objects having different tag values are corrected to have different illumination colors.
  • alternatively, the target objects in one or more of the object groups may be selected for edge illumination processing, with different illumination colors used to distinguish the different object groups.
  • in an optional embodiment, after step S206 of performing pixel correction on the pixels of the target objects included in each object group according to the group's tag value, the method further includes performing tone mapping processing on the pixels of the target object, where the tone mapping processing includes:
  • Step S208: Adjust the contrast and/or brightness of the target object by normalizing the pixels of the target object.
  • after pixel correction, the overall image tends to be dark. To further improve the display effect, after pixel correction is completed, the image is subjected to tone mapping processing to optimize it and obtain the final output render target texture.
  • the tone mapping may be performed by normalizing each pixel in the image, that is, mapping pixels whose color range is (0, ∞) into the range [0, 1]. After tone mapping, the image's contrast and brightness are further optimized.
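The text only requires that the mapping take (0, ∞) into [0, 1]; one common operator with exactly that property is the Reinhard curve x/(1+x), used here purely as an illustration of the normalization step, not as the patent's prescribed formula.

```python
def tone_map(value):
    """Reinhard-style operator: monotonically maps [0, inf) into [0, 1),
    compressing bright values more strongly than dark ones."""
    return value / (1.0 + value)
```

Applied per channel to every pixel, this keeps dark regions nearly unchanged while pulling arbitrarily bright values under 1, which is what restores contrast after the darkening correction.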
  • in an optional embodiment, before step S206 of performing pixel correction on the pixels of the target objects included in each object group according to the group's tag value, the method further includes:
  • Step S2010: Perform rendering processing on the pixels of the target object, where the rendering processing includes any one or more of the following: motion blur processing, depth-of-field processing, and highlight processing.
  • in an optional embodiment, performing rendering processing on the pixels of the target object includes:
  • Step S2011: The motion blur processing includes performing a weighted average of the pixels within a preset range around a target pixel to obtain a new pixel, and setting the target pixel to the new pixel, where the target pixel is a pixel along the moving direction of the target object.
  • motion blur mainly simulates the blur caused by fast-moving objects in the scene or by lens movement, making the rendered image closer to what the human eye or a camera would capture.
  • a specific method may be: acquire a plurality of pixels of the target object along the moving direction, and perform a weighted average of the pixels around each target pixel to obtain a new pixel value, which becomes the new value of the target pixel.
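The weighted averaging in step S2011 can be sketched as follows; the sample positions and weights (e.g. falling off along the motion direction) are left to the caller, since the patent does not specify a kernel.

```python
def motion_blur(samples, weights):
    """Weighted average of pixel samples taken along the motion
    direction; the result becomes the new value of the target pixel."""
    return sum(s * w for s, w in zip(samples, weights)) / sum(weights)
```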
  • the depth-of-field processing includes: performing full-screen blur processing on the pixels of the target object to obtain a full-screen blur result, and mixing the full-screen blur result with the pixels of the target object.
  • the full-screen blur processing is the same as the blur processing in step S2011, except that it blurs all the pixels in the entire display area.
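The mixing step of the depth-of-field processing can be sketched as a linear blend of the sharp pixel with its full-screen-blurred counterpart; the blend weight (here tied to distance from the focal plane) is an assumption, since the text does not specify how the two are mixed.

```python
def dof_mix(sharp, blurred, amount):
    """Blend the original pixel with its full-screen-blurred counterpart;
    `amount` in [0, 1] would grow with distance from the focal plane."""
    return sharp * (1.0 - amount) + blurred * amount
```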
  • Step S2025: The highlight processing includes outputting the highlight portion of the target object into a texture, blurring the pixels of the highlight portion, and blending the blurred result back into the pixels of the target object through Alpha blending.
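The first stage of step S2025, extracting the highlight portion, can be sketched as a bright-pass filter; the threshold value is illustrative, and the subsequent blur and Alpha blend would follow the same patterns shown above.

```python
def bright_pass(pixels, threshold):
    """Keep only the highlight portion: values below the threshold are
    zeroed before the blur and Alpha-blend passes."""
    return [p if p >= threshold else 0 for p in pixels]
```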
  • the method according to the above embodiments can be implemented by software plus a necessary general-purpose hardware platform, or of course by hardware, although in many cases the former is the better implementation.
  • based on this understanding, the technical solution of the present invention, in essence or in the part contributing to the prior art, may be embodied in the form of a software product.
  • the software product is stored in a storage medium (such as ROM/RAM, magnetic disk, or optical disc) and includes a number of instructions that cause a terminal device (which may be a mobile phone, a computer, a server, a network apparatus, or the like) to perform the methods described in the various embodiments of the present invention.
  • FIG. 5 is a schematic structural diagram of an apparatus for distinguishing objects according to an embodiment of the present application. As shown in FIG. 5, the apparatus includes a first acquisition module 50, a setting module 52, and a correction module 54, where:
  • The first acquisition module 50 is configured to acquire a plurality of object groups displayed in an image, where each object group includes at least one target object, and the target objects in any plurality of object groups are allowed to be configured with the same resources. The setting module 52 is configured to set different tag values for the plurality of object groups, where the target objects contained in each object group have the same tag value. The correction module 54 is configured to perform pixel correction on the pixels of the target objects contained in each object group according to the tag value of each object group, where the pixels of target objects with different tag values are corrected to have different display attributes.
  • Any of the above object groups includes at least one target object; different object groups have different tag values, but objects in the same object group have the same tag value. The above method thus achieves the technical purpose of distinguishing multiple target objects even when they are configured with the same resources.
  • The object groups distinguished by the method provided in the foregoing embodiment may be two object groups, but the number of object groups is not limited thereto; any number of object groups can be distinguished by the method provided in the above steps.
  • The pixel correction of the target object's pixels is performed in the super post-processing of the post-processing stage of image processing; correcting pixels in post-processing greatly improves rendering efficiency.
  • This application adds a step of performing pixel correction on the pixels of the target object in the super post-processing stage. As shown in FIG. 3(a) and FIG. 3(b), the rendering time after the pixel correction of this application is slightly higher than the rendering time in the prior art, but the increase is not significant: taking Erg311 (indicating that this frame includes 311 objects) as an example, the rendering time is 148.0 µs, while the rendering time of Erg312 (indicating that this frame includes 312 objects) is 155.0 µs, an increase of only 4.7% over the prior-art rendering time. Therefore, the solution provided by this application maintains the original rendering efficiency while achieving the goal of distinguishing objects.
  • In the embodiments of the present application, a plurality of object groups displayed in an image are acquired, where each object group includes at least one target object, and the target objects in any plurality of object groups are allowed to be configured with the same resources; different tag values are set for the plurality of object groups, where the target objects contained in each object group have the same tag value; and pixel correction is performed on the pixels of the target objects contained in each object group according to the tag value of each object group. This achieves the technical purpose of distinguishing objects even when a plurality of different object groups use the same resources, thereby realizing the technical effect of improving the resource reuse rate and solving the prior-art technical problem that, in order to distinguish different target objects in an image, different resources need to be configured for different target objects, resulting in a low resource reuse rate.
  • The first acquisition module 50, the setting module 52, and the correction module 54 may run in a computer terminal as part of the apparatus, and the functions implemented by these modules may be performed by a processor in the computer terminal. The computer terminal may also be a smartphone (such as an Android phone or an iOS phone), a tablet computer, a palmtop computer, a mobile Internet device (MID), a PAD, or another terminal device.
  • the setting module 52 includes: a building module 60, a setting sub-module 62, and a marking module 64, wherein
  • The building module 60 is configured to construct a mapping relationship between the plurality of object groups and a plurality of different tag values. The setting sub-module 62 is configured to set, through the mapping relationship, a corresponding tag value for each object in the plurality of object groups, where the tag value of each object is set to the tag value corresponding to the object group to which the object belongs. The marking module 64 is configured to use the tag value of each object to mark the plurality of pixels contained in that object.
  • The above building module 60, setting sub-module 62, and marking module 64 may run in a computer terminal as part of the apparatus, and the functions implemented by these modules may be performed by a processor in the computer terminal. The computer terminal may also be a smartphone (such as an Android phone or an iOS phone), a tablet computer, a palmtop computer, a mobile Internet device (MID), a PAD, or another terminal device.
  • the foregoing apparatus further includes: a first rendering module 70, a first normalization module 72, and an input module 74, where
  • The first rendering module 70 is configured to render the target object into a first render target texture, where the first render target texture has multiple channels. The first normalization module 72 is configured to normalize the tag values of the pixels of the target object to obtain standard tag values. The input module 74 is configured to input the standard tag values obtained by the normalization into a second render target texture, where the second render target texture has multiple channels, and target objects with different tag values are input into channels of the second render target texture with different channel values.
  • For example, taking the target object as a lurker or defender in CrossFire, the target object, that is, the character body, is rendered into the multiple channels of the first render target texture RT0.
  • The first render target texture may include three RGB channels, or four CMYK channels, but is not limited thereto.
  • The first rendering module 70, the first normalization module 72, and the input module 74 may run in a computer terminal as part of the apparatus, and the functions implemented by these modules may be performed by a processor in the computer terminal. The computer terminal may also be a smartphone (such as an Android phone or an iOS phone), a tablet computer, a palmtop computer, a mobile Internet device (MID), a PAD, or another terminal device.
  • In an optional embodiment, the correction module 54 includes a correction sub-module for correcting the pixels of target objects with different tag values to different colors, where the correction sub-module includes a second acquisition module 80 and an adjustment module 82. The second acquisition module 80 is configured to acquire the standard tag value corresponding to each pixel of the target object. The adjustment module 82 is configured to adjust, according to the standard tag value corresponding to each pixel of the target object, the display intensity of the plurality of primary colors constituting the color of each pixel in the target object, so as to correct the color of each pixel in the target object, where the pixels of target objects with the same tag value are corrected to corresponding colors.
  • The color of each pixel of the target object is composed of the colors of the three RGB channels; when the intensity of any of the three RGB channels changes, the display color of the pixel, and thus of the target object, changes. For example, if the RGB value of one pixel of the target object is (58, 110, 165), the pixel displays as blue; when its RGB value is modified to (248, 24, 237), the pixel's display color is modified to red.
  • In practice, the colors of target objects with the same tag value are not identical, and the colors of the multiple pixels within one target object also differ. Therefore, when performing pixel correction on the pixels of a target object, the pixels are not corrected to the same RGB value; instead, the RGB values of all pixels with the same standard tag value are adjusted uniformly, with the same adjustment strength. To ensure that pixels with the same standard tag value receive the same adjustment strength during correction, an adjustment constant is introduced.
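The per-group selection can be sketched as a lookup from standard tag value to adjustment constant. The table below is purely illustrative: the patent does not specify which constants belong to which group, so the names and values are assumptions; only the rule "same tag value, same constant" comes from the text.

```python
# Hypothetical table mapping each standard (normalized) tag value to an
# adjustment constant; the values shown are illustrative, not from the patent.
CORRECTION_CONSTANTS = {
    0.0: (1.0, 1.0, 1.0),  # the user's own character: no color cast
    0.5: (0.6, 0.6, 1.0),  # teammates: shift toward blue
    1.0: (1.0, 0.6, 0.6),  # enemies: shift toward red
}

def adjust_pixel(rgb, standard_tag):
    # Every pixel with the same standard tag value gets the same constant,
    # so one object group receives a uniform adjustment strength.
    trans = CORRECTION_CONSTANTS[standard_tag]
    return tuple(c * t for c, t in zip(rgb, trans))
```

Two differently colored pixels in the same group keep their relative colors; both are shifted toward the same cast.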
  • The foregoing second acquisition module 80 and adjustment module 82 may run in a computer terminal as part of the apparatus, and the functions implemented by these modules may be performed by a processor in the computer terminal. The computer terminal may also be a smartphone (such as an Android phone or an iOS phone), a tablet computer, a palmtop computer, a mobile Internet device (MID), a PAD, or another terminal device.
  • The adjustment module 82 includes a calculation module 90, where Color_src is used to represent the original pixel color of a pixel of the target object, and Color_trans is used to represent a correction constant; the correction constant is used to calculate the adjusted display intensity of each primary color constituting the color of each pixel in the target object.
  • The above correction constant characterizes the correction range of the corrected color. It should be noted that pixels with the same standard tag value have the same correction constant during the adjustment process, while pixels with different tag values have different correction constants. It is precisely this difference in correction constants that makes the objects of different object groups exhibit different display effects.
  • The correction constant may be a one-dimensional matrix, and each element of the matrix takes a value between 0 and 1.
  • Still taking the colors of the pixels of the target object as an example: suppose the standard tag value of a pixel is 0.5 and its RGB value is (58, 110, 165), with a corresponding correction constant Color_trans of (1, 0.6, 0.6). After correction, the red channel is unchanged while the green and blue channels are reduced, giving the pixel a visible color cast.
  • The above method is not limited to the case of three RGB channels; it is equally applicable to four CMYK channels or other multi-channel cases.
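The worked example above can be sketched as a component-wise product. Reading the correction as Color_src × Color_trans per channel is an assumption consistent with the example values; the rounding back to integer channel values is also an illustrative choice.

```python
def correct_color(color_src, color_trans):
    # Component-wise scaling: each primary-color intensity of the source
    # pixel is multiplied by the corresponding element of the correction
    # constant, then rounded back to an integer channel value.
    return tuple(round(c * t) for c, t in zip(color_src, color_trans))
```

With Color_src = (58, 110, 165) and Color_trans = (1, 0.6, 0.6), the red channel is preserved and the green and blue channels are scaled down, shifting the pixel toward red.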
  • the result obtained by the above scheme is that the objects of different object groups exhibit different color casts, so that the user can easily recognize different object groups.
  • For objects that do not need a color cast, the pixels of the objects in the object group can be marked with a special tag value. During correction, the special tag value is ignored and no correction is performed, or the pixels of such objects are corrected using a correction constant of (1, 1, 1). For example, the user's own character does not need pixel correction, so correction of the user's own pixels is avoided in the above manner.
  • The foregoing calculation module 90 may run in a computer terminal as part of the apparatus, and the functions implemented by this module may be performed by a processor in the computer terminal. The computer terminal may also be a smartphone (such as an Android phone or an iOS phone), a tablet computer, a palmtop computer, a mobile Internet device (MID), a PAD, or another terminal device.
  • the correction module 54 includes:
  • The first processing module 100 is configured to perform glow processing on the edge pixels of the target objects contained in each object group, where the edge pixels of target objects with different tag values are corrected to have different glow colors.
  • the foregoing apparatus further includes: a mapping module, configured to perform a tone mapping process on the pixels of the target object, where the mapping module includes:
  • the second normalization module 110 is configured to adjust the contrast and/or brightness of the target object by normalizing the pixels of the target object.
  • After pixel correction, the overall image tends to become dark. To further improve the display effect, after pixel correction is completed, the image is subjected to tone mapping to optimize it and obtain the final output render target texture.
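A minimal tone-mapping sketch, assuming a simple exponential curve as a stand-in for the patent's unspecified mapping: it lifts the dark values produced by the pixel-correction pass and compresses highlights into [0, 1). The `exposure` parameter is illustrative.

```python
import math

def tone_map(value, exposure=1.5):
    # Exponential tone-mapping curve: monotonic, maps 0 -> 0,
    # lifts mid-dark values, and asymptotically approaches 1.
    return 1.0 - math.exp(-exposure * value)
```

Because the curve is concave, dark pixels gain proportionally more brightness than bright ones, countering the overall darkening caused by the correction constants.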
  • The first processing module 100 may run in a computer terminal as part of the apparatus, and the functions implemented by this module may be performed by a processor in the computer terminal. The computer terminal may also be a smartphone (such as an Android phone or an iOS phone), a tablet computer, a palmtop computer, a mobile Internet device (MID), a PAD, or another terminal device.
  • the foregoing apparatus further includes:
  • the second rendering module 120 is configured to perform rendering processing on pixels of the target object, where the rendering processing includes any one or more of the following: motion blur processing, depth of field processing, highlight processing.
  • The second rendering module 120 may run in a computer terminal as part of the apparatus, and the functions implemented by this module may be performed by a processor in the computer terminal. The computer terminal may also be a smartphone (such as an Android phone or an iOS phone), a tablet computer, a palmtop computer, a mobile Internet device (MID), a PAD, or another terminal device.
  • the foregoing apparatus further includes: a first processing module 130, a second processing module 132, and a third processing module 134, where
  • The first processing module 130 is configured to perform a weighted average of the pixels within a preset range around a target pixel to obtain a new pixel, and adjust the target pixel to the new pixel, where the target pixel is a pixel in the moving direction of the target object.
  • The second processing module 132 is configured to perform full-screen blur processing on the pixels of the target object to obtain a full-screen blur result, and to mix that result with the pixels of the target object.
  • The third processing module 134 is configured to output the highlight portion of the target object into a texture, blur the pixels of the highlight portion, and blend the blurred result into the pixels of the target object through alpha blending.
  • The first processing module 130, the second processing module 132, and the third processing module 134 may run in a computer terminal as part of the apparatus, and the functions implemented by these modules may be performed by a processor in the computer terminal. The computer terminal may also be a smartphone (such as an Android phone or an iOS phone), a tablet computer, a palmtop computer, a mobile Internet device (MID), a PAD, or another terminal device.
  • The various functional modules provided by the embodiments of the present application may run in a mobile terminal, a computer terminal, or a similar computing device, and may also be stored as part of a storage medium.
  • embodiments of the present invention may provide a computer terminal, which may be any computer terminal device in a group of computer terminals.
  • a computer terminal may also be replaced with a terminal device such as a mobile terminal.
  • the computer terminal may be located in at least one network device of the plurality of network devices of the computer network.
  • The computer terminal may execute the program code of the following steps of the method for distinguishing objects: acquiring a plurality of object groups displayed in an image, where each object group includes at least one target object, and the target objects in any plurality of object groups are allowed to be configured with the same resources; setting different tag values for the plurality of object groups, where the target objects contained in each object group have the same tag value; and performing pixel correction on the pixels of the target objects contained in each object group according to the tag value of each object group, where the pixels of target objects with different tag values are corrected to have different display attributes.
  • the computer terminal can include: one or more processors, memory, and transmission means.
  • The memory can be used to store software programs and modules, such as the program instructions/modules corresponding to the method and apparatus for distinguishing objects in the embodiments of the present invention. The processor executes the software programs and modules stored in the memory to perform various functional applications and data processing, that is, to implement the above method for distinguishing objects.
  • the memory may include a high speed random access memory, and may also include non-volatile memory such as one or more magnetic storage devices, flash memory, or other non-volatile solid state memory.
  • the memory can further include memory remotely located relative to the processor, which can be connected to the terminal over a network. Examples of such networks include, but are not limited to, the Internet, intranets, local area networks, mobile communication networks, and combinations thereof.
  • the above transmission device is for receiving or transmitting data via a network.
  • Specific examples of the above network may include a wired network and a wireless network.
  • In one specific example, the transmission device includes a network interface controller (NIC), which can be connected to a router via a network cable so as to communicate with the Internet or a local area network. In another example, the transmission device is a radio frequency (RF) module, which is used to communicate with the Internet wirelessly.
  • Specifically, the memory is used to store preset action conditions, information of users with preset permissions, and applications. The processor can invoke the information and applications stored in the memory through the transmission device to execute the program code of the method steps of each of the optional or preferred embodiments of the above method embodiments.
  • The computer terminal may also be a smartphone (such as an Android phone or an iOS phone), a tablet computer, a palmtop computer, a mobile Internet device (MID), a PAD, or another terminal device.
  • a server or a terminal for implementing the foregoing method for distinguishing an object is further provided.
  • the server or the terminal includes:
  • the communication interface 1402 is configured to acquire a plurality of object groups displayed in the image.
  • The memory 1404 is connected to the communication interface 1402 and is configured to store the plurality of object groups displayed in the acquired image.
  • The processor 1406 is connected to the communication interface 1402 and the memory 1404 and is configured to: acquire a plurality of object groups displayed in an image, where each object group includes at least one target object, and the target objects in any plurality of object groups are allowed to be configured with the same resources; set different tag values for the plurality of object groups, where the target objects contained in each object group have the same tag value; and perform pixel correction on the pixels of the target objects contained in each object group according to the tag value of each object group, where the pixels of target objects with different tag values are corrected to have different display attributes.
  • Embodiments of the present invention also provide a storage medium.
  • the foregoing storage medium may be used to save the program code executed by the method for distinguishing objects provided by the foregoing method embodiment and the device embodiment.
  • the foregoing storage medium may be located in any one of the computer terminal groups in the computer network, or in any one of the mobile terminal groups.
  • Optionally, in this embodiment, the storage medium is arranged to store program code for performing the following steps: acquiring a plurality of object groups displayed in an image, where each object group includes at least one target object, and the target objects in any plurality of object groups are allowed to be configured with the same resources; setting different tag values for the plurality of object groups, where the target objects contained in each object group have the same tag value; and performing pixel correction on the pixels of the target objects contained in each object group according to the tag value of each object group, where the pixels of target objects with different tag values are corrected to have different display attributes.
  • The storage medium is further configured to store program code for performing the following steps: constructing a mapping relationship between the plurality of object groups and a plurality of different tag values; setting, through the mapping relationship, a corresponding tag value for each object in the plurality of object groups, where the tag value of each object is set to the tag value corresponding to the object group to which the object belongs; and using the tag value of each object to mark the plurality of pixels contained in each object.
  • The storage medium is further configured to store program code for performing the following steps: rendering the target object into a first render target texture, where the render target texture has multiple channels; and normalizing the tag values of the pixels of the target object to obtain standard tag values.
  • The storage medium is further configured to store program code for performing the following steps: acquiring the standard tag value corresponding to each pixel of the target object; and adjusting, according to the standard tag value corresponding to each pixel of the target object, the display intensity of the plurality of primary colors constituting the color of each pixel in the target object, so as to correct the color of each pixel in the target object, where the pixels of target objects with the same tag value are corrected to corresponding colors.
  • The storage medium is further configured to store program code for performing the following steps: performing glow processing on the edge pixels of the target objects contained in each object group, where the edge pixels of target objects with different tag values are corrected to have different glow colors.
  • the storage medium is further arranged to store program code for performing the following steps: adjusting the contrast and/or brightness of the target object by normalizing the pixels of the target object.
  • the storage medium is further configured to store program code for performing a rendering process on the pixels of the target object, wherein the rendering process comprises any one or more of the following: motion blur processing, depth of field processing, highlight processing .
  • The storage medium is further configured to store program code for performing the following steps: the motion blur processing includes performing a weighted average of the pixels within a preset range around a target pixel to obtain a new pixel, and adjusting the target pixel to the new pixel, where the target pixel is a pixel in the moving direction of the target object; the depth of field processing includes performing full-screen blur processing on the pixels of the target object to obtain a full-screen blur result, and mixing that result with the pixels of the target object; and the highlight processing includes outputting the highlight portion of the target object into a texture, blurring the pixels of the highlight portion, and blending the blurred result into the pixels of the target object through alpha blending.
  • The foregoing storage medium may include, but is not limited to, a USB flash drive, a read-only memory (ROM), a random access memory (RAM), a removable hard disk, a magnetic disk, or other magnetic memory.
  • The integrated unit in the above embodiments, if implemented in the form of a software functional unit and sold or used as a stand-alone product, may be stored in the above computer-readable storage medium. Based on this understanding, the essence of the technical solution of the present invention, or the part contributing to the prior art, or all or part of the technical solution, may be embodied in the form of a software product stored in a storage medium, which includes a number of instructions for causing one or more computer devices (which may be a personal computer, a server, a network device, or the like) to perform all or part of the steps of the methods described in the various embodiments of the present invention.
  • the disclosed client may be implemented in other manners.
  • The device embodiments described above are merely illustrative. The division of the units is only a logical function division; in actual implementation there may be other division manners. For example, multiple units or components may be combined or integrated into another system, or some features may be ignored or not executed.
  • the mutual coupling or direct coupling or communication connection shown or discussed may be an indirect coupling or communication connection through some interface, unit or module, and may be electrical or otherwise.
  • the units described as separate components may or may not be physically separated, and the components displayed as units may or may not be physical units, that is, may be located in one place, or may be distributed to multiple network units. Some or all of the units may be selected according to actual needs to achieve the purpose of the solution of the embodiment.
  • each functional unit in each embodiment of the present invention may be integrated into one processing unit, or each unit may exist physically separately, or two or more units may be integrated into one unit.
  • the above integrated unit can be implemented in the form of hardware or in the form of a software functional unit.

Abstract

The present invention discloses a method and apparatus for distinguishing objects. The method includes: acquiring a plurality of object groups displayed in an image, where each object group includes at least one target object, and the target objects in any plurality of object groups are allowed to be configured with the same resources; setting different tag values for the plurality of object groups, where the target objects contained in each object group have the same tag value; and performing pixel correction on the pixels of the target objects contained in each object group according to the tag value of each object group, where the pixels of target objects with different tag values are corrected to have different display attributes. The present invention solves the technical problem in the prior art that, in order to distinguish different target objects in an image, different resources need to be configured for different target objects, resulting in a low resource reuse rate.

Description

Method and Apparatus for Distinguishing Objects

Technical Field

The present invention relates to the field of image processing, and in particular to a method and apparatus for distinguishing objects.
Background

In the prior art, when multiple objects appearing simultaneously in an image need to be distinguished, the commonly used method is to configure different resources for different object groups, so that the one or more objects in a given object group share the same display characteristics, allowing the user to recognize them. For example, in the game Counter-Strike, during shooting the user needs to distinguish whether a target object displayed in the display area is a teammate or an enemy. After the user selects whether to play as a terrorist or a counter-terrorist, the game prompts the user to choose corresponding clothing, or equips the user with clothing corresponding to the selected identity. It can be seen that during the game the clothing of players of the two different identities differs; during play, the user judges from the clothing of an object appearing in the display area whether the object is a teammate or an enemy, and further decides whether the object needs to be shot at.
However, in the process of distinguishing objects in an image by the above method, the following problems exist:

(1) With the prior art, in which the user distinguishes target objects in this way, objects in different object groups cannot use the same set of resources, so resource reusability is low, and it is difficult to achieve coordination when the objects in multiple object groups all want to use the same set of resources;

(2) When a target appears at a position relatively far away in the user's view, it is difficult for the user to recognize the specific attributes of the target object.

For the prior-art problem that, in order to distinguish different target objects in an image, different resources need to be configured for different target objects, resulting in a low resource reuse rate, no effective solution has yet been proposed.
Summary of the Invention

Embodiments of the present invention provide a method and apparatus for distinguishing objects, so as to at least solve the prior-art technical problem that, in order to distinguish different target objects in an image, different resources need to be configured for different target objects, resulting in a low resource reuse rate.

According to one aspect of the embodiments of the present invention, a method for distinguishing objects is provided, including: acquiring a plurality of object groups displayed in an image, where each object group includes at least one target object, and the target objects in any plurality of object groups are allowed to be configured with the same resources; setting different tag values for the plurality of object groups, where the target objects contained in each object group have the same tag value; and performing pixel correction on the pixels of the target objects contained in each object group according to the tag value of each object group, where the pixels of target objects with different tag values are corrected to have different display attributes.

According to another aspect of the embodiments of the present invention, an apparatus for distinguishing objects is further provided, including: a first acquisition module, configured to acquire a plurality of object groups displayed in an image, where each object group includes at least one target object, and the target objects in any plurality of object groups are allowed to be configured with the same resources; a setting module, configured to set different tag values for the plurality of object groups, where the target objects contained in each object group have the same tag value; and a correction module, configured to perform pixel correction on the pixels of the target objects contained in each object group according to the tag value of each object group, where the pixels of target objects with different tag values are corrected to have different display attributes.

In the embodiments of the present invention, a plurality of object groups displayed in an image are acquired, where each object group includes at least one target object, and the target objects in any plurality of object groups are allowed to be configured with the same resources; different tag values are set for the plurality of object groups, where the target objects contained in each object group have the same tag value; and pixel correction is performed on the pixels of the target objects contained in each object group according to the tag value of each object group. This achieves the technical purpose of distinguishing objects even when a plurality of different object groups use the same resources, thereby realizing the technical effect of improving the resource reuse rate and solving the prior-art technical problem that, in order to distinguish different target objects in an image, different resources need to be configured for different target objects, resulting in a low resource reuse rate.
Brief Description of the Drawings

The drawings described here are provided for a further understanding of the present invention and constitute a part of this application. The illustrative embodiments of the present invention and their descriptions are used to explain the present invention and do not constitute an improper limitation of the present invention. In the drawings:
FIG. 1 is a block diagram of the hardware structure of a computer terminal for a method of distinguishing objects according to an embodiment of the present application;

FIG. 2 is a flowchart of a method for distinguishing objects according to an embodiment of the present application;

FIG. 3(a) is a bar chart of the processing time of super post-processing according to the prior art;

FIG. 3(b) is a bar chart of the processing time of super post-processing according to an embodiment of the present application;

FIG. 4(a) is a processing flowchart of a post-processing stage according to the prior art;

FIG. 4(b) is a processing flowchart of an optional post-processing stage capable of distinguishing objects according to an embodiment of the present application;

FIG. 5 is a schematic structural diagram of an apparatus for distinguishing objects according to an embodiment of the present application;

FIG. 6 is a schematic structural diagram of an optional apparatus for distinguishing objects according to an embodiment of the present application;

FIG. 7 is a schematic structural diagram of an optional apparatus for distinguishing objects according to an embodiment of the present application;

FIG. 8 is a schematic structural diagram of an optional apparatus for distinguishing objects according to an embodiment of the present application;

FIG. 9 is a schematic structural diagram of an optional apparatus for distinguishing objects according to an embodiment of the present application;

FIG. 10 is a schematic structural diagram of an optional apparatus for distinguishing objects according to an embodiment of the present application;

FIG. 11 is a schematic structural diagram of an optional apparatus for distinguishing objects according to an embodiment of the present application;

FIG. 12 is a schematic structural diagram of an optional apparatus for distinguishing objects according to an embodiment of the present application;

FIG. 13 is a schematic structural diagram of an optional apparatus for distinguishing objects according to an embodiment of the present application; and

FIG. 14 is a schematic diagram of a terminal for distinguishing objects according to an embodiment of the present application.
Detailed Description

To enable those skilled in the art to better understand the solutions of the present invention, the technical solutions in the embodiments of the present invention will be described clearly and completely below with reference to the drawings in the embodiments of the present invention. Obviously, the described embodiments are only some, not all, of the embodiments of the present invention. Based on the embodiments of the present invention, all other embodiments obtained by those of ordinary skill in the art without creative effort shall fall within the protection scope of the present invention.

It should be noted that the terms "first", "second", and the like in the description, the claims, and the above drawings of the present invention are used to distinguish similar objects and are not necessarily used to describe a specific order or sequence. It should be understood that data so used are interchangeable where appropriate, so that the embodiments of the present invention described here can be implemented in orders other than those illustrated or described here. In addition, the terms "include" and "have" and any variations thereof are intended to cover non-exclusive inclusion; for example, a process, method, system, product, or device that includes a series of steps or units is not necessarily limited to the steps or units clearly listed, but may include other steps or units that are not clearly listed or that are inherent to the process, method, product, or device.

The technical terms appearing in the embodiments of the present application are explained as follows:
Post-processing: post-processing is a stage of the computer graphics pipeline; the process of processing the image output after the three-dimensional scene has been rendered is called post-processing.

Pipeline: the term pipeline describes a process that may involve two or more distinct stages or steps.

Render target texture: in the field of three-dimensional computer graphics, a render target texture is a graphics processing unit (GPU) technique that allows a three-dimensional scene to be rendered into an intermediate storage buffer.

Color cast: a color cast is a difference between the displayed color and the true color caused by one or more colors being weak or strong. Color casts are common when using liquid crystal displays, cameras, printers, and similar devices.

Channel: a channel is a grayscale image that stores a particular type of information in a digital image. An image can have up to dozens of channels; the commonly used RGB and Lab images have three channels by default, while CMYK images have four channels by default.

Alpha channel: the alpha channel is an 8-bit grayscale channel that uses 256 levels of gray to record transparency information in an image, defining transparent, opaque, and semi-transparent regions, where black represents transparent, white represents opaque, and gray represents semi-transparent.

Highlight (bloom): a computer graphics effect used in video games, demo animations, and high-dynamic-range rendering to imitate the imaging of objects by a real camera. The effect produces streaks or feather-like light around high-brightness objects to blur image details.
Embodiment 1

According to the embodiments of the present invention, an embodiment of a method for distinguishing objects is provided. It should be noted that the steps shown in the flowcharts of the drawings can be executed in a computer system such as a set of computer-executable instructions, and although a logical order is shown in the flowcharts, in some cases the steps shown or described may be performed in an order different from the one here.

The method embodiment provided in Embodiment 1 of this application can be executed in a mobile terminal, a computer terminal, or a similar computing device. Taking running on a computer terminal as an example, FIG. 1 is a block diagram of the hardware structure of a computer terminal for distinguishing objects according to an embodiment of the present application. As shown in FIG. 1, the computer terminal 10 may include one or more processors 102 (only one is shown in the figure; the processor 102 may include, but is not limited to, a processing device such as a microprocessor (MCU) or a programmable logic device (FPGA)), a memory 104 for storing data, and a transmission device 106 for communication functions. Those of ordinary skill in the art can understand that the structure shown in FIG. 1 is only illustrative and does not limit the structure of the above electronic device. For example, the computer terminal 10 may also include more or fewer components than shown in FIG. 1, or have a configuration different from that shown in FIG. 1.

The memory 104 can be used to store software programs and modules of application software, such as the program instructions/modules corresponding to the method for distinguishing objects in the embodiments of the present invention. The processor 102 executes various functional applications and data processing by running the software programs and modules stored in the memory 104, that is, implements the above method for distinguishing objects. The memory 104 may include high-speed random access memory, and may also include non-volatile memory, such as one or more magnetic storage devices, flash memory, or other non-volatile solid-state memory. In some examples, the memory 104 may further include memory remotely located relative to the processor 102; such remote memory may be connected to the computer terminal 10 through a network. Examples of such networks include, but are not limited to, the Internet, intranets, local area networks, mobile communication networks, and combinations thereof.

The transmission device 106 is used to receive or send data via a network. Specific examples of the above network may include a wireless network provided by the communication provider of the computer terminal 10. In one example, the transmission device 106 includes a network interface controller (NIC), which can be connected to other network devices through a base station so as to communicate with the Internet. In another example, the transmission device 106 may be a radio frequency (RF) module, which is used to communicate with the Internet wirelessly.
According to an embodiment of the present invention, a method for distinguishing objects is provided. As shown in FIG. 2, the method includes:

Step S202: acquire a plurality of object groups displayed in an image, where each object group includes at least one target object, and the target objects in any plurality of object groups are allowed to be configured with the same resources.

In an optional embodiment, taking shooting games as an example: in common shooting games the user needs to select a side to determine teammates and enemies. For example, in Counter-Strike, the user can choose to play as a terrorist or a counter-terrorist; in CrossFire, the user can choose to play as a lurker or a defender. In the same game, users who choose the same identity are objects belonging to the same object group, and one object group contains at least one object.

It should be noted here that the above embodiments of this application allow the objects in different object groups to be configured with the same resources.

In an optional embodiment, taking the lurkers and defenders in CrossFire as an example: in the prior art, lurkers and defenders cannot use the same set of clothing, because the user of CrossFire judges whether other objects in the game are teammates or enemies by the different clothing of lurkers and defenders. In this application, different groups in the game can choose the same set of clothing without affecting the user's ability to distinguish friend from foe.

Step S204: set different tag values for the plurality of object groups, where the target objects contained in each object group have the same tag value.

In an optional embodiment, still taking the target object as a lurker or defender in CrossFire: when acquiring the lurker and defender objects, the characters in the game are tagged according to the nature of their team. In this example there are three object groups: friendly characters, enemy characters, and the user's own character. For instance, friendly characters may be tagged 1, enemy characters tagged 2, and the user's own character tagged 0.

Step S206: according to the tag value of each object group, perform pixel correction on the pixels of the target objects contained in each object group, where the pixels of target objects with different tag values are corrected to have different display attributes.
此处需要说明的是,上述任意对象组中包含至少一个目标对象,且不同对象组具有不同的标记值,但同一对象组中的对象具有相同的标记值。应用上述方法可以达到当多个目标对象的配置资源相同时,将多个目标对象进行区分的技术目的。
此处还需要说明的是,通过上述实施例提供的方法进行区分的对象组可以是两个对象组,也可以是多个对象组,但对象组的个数不限于此,任意个数的对象组都能够通过上述步骤提供的方法进行区分。
需要注意的是,对目标对象的像素进行像素矫正的过程在图像处理的后处理阶段的超级后处理中进行,在后处理中对像素进行矫正,极大地提高了渲染效率。图3(a)是根据现有技术的超级后处理的处理时间柱状图,图3(b)是根据本申请实施例的超级后处理的处理时间柱状图。本申请在超级后处理阶段加入了对目标对象的像素进行像素矫正的步骤,结合图3(a)和图3(b)所示,本申请对目标对象的像素进行像素矫正之后的渲染时间与现有技术中的渲染时间相比稍有增加,但增加并不明显:在现有技术中,以Erg311(表示这一帧图像包括311个对象)为例,渲染时间为148.0us;在本申请中,Erg312(表示这一帧图像包括312个对象)的渲染时间为155.0us,与现有技术的渲染时间相比,仅增加了4.7%,因此本申请提供的方案在达到区分对象的基础上,能够保持原有的渲染效率。
图4(a)是根据现有技术的一种后处理阶段的处理流程图,图4(b)是根据本申请实施例的一种可选的能够区分对象的后处理阶段的处理流程图。结合图4(a)和图4(b),能够得到,本申请仅在超级后处理阶段加入了像素矫正的步骤。
在本发明实施例的上述步骤中,采用获取图像中显示的多个对象组,其中,每个对象组中至少包括一个目标对象,任意多个对象组中的目标对象允许配置相同的资源;对多个对象组设置不同的标记值,其中,每个对象组中包含的目标对象具有相同的标记值的方式,通过根据每个对象组的标记值,对每个对象组中包含的目标对象的像素分别进行像素矫正,达到了在多个不同的对象组使用同一资源时依旧能够区分对象的技术目的,从而实现了提高资源复用率的技术效果,进而解决了现有技术为了区分图像中不同的目标对象,需要针对不同的目标对象配置不同的资源,导致资源复用率低的技术问题。
在本申请提供的一种可选实施例中,步骤S204,对多个对象组设置不同的标记值,包括:
步骤S2041,构建多个对象组与多个不同标记值的映射关系。
此处需要说明的是,每个对象组对应的标记值不限于任意数值范围,只需保证多个对象组的标记值互不相同即可。
在一种可选的实施例中,仍以目标对象为穿越火线中的潜伏者或保卫者为例,在这一示例中,具有三个对象组,分别为:用户自身、用户队友以及用户的敌人,因此对应上述三个对象组构建对象组与标记值的映射关系,由于该映射关系仅用于区分不同的对象组,因此对具体标记值不做限定,仅确定不同对象组对应的标记值不同即可。例如,用户自身、用户队友以及用户的敌人对应的标记值可以分别是0,1,2。
构建多个对象组与标记值的映射关系后,为了使对象组中的每个对象均能被区分,因此需要对对象组中包含的每个对象进行标记。
步骤S2043,通过映射关系,对多个对象组中的每个对象设置相应的标记值,其中,每个对象的标记值被设置为每个对象所属的对象组对应的标记值。
步骤S2045,采用每个对象的标记值对每个对象包含的多个像素进行标记。
在一种可选的实施例中,仍以目标对象为穿越火线中的潜伏者或保卫者为例,在每个对象组中包含的每个目标对象都具有标记值的情况下,由于对目标对象设置标记值的目的在于区分不同对象组的目标对象,且本申请采用的区分方法是针对目标对象的像素进行像素矫正,因此,每个目标对象包括的每个像素都需要具有与目标对象相同的标记值,以使得对目标对象进行像素矫正时能够区别不同的对象。
在本申请提供的一种可选实施例中,步骤S204,在对多个对象组设置不同的标记值之后,方法还包括:
步骤S2047,将目标对象渲染至第一渲染目标纹理中,其中,第一渲染目标纹理具有多个通道。
在一种可选的实施例中,仍以目标对象为穿越火线中的潜伏者或保卫者为例,将目标对象,即,将角色本体渲染至第一渲染目标纹理RT0中的多通道中。
此处需要说明的是,上述第一渲染目标纹理可以包含RGB三通道,也可以包含CMYK四通道,但不限于此。
步骤S2049,将目标对象的像素的标记值进行归一化处理,得到标准标记值。
此处需要说明的是,将标记值进行归一化处理后,得到的标记值虽然与原标记值不相同,但进行过归一化处理之后,各个标准标记值之间仍保持区别,能够起到区分对象的效果。
在一种可选的实施例中,仍以目标对象为穿越火线中的潜伏者或保卫者为例,当用户自身、用户队友以及用户的敌人对应的标记值分别是0,1,2的情况下,进行归一化处理后的标准标记值可以分别是0,0.5,1。
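归一化得到标准标记值的过程可以用如下示意代码表示(Python示例;归一化方式假设为除以最大标记值,专利未限定具体归一化公式):

```python
# 示意性示例(非专利原文):将标记值除以最大标记值,归一化到 [0, 1],
# 得到标准标记值;归一化后不同标记值之间仍保持区别。
def normalize_marks(marks):
    max_mark = max(marks)
    if max_mark == 0:          # 全部为 0 时避免除零
        return [0.0 for _ in marks]
    return [m / max_mark for m in marks]

print(normalize_marks([0, 1, 2]))  # [0.0, 0.5, 1.0]
```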
步骤S2051,将归一化处理得到的标准标记值输入至第二渲染目标纹理,其中,第二渲染目标纹理具有多个通道,且不同的标准标记值被输入至通道值不同的第二渲染目标纹理的多个通道中。
在上述步骤中,将目标对象对应的标准标记值输入至第二渲染目标纹理RT1的Alpha通道中,其中,第二渲染目标纹理仍具有多个通道,一个目标对象的标准标记值仅占用一个通道。
此处需要说明的是,不同标准标记值输入的通道的通道值是不同的,通道值越大,表现的颜色越偏向白色,通道值越小,表现的颜色越偏向灰色,因此,通过第二渲染目标纹理便可以得到目标对象的属性,即,目标对象属于哪个对象组中的对象。
此处还需要说明的是,第二渲染目标纹理包含的通道中仅具有目标对象的标准标记值,因此,第二渲染目标纹理输出的是具有一定灰度的目标对象的轮廓,并不包括目标对象本身。
在本申请提供的一种可选实施例中,步骤S206,根据每个对象组的标记值,对每个对象组中包含的目标对象的像素分别进行像素矫正,包括:将具有不同标记值的目标对象的像素矫正为不同颜色,其中,将具有不同标记值的目标对象的像素矫正为不同颜色包括:
步骤S2061,获取目标对象的像素对应的标准标记值。
步骤S2063,根据目标对象的像素对应的标准标记值,调整目标对象中组成每个像素颜色的多个基色的显示强度,以矫正目标对象中每个像素的颜色,其中,具有相同标记值的目标对象的像素被矫正为相应的颜色。
在一种可选的实施例中,以目标对象中包含的像素的颜色由RGB三个通道的颜色构成为例,当RGB三个通道的颜色的强度发生变化时,就会改变目标对象的像素的显示颜色,从而改变目标对象的显示颜色,例如,目标对象其中一个像素的RGB值为(58,110,165),此时这一像素的显示颜色为蓝色,当将这一像素的RGB修改为(248,24,237)时,该像素的显示颜色被修改为玫红色。
此处需要说明的是,具有相同标记值的目标对象的颜色并不相同,同时,一个目标对象中的多个像素的颜色也并不相同,因此,在对目标对象的像素进行像素矫正时,并非将目标对象的像素矫正为同一个RGB值,而是将具有相同标准标记值的像素的RGB值做统一调整,且这一调整强度是相同的。为了使具有相同标准标记值的像素在矫正时能够得到相同的调整强度,需要引入矫正常量。
在本申请提供的一种可选实施例中,步骤S2063,根据目标对象的像素对应的标准标记值,调整目标对象中组成每个像素颜色的多个基色的显示强度,包括:
步骤S20631,通过如下公式计算目标对象中每个像素的矫正像素色,
Colordest=Colorscr*Colortrans
其中,Colordest用于表征目标对象的像素的矫正像素色,Colorscr用于表征目标对象的像素的原像素色,Colortrans用于表征矫正常量,其中,矫正常量用于调整目标对象中组成每个像素色的多个基色的显示强度。
上述矫正常量用于表征矫正色彩的矫正幅度,需要注意的是,具有同一标准标记值的像素在调整的过程中具有相同的矫正常量,具有不同标记值的像素的矫正常量是不相同的,也正是因为矫正常量的不同,使得不同对象组的对象呈现不同显示效果。
此处还需要说明的是,上述矫正常量可以是单维度矩阵,且矩阵中的每个元素都在(0,1]之间取值。
在一种可选的实施例中,仍以上述目标对象中包含的像素的颜色由RGB三个通道的颜色构成为例,某一像素的标准标记值为0.5,且RGB值为(58,110,165),当Colortrans=(1,0.6,0.6)时,可以认为在该像素的RGB通道中,R通道保持原有值,G通道和B通道分别衰减为原有值的0.6,由于R通道表示红色通道,因此经过上述处理后,该像素的显示颜色会偏红色。
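上述公式Colordest=Colorscr*Colortrans的逐通道计算可以用如下示意代码验证(Python示例;对结果取整的方式为假设,专利原文未规定取整规则):

```python
# 示意性示例(非专利原文):按 Colordest = Colorscr * Colortrans
# 逐通道计算矫正像素色;矫正常量的各分量在 (0, 1] 之间取值。
def correct_pixel(color_scr, color_trans):
    # 对每个基色通道乘以对应的矫正分量,结果四舍五入为整数(取整方式为假设)
    return tuple(round(c * t) for c, t in zip(color_scr, color_trans))

# R 通道保持原值,G、B 通道衰减为原值的 0.6,整体偏红
print(correct_pixel((58, 110, 165), (1, 0.6, 0.6)))  # (58, 66, 99)
```

使用(1,1,1)作为矫正常量时像素保持不变,对应下文"不需要偏色的对象"的处理方式。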
上述方法不限于在RGB三个通道的情况下使用,同样适用于CMYK四通道或其他多通道的情况。
此处需要说明的是,由于上述实施例提供的方法在对像素进行像素矫正时,是以每个像素的标准标记值为准,对具有相同标记值的像素使用同一个矫正常量进行矫正,且具有同一个标准标记值的像素构成了对象组,因此,通过上述方案得到的结果是,不同对象组的对象呈现不同的偏色,使用户能够轻易地辨识不同的对象组。
此处还需要说明的是,当有对象组不需要偏色时,可以将这一对象组中对象的像素标记为特殊标记值,当进行像素矫正时,遇到特殊标记值则忽略,不进行矫正,或为不需要偏色的对象的像素使用(1,1,1)的矫正常量来矫正。例如,在上述穿越火线的游戏中,在对用户的队友和敌人做相应的像素矫正后,用户自身是不需要进行像素矫正处理的,因此可以采用上述方式避免对用户自身的像素进行矫正。
在本申请提供的一种可选实施例中,步骤S206,根据每个对象组的标记值,对每个对象组中包含的目标对象的像素分别进行像素矫正,包括:
步骤S2261,对每个对象组中包含的目标对象的边缘像素进行发光处理,其中,具有不同标记值的目标对象的边缘像素被矫正为具有不同的发光颜色。
在一种可选的实施例中,在仅有两个目标对象的情况下,可以选择其中一方进行边缘发光处理,针对多个不同的目标对象,通过不同的发光颜色加以区别,以分辨不同对象组中的目标对象。
在本申请提供的一种可选实施例中,步骤S206,在根据每个对象组的标记值,对每个对象组中包含的目标对象的像素分别进行像素矫正之后,方法还包括:对目标对象的像素进行色调映射处理,其中,对目标对象的像素进行色调映射处理包括:
步骤S208,通过将目标对象的像素进行归一化处理,以调整目标对象的对比度和/或亮度。
在经过像素矫正后,会引起图像整体偏暗的显示效果,为了进一步提高显示效果,在进行完像素矫正后,还需要对图像进行色调映射处理,以对图像进行优化,得到最终输出的渲染目标纹理。
在一种可选的实施例中,对图像进行色调映射的方法可以是,将图像中每个像素进行归一化处理,即,将颜色范围为[0,∞)的像素映射到颜色范围为[0,1]的范围中。经过色调映射后,图像的对比度和亮度等属性将得到进一步优化。
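专利未指定具体的色调映射算子;下面以常见的Reinhard算子 x/(1+x) 为例,演示如何把 [0, ∞) 范围的颜色值压缩到 [0, 1)(Python示意代码,仅为假设性示例):

```python
# 示意性示例(非专利原文):以 Reinhard 算子 x / (1 + x) 为例做色调映射,
# 将 [0, ∞) 范围的颜色值压缩到 [0, 1);专利并未限定具体算子。
def tone_map(value):
    return value / (1.0 + value)

print(tone_map(0.0), tone_map(1.0), tone_map(3.0))  # 0.0 0.5 0.75
```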
在本申请提供的一种可选实施例中,步骤S206,在根据每个对象组的标记值,对每个对象组中包含的目标对象的像素分别进行像素矫正之前,方法还包括:
步骤S2010,对目标对象的像素进行渲染处理,其中,渲染处理包括以下任意一个或多个:运动模糊处理、景深处理、高光处理。
在本申请提供的一种可选实施例中,步骤S2010,对目标对象的像素进行渲染处理,包括:
步骤S2011,运动模糊处理包括:将目标像素周围预设范围内的像素进行加权平均得到新的像素,将目标像素调整至新的像素,其中,目标像素为在目标对象的运动方向上的像素。
运动模糊主要模拟场景内物体快速移动或镜头移动产生的模糊效果,使渲染出来的图像更接近于人眼或摄像机捕捉到的画面。具体的方法可以是获取目标对象在运动方向上的多个像素,采用目标像素周围的像素进行加权平均得到新像素值,并将该新像素值作为目标像素的像素值。
此处需要说明的是,在目标像素周围的像素已经更改为新像素值的情况下,计算该像素的新像素值时,仍采用周围像素的原像素值计算。
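上述运动模糊的加权平均,以及上文强调的"仍采用周围像素的原像素值计算"这一要点,可以用如下一维示意代码表示(Python示例;等权平均与邻域半径均为假设,实际权重可沿运动方向衰减):

```python
# 示意性示例(非专利原文):沿运动方向的一维像素序列上做等权加权平均。
# 关键点:所有新值都从原始缓冲 src 读取,而不是逐像素原地修改,
# 对应上文"仍采用周围像素的原像素值计算"的要求。
def motion_blur_1d(src, radius=1):
    out = []
    for i in range(len(src)):
        lo = max(0, i - radius)
        hi = min(len(src) - 1, i + radius)
        window = src[lo:hi + 1]          # 目标像素周围预设范围内的像素
        out.append(sum(window) / len(window))
    return out

print(motion_blur_1d([0, 0, 9, 0, 0]))  # [0.0, 3.0, 3.0, 3.0, 0.0]
```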
步骤S2023,景深处理包括:对目标对象的像素进行全屏模糊处理,得到全屏模糊处理的结果,将全屏模糊处理的结果与目标对象的像素进行混合。
在上述步骤中,全屏模糊处理与步骤S2011中的模糊处理相同,区别在于全屏模糊处理针对于整个显示区域中的全部像素进行模糊处理。
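景深处理中"将全屏模糊结果与原像素混合"的一步可以表示为线性插值(Python示意代码;混合系数 blur_amount 由像素离焦平面的远近决定,此处作为假设参数):

```python
# 示意性示例(非专利原文):景深 = 原像素与全屏模糊结果按模糊度线性混合。
# blur_amount 在 [0, 1] 取值,通常由像素到焦平面的距离决定(此处为假设参数)。
def depth_of_field(original, blurred, blur_amount):
    return original * (1.0 - blur_amount) + blurred * blur_amount

print(depth_of_field(100.0, 40.0, 0.0))  # 100.0,对焦处保留原像素
print(depth_of_field(100.0, 40.0, 1.0))  # 40.0,远离焦平面处完全使用模糊结果
```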
步骤S2025,高光处理包括:将目标对象的高光部分输出到贴图中,对高光部分的像素进行模糊处理,将模糊处理的结果通过Alpha混合输入至目标对象的像素中。
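高光处理的三个步骤(提取高光到贴图、对高光做模糊、经Alpha混合写回原像素)可以用如下一维示意代码概括(Python示例;亮度阈值与混合系数均为假设):

```python
# 示意性示例(非专利原文):高光处理的三步,
# 1) 按亮度阈值提取高光并"输出到贴图";2) 对高光做模糊;3) 按 Alpha 混合写回。
# 阈值 threshold 与混合系数 alpha 均为假设参数。
def bloom(pixels, threshold=200, alpha=0.5):
    highlights = [p if p >= threshold else 0 for p in pixels]
    blurred = []
    for i in range(len(highlights)):     # 简单的一维盒式模糊
        lo = max(0, i - 1)
        hi = min(len(highlights) - 1, i + 1)
        window = highlights[lo:hi + 1]
        blurred.append(sum(window) / len(window))
    # Alpha 混合:结果 = 原像素 * (1 - alpha) + 模糊高光 * alpha
    return [p * (1 - alpha) + b * alpha for p, b in zip(pixels, blurred)]

print(bloom([0, 255, 0]))
```

其效果是高亮像素的光芒向相邻像素扩散,对应前文术语表中高光"在高亮度物体周围产生条纹或羽毛状的光芒"的描述。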
需要说明的是,对于前述的各方法实施例,为了简单描述,故将其都表述为一系列的动作组合,但是本领域技术人员应该知悉,本发明并不受所描述的动作顺序的限制,因为依据本发明,某些步骤可以采用其他顺序或者同时进行。其次,本领域技术人员也应该知悉,说明书中所描述的实施例均属于优选实施例,所涉及的动作和模块并不一定是本发明所必须的。
通过以上的实施方式的描述,本领域的技术人员可以清楚地了解到根据上述实施例的方法可借助软件加必需的通用硬件平台的方式来实现,当然也可以通过硬件,但很多情况下前者是更佳的实施方式。基于这样的理解,本发明的技术方案本质上或者说对现有技术做出贡献的部分可以以软件产品的形式体现出来,该计算机软件产品存储在一个存储介质(如ROM/RAM、磁碟、光盘)中,包括若干指令用以使得一台终端设备(可以是手机,计算机,服务器,或者网络设备等)执行本发明各个实施例所述的方法。
实施例2
根据本发明实施例,还提供了一种用于实施上述区分对象的方法的装置,图5是根据本申请实施例的一种区分对象的装置的结构示意图,如图5所示,该装置包括:第一获取模块50、设置模块52和矫正模块54,
其中,第一获取模块50用于获取图像中显示的多个对象组,其中,每个对象组中至少包括一个目标对象,任意多个对象组中的目标对象允许配置相同的资源;设置模块52用于对所述多个对象组设置不同的标记值,其中,每个对象组中包含的目标对象具有相同的标记值;矫正模块54用于根据每个所述对象组的标记值,对所述每个对象组中包含的目标对象的像素分别进行像素矫正;其中,具有不同标记值的所述目标对象的像素被矫正为具有不同的显示属性。
此处需要说明的是,上述任意对象组中包含至少一个目标对象,且不同对象组具有不同的标记值,但同一对象组中的对象具有相同的标记值。应用上述方法可以达到当多个目标对象的配置资源相同时,将多个目标对象进行区分的技术目的。
此处还需要说明的是,通过上述实施例提供的方法进行区分的对象组可以是两个对象组,但对象组的个数不限于此,任意个数的对象组都能够通过上述步骤提供的方法进行区分。
需要注意的是,对目标对象的像素进行像素矫正的过程在图像处理的后处理阶段的超级后处理中进行,在后处理中对像素进行矫正,极大地提高了渲染效率。本申请在超级后处理阶段加入了对目标对象的像素进行像素矫正的步骤,结合图3(a)和图3(b)所示,本申请对目标对象的像素进行像素矫正之后的渲染时间与现有技术中的渲染时间相比稍有增加,但增加并不明显:在现有技术中,以Erg311(表示这一帧图像包括311个对象)为例,渲染时间为148.0us;在本申请中,Erg312(表示这一帧图像包括312个对象)的渲染时间为155.0us,与现有技术的渲染时间相比,仅增加了4.7%,因此本申请提供的方案在达到区分对象的基础上,能够保持原有的渲染效率。
结合图4(a)和图4(b),能够得到,本申请仅在超级后处理阶段加入了像素矫正的步骤。
在本发明实施例的上述步骤中,采用获取图像中显示的多个对象组,其中,每个对象组中至少包括一个目标对象,任意多个对象组中的目标对象允许配置相同的资源;对多个对象组设置不同的标记值,其中,每个对象组中包含的目标对象具有相同的标记值的方式,通过根据每个对象组的标记值,对每个对象组中包含的目标对象的像素分别进行像素矫正,达到了在多个不同的对象组使用同一资源时依旧能够区分对象的技术目的,从而实现了提高资源复用率的技术效果,进而解决了现有技术为了区分图像中不同的目标对象,需要针对不同的目标对象配置不同的资源,导致资源复用率低的技术问题。
此处需要说明的是,上述第一获取模块50、设置模块52和矫正模块54可以作为装置的一部分运行在计算机终端中,可以通过计算机终端中的处理器来执行上述模块实现的功能,计算机终端也可以是智能手机(如Android手机、iOS手机等)、平板电脑、掌上电脑以及移动互联网设备(Mobile Internet Devices,MID)、PAD等终端设备。
本申请上述实施例提供的一种可选方案中,结合图6所示,上述设置模块52包括:构建模块60、设置子模块62和标记模块64,其中
构建模块60用于构建所述多个对象组与多个不同标记值的映射关系;设置子模块62用于通过所述映射关系,对所述多个对象组中的每个对象设置相应的标记值,其中,所述每个对象的标记值被设置为所述每个对象所属的对象组对应的标记值;标记模块64用于采用所述每个对象的标记值对所述每个对象包含的多个像素进行标记。
此处需要说明的是,上述构建模块60、设置子模块62和标记模块64可以作为装置的一部分运行在计算机终端中,可以通过计算机终端中的处理器来执行上述模块实现的功能,计算机终端也可以是智能手机(如Android手机、iOS手机等)、平板电脑、掌上电脑以及移动互联网设备(Mobile Internet Devices,MID)、PAD等终端设备。
本申请上述实施例提供的一种可选方案中,结合图7所示,上述装置还包括:第一渲染模块70、第一归一化模块72和输入模块74,其中,
第一渲染模块70用于将所述目标对象渲染至第一渲染目标纹理中,其中,所述第一渲染目标纹理具有多个通道;第一归一化模块72用于将所述目标对象的像素的标记值进行归一化处理,得到标准标记值;输入模块74用于将所述归一化处理得到的所述标准标记值输入至第二渲染目标纹理,其中,所述第二渲染目标纹理具有多个通道,且标记值不同的所述目标对象被输入至通道值不同的所述第二渲染目标纹理的多个通道中。
在一种可选的实施例中,仍以目标对象为穿越火线中的潜伏者或保卫者为例,将目标对象,即,将角色本体渲染至第一渲染目标纹理RT0中的多通道中。
此处需要说明的是,上述第一渲染目标纹理可以包含RGB三通道,也可以包含CMYK四通道,但不限于此。
此处需要说明的是,上述第一渲染模块70、第一归一化模块72和输入模块74可以作为装置的一部分运行在计算机终端中,可以通过计算机终端中的处理器来执行上述模块实现的功能,计算机终端也可以是智能手机(如Android手机、iOS手机等)、平板电脑、掌上电脑以及移动互联网设备(Mobile Internet Devices,MID)、PAD等终端设备。
本申请上述实施例提供的一种可选方案中,结合图8所示,上述矫正模块54包括:矫正子模块,用于将具有不同标记值的所述目标对象的像素矫正为不同颜色,其中,所述矫正子模块包括:第二获取模块80和调整模块82,其中,第二获取模块80用于获取所述目标对象的像素对应的所述标准标记值;调整模块82用于根据所述目标对象的像素对应的所述标准标记值,调整所述目标对象中组成每个像素颜色的多个基色的显示强度,以矫正所述目标对象中每个像素的颜色;其中,具有相同标记值的所述目标对象的像素被矫正为相应的颜色。
在一种可选的实施例中,以目标对象中包含的像素的颜色由RGB三个通道的颜色构成为例,当RGB三个通道的颜色的强度发生变化时,就会改变目标对象的像素的显示颜色,从而改变目标对象的显示颜色,例如,目标对象其中一个像素的RGB值为(58,110,165),此时这一像素的显示颜色为蓝色,当将这一像素的RGB修改为(248,24,237)时,该像素的显示颜色被修改为玫红色。
此处需要说明的是,具有相同标记值的目标对象的颜色并不相同,同时,一个目标对象中的多个像素的颜色也并不相同,因此,在对目标对象的像素进行像素矫正时,并非将目标对象的像素矫正为同一个RGB值,而是将具有相同标准标记值的像素的RGB值做统一调整,且这一调整强度是相同的。为了使具有相同标准标记值的像素在矫正时能够得到相同的调整强度,需要引入矫正常量。
此处需要说明的是,上述第二获取模块80和调整模块82可以作为装置的一部分运行在计算机终端中,可以通过计算机终端中的处理器来执行上述模块实现的功能,计算机终端也可以是智能手机(如Android手机、iOS手机等)、平板电脑、掌上电脑以及移动互联网设备(Mobile Internet Devices,MID)、PAD等终端设备。
本申请上述实施例提供的一种可选方案中,结合图9所示,上述调整模块82包括:计算模块90,其中,
计算模块90用于通过如下公式计算所述目标对象中每个像素的矫正像素色,Colordest=Colorscr*Colortrans,其中,所述Colordest用于表征所述目标对象的像素的矫正像素色,所述Colorscr用于表征所述目标对象的像素的原像素色,所述Colortrans用于表征矫正常量,其中,所述矫正常量用于调整所述目标对象中组成每个像素色的多个基色的显示强度。
上述矫正常量用于表征矫正色彩的矫正幅度,需要注意的是,具有同一标准标记值的像素在调整的过程中具有相同的矫正常量,具有不同标记值的像素的矫正常量是不相同的,也正是因为矫正常量的不同,使得不同对象组的对象呈现不同显示效果。
此处还需要说明的是,上述矫正常量可以是单维度矩阵,且矩阵中的每个元素都在(0,1]之间取值。
在一种可选的实施例中,仍以上述目标对象中包含的像素的颜色由RGB三个通道的颜色构成为例,某一像素的标准标记值为0.5,且RGB值为(58,110,165),当Colortrans=(1,0.6,0.6)时,可以认为在该像素的RGB通道中,R通道保持原有值,G通道和B通道分别衰减为原有值的0.6,由于R通道表示红色通道,因此经过上述处理后,该像素的显示颜色会偏红色。
上述方法不限于在RGB三个通道的情况下使用,同样适用于CMYK四通道或其他多通道的情况。
此处需要说明的是,由于上述实施例提供的方法在对像素进行像素矫正时,是以每个像素的标准标记值为准,对具有相同标记值的像素使用同一个矫正常量进行矫正,且具有同一个标准标记值的像素构成了对象组,因此,通过上述方案得到的结果是,不同对象组的对象呈现不同的偏色,使用户能够轻易地辨识不同的对象组。
此处还需要说明的是,当有对象组不需要偏色时,可以将这一对象组中对象的像素标记为特殊标记值,当进行像素矫正时,遇到特殊标记值则忽略,不进行矫正,或为不需要偏色的对象的像素使用(1,1,1)的矫正常量来矫正。例如,在上述穿越火线的游戏中,在对用户的队友和敌人做相应的像素矫正后,用户自身是不需要进行像素矫正处理的,因此可以采用上述方式避免对用户自身的像素进行矫正。
此处需要说明的是,上述计算模块90可以作为装置的一部分运行在计算机终端中,可以通过计算机终端中的处理器来执行上述模块实现的功能,计算机终端也可以是智能手机(如Android手机、iOS手机等)、平板电脑、掌上电脑以及移动互联网设备(Mobile Internet Devices,MID)、PAD等终端设备。
本申请上述实施例提供的一种可选方案中,结合图10所示,上述矫正模块54包括:
第一处理模块100,用于对所述每个对象组中包含的目标对象的边缘像素进行发光处理,其中,具有不同标记值的所述目标对象的边缘像素被矫正为具有不同的发光颜色。
本申请上述实施例提供的一种可选方案中,结合图11所示,上述装置还包括:映射模块,用于对所述目标对象的像素进行色调映射处理,其中,所述映射模块包括:
第二归一化模块110用于通过将所述目标对象的像素进行归一化处理,以调整所述目标对象的对比度和/或亮度。
在经过像素矫正后,会引起图像整体偏暗的显示效果,为了进一步提高显示效果,在进行完像素矫正后,还需要对图像进行色调映射处理,以对图像进行优化,得到最终输出的渲染目标纹理。
此处需要说明的是,上述第一处理模块100和第二归一化模块110可以作为装置的一部分运行在计算机终端中,可以通过计算机终端中的处理器来执行上述模块实现的功能,计算机终端也可以是智能手机(如Android手机、iOS手机等)、平板电脑、掌上电脑以及移动互联网设备(Mobile Internet Devices,MID)、PAD等终端设备。
本申请上述实施例提供的一种可选方案中,结合图12所示,上述装置还包括:
第二渲染模块120用于对所述目标对象的像素进行渲染处理,其中,所述渲染处理包括以下任意一个或多个:运动模糊处理、景深处理、高光处理。
此处需要说明的是,上述第二渲染模块120可以作为装置的一部分运行在计算机终端中,可以通过计算机终端中的处理器来执行上述模块实现的功能,计算机终端也可以是智能手机(如Android手机、iOS手机等)、平板电脑、掌上电脑以及移动互联网设备(Mobile Internet Devices,MID)、PAD等终端设备。
本申请上述实施例提供的一种可选方案中,结合图13所示,上述装置还包括:第一处理模块130、第二处理模块132和第三处理模块134,其中,
第一处理模块130,用于所述运动模糊处理包括:将目标像素周围预设范围内的像素进行加权平均得到新的像素,将所述目标像素调整至所述新的像素,其中,目标像素为在所述目标对象的运动方向上的像素;
第二处理模块132,用于所述景深处理包括:对所述目标对象的像素进行全屏模糊处理,得到所述全屏模糊处理的结果,将所述全屏模糊处理的结果与所述目标对象的像素进行混合;
第三处理模块134,用于所述高光处理包括:将所述目标对象的高光部分输出到贴图中,对所述高光部分的像素进行模糊处理,将所述模糊处理的结果通过Alpha混合输入至所述目标对象的像素中。
此处需要说明的是,上述第一处理模块130、第二处理模块132和第三处理模块134可以作为装置的一部分运行在计算机终端中,可以通过计算机终端中的处理器来执行上述模块实现的功能,计算机终端也可以是智能手机(如Android手机、iOS手机等)、平板电脑、掌上电脑以及移动互联网设备(Mobile Internet Devices,MID)、PAD等终端设备。
本申请实施例所提供的各个功能模块可以在移动终端、计算机终端或者类似的运算装置中运行,也可以作为存储介质的一部分进行存储。
由此,本发明的实施例可以提供一种计算机终端,该计算机终端可以是计算机终端群中的任意一个计算机终端设备。可选地,在本实施例中,上述计算机终端也可以替换为移动终端等终端设备。
可选地,在本实施例中,上述计算机终端可以位于计算机网络的多个网络设备中的至少一个网络设备。
在本实施例中,上述计算机终端可以执行区分对象的方法中以下步骤的程序代码:获取图像中显示的多个对象组,其中,每个对象组中至少包括一个目标对象,任意多个对象组中的目标对象允许配置相同的资源;对多个对象组设置不同的标记值,其中,每个对象组中包含的目标对象具有相同的标记值;根据每个对象组的标记值,对每个对象组中包含的目标对象的像素分别进行像素矫正;其中,具有不同标记值的目标对象的像素被矫正为具有不同的显示属性。
可选地,该计算机终端可以包括:一个或多个处理器、存储器、以及传输装置。
其中,存储器可用于存储软件程序以及模块,如本发明实施例中的区分对象的方法及装置对应的程序指令/模块,处理器通过运行存储在存储器内的软件程序以及模块,从而执行各种功能应用以及数据处理,即实现上述的区分对象的方法。存储器可包括高速随机存储器,还可以包括非易失性存储器,如一个或者多个磁性存储装置、闪存、或者其他非易失性固态存储器。在一些实例中,存储器可进一步包括相对于处理器远程设置的存储器,这些远程存储器可以通过网络连接至终端。上述网络的实例包括但不限于互联网、企业内部网、局域网、移动通信网及其组合。
上述的传输装置用于经由一个网络接收或者发送数据。上述的网络具体实例可包括有线网络及无线网络。在一个实例中,传输装置包括一个网络适配器(Network Interface Controller,NIC),其可通过网线与其他网络设备或路由器相连从而可与互联网或局域网进行通讯。在一个实例中,传输装置为射频(Radio Frequency,RF)模块,其用于通过无线方式与互联网进行通讯。
其中,具体地,存储器用于存储预设动作条件和预设权限用户的信息、以及应用程序。
处理器可以通过传输装置调用存储器存储的信息及应用程序,以执行上述方法实施例中的各个可选或优选实施例的方法步骤的程序代码。
本领域普通技术人员可以理解,计算机终端也可以是智能手机(如Android手机、iOS手机等)、平板电脑、掌上电脑以及移动互联网设备(Mobile Internet Devices,MID)、PAD等终端设备。
本领域普通技术人员可以理解上述实施例的各种方法中的全部或部分步骤是可以通过程序来指令终端设备相关的硬件来完成,该程序可以存储于一计算机可读存储介质中,存储介质可以包括:闪存盘、只读存储器(Read-Only Memory,ROM)、随机存取存储器(Random Access Memory,RAM)、磁盘或光盘等。
实施例3
根据本发明实施例,还提供了一种用于实施上述区分对象的方法的服务器或终端,如图14所示,该服务器或终端包括:
通讯接口1402,设置为获取图像中显示的多个对象组。
存储器1404,与通讯接口1402连接,设置为存储获取到的图像中显示的多个对象组。
处理器1406,与通讯接口1402及存储器1404连接,设置为获取图像中显示的多个对象组,其中,每个对象组中至少包括一个目标对象,任意多个对象组中的目标对象允许配置相同的资源;对多个对象组设置不同的标记值,其中,每个对象组中包含的目标对象具有相同的标记值;根据每个对象组的标记值,对每个对象组中包含的目标对象的像素分别进行像素矫正;其中,具有不同标记值的目标对象的像素被矫正为具有不同的显示属性。
可选地,本实施例中的具体示例可以参考上述实施例1和实施例2中所描述的示例,本实施例在此不再赘述。
实施例4
本发明的实施例还提供了一种存储介质。可选地,在本实施例中,上述存储介质可以用于保存上述方法实施例和装置实施例所提供的区分对象的方法所执行的程序代码。
可选地,在本实施例中,上述存储介质可以位于计算机网络中计算机终端群中的任意一个计算机终端中,或者位于移动终端群中的任意一个移动终端中。
可选地,在本实施例中,存储介质被设置为存储用于执行以下步骤的程序代码:
S1,获取图像中显示的多个对象组,其中,每个对象组中至少包括一个目标对象,任意多个对象组中的目标对象允许配置相同的资源;
S2,对多个对象组设置不同的标记值,其中,每个对象组中包含的目标对象具有相同的标记值;
S3,根据每个对象组的标记值,对每个对象组中包含的目标对象的像素分别进行像素矫正;其中,具有不同标记值的目标对象的像素被矫正为具有不同的显示属性。
可选地,存储介质还被设置为存储用于执行以下步骤的程序代码:构建多个对象组与多个不同标记值的映射关系;通过映射关系,对多个对象组中的每个对象设置相应的标记值,其中,每个对象的标记值被设置为每个对象所属的对象组对应的标记值;采用每个对象的标记值对每个对象包含的多个像素进行标记。
可选地,存储介质还被设置为存储用于执行以下步骤的程序代码:将目标对象渲染至第一渲染目标纹理中,其中,第一渲染目标纹理具有多个通道;将目标对象的像素的标记值进行归一化处理,得到标准标记值;
将归一化处理得到的标准标记值输入至第二渲染目标纹理,其中,第二渲染目标纹理具有多个通道,且不同的标准标记值被输入至通道值不同的第二渲染目标纹理的多个通道中。
可选地,存储介质还被设置为存储用于执行以下步骤的程序代码:获取目标对象的像素对应的标准标记值;根据目标对象的像素对应的标准标记值,调整目标对象中组成每个像素颜色的多个基色的显示强度,以矫正目标对象中每个像素的颜色;其中,具有相同标记值的目标对象的像素被矫正为相应的颜色。
可选地,存储介质还被设置为存储用于执行以下步骤的程序代码:通过如下公式计算目标对象中每个像素的矫正像素色,Colordest=Colorscr*Colortrans,其中,Colordest用于表征目标对象的像素的矫正像素色,Colorscr用于表征目标对象的像素的原像素色,Colortrans用于表征矫正常量,其中,矫正常量用于调整目标对象中组成每个像素色的多个基色的显示强度。
可选地,存储介质还被设置为存储用于执行以下步骤的程序代码:对每个对象组中包含的目标对象的边缘像素进行发光处理,其中,具有不同标记值的目标对象的边缘像素被矫正为具有不同的发光颜色。
可选地,存储介质还被设置为存储用于执行以下步骤的程序代码:通过将目标对象的像素进行归一化处理,以调整目标对象的对比度和/或亮度。
可选地,存储介质还被设置为存储用于执行以下步骤的程序代码:对目标对象的像素进行渲染处理,其中,渲染处理包括以下任意一个或多个:运动模糊处理、景深处理、高光处理。
可选地,存储介质还被设置为存储用于执行以下步骤的程序代码:运动模糊处理包括:将目标像素周围预设范围内的像素进行加权平均得到新的像素,将目标像素调整至新的像素,其中,目标像素为在目标对象的运动方向上的像素;景深处理包括:对目标对象的像素进行全屏模糊处理,得到全屏模糊处理的结果,将全屏模糊处理的结果与目标对象的像素进行混合;高光处理包括:将目标对象的高光部分输出到贴图中,对高光部分的像素进行模糊处理,将模糊处理的结果通过Alpha混合输入至目标对象的像素中。
可选地,在本实施例中,上述存储介质可以包括但不限于:U盘、只读存储器(ROM,Read-Only Memory)、随机存取存储器(RAM,Random Access Memory)、移动硬盘、磁碟或者光盘等各种可以存储程序代码的介质。
如上参照附图以示例的方式描述了根据本发明的区分对象的方法及装置。但是,本领域技术人员应当理解,对于上述本发明所提出的区分对象的方法及装置,还可以在不脱离本发明内容的基础上做出各种改进。因此,本发明的保护范围应当由所附的权利要求书的内容确定。
可选地,本实施例中的具体示例可以参考上述实施例1和实施例2中所描述的示例,本实施例在此不再赘述。
上述本发明实施例序号仅仅为了描述,不代表实施例的优劣。
上述实施例中的集成的单元如果以软件功能单元的形式实现并作为独立的产品销售或使用时,可以存储在上述计算机可读取的存储介质中。基于这样的理解,本发明的技术方案本质上或者说对现有技术做出贡献的部分或者该技术方案的全部或部分可以以软件产品的形式体现出来,该计算机软件产品存储在存储介质中,包括若干指令用以使得一台或多台计算机设备(可为个人计算机、服务器或者网络设备等)执行本发明各个实施例所述方法的全部或部分步骤。
在本发明的上述实施例中,对各个实施例的描述都各有侧重,某个实施例中没有详述的部分,可以参见其他实施例的相关描述。
在本申请所提供的几个实施例中,应该理解到,所揭露的客户端,可通过其它的方式实现。其中,以上所描述的装置实施例仅仅是示意性的,例如所述单元的划分,仅仅为一种逻辑功能划分,实际实现时可以有另外的划分方式,例如多个单元或组件可以结合或者可以集成到另一个系统,或一些特征可以忽略,或不执行。另一点,所显示或讨论的相互之间的耦合或直接耦合或通信连接可以是通过一些接口,单元或模块的间接耦合或通信连接,可以是电性或其它的形式。
所述作为分离部件说明的单元可以是或者也可以不是物理上分开的,作为单元显示的部件可以是或者也可以不是物理单元,即可以位于一个地方,或者也可以分布到多个网络单元上。可以根据实际的需要选择其中的部分或者全部单元来实现本实施例方案的目的。
另外,在本发明各个实施例中的各功能单元可以集成在一个处理单元中,也可以是各个单元单独物理存在,也可以两个或两个以上单元集成在一个单元中。上述集成的单元既可以采用硬件的形式实现,也可以采用软件功能单元的形式实现。
以上所述仅是本发明的优选实施方式,应当指出,对于本技术领域的普通技术人员来说,在不脱离本发明原理的前提下,还可以做出若干改进和润饰,这些改进和润饰也应视为本发明的保护范围。

Claims (20)

  1. 一种区分对象的方法,其特征在于,包括:
    获取图像中显示的多个对象组,其中,每个对象组中至少包括一个目标对象,任意多个对象组中的目标对象允许配置相同的资源;
    对所述多个对象组设置不同的标记值,其中,每个对象组中包含的目标对象具有相同的标记值;
    根据每个所述对象组的标记值,对所述每个对象组中包含的目标对象的像素分别进行像素矫正;
    其中,具有不同标记值的所述目标对象的像素被矫正为具有不同的显示属性。
  2. 根据权利要求1所述的方法,其特征在于,对所述多个对象组设置不同的标记值,包括:
    构建所述多个对象组与多个不同标记值的映射关系;
    通过所述映射关系,对所述多个对象组中的每个对象设置相应的标记值,其中,所述每个对象的标记值被设置为所述每个对象所属的对象组对应的标记值;
    采用所述每个对象的标记值对所述每个对象包含的多个像素进行标记。
  3. 根据权利要求2所述的方法,其特征在于,在对所述多个对象组设置不同的标记值之后,所述方法还包括:
    将所述目标对象渲染至第一渲染目标纹理中,其中,所述第一渲染目标纹理具有多个通道;
    将所述目标对象的像素的标记值进行归一化处理,得到标准标记值;
    将所述归一化处理得到的所述标准标记值输入至第二渲染目标纹理,其中,所述第二渲染目标纹理具有多个通道,且不同的所述标准标记值被输入至通道值不同的所述第二渲染目标纹理的多个通道中。
  4. 根据权利要求3所述的方法,其特征在于,根据每个所述对象组的标记值,对所述每个对象组中包含的目标对象的像素分别进行像素矫正,包括:将具有不同标记值的所述目标对象的像素矫正为不同颜色,其中,将具有不同标记值的所述目标对象的像素矫正为不同颜色包括:
    获取所述目标对象的像素对应的所述标准标记值;
    根据所述目标对象的像素对应的所述标准标记值,调整所述目标对象中组成每个像素颜色的多个基色的显示强度,以矫正所述目标对象中每个像素的颜色;
    其中,具有相同标记值的所述目标对象的像素被矫正为相应的颜色。
  5. 根据权利要求4所述的方法,其特征在于,根据所述目标对象的像素对应的所述标准标记值,调整所述目标对象中组成每个像素颜色的多个基色的显示强度:
    通过如下公式计算所述目标对象中每个像素的矫正像素色,
    Colordest=Colorscr*Colortrans
    其中,所述Colordest用于表征所述目标对象的像素的矫正像素色,所述Colorscr用于表征所述目标对象的像素的原像素色,所述Colortrans用于表征矫正常量,其中,所述矫正常量用于调整所述目标对象中组成每个像素色的多个基色的显示强度。
  6. 根据权利要求3所述的方法,其特征在于,根据每个所述对象组的标记值,对所述每个对象组中包含的目标对象的像素分别进行像素矫正,包括:
    对所述每个对象组中包含的目标对象的边缘像素进行发光处理,其中,具有不同标记值的所述目标对象的边缘像素被矫正为具有不同的发光颜色。
  7. 根据权利要求1所述的方法,其特征在于,在根据每个所述对象组的标记值,对所述每个对象组中包含的目标对象的像素分别进行像素矫正之后,所述方法还包括:对所述目标对象的像素进行色调映射处理,其中,对所述目标对象的像素进行色调映射处理包括:
    通过将所述目标对象的像素进行归一化处理,以调整所述目标对象的对比度和/或亮度。
  8. 根据权利要求1所述的方法,其特征在于,在根据每个所述对象组的标记值,对所述每个对象组中包含的目标对象的像素分别进行像素矫正之前,所述方法还包括:
    对所述目标对象的像素进行渲染处理,其中,所述渲染处理包括以下任意一个或多个:运动模糊处理、景深处理、高光处理。
  9. 根据权利要求8所述的方法,其特征在于,对所述目标对象的像素进行渲染处理,包括:
    所述运动模糊处理包括:将目标像素周围预设范围内的像素进行加权平均得到新的像素,将所述目标像素调整至所述新的像素,其中,目标像素为在所述目标对象的运动方向上的像素;
    所述景深处理包括:对所述目标对象的像素进行全屏模糊处理,得到所述全屏模糊处理的结果,将所述全屏模糊处理的结果与所述目标对象的像素进行混合;
    所述高光处理包括:将所述目标对象的高光部分输出到贴图中,对所述高光部分的像素进行模糊处理,将所述模糊处理的结果通过Alpha混合输入至所述目标对象的像素中。
  10. 一种区分对象的装置,其特征在于,包括:
    第一获取模块,用于获取图像中显示的多个对象组,其中,每个对象组中至少包括一个目标对象,任意多个对象组中的目标对象允许配置相同的资源;
    设置模块,用于对所述多个对象组设置不同的标记值,其中,每个对象组中包含的目标对象具有相同的标记值;
    矫正模块,用于根据每个所述对象组的标记值,对所述每个对象组中包含的目标对象的像素分别进行像素矫正;
    其中,具有不同标记值的所述目标对象的像素被矫正为具有不同的显示属性。
  11. 根据权利要求10所述的装置,其特征在于,所述设置模块包括:
    构建模块,用于构建所述多个对象组与多个不同标记值的映射关系;
    设置子模块,用于通过所述映射关系,对所述多个对象组中的每个对象设置相应的标记值,其中,所述每个对象的标记值被设置为所述每个对象所属的对象组对应的标记值;
    标记模块,用于采用所述每个对象的标记值对所述每个对象包含的多个像素进行标记。
  12. 根据权利要求11所述的装置,其特征在于,所述装置还包括:
    第一渲染模块,用于将所述目标对象渲染至第一渲染目标纹理中,其中,所述第一渲染目标纹理具有多个通道;
    第一归一化模块,用于将所述目标对象的像素的标记值进行归一化处理,得到标准标记值;
    输入模块,用于将所述归一化处理得到的所述标准标记值输入至第二渲染目标纹理,其中,所述第二渲染目标纹理具有多个通道,且标记值不同的所述目标对象被输入至通道值不同的所述第二渲染目标纹理的多个通道中。
  13. 根据权利要求12所述的装置,其特征在于,所述矫正模块包括:矫正子模块,用于将具有不同标记值的所述目标对象的像素矫正为不同颜色,其中,所述矫正子模块包括:
    第二获取模块,用于获取所述目标对象的像素对应的所述标准标记值;
    调整模块,用于根据所述目标对象的像素对应的所述标准标记值,调整所述目标对象中组成每个像素颜色的多个基色的显示强度,以矫正所述目标对象中每个像素的颜色;
    其中,具有相同标记值的所述目标对象的像素被矫正为相应的颜色。
  14. 根据权利要求13所述的装置,其特征在于,所述调整模块包括:
    计算模块,用于通过如下公式计算所述目标对象中每个像素的矫正像素色,
    Colordest=Colorscr*Colortrans
    其中,所述Colordest用于表征所述目标对象的像素的矫正像素色,所述Colorscr用于表征所述目标对象的像素的原像素色,所述Colortrans用于表征矫正常量,其中,所述矫正常量用于调整所述目标对象中组成每个像素色的多个基色的显示强度。
  15. 根据权利要求12所述的装置,其特征在于,所述矫正模块包括:
    第一处理模块,用于对所述每个对象组中包含的目标对象的边缘像素进行发光处理,其中,具有不同标记值的所述目标对象的边缘像素被矫正为具有不同的发光颜色。
  16. 根据权利要求10所述的装置,其特征在于,所述装置还包括:映射模块,用于对所述目标对象的像素进行色调映射处理,其中,所述映射模块包括:
    第二归一化模块,用于通过将所述目标对象的像素进行归一化处理,以调整所述目标对象的对比度和/或亮度。
  17. 根据权利要求10所述的装置,其特征在于,所述装置还包括:
    第二渲染模块,用于对所述目标对象的像素进行渲染处理,其中,所述渲染处理包括以下任意一个或多个:运动模糊处理、景深处理、高光处理。
  18. 根据权利要求17所述的装置,其特征在于,所述装置还包括:
    第一处理模块,用于所述运动模糊处理包括:将目标像素周围预设范围内的像素进行加权平均得到新的像素,将所述目标像素调整至所述新的像素,其中,目标像素为在所述目标对象的运动方向上的像素;
    第二处理模块,用于所述景深处理包括:对所述目标对象的像素进行全屏模糊处理,得到所述全屏模糊处理的结果,将所述全屏模糊处理的结果与所述目标对象的像素进行混合;
    第三处理模块,用于所述高光处理包括:将所述目标对象的高光部分输出到贴图中,对所述高光部分的像素进行模糊处理,将所述模糊处理的结果通过Alpha混合输入至所述目标对象的像素中。
  19. 一种计算机终端,用于执行所述权利要求1至9中任意一项所述区分对象的方法提供的步骤的程序代码。
  20. 一种存储介质,用于保存所述权利要求1至9中任意一项所述区分对象的方法所执行的程序代码。
PCT/CN2016/107119 2016-02-26 2016-11-24 区分对象的方法和装置 WO2017143812A1 (zh)

Priority Applications (4)

Application Number Priority Date Filing Date Title
EP16891260.8A EP3343514A4 (en) 2016-02-26 2016-11-24 Method and device for differentiating objects
JP2018515075A JP6526328B2 (ja) 2016-02-26 2016-11-24 オブジェクトを区別する方法及び装置
KR1020187007925A KR102082766B1 (ko) 2016-02-26 2016-11-24 객체를 구별하는 방법 및 장치
US16/000,550 US10957092B2 (en) 2016-02-26 2018-06-05 Method and apparatus for distinguishing between objects

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN201610109796.3 2016-02-26
CN201610109796.3A CN105678834B (zh) 2016-02-26 2016-02-26 区分对象的方法和装置

Related Child Applications (1)

Application Number Title Priority Date Filing Date
US16/000,550 Continuation US10957092B2 (en) 2016-02-26 2018-06-05 Method and apparatus for distinguishing between objects

Publications (1)

Publication Number Publication Date
WO2017143812A1 true WO2017143812A1 (zh) 2017-08-31

Family

ID=56305318

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2016/107119 WO2017143812A1 (zh) 2016-02-26 2016-11-24 区分对象的方法和装置

Country Status (6)

Country Link
US (1) US10957092B2 (zh)
EP (1) EP3343514A4 (zh)
JP (1) JP6526328B2 (zh)
KR (1) KR102082766B1 (zh)
CN (1) CN105678834B (zh)
WO (1) WO2017143812A1 (zh)

Families Citing this family (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN105678834B (zh) * 2016-02-26 2019-12-17 腾讯科技(深圳)有限公司 区分对象的方法和装置
CN106408643A (zh) * 2016-08-31 2017-02-15 上海交通大学 一种基于图像空间的图像景深模拟方法
CN109845247A (zh) * 2017-09-28 2019-06-04 京瓷办公信息系统株式会社 监控终端和显示处理方法
US11455769B2 (en) * 2020-01-22 2022-09-27 Vntana, Inc. Container for physically based rendering

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102103753A (zh) * 2009-12-22 2011-06-22 三星电子株式会社 使用实时相机运动估计检测和跟踪运动对象的方法和终端
CN103208190A (zh) * 2013-03-29 2013-07-17 西南交通大学 基于对象检测的交通流量检测方法
CN103390164A (zh) * 2012-05-10 2013-11-13 南京理工大学 基于深度图像的对象检测方法及其实现装置
CN104350510A (zh) * 2012-06-14 2015-02-11 国际商业机器公司 多线索对象检测和分析
CN105678834A (zh) * 2016-02-26 2016-06-15 腾讯科技(深圳)有限公司 区分对象的方法和装置

Family Cites Families (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2002095684A1 (fr) * 2001-05-18 2002-11-28 Sony Computer Entertainment Inc. Afficheur
JP4151539B2 (ja) * 2003-09-25 2008-09-17 株式会社セガ ゲームプログラム
JP3868435B2 (ja) * 2004-04-19 2007-01-17 株式会社ソニー・コンピュータエンタテインメント ゲームキャラクタの制御方法
JP2008008839A (ja) * 2006-06-30 2008-01-17 Clarion Co Ltd ナビゲーション装置、その制御方法及びその制御プログラム
GB0616293D0 (en) * 2006-08-16 2006-09-27 Imp Innovations Ltd Method of image processing
US8824787B2 (en) * 2011-12-07 2014-09-02 Dunlop Sports Co., Ltd. Silhouette correction method and system and silhouette extraction method and system
CN102663743B (zh) * 2012-03-23 2016-06-08 西安电子科技大学 一种复杂场景中多摄影机协同的人物追踪方法
JP6200144B2 (ja) * 2012-11-20 2017-09-20 任天堂株式会社 ゲームプログラム、ゲーム処理方法、ゲーム装置及びゲームシステム
JP6320687B2 (ja) * 2013-05-23 2018-05-09 任天堂株式会社 情報処理システム、情報処理装置、プログラムおよび表示方法

Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102103753A (zh) * 2009-12-22 2011-06-22 三星电子株式会社 使用实时相机运动估计检测和跟踪运动对象的方法和终端
CN103390164A (zh) * 2012-05-10 2013-11-13 南京理工大学 基于深度图像的对象检测方法及其实现装置
CN104350510A (zh) * 2012-06-14 2015-02-11 国际商业机器公司 多线索对象检测和分析
CN103208190A (zh) * 2013-03-29 2013-07-17 西南交通大学 基于对象检测的交通流量检测方法
CN105678834A (zh) * 2016-02-26 2016-06-15 腾讯科技(深圳)有限公司 区分对象的方法和装置

Also Published As

Publication number Publication date
JP6526328B2 (ja) 2019-06-05
CN105678834B (zh) 2019-12-17
KR102082766B1 (ko) 2020-02-28
EP3343514A1 (en) 2018-07-04
EP3343514A4 (en) 2018-11-07
KR20180042359A (ko) 2018-04-25
US10957092B2 (en) 2021-03-23
US20180285674A1 (en) 2018-10-04
CN105678834A (zh) 2016-06-15
JP2018535473A (ja) 2018-11-29


Legal Events

ENP (Entry into the national phase): Ref document number 20187007925; Country of ref document: KR; Kind code of ref document: A
ENP (Entry into the national phase): Ref document number 2018515075; Country of ref document: JP; Kind code of ref document: A
WWE (Wipo information: entry into national phase): Ref document number 2016891260; Country of ref document: EP
NENP (Non-entry into the national phase): Ref country code DE