WO2018177112A1 - Object rendering method and apparatus, storage medium, and electronic apparatus - Google Patents
Object rendering method and apparatus, storage medium, and electronic apparatus
- Publication number
- WO2018177112A1 (PCT/CN2018/078604)
- Authority
- WO
- WIPO (PCT)
- Prior art keywords
- pixel points
- depth
- change value
- depth change
- pixel point
Classifications
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T1/00—General purpose image data processing
- G06T1/20—Processor architectures; Processor configuration, e.g. pipelining
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T15/00—3D [Three Dimensional] image rendering
- G06T15/005—General purpose rendering architectures
Definitions
- the present invention relates to the field of image processing, and in particular to a method and apparatus for rendering an object, a storage medium, and an electronic device.
- PR Photorealistic Rendering
- NPR Non-Photorealistic Rendering
- NPR (Non-Photorealistic Rendering) refers to producing rendering effects that resemble the abstract artistic conception a painter sketches with a brush, by studying that conception. It is a branch of computer graphics mainly used to simulate artistic painting styles and also to develop new drawing styles.
- NPR is influenced by oil painting, sketching, technical drawing, and animated cartoons, so the more developed and mature work in this field is generally related to Western painting art, such as sketches, pen drawings, charcoal drawings, watercolors, and cartoon paintings, while Chinese ink painting has been studied less.
- the above NPR technology is often applied to real-time rendering.
- in real-time rendering, the computer generates images in real time, producing enough frames per second that the rendered images can interact with the user.
- real-time rendering mainly relies on the CPU for a series of key processing steps, and the amount of data to be processed is large, so it consumes a large amount of the CPU's computing resources, making real-time rendering difficult to run on a computer with relatively low CPU performance.
- the embodiments of the present invention provide a method and device for rendering an object, a storage medium, and an electronic device, so as to at least solve the technical problem that the computing resource consumption of the computer CPU is large when performing real-time rendering in the related art.
- a method for rendering an object includes: acquiring a two-dimensional image obtained by performing image acquisition on a target object; identifying a first pixel point set among all pixel points of the two-dimensional image, wherein the pixel points in the first pixel point set are points on the contour of the target object; identifying one or more second pixel point sets in the first pixel point set by calling an image processor, wherein the pixel points in each second pixel point set are used to indicate one line segment in the contour of the target object; and respectively connecting the pixel points in each second pixel point set into one line segment in the contour of the target object, and displaying each line segment obtained by the connection according to a predetermined rendering manner.
- an apparatus for rendering an object includes: an acquiring unit, configured to acquire a two-dimensional image obtained by performing image acquisition on a target object; a first identifying unit, configured to identify a first pixel point set among all pixel points of the two-dimensional image, wherein the pixel points in the first pixel point set are points on the contour of the target object; a second identifying unit, configured to identify one or more second pixel point sets in the first pixel point set by calling an image processor, wherein the pixel points in each second pixel point set are used to indicate one line segment in the contour of the target object; and a rendering unit, configured to respectively connect the pixel points in each second pixel point set into one line segment in the contour of the target object and display each line segment obtained by the connection according to a predetermined rendering manner.
- a storage medium comprising a stored program, wherein the program is configured to execute any of the methods described above at runtime.
- an electronic device comprising a memory, a processor, and a computer program stored in the memory and executable on the processor, the processor being configured to perform any of the methods described above by means of the computer program.
- the processor directly acquires a two-dimensional image obtained by performing image acquisition on the target object and identifies, among all pixel points of the two-dimensional image, a first pixel point set that characterizes the contour of the target object; by calling the image processor, one or more second pixel point sets are identified in the first pixel point set; the pixel points in each second pixel point set are respectively connected into one line segment in the contour of the target object, and each line segment obtained by the connection is displayed according to a predetermined rendering manner. Since what is processed is a two-dimensional graphic, the amount of data processed is greatly reduced, and heavier data processing tasks such as the vertex query are handed over to the image processor. This solves the technical problem in the related art that real-time rendering consumes a large amount of the computer CPU's computing resources, thereby achieving the technical effect of reducing the computing resources consumed by the computer CPU during real-time rendering.
- FIG. 1 is a schematic diagram of a hardware environment of a rendering method of an object according to an embodiment of the present invention
- FIG. 2 is a flow chart of an alternative method of rendering an object according to an embodiment of the present invention.
- FIG. 3 is a schematic diagram of an optional target object in accordance with an embodiment of the present invention.
- FIG. 4 is a schematic diagram of an optional target object in accordance with an embodiment of the present invention.
- FIG. 5 is a schematic diagram of an optional object edge texture in accordance with an embodiment of the present invention;
- FIG. 6 is a schematic diagram of pixel points on an optional contour line in accordance with an embodiment of the present invention;
- FIG. 7 is a schematic diagram of vertices on an optional contour line in accordance with an embodiment of the present invention;
- FIG. 8 is a schematic diagram of an optional stroke in accordance with an embodiment of the present invention;
- FIG. 9 is a schematic diagram of an optional stroke strip in accordance with an embodiment of the present invention;
- FIG. 10 is a schematic diagram of an optional brush stroke in accordance with an embodiment of the present invention;
- FIG. 11 is a schematic diagram of an optional rendered image in accordance with an embodiment of the present invention;
- FIG. 12 is a schematic diagram of multiple optional brush strokes in accordance with an embodiment of the present invention;
- FIG. 13 is a schematic diagram of an optional rendered image in accordance with an embodiment of the present invention;
- FIG. 14 is a schematic diagram of an optional object rendering apparatus in accordance with an embodiment of the present invention.
- FIG. 15 is a structural block diagram of a terminal according to an embodiment of the present invention.
- the real-time rendering mainly includes the following steps:
- Step 1: the computer CPU (Central Processing Unit) processes a real-time 3D image (i.e., a three-dimensional graphic) based on 3D (three-dimensional) geometric space and searches for contour edges;
- Step 2: the computer CPU locks the Z-buffer and compares and culls the silhouette edge vertices one by one, that is, performs visibility culling of the silhouette edge vertices and retains only the visible silhouette edge vertices;
- Step 3: the silhouette edge vertices are connected into strokes; specifically, the CPU compares the positions of the visible silhouette edge vertices one by one and connects them into strokes;
- Step 4: the strokes are wrapped with the artist's brush strokes; for example, a texture with an alpha channel is applied to a triangle as a map to produce the brush stroke.
- the Z-Buffer described above is a technique for performing "hidden surface elimination" when shading objects, so that the parts hidden behind objects are not displayed.
- an embodiment of a method for rendering an object is also provided.
- the rendering method of the foregoing object may be applied to a hardware environment formed by the server 102 and/or the terminal 104 as shown in FIG. 1.
- the server 102 is connected to the terminal 104 through a network.
- the network includes but is not limited to a wide area network, a metropolitan area network, or a local area network.
- the terminal 104 is not limited to a PC, a mobile phone, a tablet, or the like.
- the rendering method of the object in the embodiment of the present invention may be performed by the server 102, may be performed by the terminal 104, or may be performed by the server 102 and the terminal 104 in common.
- the rendering method of the object that the terminal 104 performs in the embodiment of the present invention may also be performed by a client installed thereon.
- when executed on the terminal, the foregoing hardware structure may include only the terminal, and the specific steps are as follows:
- Step S11 the terminal acquires a two-dimensional image obtained by performing image acquisition on the target object
- Step S12: the terminal performs rendering according to a predetermined rendering manner; for details, refer to the steps shown in FIG. 2.
- when executed on the server, the foregoing hardware structure may include only the server, and the specific execution steps are similar to the above, the only difference being that the executing entity is the server.
- FIG. 2 is a flowchart of an optional object rendering method according to an embodiment of the present invention. As shown in FIG. 2, the method may include the following steps:
- Step S202 acquiring a two-dimensional image obtained by performing image acquisition on the target object
- Step S204 identifying a first set of pixel points in all the pixels of the two-dimensional image, the pixel points in the first set of pixel points being points on the contour of the target object;
- Step S206 identifying one or more second pixel point sets in the first pixel point set by calling the image processor, where the pixel points in each second pixel point set are used to indicate a line segment in the contour of the target object;
- Step S208 respectively connecting pixel points in each second pixel point set into one line segment in the contour of the target object, and displaying each line segment obtained by the connection according to a predetermined rendering manner.
- through the above steps S202 to S208, the processor directly acquires a two-dimensional image obtained by performing image acquisition on the target object and identifies, among all pixel points of the two-dimensional image, a first pixel point set that characterizes the contour of the target object; by calling the image processor, one or more second pixel point sets are identified in the first pixel point set; the pixel points in each second pixel point set are respectively connected into one line segment in the contour of the target object, and each line segment obtained by the connection is displayed according to the predetermined rendering manner. Since what is processed is a two-dimensional graphic, the amount of data processed is greatly reduced, and heavier data processing tasks such as the vertex query are completed by the image processor, which solves the technical problem in the related art that real-time rendering consumes a large amount of the computer CPU's computing resources, thereby achieving the technical effect of reducing the computing resources consumed by the computer CPU during real-time rendering.
- the above steps S202 to S208 can be performed in a processor, such as the central processing unit (CPU) of a computer or a mobile device.
- the target object may be a figurative feature such as a character, an animal, an object, or an environment; in practical applications, there may be one or more target objects. The two-dimensional graphic may specifically be a depth texture image, or a two-dimensional image carrying the depth value of each pixel. The above-mentioned contour refers to the edge of the target object, such as the contour line that distinguishes a teapot, a character, or the like from the natural environment or other objects. The first pixel point set holds all the identified points on the contour; each second pixel point set holds all the identified pixel points belonging to the same line segment.
- the above predetermined rendering methods include, but are not limited to, rendering methods such as sketch drawing, pen drawing, charcoal drawing, watercolor painting, cartoon painting, and ink painting.
- the above method is mainly used for non-photorealistic rendering (NPR), but is not limited thereto.
- acquiring the two-dimensional image obtained by performing image acquisition on the target object includes: acquiring a collected two-dimensional image of the depth texture type, wherein a two-dimensional image of the depth texture type carries the depth values of pixel points.
- the target object can be directly rendered to obtain a depth texture of the target object.
- identifying the first pixel point set among all pixel points of the two-dimensional image includes performing the following steps for each pixel point of the two-dimensional image, wherein each pixel point is recorded as the current pixel point while the steps are performed on it: obtaining a depth change value of the current pixel point, wherein the depth change value is used to indicate the degree of depth change among the multiple adjacent pixel points of the current pixel point; and, in a case where the depth change value of the current pixel point is greater than or equal to a predetermined depth change threshold, determining that the current pixel point is a pixel point in the first pixel point set.
- the depth change value of the current pixel point can be obtained as follows: acquiring a first depth change value obtained by a first filter performing first filtering processing on the multiple adjacent pixel points, wherein the first depth change value is used to indicate the degree of depth change in a first direction among the multiple adjacent pixel points; acquiring a second depth change value obtained by a second filter performing second filtering processing on the multiple adjacent pixel points, wherein the second depth change value is used to indicate the degree of depth change in a second direction among the multiple adjacent pixel points, the second direction being different from the first direction; and determining the depth change value of the current pixel point according to the first depth change value and the second depth change value.
- acquiring the first depth change value obtained by the first filter performing the first filtering processing on the multiple adjacent pixel points includes: acquiring the first depth change value obtained by the first filter performing the first filtering processing according to a first formula, wherein the first formula is used to calculate the sum of the first depth parameters of pixel points adjacent in the first direction among the multiple adjacent pixel points, the first depth parameter being the product of the depth value of a pixel point adjacent in the first direction and the corresponding influence factor.
- the current pixel point generally has 8 pixel points adjacent to it and surrounding it. If the current pixel point is recorded as S11, then the pixel point in its upper left corner is S00, the pixel point directly above is S01, the pixel point in the upper right corner is S02, the pixel point directly to the left is S10, the pixel point directly to the right is S12, the pixel point in the lower left corner is S20, the pixel point directly below is S21, and the pixel point in the lower right corner is S22.
- the first direction may be vertical, and the first formula is SobelX = S00 + 2*S10 + S20 - S02 - 2*S12 - S22, where SobelX represents the first depth change value, S00, S10, S20, S02, S12, and S22 are the depth values of the pixels at the corresponding positions, and the coefficients 1, 2, 1, -1, -2, -1 are the influence factors of the corresponding pixels.
- acquiring the second depth change value obtained by the second filter performing the second filtering processing on the multiple adjacent pixel points includes: acquiring the second depth change value SobelY obtained by the second filter performing the second filtering processing according to a second formula, wherein the second formula is used to calculate the sum of the second depth parameters of pixel points adjacent in the second direction among the multiple adjacent pixel points, the second depth parameter being the product of the depth value of a pixel point adjacent in the second direction and the corresponding influence factor.
- the second direction may be horizontal, and the second formula is SobelY = S00 + 2*S01 + S02 - S20 - 2*S21 - S22, where SobelY represents the second depth change value, S00, S01, S02, S20, S21, and S22 are the depth values of the pixels at the corresponding positions, and the coefficients 1, 2, 1, -1, -2, -1 are the influence factors of the corresponding pixels.
- determining the depth change value of the current pixel point according to the first depth change value and the second depth change value includes: setting the depth change value of the current pixel point to the sum of the square of the first depth change value and the square of the second depth change value.
- after the first depth change value SobelX and the second depth change value SobelY are determined, the depth change value edgeSqr of the current pixel point may be determined according to the following formula:
- edgeSqr = SobelX*SobelX + SobelY*SobelY.
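- to make the per-pixel computation concrete, the following is a minimal NumPy sketch of the filtering and thresholding described above (the function name contour_mask, the array layout, and the threshold parameter are illustrative assumptions, not part of the patent):

```python
import numpy as np

def contour_mask(depth, threshold):
    """Per-pixel Sobel filtering of a depth texture.

    depth: 2D float array of per-pixel depth values.
    Returns a binary mask in which 1 marks pixels whose edgeSqr is
    greater than or equal to the threshold, i.e. contour pixels.
    """
    h, w = depth.shape
    mask = np.zeros((h, w), dtype=np.uint8)
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            s = depth[y - 1:y + 2, x - 1:x + 2]  # the S00..S22 neighborhood
            sobel_x = (s[0, 0] + 2 * s[1, 0] + s[2, 0]
                       - s[0, 2] - 2 * s[1, 2] - s[2, 2])
            sobel_y = (s[0, 0] + 2 * s[0, 1] + s[0, 2]
                       - s[2, 0] - 2 * s[2, 1] - s[2, 2])
            edge_sqr = sobel_x * sobel_x + sobel_y * sobel_y
            if edge_sqr >= threshold:
                mask[y, x] = 1  # output to the contour texture
    return mask
```

- in the patent's scheme the same computation runs in a pixel shader; the nested Python loops here merely stand in for the per-pixel shader invocations.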
- identifying the one or more second pixel point sets in the first pixel point set by calling the image processor includes: sending a vertex query request to the image processor, wherein the vertex query request carries the position information and depth information of the pixel points in the first pixel point set; and receiving response information from the image processor, wherein the response information is used to indicate whether a pixel point in the first pixel point set belongs to a second pixel point set.
- the image processor GPU described above has a vertex texture processing function.
- when this function is used, all the vertices of the model are submitted to the GPU as point draws for querying: if a vertex lies on the contour line it is drawn, otherwise it is not drawn. This result is fed back to the CPU as a query result (i.e., the response information), finally yielding the second pixel point sets indicating which vertices lie on the contour line.
- the processing complexity on the CPU is related to the model: if the model has few vertices, rendering can run at a very high frame rate in frames per second (fps), but a model with many vertices is quite demanding. If the GPU is used for this processing instead, then, since the GPU is a processor dedicated to image processing, hardware acceleration speeds up the processing, frees the CPU, and reduces the occupation of its processing resources, so a high fps can be achieved.
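- for illustration, the effect of this query can be mimicked on the CPU: a vertex "passes" if the contour texture is nonzero at its projected screen position. The sketch below assumes the model vertices have already been projected to integer screen coordinates (the function and parameter names are illustrative); in the patent's scheme this test is what the GPU performs via the point draws:

```python
def query_contour_vertices(mask, screen_verts):
    """Mimics the GPU point-draw query on the CPU side.

    mask: binary contour texture, e.g. the output of contour_mask() above.
    screen_verts: iterable of (x, y) integer screen positions of model vertices.
    Returns the indices of the vertices that lie on the contour line.
    """
    h, w = mask.shape
    on_contour = []
    for i, (x, y) in enumerate(screen_verts):
        if 0 <= x < w and 0 <= y < h and mask[y, x]:
            on_contour.append(i)  # the "response information" for this vertex
    return on_contour
```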
- respectively connecting the pixel points in each second pixel point set into one line segment in the contour of the target object includes: concatenating the pixel points in a second pixel point set into one line segment according to a preset condition, wherein one line segment includes at least one stroke, and the preset condition is used to indicate at least one of the number of pixel points included in each stroke, the stroke length, and the corner angle between adjacent pixel points included in the stroke.
- specifically, the obtained contour line vertices can be concatenated into strokes in the CPU. After concatenation, the number of vertices in a stroke, the stroke length, and the corner angles between vertices can be used to decide how to break the strokes apart, finally forming independent strokes, as in the sketch below.
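- a minimal sketch of this stroke-breaking logic follows; the specific thresholds and the assumption that the input vertices are already roughly ordered along the contour are illustrative choices, not requirements of the patent:

```python
import math

def link_strokes(points, max_pts=64, max_len=200.0, max_turn_deg=60.0):
    """Concatenates contour vertices into strokes, breaking a stroke when it
    has too many points, grows too long, or turns too sharply.

    points: list of (x, y) contour vertices, assumed roughly ordered along
    the contour. Returns a list of strokes, each a list of (x, y) points.
    """
    strokes, current, length = [], [], 0.0
    for p in points:
        if current:
            q = current[-1]
            seg = math.hypot(p[0] - q[0], p[1] - q[1])
            turn = 0.0
            if len(current) >= 2:
                r = current[-2]
                a1 = math.atan2(q[1] - r[1], q[0] - r[0])
                a2 = math.atan2(p[1] - q[1], p[0] - q[0])
                turn = abs(math.degrees(a2 - a1))
                turn = min(turn, 360.0 - turn)  # wrap to [0, 180]
            if (len(current) >= max_pts or length + seg > max_len
                    or turn > max_turn_deg):
                strokes.append(current)  # break the stroke here
                current, length = [], 0.0
            else:
                length += seg
        current.append(p)
    if current:
        strokes.append(current)
    return strokes
```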
- displaying each line segment obtained by the connection according to the predetermined rendering manner includes: expanding each stroke into a stroke strip; and drawing the stroke strip using a brush stroke corresponding to the predetermined rendering manner, wherein the predetermined rendering manner includes sketch drawing, pen drawing, charcoal drawing, watercolor painting, cartoon painting, and ink painting.
- specifically, the vertices of each stroke can be expanded into a stroke strip in screen space in preparation for applying a brush stroke, and a corresponding brush stroke is then applied to each stroke strip, finally producing an image that meets the requirements.
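- a sketch of this screen-space expansion, assuming a constant strip half-width (the width parameter and the perpendicular-offset scheme are illustrative): each stroke vertex is offset to both sides of the local stroke direction, and the resulting point pairs form a triangle strip onto which a brush texture with an alpha channel can be mapped:

```python
import math

def expand_to_strip(stroke, half_width=3.0):
    """Expands a stroke polyline into a stroke strip in screen space.

    stroke: list of (x, y) vertices of one stroke.
    Returns a list of (left, right) point pairs forming a triangle strip.
    """
    strip = []
    n = len(stroke)
    for i, (x, y) in enumerate(stroke):
        # local direction from the previous to the next vertex (clamped at ends)
        x0, y0 = stroke[max(i - 1, 0)]
        x1, y1 = stroke[min(i + 1, n - 1)]
        dx, dy = x1 - x0, y1 - y0
        d = math.hypot(dx, dy) or 1.0
        nx, ny = -dy / d, dx / d  # unit perpendicular to the stroke direction
        strip.append(((x + nx * half_width, y + ny * half_width),
                      (x - nx * half_width, y - ny * half_width)))
    return strip
```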
- the present invention also provides a preferred embodiment, illustrated in FIG. 3, which renders the black ellipse (i.e., the target object) shown in FIG. 4 in a game:
- Step S302 acquiring an outline of the object.
- step S302 can be implemented by two sub-steps, as shown in sub-steps S3022 and S3024.
- Step S3022 Obtain a depth texture, and directly render the object to obtain a depth texture of the object, as shown in FIG. 4 .
- Step S3024: contour line pixels are obtained by filtering with filters. A rectangle the same size as the screen is drawn, the obtained depth texture image is sampled, and processing proceeds in the pixel shader (PixelShader) as follows.
- Table 1 shows the matrix used by the horizontal filter to filter the depth texture.
- Table 1
1 | 0 | -1
2 | 0 | -2
1 | 0 | -1
- Table 2 shows the matrix used by the vertical filter to filter the depth texture.
- Table 2
1 | 2 | 1
0 | 0 | 0
-1 | -2 | -1
- it should be noted that the influence factors shown in Table 1 and Table 2 are optional and can be selected according to actual requirements.
- Table 3 shows the positional relationship between the current pixel and its adjacent pixels together with the pixels' depth values, where the position of a depth value in the table is the position of that pixel; S11 in Table 3 represents the current pixel, which has 8 adjacent pixels.
- Table 3
S00 | S01 | S02
S10 | S11 | S12
S20 | S21 | S22
- performing matrix multiplication of the matrix shown in Table 1 with the matrix shown in Table 3 gives the actual first formula used for filtering, SobelX = S00 + 2*S10 + S20 - S02 - 2*S12 - S22, where SobelX represents the first depth change value and S00, S10, S20, S02, S12, and S22 are the depth values of the pixels at the corresponding positions; performing matrix multiplication of the matrix shown in Table 2 with the matrix shown in Table 3 gives the actual second formula, SobelY = S00 + 2*S01 + S02 - S20 - 2*S21 - S22, where SobelY represents the second depth change value and S00, S01, S02, S20, S21, and S22 are the depth values of the pixels at the corresponding positions.
- after the first depth change value SobelX and the second depth change value SobelY are determined, the depth change value edgeSqr of the current pixel point may be determined according to the following formula:
- edgeSqr = SobelX*SobelX + SobelY*SobelY.
- the depth texture is filtered by the horizontal and vertical filters pixel by pixel, and the 8 pixels surrounding the current pixel are also sampled. S11 represents the current pixel, and edgeSqr represents the degree of change of the values of the pixels around the current pixel. If edgeSqr is greater than a threshold n, the values of the pixels around the current pixel change drastically; since depth information is used, this means the depth around the current pixel changes drastically, indicating that the current pixel is on the contour of the object, and the result is output to the contour texture. If edgeSqr is less than the threshold n, the current pixel is not on the contour of the object, and 0 is output to the texture.
- filtering the depth texture shown in FIG. 4 yields the contour line pixels shown in FIG. 5; the elliptical line in FIG. 5 is the obtained object edge texture.
- step S304 the contour vertices of the model are acquired, which is specifically implemented by the central processor CPU calling the image processor GPU for querying.
- the above GPU has a vertex texture processing function.
- when this function is used, all the vertices of the model are submitted to the GPU as point draws for querying; specifically, all the vertices shown in FIG. 6 can be sent to the GPU for querying (the vertices connected by the dashed line are the vertices actually on the contour line, but the CPU does not know this). If a vertex is on the contour line it is drawn, otherwise it is not drawn; this result is fed back to the CPU as a query result, finally producing a list of which vertices are on the contour line, as shown in FIG. 7. The GPU can thus identify, among all the vertices, the vertices on the contour line.
- step S306 the contour lines are processed and connected into strokes.
- the obtained contour line vertices are concatenated into strokes in the CPU. After concatenation, the number of vertices in a stroke, the stroke length, and the corner angles between vertices can be used to decide how to break the strokes apart, finally forming independent strokes, as shown in FIG. 8.
- step S308 a final stroke is generated.
- brush strokes can be applied by expanding the stroke vertices into stroke strips in screen space in preparation for applying brush strokes, as shown in FIG. 9; each stroke strip is then given the brush stroke shown in FIG. 10, and the final result is shown in FIG. 11. Many styles of brush strokes can be applied, as shown in FIG. 12; by applying a brush stroke of the corresponding style, the result can be extended to the final display style. A stylized NPR effect greatly helps the artistry of a game and strengthens its character.
- the present invention further provides a preferred embodiment.
- the specific implementation scenario may be applied to animation production and the like; for example, if the object to be rendered is a teapot, the depth texture of the teapot may be processed by applying the steps of FIG. 3 above.
- the specific implementation is the same as the above processing of the ellipse in the game and is not described here again.
- the teapot shown in FIG. 13 can be obtained by the CPU executing the relevant processing, calling the GPU for vertex recognition, and then performing line processing and drawing with brush strokes.
- the method according to the above embodiment can be implemented by means of software plus the necessary general-purpose hardware platform, and of course also by hardware, but in many cases the former is the better implementation.
- the part of the technical solution of the present invention that is essential or that contributes to the prior art may be embodied in the form of a software product stored in a storage medium (such as a ROM/RAM, a magnetic disk, or an optical disc), including a number of instructions for causing a terminal device (which may be a mobile phone, a computer, a server, a network device, or the like) to perform the methods described in the embodiments of the present invention.
- a rendering apparatus for an object for implementing the rendering method of the above object is also provided.
- FIG. 14 is a schematic diagram of an optional object rendering apparatus according to an embodiment of the present invention. As shown in FIG. 14, the apparatus may include: an acquiring unit 142, a first identifying unit 144, a second identifying unit 146, and a rendering unit 148.
- the acquiring unit 142 is configured to acquire a two-dimensional image obtained by performing image acquisition on the target object;
- a first identifying unit 144 configured to identify a first set of pixel points in all pixels of the two-dimensional image, wherein the pixel points in the first set of pixel points are points on a contour of the target object;
- a second identifying unit 146 configured to identify, by using an image processor, one or more second sets of pixel points in the first set of pixel points, wherein the pixel points in each second set of pixel points are used to indicate the target object a line segment in the outline;
- the rendering unit 148 is configured to respectively connect the pixel points in each second pixel point set into one line segment in the contour of the target object, and display each line segment obtained by the connection according to a predetermined rendering manner.
- the obtaining unit 142 in this embodiment may be used to perform step S202 in the embodiment of the present application.
- the first identifying unit 144 in the embodiment may be used to perform step S204 in the embodiment of the present application.
- the second identification unit 146 in the embodiment may be used to perform step S206 in the embodiment of the present application.
- the rendering unit 148 in this embodiment may be used to perform step S208 in the embodiment of the present application.
- the examples and application scenarios implemented by the foregoing modules are the same as those of the corresponding steps, but are not limited to the contents disclosed in the foregoing embodiments. It should be noted that the foregoing modules, as part of the apparatus, may run in the hardware environment shown in FIG. 1 and may be implemented by software or by hardware.
- through the foregoing modules, heavier data processing tasks such as the vertex query are completed by the image processor, which solves the technical problem in the related art that real-time rendering consumes a large amount of the computer CPU's computing resources, thereby achieving the technical effect of reducing the computing resources consumed by the computer CPU during real-time rendering.
- the above target object may be a figurative feature such as a character, an animal, an object, or an environment; in practical applications, there may be one or more target objects. The two-dimensional graphic may specifically be a depth texture image, or a two-dimensional image carrying the depth value of each pixel. The above-mentioned contour refers to the edge of the target object, such as the contour line that distinguishes a teapot, a character, or the like from the natural environment or other objects. The first pixel point set holds all the identified points on the contour; each second pixel point set holds all the identified pixel points belonging to the same line segment.
- the above predetermined rendering methods include, but are not limited to, rendering methods such as sketch drawing, pen drawing, charcoal drawing, watercolor painting, cartoon painting, and ink painting.
- the above apparatus is mainly used for non-photorealistic rendering (NPR), but is not limited thereto.
- the acquiring unit is further configured to acquire a collected two-dimensional image of the depth texture type, wherein a two-dimensional image of the depth texture type carries the depth values of pixel points.
- the target object can be directly rendered to obtain a depth texture of the target object.
- the first identifying unit is further configured to perform the following steps for each pixel point of the two-dimensional image, wherein each pixel point is recorded as the current pixel point while the steps are performed on it: obtaining a depth change value of the current pixel point, wherein the depth change value is used to indicate the degree of depth change among the multiple adjacent pixel points of the current pixel point; and, in a case where the depth change value of the current pixel point is greater than or equal to a predetermined depth change threshold, determining that the current pixel point is a pixel point in the first pixel point set.
- the first identifying unit includes: a first acquiring module, configured to acquire a first depth change value obtained by a first filter performing first filtering processing on the multiple adjacent pixel points, wherein the first depth change value is used to indicate the degree of depth change in a first direction among the multiple adjacent pixel points; a second acquiring module, configured to acquire a second depth change value obtained by a second filter performing second filtering processing on the multiple adjacent pixel points, wherein the second depth change value is used to indicate the degree of depth change in a second direction among the multiple adjacent pixel points, the second direction being different from the first direction; and a determining module, configured to determine the depth change value of the current pixel point according to the first depth change value and the second depth change value.
- the first acquiring module is further configured to acquire the first depth change value obtained by the first filter performing the first filtering processing according to a first formula, wherein the first formula is used to calculate the sum of the first depth parameters of pixel points adjacent in the first direction among the multiple adjacent pixel points, the first depth parameter being the product of the depth value of a pixel point adjacent in the first direction and the corresponding influence factor.
- the current pixel point generally has 8 pixel points adjacent to it and surrounding it. If the current pixel point is recorded as S11, then the pixel point in its upper left corner is S00, the pixel point directly above is S01, the pixel point in the upper right corner is S02, the pixel point directly to the left is S10, the pixel point directly to the right is S12, the pixel point in the lower left corner is S20, the pixel point directly below is S21, and the pixel point in the lower right corner is S22.
- the first direction may be vertical, and the first formula is SobelX = S00 + 2*S10 + S20 - S02 - 2*S12 - S22, where SobelX represents the first depth change value, S00, S10, S20, S02, S12, and S22 are the depth values of the pixels at the corresponding positions, and 1, 2, 1, -1, -2, -1 are the influence factors of the corresponding pixels.
- the second acquiring module is further configured to acquire the second depth change value SobelY obtained by the second filter performing the second filtering processing according to a second formula, wherein the second formula is used to calculate the sum of the second depth parameters of pixel points adjacent in the second direction among the multiple adjacent pixel points, the second depth parameter being the product of the depth value of a pixel point adjacent in the second direction and the corresponding influence factor.
- the second direction may be horizontal, and the second formula is SobelY = S00 + 2*S01 + S02 - S20 - 2*S21 - S22, where SobelY represents the second depth change value, S00, S01, S02, S20, S21, and S22 are the depth values of the pixels at the corresponding positions, and 1, 2, 1, -1, -2, -1 are the influence factors of the corresponding pixels.
- the determining module is further configured to set a depth change value of the current pixel point as a sum of a square of the first depth change value and a square of the second depth change value.
- the second identifying unit includes: a sending module, configured to send a vertex query request to the image processor, wherein the vertex query request carries the position information and depth information of the pixel points in the first pixel point set; and a receiving module, configured to receive response information from the image processor, wherein the response information is used to indicate whether a pixel point in the first pixel point set belongs to a second pixel point set.
- the image processor GPU described above has a vertex texture processing function.
- when this function is used, all the vertices of the model are submitted to the GPU as point draws for querying: if a vertex lies on the contour line it is drawn, otherwise it is not drawn. This result is fed back to the CPU as a query result (i.e., the response information), finally yielding the second pixel point sets indicating which vertices lie on the contour line.
- the rendering unit is further configured to concatenate the pixel points in a second pixel point set into one line segment according to a preset condition, wherein one line segment includes at least one stroke, and the preset condition is used to indicate at least one of the number of pixel points included in each stroke, the stroke length, and the corner angle between adjacent pixel points included in the stroke.
- specifically, the obtained contour line vertices can be concatenated into strokes in the CPU. After concatenation, the number of vertices in a stroke, the stroke length, and the corner angles between vertices can be used to decide how to break the strokes apart, finally forming independent strokes.
- the rendering unit is further configured to expand each stroke into a stroke strip and draw the stroke strip using a brush stroke corresponding to the predetermined rendering manner, wherein the predetermined rendering manner includes sketch drawing, pen drawing, charcoal drawing, watercolor painting, cartoon painting, and ink painting.
- specifically, the vertices of each stroke can be expanded into a stroke strip in screen space in preparation for applying a brush stroke, and a corresponding brush stroke is then applied to each stroke strip, finally producing an image that meets the requirements.
- the examples and application scenarios implemented by the foregoing modules are the same as those of the corresponding steps, but are not limited to the contents disclosed in the foregoing embodiments. It should be noted that the foregoing modules, as part of the apparatus, may run in the hardware environment shown in FIG. 1 and may be implemented by software or by hardware, wherein the hardware environment includes a network environment.
- a storage medium (also referred to as a memory) is also provided, the storage medium comprising a stored program, wherein the program is configured to execute any of the methods described above at runtime.
- a server or terminal (also referred to as an electronic device) for implementing the above-described rendering method of an object.
- the terminal may include: one or more processors 1501 (only one is shown in FIG. 15), a memory 1503, and a transmission device 1505 (such as the sending device in the above embodiment); as shown in FIG. 15, the terminal may further include an input/output device 1507.
- the memory 1503 can be used to store software programs and modules, such as the program instructions/modules corresponding to the methods and apparatuses in the embodiments of the present invention; by running the software programs and modules stored in the memory 1503, the processor 1501 performs various functional applications and data processing, that is, implements the above methods.
- the memory 1503 may include a high speed random access memory, and may also include non-volatile memory such as one or more magnetic storage devices, flash memory, or other non-volatile solid state memory.
- memory 1503 can further include memory remotely located relative to processor 1501, which can be connected to the terminal over a network. Examples of such networks include, but are not limited to, the Internet, intranets, local area networks, mobile communication networks, and combinations thereof.
- the transmission device 1505 described above is used to receive or transmit data via a network, and can also be used for data transmission between the processor and the memory. Specific examples of the above network may include a wired network and a wireless network.
- the transmission device 1505 includes a Network Interface Controller (NIC) that can be connected to other network devices and routers via a network cable to communicate with the Internet or a local area network.
- the transmission device 1505 is a Radio Frequency (RF) module for communicating with the Internet wirelessly.
- the memory 1503 is configured to store an application.
- the processor 1501 may call the application stored in the memory 1503 through the transmission device 1505 to perform the following steps: acquiring a two-dimensional image obtained by performing image acquisition on the target object; identifying a first pixel point set among all pixel points of the two-dimensional image, wherein the pixel points in the first pixel point set are points on the contour of the target object; identifying one or more second pixel point sets in the first pixel point set by calling the image processor, wherein the pixel points in each second pixel point set are used to indicate one line segment in the contour of the target object; and respectively connecting the pixel points in each second pixel point set into one line segment in the contour of the target object, and displaying each line segment obtained by the connection according to a predetermined rendering manner.
- the processor 1501 is further configured to: obtain a depth change value of the current pixel point, where the depth change value is used to indicate a degree of depth change between the plurality of adjacent pixel points of the current pixel point; at the current pixel point In the case where the depth change value is greater than or equal to a predetermined depth change threshold, it is determined that the current pixel point is a pixel point in the first set of pixel points.
- with the embodiments of the present invention, the processor directly acquires a two-dimensional image obtained by performing image acquisition on the target object and identifies, among all pixel points of the two-dimensional image, a first pixel point set that characterizes the contour of the target object; by calling the image processor, one or more second pixel point sets are identified in the first pixel point set; the pixel points in each second pixel point set are respectively connected into one line segment in the contour of the target object, and each line segment obtained by the connection is displayed according to a predetermined rendering manner. Since what is processed is a two-dimensional graphic, the amount of data processed is greatly reduced, and heavier data processing tasks such as the vertex query are completed by the image processor, which solves the technical problem in the related art that real-time rendering consumes a large amount of the computer CPU's computing resources, thereby achieving the technical effect of reducing the computing resources consumed by the computer CPU during real-time rendering.
- the terminal can be a smart phone (such as an Android phone or an iOS phone), a tablet computer, a palmtop computer, a mobile Internet device (MID), a PAD, or another terminal device.
- the structure shown in FIG. 15 does not limit the structure of the above electronic device.
- the terminal may also include more or less components (such as a network interface, display device, etc.) than shown in FIG. 15, or have a different configuration than that shown in FIG.
- Embodiments of the present invention also provide a storage medium.
- the foregoing storage medium may be used to execute program code of a rendering method of an object.
- the foregoing storage medium may be located on at least one of the plurality of network devices in the network shown in the foregoing embodiment.
- the storage medium is arranged to store program code for performing the following steps:
- S11, acquiring a two-dimensional image obtained by performing image acquisition on the target object;
- S12, identifying a first pixel point set among all pixel points of the two-dimensional image, wherein the pixel points in the first pixel point set are points on the contour of the target object;
- S13, identifying one or more second pixel point sets in the first pixel point set by calling the image processor, wherein the pixel points in each second pixel point set are used to indicate one line segment in the contour of the target object;
- S14, respectively connecting the pixel points in each second pixel point set into one line segment in the contour of the target object, and displaying each line segment obtained by the connection according to a predetermined rendering manner.
- the storage medium is further arranged to store program code for performing the following steps:
- S21, obtaining a depth change value of the current pixel point, wherein the depth change value is used to indicate the degree of depth change among the multiple adjacent pixel points of the current pixel point;
- S22, in a case where the depth change value of the current pixel point is greater than or equal to a predetermined depth change threshold, determining that the current pixel point is a pixel point in the first pixel point set.
- the foregoing storage medium may include, but is not limited to, various media capable of storing program code, such as a USB flash drive, a Read-Only Memory (ROM), a Random Access Memory (RAM), a removable hard disk, a magnetic disk, or an optical disc.
- the integrated unit in the above embodiment if implemented in the form of a software functional unit and sold or used as a stand-alone product, may be stored in the above-described computer readable storage medium.
- the technical solution of the present invention essentially, or the part contributing to the prior art, or all or part of the technical solution, may be embodied in the form of a software product stored in a storage medium, including a number of instructions for causing one or more computer devices (which may be a personal computer, a server, a network device, or the like) to perform all or part of the steps of the methods described in the embodiments of the present invention.
- the disclosed client may be implemented in other manners.
- the device embodiments described above are merely illustrative.
- the division of the units is only a logical function division, and there may be other division manners in actual implementation; for example, multiple units or components may be combined or integrated into another system, or some features may be ignored or not performed.
- the mutual coupling or direct coupling or communication connection shown or discussed may be an indirect coupling or communication connection through some interface, unit or module, and may be electrical or otherwise.
- the units described as separate components may or may not be physically separated, and the components displayed as units may or may not be physical units, that is, may be located in one place, or may be distributed to multiple network units. Some or all of the units may be selected according to actual needs to achieve the purpose of the solution of the embodiment.
- each functional unit in each embodiment of the present invention may be integrated into one processing unit, or each unit may exist physically separately, or two or more units may be integrated into one unit.
- the above integrated unit can be implemented in the form of hardware or in the form of a software functional unit.
Landscapes
- Engineering & Computer Science (AREA)
- Physics & Mathematics (AREA)
- General Physics & Mathematics (AREA)
- Theoretical Computer Science (AREA)
- Computer Graphics (AREA)
- Image Generation (AREA)
Abstract
A method and apparatus for rendering an object, a storage medium, and an electronic apparatus. The method includes: acquiring a two-dimensional image obtained by performing image acquisition on a target object (S202); identifying a first pixel point set among all pixel points of the two-dimensional image, wherein the pixel points in the first pixel point set are points on the contour of the target object (S204); identifying one or more second pixel point sets in the first pixel point set by calling an image processor, wherein the pixel points in each second pixel point set are used to indicate one line segment in the contour of the target object (S206); and respectively connecting the pixel points in each second pixel point set into one line segment in the contour of the target object, and displaying each line segment obtained by the connection according to a predetermined rendering manner (S208). The method solves the technical problem in the related art that real-time rendering consumes a large amount of the computer CPU's computing resources.
Description
This application claims priority to Chinese Patent Application No. 2017102040312, entitled "Object Rendering Method and Apparatus", filed with the Chinese Patent Office on March 30, 2017, the entire contents of which are incorporated herein by reference.

The present invention relates to the field of image processing, and in particular to a method and apparatus for rendering an object, a storage medium, and an electronic apparatus.

PR and NPR techniques are often used in fields such as games and animation. PR (Photorealistic Rendering) refers to producing real-world rendering effects by studying real physical light and shadow; NPR (Non-Photorealistic Rendering) refers to producing rendering effects that resemble the abstract artistic conception a painter sketches with a brush, by studying that conception. It is a branch of computer graphics mainly used to simulate artistic painting styles and also to develop new drawing styles.

Unlike traditional computer graphics, which pursues realism, NPR is influenced by oil painting, sketching, technical drawing, and animated cartoons, so the more developed and mature work in this field is generally related to Western painting art, such as sketches, pen drawings, charcoal drawings, watercolors, and cartoon paintings, while Chinese ink painting has been studied less.

The above NPR techniques are often applied to real-time rendering, in which the computer generates images in real time, producing enough frames per second that the rendered images can interact with the user. Real-time rendering mainly relies on the CPU for a series of key processing steps, and the amount of data to be processed is large, so it consumes a large amount of the CPU's computing resources, making real-time rendering quite difficult to run on a computer with relatively low CPU performance.

No effective solution has yet been proposed for the technical problem in the related art that real-time rendering consumes a large amount of the computer CPU's computing resources.

SUMMARY

The embodiments of the present invention provide a method and apparatus for rendering an object, a storage medium, and an electronic apparatus, so as to at least solve the technical problem in the related art that real-time rendering consumes a large amount of the computer CPU's computing resources.

According to one aspect of the embodiments of the present invention, a method for rendering an object is provided, including: acquiring a two-dimensional image obtained by performing image acquisition on a target object; identifying a first pixel point set among all pixel points of the two-dimensional image, wherein the pixel points in the first pixel point set are points on the contour of the target object; identifying one or more second pixel point sets in the first pixel point set by calling an image processor, wherein the pixel points in each second pixel point set are used to indicate one line segment in the contour of the target object; and respectively connecting the pixel points in each second pixel point set into one line segment in the contour of the target object, and displaying each line segment obtained by the connection according to a predetermined rendering manner.

According to another aspect of the embodiments of the present invention, an apparatus for rendering an object is further provided, including: an acquiring unit, configured to acquire a two-dimensional image obtained by performing image acquisition on a target object; a first identifying unit, configured to identify a first pixel point set among all pixel points of the two-dimensional image, wherein the pixel points in the first pixel point set are points on the contour of the target object; a second identifying unit, configured to identify one or more second pixel point sets in the first pixel point set by calling an image processor, wherein the pixel points in each second pixel point set are used to indicate one line segment in the contour of the target object; and a rendering unit, configured to respectively connect the pixel points in each second pixel point set into one line segment in the contour of the target object and display each line segment obtained by the connection according to a predetermined rendering manner.

According to another aspect of the embodiments of the present application, a storage medium is further provided, the storage medium including a stored program, wherein the program is configured to execute any of the methods described above at runtime.

According to another aspect of the embodiments of the present application, an electronic apparatus is further provided, including a memory, a processor, and a computer program stored in the memory and executable on the processor, the processor being configured to perform any of the methods described above by means of the computer program.

In the embodiments of the present invention, the processor directly acquires a two-dimensional image obtained by performing image acquisition on the target object and identifies, among all pixel points of the two-dimensional image, a first pixel point set that characterizes the contour of the target object; by calling the image processor, one or more second pixel point sets are identified in the first pixel point set; the pixel points in each second pixel point set are respectively connected into one line segment in the contour of the target object, and each line segment obtained by the connection is displayed according to a predetermined rendering manner. Since what is processed is a two-dimensional graphic, the amount of data processed is greatly reduced, and heavier data processing tasks such as the vertex query are handed over to the image processor, which solves the technical problem in the related art that real-time rendering consumes a large amount of the computer CPU's computing resources, thereby achieving the technical effect of reducing the computing resources consumed by the computer CPU during real-time rendering.
The accompanying drawings described herein are used to provide a further understanding of the present invention and constitute a part of this application. The illustrative embodiments of the present invention and their description are used to explain the present invention and do not constitute an improper limitation on the present invention. In the drawings:

FIG. 1 is a schematic diagram of a hardware environment of an object rendering method according to an embodiment of the present invention;

FIG. 2 is a flowchart of an optional object rendering method according to an embodiment of the present invention;

FIG. 3 is a schematic diagram of an optional target object according to an embodiment of the present invention;

FIG. 4 is a schematic diagram of an optional target object according to an embodiment of the present invention;

FIG. 5 is a schematic diagram of an optional object edge texture according to an embodiment of the present invention;

FIG. 6 is a schematic diagram of pixel points on an optional contour line according to an embodiment of the present invention;

FIG. 7 is a schematic diagram of vertices on an optional contour line according to an embodiment of the present invention;

FIG. 8 is a schematic diagram of an optional stroke according to an embodiment of the present invention;

FIG. 9 is a schematic diagram of an optional stroke strip according to an embodiment of the present invention;

FIG. 10 is a schematic diagram of an optional brush stroke according to an embodiment of the present invention;

FIG. 11 is a schematic diagram of an optional rendered image according to an embodiment of the present invention;

FIG. 12 is a schematic diagram of multiple optional brush strokes according to an embodiment of the present invention;

FIG. 13 is a schematic diagram of an optional rendered image according to an embodiment of the present invention;

FIG. 14 is a schematic diagram of an optional object rendering apparatus according to an embodiment of the present invention; and

FIG. 15 is a structural block diagram of a terminal according to an embodiment of the present invention.
In order to enable those skilled in the art to better understand the solutions of the present invention, the technical solutions in the embodiments of the present invention will be described clearly and completely below with reference to the accompanying drawings in the embodiments of the present invention. Apparently, the described embodiments are only some rather than all of the embodiments of the present invention. All other embodiments obtained by those of ordinary skill in the art based on the embodiments of the present invention without creative effort shall fall within the protection scope of the present invention.

It should be noted that the terms "first", "second", and the like in the specification, claims, and accompanying drawings of the present invention are used to distinguish similar objects and are not necessarily used to describe a particular order or sequence. It should be understood that data used in this way are interchangeable where appropriate, so that the embodiments of the present invention described herein can be implemented in orders other than those illustrated or described herein. In addition, the terms "include" and "have" and any variations thereof are intended to cover non-exclusive inclusion; for example, a process, method, system, product, or device that includes a series of steps or units is not necessarily limited to the steps or units expressly listed, but may include other steps or units not expressly listed or inherent to the process, method, product, or device.

According to an embodiment of the present invention, an optional embodiment of object rendering is provided. Real-time rendering mainly includes the following steps:

Step 1: the computer CPU (Central Processing Unit) processes a real-time 3D image (i.e., a three-dimensional graphic) based on 3D (three-dimensional) geometric space and searches for contour edges;

Step 2: the computer CPU locks the Z-buffer and compares and culls the silhouette edge vertices one by one, that is, performs visibility culling of the silhouette edge vertices and retains only the visible silhouette edge vertices;

Step 3: the silhouette edge vertices are connected into strokes; specifically, the CPU compares the positions of the visible silhouette edge vertices one by one and connects them into strokes;

Step 4: the strokes are wrapped with the artist's brush strokes; for example, a texture with an alpha channel is applied to a triangle as a map to produce the brush stroke.

The Z-Buffer (Z cache) described above is a technique for performing "hidden surface elimination" when shading objects, so that the parts hidden behind objects are not displayed.

In the above real-time processing, all the steps are completed by the computer CPU; what is processed during real-time rendering is a three-dimensional image, and the visibility culling of silhouette edge vertices is implemented through the Z-Buffer, which requires a large amount of data computation. Since a series of key processing steps are performed by the CPU and the amount of data to be processed is large, a large amount of the CPU's computing resources is consumed, making real-time rendering quite difficult to run on a computer with relatively low CPU performance.

To solve the above problem, according to an embodiment of the present invention, a method embodiment of a method for rendering an object is further provided.

Optionally, in this embodiment, the above object rendering method may be applied to a hardware environment formed by the server 102 and/or the terminal 104 as shown in FIG. 1. As shown in FIG. 1, the server 102 is connected to the terminal 104 through a network; the network includes, but is not limited to, a wide area network, a metropolitan area network, or a local area network, and the terminal 104 is not limited to a PC, a mobile phone, a tablet computer, or the like. The object rendering method of the embodiments of the present invention may be performed by the server 102, by the terminal 104, or by the server 102 and the terminal 104 jointly. Where the terminal 104 performs the object rendering method of the embodiments of the present invention, the method may also be performed by a client installed on the terminal.

For example, when executed on the terminal, the foregoing hardware structure may include only the terminal, and the specific steps are as follows:

Step S11: the terminal acquires a two-dimensional image obtained by performing image acquisition on the target object;

Step S12: the terminal performs rendering according to a predetermined rendering manner; for details, refer to the steps shown in FIG. 2.

When executed on the server, the foregoing hardware structure may include only the server, and the specific execution steps are similar to the above, the only difference being that the executing entity is the server.
The technical solution of the present application is described below with reference to FIG. 2. FIG. 2 is a flowchart of an optional object rendering method according to an embodiment of the present invention. As shown in FIG. 2, the method may include the following steps:

Step S202: acquiring a two-dimensional image obtained by performing image acquisition on the target object;

Step S204: identifying a first pixel point set among all pixel points of the two-dimensional image, wherein the pixel points in the first pixel point set are points on the contour of the target object;

Step S206: identifying one or more second pixel point sets in the first pixel point set by calling the image processor, wherein the pixel points in each second pixel point set are used to indicate one line segment in the contour of the target object;

Step S208: respectively connecting the pixel points in each second pixel point set into one line segment in the contour of the target object, and displaying each line segment obtained by the connection according to a predetermined rendering manner.

Through the above steps S202 to S208, the processor directly acquires a two-dimensional image obtained by performing image acquisition on the target object and identifies, among all pixel points of the two-dimensional image, a first pixel point set that characterizes the contour of the target object; by calling the image processor, one or more second pixel point sets are identified in the first pixel point set; the pixel points in each second pixel point set are respectively connected into one line segment in the contour of the target object, and each line segment obtained by the connection is displayed according to the predetermined rendering manner. Since what is processed is a two-dimensional graphic, the amount of data processed is greatly reduced, and heavier data processing tasks such as the vertex query are completed by the image processor, which solves the technical problem in the related art that real-time rendering consumes a large amount of the computer CPU's computing resources, thereby achieving the technical effect of reducing the computing resources consumed by the computer CPU during real-time rendering.

The above steps S202 to S208 can be performed in a processor, such as the central processing unit (CPU) of a computer or a mobile device.

The above target object may be a figurative feature such as a character, an animal, an object, or an environment; in practical applications, there may be one or more target objects. The two-dimensional graphic may specifically be a depth texture image, or a two-dimensional image carrying the depth value of each pixel. The above-mentioned contour refers to the edge of the target object, such as the contour line that distinguishes a teapot, a character, or the like from the natural environment or other objects. The first pixel point set holds all the identified points on the contour; each second pixel point set holds all the identified pixel points belonging to the same line segment.

The above predetermined rendering manners include, but are not limited to, rendering manners such as sketch drawing, pen drawing, charcoal drawing, watercolor painting, cartoon painting, and ink painting.

The above method is mainly used for non-photorealistic rendering (NPR), but is not limited thereto.
In the technical solution provided in step S202, acquiring the two-dimensional image obtained by performing image acquisition on the target object includes: acquiring a collected two-dimensional image of the depth texture type, wherein a two-dimensional image of the depth texture type carries the depth values of pixel points.

Optionally, the target object can be directly rendered to obtain a depth texture of the target object.

In the technical solution provided in step S204, identifying the first pixel point set among all pixel points of the two-dimensional image includes performing the following steps for each pixel point of the two-dimensional image, wherein each pixel point is recorded as the current pixel point while the steps are performed on it: obtaining a depth change value of the current pixel point, wherein the depth change value is used to indicate the degree of depth change among the multiple adjacent pixel points of the current pixel point; and, in a case where the depth change value of the current pixel point is greater than or equal to a predetermined depth change threshold, determining that the current pixel point is a pixel point in the first pixel point set.

The depth change value of the current pixel point can be obtained as follows: acquiring a first depth change value obtained by a first filter performing first filtering processing on the multiple adjacent pixel points, wherein the first depth change value is used to indicate the degree of depth change in a first direction among the multiple adjacent pixel points; acquiring a second depth change value obtained by a second filter performing second filtering processing on the multiple adjacent pixel points, wherein the second depth change value is used to indicate the degree of depth change in a second direction among the multiple adjacent pixel points, the second direction being different from the first direction; and determining the depth change value of the current pixel point according to the first depth change value and the second depth change value.

Optionally, acquiring the first depth change value obtained by the first filter performing the first filtering processing on the multiple adjacent pixel points includes: acquiring the first depth change value obtained by the first filter performing the first filtering processing according to a first formula, wherein the first formula is used to calculate the sum of the first depth parameters of pixel points adjacent in the first direction among the multiple adjacent pixel points, the first depth parameter being the product of the depth value of a pixel point adjacent in the first direction and the corresponding influence factor.

The current pixel point generally has 8 pixel points adjacent to it and surrounding it. If the current pixel point is recorded as S11, then the pixel point in its upper left corner is S00, the pixel point directly above is S01, the pixel point in the upper right corner is S02, the pixel point directly to the left is S10, the pixel point directly to the right is S12, the pixel point in the lower left corner is S20, the pixel point directly below is S21, and the pixel point in the lower right corner is S22.

The first direction may be vertical, and the first formula is SobelX = S00 + 2*S10 + S20 - S02 - 2*S12 - S22, where SobelX represents the first depth change value, S00, S10, S20, S02, S12, and S22 are the depth values of the pixels at the corresponding positions, and the coefficients 1, 2, 1, -1, -2, -1 are the influence factors of the corresponding pixels.

Optionally, acquiring the second depth change value obtained by the second filter performing the second filtering processing on the multiple adjacent pixel points includes: acquiring the second depth change value SobelY obtained by the second filter performing the second filtering processing according to a second formula, wherein the second formula is used to calculate the sum of the second depth parameters of pixel points adjacent in the second direction among the multiple adjacent pixel points, the second depth parameter being the product of the depth value of a pixel point adjacent in the second direction and the corresponding influence factor.

The second direction may be horizontal, and the second formula is SobelY = S00 + 2*S01 + S02 - S20 - 2*S21 - S22, where SobelY represents the second depth change value, S00, S01, S02, S20, S21, and S22 are the depth values of the pixels at the corresponding positions, and 1, 2, 1, -1, -2, -1 are the influence factors of the corresponding pixels.

Optionally, determining the depth change value of the current pixel point according to the first depth change value and the second depth change value includes: setting the depth change value of the current pixel point to the sum of the square of the first depth change value and the square of the second depth change value.

After the first depth change value SobelX and the second depth change value SobelY are determined, the depth change value edgeSqr of the current pixel point may be determined according to the following formula:

edgeSqr = (SobelX*SobelX + SobelY*SobelY).

In the technical solution provided in step S206, identifying the one or more second pixel point sets in the first pixel point set by calling the image processor includes: sending a vertex query request to the image processor, wherein the vertex query request carries the position information and depth information of the pixel points in the first pixel point set; and receiving response information from the image processor, wherein the response information is used to indicate whether a pixel point in the first pixel point set belongs to a second pixel point set.

The image processor GPU described above has a vertex texture processing function. When this function is used, all the vertices of the model are submitted to the GPU as point draws for querying: if a vertex lies on the contour line it is drawn, otherwise it is not drawn; this result is fed back to the CPU as a query result (i.e., the response information), finally yielding the second pixel point sets indicating which vertices lie on the contour line.

In the embodiments of the present application, the processing complexity on the CPU is related to the model: if the model has few vertices, rendering can run at a very high frame rate in frames per second (fps), but a model with many vertices is quite demanding. If the GPU is used for this processing instead, then, since the GPU is a processor dedicated to image processing, hardware acceleration speeds up the processing, frees the CPU, and reduces the occupation of its processing resources, so a high fps can be achieved.

In the technical solution provided in step S208, respectively connecting the pixel points in each second pixel point set into one line segment in the contour of the target object includes: concatenating the pixel points in a second pixel point set into one line segment according to a preset condition, wherein one line segment includes at least one stroke, and the preset condition is used to indicate at least one of the number of pixel points included in each stroke, the stroke length, and the corner angle between adjacent pixel points included in the stroke.

Specifically, the obtained contour line vertices can be concatenated into strokes in the CPU. After concatenation, the number of vertices in a stroke, the stroke length, and the corner angles between vertices can be used to decide how to break the strokes apart, finally forming independent strokes.

Optionally, displaying each line segment obtained by the connection according to the predetermined rendering manner includes: expanding each stroke into a stroke strip; and drawing the stroke strip using a brush stroke corresponding to the predetermined rendering manner, wherein the predetermined rendering manner includes sketch drawing, pen drawing, charcoal drawing, watercolor painting, cartoon painting, and ink painting.

Specifically, the vertices of each stroke can be expanded into a stroke strip in screen space in preparation for applying a brush stroke, and a corresponding brush stroke is then applied to each stroke strip, finally producing an image that meets the requirements.
The present invention also provides a preferred embodiment, illustrated in FIG. 3, which renders the black ellipse (i.e., the target object) shown in FIG. 4 in a game:

Step S302: acquire the contour line of the object.

The above step S302 can be implemented through two sub-steps, as shown in sub-steps S3022 and S3024.

Step S3022: acquire the depth texture; the object can be directly rendered to obtain its depth texture, as shown in FIG. 4.

Step S3024: contour line pixels are obtained by filtering with filters.

A rectangle the same size as the screen is drawn, the obtained depth texture image is sampled, and processing proceeds in the pixel shader (PixelShader) as follows.

Table 1 shows the matrix used by the horizontal filter to filter the depth texture.

Table 1

1 | 0 | -1
2 | 0 | -2
1 | 0 | -1

Table 2 shows the matrix used by the vertical filter to filter the depth texture.

Table 2

1 | 2 | 1
0 | 0 | 0
-1 | -2 | -1

It should be noted that the influence factors shown in Table 1 and Table 2 are optional and can be selected according to actual requirements.

Table 3 shows the positional relationship between the current pixel and its adjacent pixels together with the pixels' depth values, where the position of a depth value in the table is the position of that pixel; S11 in Table 3 represents the current pixel, which has 8 adjacent pixels.

Table 3

S00 | S01 | S02
S10 | S11 | S12
S20 | S21 | S22

Performing matrix multiplication of the matrix shown in Table 1 with the matrix shown in Table 3 gives the actual first formula used for filtering, SobelX = S00 + 2*S10 + S20 - S02 - 2*S12 - S22, where SobelX represents the first depth change value and S00, S10, S20, S02, S12, and S22 are the depth values of the pixels at the corresponding positions; performing matrix multiplication of the matrix shown in Table 2 with the matrix shown in Table 3 gives the actual second formula, SobelY = S00 + 2*S01 + S02 - S20 - 2*S21 - S22, where SobelY represents the second depth change value and S00, S01, S02, S20, S21, and S22 are the depth values of the pixels at the corresponding positions.

After the first depth change value SobelX and the second depth change value SobelY are determined, the depth change value edgeSqr of the current pixel point may be determined according to the following formula:

edgeSqr = (SobelX*SobelX + SobelY*SobelY).
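As a worked numeric example (the depth values here are illustrative, not from the patent): take a neighborhood whose left column of depths is 1, middle column 0.5, and right column 0, i.e. S00 = S10 = S20 = 1, S01 = S11 = S21 = 0.5, and S02 = S12 = S22 = 0. Then SobelX = 1 + 2*1 + 1 - 0 - 2*0 - 0 = 4, SobelY = 1 + 2*0.5 + 0 - 1 - 2*0.5 - 0 = 0, and edgeSqr = 4*4 + 0*0 = 16, so with any threshold n below 16 the current pixel is classified as a contour pixel.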
The depth texture is filtered by the horizontal and vertical filters pixel by pixel, and the 8 pixels surrounding the current pixel are also sampled. S11 represents the current pixel, and edgeSqr represents the degree of change of the values of the pixels around the current pixel. If edgeSqr is greater than a threshold n, the values of the pixels around the current pixel change drastically; since depth information is used, this means the depth around the current pixel changes drastically, indicating that the current pixel is on the contour of the object, and the result is output to the contour texture. If edgeSqr is less than the threshold n, the current pixel is not on the contour of the object, and 0 is output to the texture. Filtering the depth texture shown in FIG. 4 yields the contour line pixels shown in FIG. 5; the elliptical line in FIG. 5 is the obtained object edge texture.

Step S304: acquire the contour vertices of the model; specifically, this is implemented by the central processing unit (CPU) calling the image processor (GPU) for querying.

The above GPU has a vertex texture processing function. When this function is used, all the vertices of the model are submitted to the GPU as point draws for querying; specifically, all the vertices shown in FIG. 6 can be sent to the GPU for querying (the vertices connected by the dashed line are the vertices actually on the contour line, but the CPU does not know this). If a vertex is on the contour line it is drawn, otherwise it is not drawn; this result is fed back to the CPU as a query result, finally producing a list of which vertices are on the contour line, as shown in FIG. 7. The GPU can thus identify, among all the vertices, the vertices on the contour line.

Step S306: process the contour lines and connect them into strokes.

The obtained contour line vertices are concatenated into strokes in the CPU. After concatenation, the number of vertices in a stroke, the stroke length, and the corner angles between vertices can be used to decide how to break the strokes apart, finally forming independent strokes, as shown in FIG. 8.

Step S308: generate the final strokes.

Brush strokes can be applied by expanding the stroke vertices into stroke strips in screen space in preparation for applying brush strokes, as shown in FIG. 9; each stroke strip is then given the brush stroke shown in FIG. 10, and the final result is shown in FIG. 11. Many styles of brush strokes can be applied, as shown in FIG. 12; by applying a brush stroke of the corresponding style, the result can be extended to the final display style. A stylized NPR effect greatly helps the artistry of a game and strengthens its character.

Optionally, the present invention also provides a preferred embodiment whose specific implementation scenario may be applied to animation production and the like; for example, if the object to be rendered is a teapot, the depth texture of the teapot may be processed by applying the steps of FIG. 3 above. The specific implementation is the same as the above processing of the ellipse in the game and is not described here again.

The teapot shown in FIG. 13 can be obtained by the CPU executing the relevant processing, calling the GPU for vertex recognition, and then performing line processing and drawing with brush strokes.

In the related art, a geometric-space contour line search technique must be used, adding to the vertex data of the original model the bi-normal information necessary for the geometric-space contour line search. This requires first traversing all the faces of the model, finding the shared edges, and storing on each edge the normal information of the two faces sharing it. This operation is pre-processing performed before rendering and only needs to be done once, but the amount of computation is large and consumes a great deal of processing performance.

In the technical solution of the present application, efficient processing is performed based on the characteristics of the graphics card, and previously high-overhead operations such as pre-computation and CPU-GPU interaction are optimized away, greatly improving running efficiency. The pre-processing computation (i.e., the contour line search described above) is removed, reducing system complexity, and any original model can be processed directly. In addition, using GPU features for the query is far more efficient than the related-art practice of locking the Z-buffer and comparing and culling contour edge vertices one by one, so the CPU overhead can be reduced.

It should be noted that, for brevity, the foregoing method embodiments are all described as a series of action combinations, but those skilled in the art should know that the present invention is not limited by the described order of actions, because according to the present invention, some steps may be performed in other orders or simultaneously. Furthermore, those skilled in the art should also know that the embodiments described in the specification are all preferred embodiments, and the actions and modules involved are not necessarily required by the present invention.

Through the description of the above embodiments, those skilled in the art can clearly understand that the method according to the above embodiments can be implemented by means of software plus the necessary general-purpose hardware platform, and of course also by hardware, but in many cases the former is the better implementation. Based on such an understanding, the part of the technical solution of the present invention that is essential or that contributes to the prior art may be embodied in the form of a software product stored in a storage medium (such as a ROM/RAM, a magnetic disk, or an optical disc), including a number of instructions for causing a terminal device (which may be a mobile phone, a computer, a server, a network device, or the like) to perform the methods described in the embodiments of the present invention.
According to an embodiment of the present invention, an object rendering apparatus for implementing the above object rendering method is further provided. FIG. 14 is a schematic diagram of an optional object rendering apparatus according to an embodiment of the present invention. As shown in FIG. 14, the apparatus may include: an acquiring unit 142, a first identifying unit 144, a second identifying unit 146, and a rendering unit 148.

The acquiring unit 142 is configured to acquire a two-dimensional image obtained by performing image acquisition on the target object.

The first identifying unit 144 is configured to identify a first pixel point set among all pixel points of the two-dimensional image, wherein the pixel points in the first pixel point set are points on the contour of the target object.

The second identifying unit 146 is configured to identify one or more second pixel point sets in the first pixel point set by calling the image processor, wherein the pixel points in each second pixel point set are used to indicate one line segment in the contour of the target object.

The rendering unit 148 is configured to respectively connect the pixel points in each second pixel point set into one line segment in the contour of the target object and display each line segment obtained by the connection according to a predetermined rendering manner.

It should be noted that the acquiring unit 142 in this embodiment may be used to perform step S202 in the embodiments of the present application, the first identifying unit 144 may be used to perform step S204, the second identifying unit 146 may be used to perform step S206, and the rendering unit 148 may be used to perform step S208.

It should be noted here that the examples and application scenarios implemented by the foregoing modules are the same as those of the corresponding steps, but are not limited to the contents disclosed in the foregoing embodiments. It should be noted that the foregoing modules, as part of the apparatus, may run in the hardware environment shown in FIG. 1 and may be implemented by software or by hardware.

Through the above modules, a two-dimensional image obtained by performing image acquisition on the target object is acquired; a first pixel point set that characterizes the contour of the target object is identified among all pixel points of the two-dimensional image; one or more second pixel point sets are identified in the first pixel point set by calling the image processor; the pixel points in each second pixel point set are respectively connected into one line segment in the contour of the target object, and each line segment obtained by the connection is displayed according to a predetermined rendering manner. Since what is processed is a two-dimensional graphic, the amount of data processed is greatly reduced, and heavier data processing tasks such as the vertex query are completed by the image processor, which solves the technical problem in the related art that real-time rendering consumes a large amount of the computer CPU's computing resources, thereby achieving the technical effect of reducing the computing resources consumed by the computer CPU during real-time rendering.

The above target object may be a figurative feature such as a character, an animal, an object, or an environment; in practical applications, there may be one or more target objects. The two-dimensional graphic may specifically be a depth texture image, or a two-dimensional image carrying the depth value of each pixel. The above-mentioned contour refers to the edge of the target object, such as the contour line that distinguishes a teapot, a character, or the like from the natural environment or other objects. The first pixel point set holds all the identified points on the contour; each second pixel point set holds all the identified pixel points belonging to the same line segment.

The above predetermined rendering manners include, but are not limited to, rendering manners such as sketch drawing, pen drawing, charcoal drawing, watercolor painting, cartoon painting, and ink painting.

The above apparatus is mainly used for non-photorealistic rendering (NPR), but is not limited thereto.
可选地,获取单元还用于获取采集到的类型为深度纹理类型的二维图像,其中,类型为深度纹理类型的二维图像中携带有像素点的深度值。
可选地,可直接渲染目标对象得到该目标对象的深度纹理。
可选地,第一识别单元还用于对于二维图像的所有像素点中的每个像素点,执行以下步骤,其中,每个像素点在执行以下步骤时被记为当前像素点:获取当前像素点的深度变化值,其中,深度变化值用于指示当前像素点的多个相邻像素点之间的深度变化程度;在当前像素点的深度变化值 大于等于预定的深度变化阈值的情况下,确定当前像素点为第一像素点集合中的像素点。
可选地,第一识别单元包括:第一获取模块,用于获取第一滤波器对多个相邻像素点进行第一过滤处理得到的第一深度变化值,其中,第一深度变化值用于表示多个相邻像素点之间在第一方向上的深度变化程度;第二获取模块,用于获取第二滤波器对多个相邻像素点进行第二过滤处理得到的第二深度变化值,其中,第二深度变化值用于表示多个相邻像素点之间在第二方向上的深度变化程度,第二方向与第一方向不同;确定模块,用于根据第一深度变化值和第二深度变化值确定当前像素点的深度变化值。
上述的第一获取模块还用于获取第一滤波器按照第一公式进行第一过滤处理得到的第一深度变化值,其中,第一公式用于计算多个相邻像素点中在第一方向上相邻的像素点的第一深度参数之和,第一深度参数为在第一方向上相邻的像素点的深度值与对应的影响因子的乘积。
对于当前像素点而言,一般包括与之相邻并且围绕在当前像素点周围的8个像素点,可以将当前像素点记为S11,那么其左上角的像素点为S00,正上方的像素点为S01,右上角的像素点为S11,正左侧的像素点为S10,正右侧的像素点为S12,左下角的像素点为S20,正下方的像素点为S21,右下角的像素点为S22。
上述的第一方向可以为纵向,第一公式为SobelX=S00+2*S10+S20-S02-2*S12-S22,SobelX表示第一深度变化值,S00、S10、S20、S02、S12、S22为对应位置的像素的深度值,1、2、1、-1、-2、-1为相应像素的影响因子。
上述的第二获取模块还用于获取第二滤波器按照第二公式进行第二过滤处理得到的第二深度变化值sobelY,其中,第二公式用于计算多个相邻像素点中在第二方向上相邻的像素点的第二深度参数之和,第二深度参数为在第二方向上相邻的像素点的深度值与对应的影响因子的乘积。
上述的第二方向可以为横向,第一公式为SobelY=S00+2*S01+S02-S20-2*S21-S22,SobelY表示第二深度变化值,S00、S01、S02、S20、S21、S22为对应位置的像素的深度值,1、2、1、-1、-2、-1为相应像素的影响因子。
可选地,确定模块还用于将当前像素点的深度变化值设置为第一深度变化值的平方与第二深度变化值的平方之和。
可选地,第二识别单元包括:发送模块,用于发送顶点查询请求至图像处理器,其中,顶点查询请求中携带有第一像素点集合中像素点的位置信息和深度信息;接收模块,用于接收图像处理器的响应信息,其中,响应信息用于指示第一像素点集合中像素点是否属于第二像素点集合。
上述的图像处理器GPU具备顶点纹理处理功能,在使用该功能时,将模型的所有顶点以画点的方式向GPU进行查询,如果该顶点是在轮廓线上,那么就把它绘制出来,否则不绘制,这个结果会以查询结果(即响应信息)的方式反馈给CPU,最终就会得到哪些顶点在轮廓线上的第二像素点集合。
可选地,渲染单元还用于按照预设条件将第二像素点集合中的像素点串联成一条线段,其中,一条线段至少包括一个笔画,预设条件用于指示每个笔画包括的像素点数目、笔画长度以及笔画包括的相邻的像素点间的转角角度中的至少之一。
具体可在CPU中将得到的轮廓线顶点进行串联成笔画,串联后可以根据一个笔画的顶点数目,笔画长度,顶点之间的转角角度来决定如何将笔画进行断连,最后形成独立的笔画。
Optionally, the rendering unit is further configured to expand each stroke into a stroke band, and to draw the stroke band with a brush stroke corresponding to the predetermined rendering manner, where the predetermined rendering manner includes sketch, pen drawing, charcoal drawing, watercolor, cartoon, and ink-wash painting.

Specifically, the stroke vertices may be expanded in screen space into stroke bands in preparation for applying the brush strokes; a corresponding brush stroke is then applied to each stroke band, finally producing an image that meets the requirements.
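The band expansion can be sketched as a perpendicular offset of each stroke vertex; the half-width parameter below is an assumption, since the patent does not fix one, and a renderer would map the brush texture across the two resulting rails (for example as a triangle strip).

```python
import math

def expand_to_band(stroke, half_width=4.0):
    """Offset each stroke vertex perpendicular to the local stroke direction,
    producing left/right rails onto which a brush texture can be mapped."""
    left, right = [], []
    for i, (x, y) in enumerate(stroke):
        # Estimate the local direction from the neighboring vertices.
        x0, y0 = stroke[max(i - 1, 0)]
        x1, y1 = stroke[min(i + 1, len(stroke) - 1)]
        dx, dy = x1 - x0, y1 - y0
        n = math.hypot(dx, dy) or 1.0
        nx, ny = -dy / n, dx / n  # unit normal to the stroke direction
        left.append((x + nx * half_width, y + ny * half_width))
        right.append((x - nx * half_width, y - ny * half_width))
    return left, right
```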
It should be noted here that the examples and application scenarios implemented by the above modules and the corresponding steps are the same, but are not limited to the content disclosed in the above embodiments. The above modules, as part of the apparatus, may run in the hardware environment shown in FIG. 1, and may be implemented by software or by hardware, where the hardware environment includes a network environment.

According to another aspect of the embodiments of the present application, a storage medium (also referred to as a memory) is further provided. The storage medium includes a stored program, where the program is configured to perform any one of the above methods when run.
According to an embodiment of the present invention, a server or terminal (also referred to as an electronic apparatus) for implementing the above object rendering method is further provided.

FIG. 15 is a structural block diagram of a terminal according to an embodiment of the present invention. As shown in FIG. 15, the terminal may include: one or more processors 1501 (only one is shown in FIG. 15), a memory 1503, and a transmission apparatus 1505 (such as the sending apparatus in the above embodiments). As shown in FIG. 15, the terminal may further include an input/output device 1507.

The memory 1503 may be configured to store software programs and modules, such as the program instructions/modules corresponding to the methods and apparatuses in the embodiments of the present invention. By running the software programs and modules stored in the memory 1503, the processor 1501 performs various functional applications and data processing, that is, implements the above method. The memory 1503 may include a high-speed random access memory, and may further include a non-volatile memory, such as one or more magnetic storage devices, a flash memory, or another non-volatile solid-state memory. In some examples, the memory 1503 may further include memories remotely disposed relative to the processor 1501, and these remote memories may be connected to the terminal through a network. Examples of the above network include, but are not limited to, the Internet, an intranet, a local area network, a mobile communication network, and combinations thereof.

The above transmission apparatus 1505 is configured to receive or send data via a network, and may also be used for data transmission between the processor and the memory. Specific examples of the above network may include wired and wireless networks. In one example, the transmission apparatus 1505 includes a network adapter (Network Interface Controller, NIC), which can be connected to other network devices and a router through a network cable so as to communicate with the Internet or a local area network. In one example, the transmission apparatus 1505 is a radio frequency (Radio Frequency, RF) module, which is configured to communicate with the Internet wirelessly.

Optionally, the memory 1503 is configured to store an application program.
The processor 1501 may call, through the transmission apparatus 1505, the application program stored in the memory 1503 to perform the following steps: acquiring a two-dimensional image obtained by performing image acquisition on a target object; identifying a first pixel point set among all pixel points of the two-dimensional image, where the pixel points in the first pixel point set are points on the contour of the target object; identifying, by calling an image processor, one or more second pixel point sets in the first pixel point set, where the pixel points in each second pixel point set are used to indicate one line segment in the contour of the target object; and respectively connecting the pixel points in each second pixel point set into one line segment in the contour of the target object, and displaying each line segment obtained by the connection in a predetermined rendering manner.

The processor 1501 is further configured to perform the following steps: acquiring a depth change value of the current pixel point, where the depth change value is used to indicate the degree of depth change among multiple neighboring pixel points of the current pixel point; and determining the current pixel point to be a pixel point in the first pixel point set when the depth change value of the current pixel point is greater than or equal to a predetermined depth change threshold.

With this embodiment of the present invention, the processor directly acquires a two-dimensional image obtained by performing image acquisition on a target object, identifies among all pixel points of the two-dimensional image a first pixel point set characterizing the contour of the target object, identifies one or more second pixel point sets in the first pixel point set by calling the image processor, respectively connects the pixel points in each second pixel point set into one line segment in the contour of the target object, and displays each line segment obtained by the connection in a predetermined rendering manner. Since what is processed is a two-dimensional image, the amount of data to be processed is greatly reduced, and heavier data-processing tasks such as vertex queries are handed over to the image processor. This can solve the technical problem in the related art that real-time rendering consumes a large amount of the computer CPU's computing resources, thereby achieving the technical effect of reducing the computing-resource consumption of the computer CPU during real-time rendering.
Optionally, for specific examples in this embodiment, reference may be made to the examples described in the above embodiments, and details are not repeated here.

Those of ordinary skill in the art can understand that the structure shown in FIG. 15 is merely illustrative. The terminal may be a terminal device such as a smartphone (e.g., an Android phone or an iOS phone), a tablet computer, a palmtop computer, a mobile Internet device (MID), or a PAD. FIG. 15 does not limit the structure of the above electronic apparatus. For example, the terminal may further include more or fewer components than shown in FIG. 15 (such as a network interface or a display apparatus), or have a configuration different from that shown in FIG. 15.

Those of ordinary skill in the art can understand that all or some of the steps in the various methods of the above embodiments may be completed by instructing the hardware related to the terminal device through a program. The program may be stored in a computer-readable storage medium, and the storage medium may include: a flash disk, a read-only memory (ROM), a random access memory (RAM), a magnetic disk, an optical disc, or the like.
An embodiment of the present invention further provides a storage medium. Optionally, in this embodiment, the above storage medium may be used to store program code for performing the object rendering method.

Optionally, in this embodiment, the above storage medium may be located on at least one of multiple network devices in the network shown in the above embodiments.

Optionally, in this embodiment, the storage medium is configured to store program code for performing the following steps:
S11: Acquire a two-dimensional image obtained by performing image acquisition on a target object.

S12: Identify a first pixel point set among all pixel points of the two-dimensional image, where the pixel points in the first pixel point set are points on the contour of the target object.

S13: Identify, by calling an image processor, one or more second pixel point sets in the first pixel point set, where the pixel points in each second pixel point set are used to indicate one line segment in the contour of the target object.

S14: Respectively connect the pixel points in each second pixel point set into one line segment in the contour of the target object, and display each line segment obtained by the connection in a predetermined rendering manner.
Optionally, the storage medium is further configured to store program code for performing the following steps:

S21: Acquire a depth change value of the current pixel point, where the depth change value is used to indicate the degree of depth change among multiple neighboring pixel points of the current pixel point.

S22: Determine the current pixel point to be a pixel point in the first pixel point set when the depth change value of the current pixel point is greater than or equal to a predetermined depth change threshold.
Optionally, for specific examples in this embodiment, reference may be made to the examples described in the above embodiments, and details are not repeated here.

Optionally, in this embodiment, the above storage medium may include, but is not limited to, various media capable of storing program code, such as a USB flash drive, a read-only memory (ROM), a random access memory (RAM), a removable hard disk, a magnetic disk, or an optical disc.

The serial numbers of the above embodiments of the present invention are merely for description and do not represent the superiority or inferiority of the embodiments.
If the integrated units in the above embodiments are implemented in the form of software functional units and sold or used as independent products, they may be stored in the above computer-readable storage medium. Based on such an understanding, the technical solution of the present invention, in essence, the part contributing to the related art, or all or part of the technical solution, may be embodied in the form of a software product. The computer software product is stored in a storage medium and includes several instructions for causing one or more computer devices (which may be personal computers, servers, network devices, or the like) to perform all or some of the steps of the methods described in the embodiments of the present invention.

In the above embodiments of the present invention, the description of each embodiment has its own emphasis. For a part not described in detail in one embodiment, reference may be made to the related description of other embodiments.

In the several embodiments provided in the present application, it should be understood that the disclosed client may be implemented in other manners. The apparatus embodiments described above are merely illustrative. For example, the division of the units is merely a division of logical functions, and there may be other division manners in actual implementation; for example, multiple units or components may be combined or integrated into another system, or some features may be ignored or not performed. In addition, the mutual coupling or direct coupling or communication connection shown or discussed may be indirect coupling or communication connection through some interfaces, units, or modules, and may be electrical or in other forms.

The units described as separate components may or may not be physically separate, and the components displayed as units may or may not be physical units; that is, they may be located in one place or distributed across multiple network units. Some or all of the units may be selected according to actual needs to achieve the purpose of the solution of this embodiment.

In addition, the functional units in the embodiments of the present invention may be integrated into one processing unit, or each unit may exist physically separately, or two or more units may be integrated into one unit. The above integrated unit may be implemented in the form of hardware or in the form of a software functional unit.

The above descriptions are merely preferred implementations of the present invention. It should be noted that those of ordinary skill in the art may further make several improvements and refinements without departing from the principle of the present invention, and these improvements and refinements shall also fall within the protection scope of the present invention.
Claims (17)
- A method for rendering an object, comprising: acquiring a two-dimensional image obtained by performing image acquisition on a target object; identifying a first pixel point set among all pixel points of the two-dimensional image, wherein the pixel points in the first pixel point set are points on the contour of the target object; identifying, by calling an image processor, one or more second pixel point sets in the first pixel point set, wherein the pixel points in each of the second pixel point sets are used to indicate one line segment in the contour of the target object; and respectively connecting the pixel points in each of the second pixel point sets into one line segment in the contour of the target object, and displaying each of the line segments obtained by the connection in a predetermined rendering manner.
- The method according to claim 1, wherein identifying the first pixel point set among all pixel points of the two-dimensional image comprises: performing the following steps for each pixel point among all pixel points of the two-dimensional image, wherein each pixel point is recorded as a current pixel point while the following steps are performed: acquiring a depth change value of the current pixel point, wherein the depth change value is used to indicate the degree of depth change among multiple neighboring pixel points of the current pixel point; and determining the current pixel point to be a pixel point in the first pixel point set when the depth change value of the current pixel point is greater than or equal to a predetermined depth change threshold.
- The method according to claim 2, wherein acquiring the depth change value of the current pixel point comprises: acquiring a first depth change value obtained by a first filter performing first filtering processing on the multiple neighboring pixel points, wherein the first depth change value is used to represent the degree of depth change among the multiple neighboring pixel points in a first direction; acquiring a second depth change value obtained by a second filter performing second filtering processing on the multiple neighboring pixel points, wherein the second depth change value is used to represent the degree of depth change among the multiple neighboring pixel points in a second direction, the second direction being different from the first direction; and determining the depth change value of the current pixel point according to the first depth change value and the second depth change value.
- The method according to claim 3, wherein determining the depth change value of the current pixel point according to the first depth change value and the second depth change value comprises: setting the depth change value of the current pixel point to the sum of the square of the first depth change value and the square of the second depth change value.
- The method according to claim 3, wherein acquiring the first depth change value obtained by the first filter performing the first filtering processing on the multiple neighboring pixel points comprises: acquiring the first depth change value obtained by the first filter performing the first filtering processing according to a first formula, wherein the first formula is used to calculate the sum of first depth parameters of pixel points adjacent in the first direction among the multiple neighboring pixel points, and the first depth parameter is the product of the depth value of a pixel point adjacent in the first direction and a corresponding influence factor; and acquiring the second depth change value obtained by the second filter performing the second filtering processing on the multiple neighboring pixel points comprises: acquiring the second depth change value obtained by the second filter performing the second filtering processing according to a second formula, wherein the second formula is used to calculate the sum of second depth parameters of pixel points adjacent in the second direction among the multiple neighboring pixel points, and the second depth parameter is the product of the depth value of a pixel point adjacent in the second direction and a corresponding influence factor.
- The method according to claim 5, wherein acquiring the two-dimensional image obtained by performing image acquisition on the target object comprises: acquiring the collected two-dimensional image of a depth-texture type, wherein the two-dimensional image of the depth-texture type carries depth values of pixel points.
- The method according to claim 1, wherein identifying, by calling the image processor, the one or more second pixel point sets in the first pixel point set comprises: sending a vertex query request to the image processor, wherein the vertex query request carries position information and depth information of the pixel points in the first pixel point set; and receiving response information from the image processor, wherein the response information is used to indicate whether a pixel point in the first pixel point set belongs to a second pixel point set.
- The method according to claim 1, wherein respectively connecting the pixel points in each of the second pixel point sets into one line segment in the contour of the target object comprises: chaining the pixel points in the second pixel point set into one line segment according to a preset condition, wherein one line segment comprises at least one stroke, and the preset condition is used to indicate at least one of the number of pixel points comprised in each stroke, the stroke length, and the turn angle between adjacent pixel points comprised in the stroke.
- The method according to claim 8, wherein displaying each of the line segments obtained by the connection in the predetermined rendering manner comprises: expanding each stroke into a stroke band; and drawing the stroke band with a brush stroke corresponding to the predetermined rendering manner, wherein the predetermined rendering manner comprises sketch, pen drawing, charcoal drawing, watercolor, cartoon, and ink-wash painting.
- An apparatus for rendering an object, comprising: an acquiring unit, configured to acquire a two-dimensional image obtained by performing image acquisition on a target object; a first identifying unit, configured to identify a first pixel point set among all pixel points of the two-dimensional image, wherein the pixel points in the first pixel point set are points on the contour of the target object; a second identifying unit, configured to identify, by calling an image processor, one or more second pixel point sets in the first pixel point set, wherein the pixel points in each of the second pixel point sets are used to indicate one line segment in the contour of the target object; and a rendering unit, configured to respectively connect the pixel points in each of the second pixel point sets into one line segment in the contour of the target object, and to display each of the line segments obtained by the connection in a predetermined rendering manner.
- The apparatus according to claim 10, wherein the first identifying unit is further configured to perform the following steps for each pixel point among all pixel points of the two-dimensional image, wherein each pixel point is recorded as a current pixel point while the following steps are performed: acquiring a depth change value of the current pixel point, wherein the depth change value is used to indicate the degree of depth change among multiple neighboring pixel points of the current pixel point; and determining the current pixel point to be a pixel point in the first pixel point set when the depth change value of the current pixel point is greater than or equal to a predetermined depth change threshold.
- The apparatus according to claim 11, wherein the first identifying unit comprises: a first acquiring module, configured to acquire a first depth change value obtained by a first filter performing first filtering processing on the multiple neighboring pixel points, wherein the first depth change value is used to represent the degree of depth change among the multiple neighboring pixel points in a first direction; a second acquiring module, configured to acquire a second depth change value obtained by a second filter performing second filtering processing on the multiple neighboring pixel points, wherein the second depth change value is used to represent the degree of depth change among the multiple neighboring pixel points in a second direction, the second direction being different from the first direction; and a determining module, configured to determine the depth change value of the current pixel point according to the first depth change value and the second depth change value.
- The apparatus according to claim 12, wherein the determining module is further configured to set the depth change value of the current pixel point to the sum of the square of the first depth change value and the square of the second depth change value.
- The apparatus according to claim 12, wherein the first acquiring module is further configured to acquire the first depth change value obtained by the first filter performing the first filtering processing according to a first formula, wherein the first formula is used to calculate the sum of first depth parameters of pixel points adjacent in the first direction among the multiple neighboring pixel points, and the first depth parameter is the product of the depth value of a pixel point adjacent in the first direction and a corresponding influence factor; and the second acquiring module is further configured to acquire the second depth change value obtained by the second filter performing the second filtering processing according to a second formula, wherein the second formula is used to calculate the sum of second depth parameters of pixel points adjacent in the second direction among the multiple neighboring pixel points, and the second depth parameter is the product of the depth value of a pixel point adjacent in the second direction and a corresponding influence factor.
- The apparatus according to claim 10, wherein the second identifying unit comprises: a sending module, configured to send a vertex query request to the image processor, wherein the vertex query request carries position information and depth information of the pixel points in the first pixel point set; and a receiving module, configured to receive response information from the image processor, wherein the response information is used to indicate whether a pixel point in the first pixel point set belongs to a second pixel point set.
- A storage medium, wherein the storage medium stores a computer program, and the computer program is configured to perform the method according to any one of claims 1 to 9 when run.
- An electronic apparatus, comprising a memory and a processor, wherein the memory stores a computer program, and the processor is configured to run the computer program to perform the method according to any one of claims 1 to 9.
Applications Claiming Priority (2)
| Application number | Priority date | Filing date | Title |
|---|---|---|---|
| CN201710204031.2 | 2017-03-30 | | |
| CN201710204031.2A (CN107123077B) (zh) | 2017-03-30 | 2017-03-30 | Object rendering method and apparatus |
Publications (1)
| Publication number | Publication date |
|---|---|
| WO2018177112A1 (zh) | 2018-10-04 |
Family ID: 59718230
Family Applications (1)
| Application number | Title | Priority date | Filing date |
|---|---|---|---|
| PCT/CN2018/078604 (WO2018177112A1) (zh) | Object rendering method and apparatus, storage medium, and electronic apparatus | 2017-03-30 | 2018-03-09 |
Country Status (2)
| Country | Link |
|---|---|
| CN (1) | CN107123077B (zh) |
| WO (1) | WO2018177112A1 (zh) |
Families Citing this family (5)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| CN107123077B (zh) | 2017-03-30 | 2019-01-08 | Tencent Technology (Shenzhen) Co., Ltd. | Object rendering method and apparatus |
| CN107978014B (zh) | 2017-12-21 | 2021-06-18 | 卓米私人有限公司 | Particle rendering method and apparatus, electronic device, and storage medium |
| CN111489411B (zh) | 2019-01-29 | 2023-06-20 | Beijing Baidu Netcom Science and Technology Co., Ltd. | Line drawing method and apparatus, image processor, graphics card, and vehicle |
| CN111210485B (zh) | 2020-01-06 | 2023-03-28 | Beijing ByteDance Network Technology Co., Ltd. | Image processing method and apparatus, readable medium, and electronic device |
| CN112233215B (zh) | 2020-10-15 | 2023-08-22 | NetEase (Hangzhou) Network Co., Ltd. | Contour rendering method and apparatus, device, and storage medium |
Family Cites Families (3)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| CN101587593B (zh) | 2009-06-19 | 2011-07-27 | Xi'an Jiaotong University | Method for sketch stylization based on real images |
| CN105096358A (zh) | 2015-08-05 | 2015-11-25 | Yunnan University | Line-enhanced pyrography art effect simulation method |
| CN106097429B (zh) | 2016-06-23 | 2017-11-28 | Tencent Technology (Shenzhen) Co., Ltd. | Image processing method and apparatus |
- 2017-03-30: CN application CN201710204031.2A filed (granted as patent CN107123077B, status: active)
- 2018-03-09: PCT application PCT/CN2018/078604 filed (published as WO2018177112A1, status: application filing)
Patent Citations (5)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| CN101038675A (zh) | 2006-03-16 | 2007-09-19 | Tencent Technology (Shenzhen) Co., Ltd. | Method and apparatus for implementing ink-wash style rendering |
| CN103366395A (zh) | 2013-07-06 | 2013-10-23 | Beihang University | GPU-accelerated non-photorealistic rendering method for volume data |
| CN103778655A (zh) | 2014-01-28 | 2014-05-07 | Xi'an University of Technology | Computational artistic stylization method for color natural images based on adaptive ink diffusion |
| CN105513111A (zh) | 2015-09-15 | 2016-04-20 | Zhejiang University | Sketch-style three-dimensional modeling method based on automatic image-contour fitting |
| CN107123077A (zh) | 2017-03-30 | 2017-09-01 | Tencent Technology (Shenzhen) Co., Ltd. | Object rendering method and apparatus |
Non-Patent Citations (1)
NORTHRUP, J. et al.: "Artistic Silhouettes: A Hybrid Approach", Proceedings of the First International Symposium on Non-Photorealistic Animation and Rendering, 2000, pp. 31-37, XP058342297.
Also Published As
| Publication number | Publication date |
|---|---|
| CN107123077A (zh) | 2017-09-01 |
| CN107123077B (zh) | 2019-01-08 |
Similar Documents
| Publication | Title |
|---|---|
| WO2018177112A1 (zh) | Object rendering method and apparatus, storage medium, and electronic apparatus |
| US11839820B2 (en) | Method and apparatus for generating game character model, processor, and terminal |
| CN115699114B (zh) | Method and apparatus for image augmentation for analysis |
| US20190287283A1 (en) | User-guided image completion with image completion neural networks |
| CN104008569B (zh) | 3D scene generation method based on depth video |
| WO2019024751A1 (zh) | Facial expression synthesis method and apparatus, electronic device, and storage medium |
| CN110211218B (zh) | Picture rendering method and apparatus, storage medium, and electronic apparatus |
| CN109949693B (zh) | Map drawing method and apparatus, computing device, and storage medium |
| CN109840881A (zh) | 3D special-effect image generation method, apparatus, and device |
| CN105447125A (zh) | Electronic device and makeup assistance method |
| CN107610239B (zh) | Virtual try-on method and apparatus for facial masks |
| CN105608699B (zh) | Image processing method and electronic device |
| CN110570507A (zh) | Image rendering method and apparatus |
| US10922852B2 (en) | Oil painting stroke simulation using neural network |
| CN109377552B (zh) | Image occlusion calculation method and apparatus, computing device, and storage medium |
| CN108198231A (zh) | Real-time drawing method for electric power GIS vector graphics, and storage medium |
| CN114723888B (zh) | Three-dimensional hair strand model generation method, apparatus, device, storage medium, and product |
| CN107203962B (zh) | Method for producing a pseudo-3D image from a 2D picture, and electronic device |
| TW201807667A (zh) | Method, apparatus, and system for loading network pictures |
| CN104952093A (zh) | Virtual hair-dyeing method and apparatus |
| US9704290B2 (en) | Deep image identifiers |
| CN111107264A (zh) | Image processing method and apparatus, storage medium, and terminal |
| CN110599576A (zh) | File rendering system and method, and electronic device |
| CN114063872A (zh) | Picture generation method and apparatus, storage medium, and electronic device |
| CN104156999A (zh) | Three-dimensional scene rendering method |
Legal Events
| Date | Code | Title | Description |
|---|---|---|---|
| | 121 | Ep: the epo has been informed by wipo that ep was designated in this application | Ref document number: 18774555; Country of ref document: EP; Kind code of ref document: A1 |
| | NENP | Non-entry into the national phase | Ref country code: DE |
| | 122 | Ep: pct application non-entry in european phase | Ref document number: 18774555; Country of ref document: EP; Kind code of ref document: A1 |