CN115797533A - Model edge tracing method, device, equipment and storage medium - Google Patents


Info

Publication number
CN115797533A
CN115797533A
Authority
CN
China
Prior art keywords
model
change rate
screen space
pixel
edge area
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202211406239.XA
Other languages
Chinese (zh)
Inventor
马庆涛
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Shanghai Longitudinal Tour Network Technology Co ltd
Original Assignee
Shanghai Longitudinal Tour Network Technology Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Shanghai Longitudinal Tour Network Technology Co ltd filed Critical Shanghai Longitudinal Tour Network Technology Co ltd
Priority to CN202211406239.XA priority Critical patent/CN115797533A/en
Publication of CN115797533A publication Critical patent/CN115797533A/en
Pending legal-status Critical Current

Landscapes

  • Processing Or Creating Images (AREA)

Abstract

The invention provides a model edge tracing method, apparatus, device, and storage medium. The method comprises: performing depth-channel rendering on the model to be traced to obtain a screen-space depth map; performing normal-channel rendering on the model to be traced to obtain a screen-space normal map; performing region label mapping on the model to be traced to obtain a region label map; determining a first edge region of the model to be traced based on the depth change rate of each pixel in the screen-space depth map; determining a second edge region based on the normal change rate of each pixel in the screen-space normal map; determining a third edge region based on the color change rate of each pixel in the region label map; and tracing the edges of the model based on the first, second, and third edge regions. The method improves the accuracy of model edge detection, so that the edge region of an object can be rendered in high quality (i.e. edge-traced) even under limited performance.

Description

Model delineation method, device, equipment and storage medium
Technical Field
The invention relates to the field of computer applications, and in particular to a model edge tracing method, apparatus, device, and storage medium.
Background
Existing model edge tracing methods are simple: edge regions may be detected incorrectly, or regions that need an outline may not be detected at all, so when complex models overlap, problems such as broken outlines and spurious interior outlines readily occur.
How to improve the accuracy of model edge detection, so that the edge region of an object can be rendered in high quality (i.e. edge-traced) under limited performance, is therefore a technical problem to be solved in the field.
Disclosure of Invention
To overcome the above shortcomings of the related art, the invention provides a model edge tracing method, apparatus, device, and storage medium that improve the accuracy of model edge detection, so that the edge region of an object can be rendered in high quality (i.e. edge-traced) under limited performance.
According to one aspect of the present invention, there is provided a model edge tracing method, comprising:
performing depth-channel rendering on the model to be traced to obtain a screen-space depth map;
performing normal-channel rendering on the model to be traced to obtain a screen-space normal map;
performing region label mapping on the model to be traced to obtain a region label map;
determining a first edge region of the model to be traced based on the depth change rate of each pixel in the screen-space depth map;
determining a second edge region of the model to be traced based on the normal change rate of each pixel in the screen-space normal map;
determining a third edge region of the model to be traced based on the color change rate of each pixel in the region label map;
and tracing the edges of the model to be traced based on the first edge region, the second edge region, and the third edge region.
In some embodiments of the present application, determining the first edge region of the model to be traced based on the depth change rate of each pixel in the screen-space depth map includes:
convolving the screen-space depth map with a Laplacian operator to obtain the depth change rate of each pixel in the screen-space depth map;
and adding to the first edge region the pixels whose depth change rate is greater than an upper depth-change-rate threshold or less than a lower depth-change-rate threshold.
In some embodiments of the present application, the Laplacian operator is smoothed beforehand.
In some embodiments of the present application, determining the second edge region of the model to be traced based on the normal change rate of each pixel in the screen-space normal map includes:
convolving the screen-space normal map with a Sobel operator to obtain the normal change rate of each pixel in the screen-space normal map;
and adding to the second edge region the pixels whose normal change rate is greater than an upper normal-change-rate threshold or less than a lower normal-change-rate threshold.
In some embodiments of the present application, performing region label mapping on the model to be traced includes:
marking different regions of the model to be traced with different colors to obtain the region label map.
In some embodiments of the application, determining the third edge region of the model to be traced based on the color change rate of each pixel in the region label map includes:
convolving the region label map with a Laplacian operator to obtain the color change rate of each pixel in the region label map;
and adding to the third edge region the pixels whose color change rate is greater than an upper color-change-rate threshold or less than a lower color-change-rate threshold.
In some embodiments of the present application, the color change rate is a color contrast change rate.
According to another aspect of the present application, there is also provided a model edge tracing apparatus, comprising:
a screen-space depth map acquisition module, configured to perform depth-channel rendering on the model to be traced to obtain a screen-space depth map;
a screen-space normal map acquisition module, configured to perform normal-channel rendering on the model to be traced to obtain a screen-space normal map;
a region label map acquisition module, configured to perform region label mapping on the model to be traced to obtain a region label map;
a first edge region determination module, configured to determine a first edge region of the model to be traced based on the depth change rate of each pixel in the screen-space depth map;
a second edge region determination module, configured to determine a second edge region of the model to be traced based on the normal change rate of each pixel in the screen-space normal map;
a third edge region determination module, configured to determine a third edge region of the model to be traced based on the color change rate of each pixel in the region label map;
and an edge tracing module, configured to trace the edges of the model to be traced based on the first edge region, the second edge region, and the third edge region.
According to still another aspect of the present invention, there is also provided an electronic apparatus, including: a processor; a storage medium having stored thereon a computer program which, when executed by the processor, performs the steps as described above.
According to yet another aspect of the present invention, there is also provided a storage medium having stored thereon a computer program which, when executed by a processor, performs the steps as described above.
Compared with the prior art, the invention has the advantages that:
according to the method and the device, on the basis of the depth change rate and the normal change rate, the accuracy and the controllability of the detection of the edge area are enhanced by combining the area mark mapping, so that the effect of rendering the edge area of an object in a high-quality manner, namely edge tracing, can be achieved under the condition of limited performance.
Drawings
The above and other features and advantages of the present invention will become more apparent by describing in detail exemplary embodiments thereof with reference to the attached drawings.
FIG. 1 shows a flow diagram of a model edge tracing method according to an embodiment of the invention.
Fig. 2 shows a flowchart of determining a first edge region of the model to be traced based on the depth change rate of each pixel in the screen-space depth map, according to an embodiment of the present invention.
Fig. 3 shows a flowchart of determining a second edge region of the model to be traced based on the normal change rate of each pixel in the screen-space normal map, according to an embodiment of the present invention.
Fig. 4 shows a flowchart of determining a third edge region of the model to be traced based on the color change rate of each pixel in the region label map, according to an embodiment of the present invention.
Fig. 5 shows a block diagram of a model edge tracing apparatus according to an embodiment of the present invention.
Fig. 6 schematically illustrates a computer-readable storage medium in an exemplary embodiment of the invention.
Fig. 7 schematically illustrates an electronic device in an exemplary embodiment of the invention.
Detailed Description
Example embodiments will now be described more fully with reference to the accompanying drawings. Example embodiments may, however, be embodied in many different forms and should not be construed as limited to the examples set forth herein; rather, these embodiments are provided so that this disclosure will be thorough and complete, and will fully convey the concept of example embodiments to those skilled in the art. The described features, structures, or characteristics may be combined in any suitable manner in one or more embodiments.
Furthermore, the drawings are merely schematic illustrations of the invention and are not necessarily drawn to scale. The same reference numerals in the drawings denote the same or similar parts, and a repetitive description thereof will be omitted. Some of the block diagrams shown in the figures are functional entities and do not necessarily correspond to physically or logically separate entities. These functional entities may be implemented in the form of software, or in one or more hardware modules or integrated circuits, or in different networks and/or processor devices and/or microcontroller devices.
The flow charts shown in the drawings are merely illustrative and do not necessarily include all of the steps. For example, some steps may be decomposed, some steps may be combined or partially combined, and thus, the actual execution order may be changed according to the actual situation.
Referring now to Fig. 1, which shows a flowchart of a model edge tracing method according to an embodiment of the present invention, the method comprises the following steps:
Step S110: perform depth-channel rendering on the model to be traced to obtain a screen-space depth map.
Step S120: perform normal-channel rendering on the model to be traced to obtain a screen-space normal map.
Step S130: perform region label mapping on the model to be traced to obtain a region label map.
Specifically, step S130 may be implemented by marking different regions of the model to be traced with different colors, yielding the region label map.
Specifically, before steps S110 to S130 are performed, a step of widening (outward-expanding) the model to be traced may also be included.
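For illustration only, the output of step S130 might be produced as sketched below: given a per-pixel region-ID buffer (which a real renderer would generate by drawing each labeled part of the model with a flat, unlit color), a label map is built from a region-to-color table. The table contents and function name are hypothetical, not part of the patent:

```python
import numpy as np

# Hypothetical region-ID -> label color table; the colors are arbitrary,
# but adjacent regions should differ strongly so the later convolution
# responds at their boundary.
REGION_COLORS = {
    0: (0.0, 0.0, 0.0),   # background
    1: (1.0, 0.0, 0.0),   # e.g. one part of the model
    2: (0.0, 1.0, 0.0),   # e.g. another part
}

def region_label_map(region_ids):
    """region_ids: H x W integer buffer; returns an H x W x 3 label map."""
    h, w = region_ids.shape
    out = np.zeros((h, w, 3), dtype=np.float32)
    for rid, color in REGION_COLORS.items():
        out[region_ids == rid] = color
    return out
```
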
Step S140: determine a first edge region of the model to be traced based on the depth change rate of each pixel in the screen-space depth map.
Specifically, referring to Fig. 2, which shows a flowchart of this determination according to an embodiment of the present invention, determining the first edge region of the model to be traced based on the depth change rate of each pixel in the screen-space depth map comprises:
Step S141: convolve the screen-space depth map with a Laplacian operator to obtain the depth change rate of each pixel in the screen-space depth map.
The Laplacian is a second-order differential operator in n-dimensional Euclidean space.
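For reference, the discrete kernel given below is the standard five-point finite-difference approximation of the (negated) continuous two-dimensional Laplacian:

```latex
\nabla^2 f = \frac{\partial^2 f}{\partial x^2} + \frac{\partial^2 f}{\partial y^2}
\approx f(x{+}1,y) + f(x{-}1,y) + f(x,y{+}1) + f(x,y{-}1) - 4\,f(x,y)
```

so the kernel computes an approximation of $-\nabla^2 f$; the sign is immaterial here, because both an upper and a lower threshold are applied to the response.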
The Laplacian kernel may be:
0 -1 0
-1 4 -1
0 -1 0
The Laplacian kernel may first be smoothed, for example with a Gaussian filter of a given standard deviation (yielding a Laplacian-of-Gaussian).
Thus, for each pixel of the screen-space depth map, the pixel itself and its four neighbors above, below, left, and right (five pixels in total) are extracted, multiplied by the corresponding Laplacian kernel weights (applied to the pixel values, such as gray values), and summed, giving the depth change rate of that pixel.
Step S142: add to the first edge region the pixels whose depth change rate is greater than the upper depth-change-rate threshold or less than the lower depth-change-rate threshold.
Specifically, the upper and lower depth-change-rate thresholds may be set as needed.
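As an illustrative (non-limiting) sketch of steps S141 and S142, the following Python/NumPy code convolves a depth map with the 3x3 Laplacian kernel given above and thresholds the response. The function names and threshold values are examples only; the patent leaves the thresholds to be set as needed:

```python
import numpy as np

# 3x3 Laplacian kernel from the description.
LAPLACIAN = np.array([[0, -1, 0],
                      [-1, 4, -1],
                      [0, -1, 0]], dtype=np.float32)

def convolve3x3(image, kernel):
    """Correlate a 2-D map with a 3x3 kernel, replicating edge pixels."""
    padded = np.pad(image, 1, mode="edge")
    out = np.zeros_like(image, dtype=np.float32)
    h, w = image.shape
    for dy in range(3):
        for dx in range(3):
            out += kernel[dy, dx] * padded[dy:dy + h, dx:dx + w]
    return out

def depth_edge_mask(depth_map, lower=-0.05, upper=0.05):
    """Pixels whose depth change rate falls outside [lower, upper]
    form the first edge region (thresholds are illustrative)."""
    rate = convolve3x3(depth_map.astype(np.float32), LAPLACIAN)
    return (rate > upper) | (rate < lower)
```

For example, a depth map containing a single vertical depth step yields a mask that is true only in the two pixel columns adjoining the step.
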
Step S150: determine a second edge region of the model to be traced based on the normal change rate of each pixel in the screen-space normal map.
Specifically, referring to Fig. 3, which shows a flowchart of this determination according to an embodiment of the present invention, determining the second edge region of the model to be traced based on the normal change rate of each pixel in the screen-space normal map comprises:
Step S151: convolve the screen-space normal map with a Sobel operator to obtain the normal change rate of each pixel in the screen-space normal map.
Specifically, the Sobel operator is a discrete difference operator mainly used to obtain the first-order gradient of a digital image. It treats the influence of neighboring pixels on the current pixel as non-uniform: pixels at different distances receive different weights and therefore affect the result differently. In general, the greater the distance, the smaller the influence.
Wherein, in the Sobel operator:
the transverse convolution kernel is:
-1 0 1
-2 0 2
-1 0 1
the vertical convolution kernel is:
1 2 1
0 0 0
-1 -2 -1
Convolving the image with the horizontal and vertical Sobel kernels yields approximate horizontal and vertical brightness differences, which together give the normal change rate of each pixel.
Step S152: add to the second edge region the pixels whose normal change rate is greater than the upper normal-change-rate threshold or less than the lower normal-change-rate threshold.
Specifically, the upper and lower normal-change-rate thresholds may be set as needed.
Specifically, since the second edge region detected via the normal change rate mainly serves to catch object edges whose depth changes little (and which step S140 therefore misses), step S150 may be performed only on the portion of the screen-space normal map outside the first edge region, reducing the amount of data processed while preserving the accuracy and precision of edge detection.
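Steps S151 and S152 can likewise be sketched as follows. This is illustrative only: the patent does not fix how the three normal components are combined, so summing the per-channel Sobel gradient magnitudes, and the threshold value, are assumptions of this sketch:

```python
import numpy as np

SOBEL_X = np.array([[-1, 0, 1],
                    [-2, 0, 2],
                    [-1, 0, 1]], dtype=np.float32)
SOBEL_Y = np.array([[1, 2, 1],
                    [0, 0, 0],
                    [-1, -2, -1]], dtype=np.float32)

def convolve3x3(image, kernel):
    """Correlate a 2-D map with a 3x3 kernel, replicating edge pixels."""
    padded = np.pad(image, 1, mode="edge")
    out = np.zeros_like(image, dtype=np.float32)
    h, w = image.shape
    for dy in range(3):
        for dx in range(3):
            out += kernel[dy, dx] * padded[dy:dy + h, dx:dx + w]
    return out

def normal_edge_mask(normal_map, upper=1.0):
    """normal_map: H x W x 3 screen-space normals. The per-pixel change
    rate is taken here as the Sobel gradient magnitude summed over the
    x/y/z channels (an assumption); pixels above the threshold form the
    second edge region."""
    rate = np.zeros(normal_map.shape[:2], dtype=np.float32)
    for c in range(3):
        gx = convolve3x3(normal_map[..., c], SOBEL_X)
        gy = convolve3x3(normal_map[..., c], SOBEL_Y)
        rate += np.hypot(gx, gy)
    return rate > upper
```

A normal map whose left half faces the camera and whose right half faces sideways (a crease with little depth change) produces a mask only along the crease, which is exactly the case step S150 exists to catch.
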
Step S160: determine a third edge region of the model to be traced based on the color change rate of each pixel in the region label map.
Specifically, referring to Fig. 4, which shows a flowchart of this determination according to an embodiment of the present invention, determining the third edge region of the model to be traced based on the color change rate of each pixel in the region label map comprises:
Step S161: convolve the region label map with a Laplacian operator to obtain the color change rate of each pixel in the region label map.
Specifically, the color change rate may be a color contrast change rate.
Specifically, the region label map may be labeled manually; the present application is not limited in this respect, and other, automatic labeling methods also fall within its scope.
Specifically, because the region label map is drawn in advance and bound to the model, it is largely unaffected by object motion and scene changes, which improves edge tracing accuracy.
The Laplacian kernel may be smoothed, for example with a Gaussian filter of a given standard deviation.
Thus, for each pixel of the region label map, the pixel itself and its four neighbors above, below, left, and right (five pixels in total) are extracted, multiplied by the corresponding Laplacian kernel weights (applied to the pixel values, such as gray values), and summed, giving the color change rate of that pixel.
Further, the user may adjust the color contrast between different region labels to tune the accuracy of edge detection.
Step S162: add to the third edge region the pixels whose color change rate is greater than the upper color-change-rate threshold or less than the lower color-change-rate threshold.
Specifically, step S160 guards against cases in which the depth and normal change rates of some edge regions never reach the detection thresholds, which would otherwise degrade edge tracing accuracy.
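Steps S161 and S162 reuse the Laplacian of step S141, applied per color channel of the region label map. An illustrative sketch (the threshold value and channel-summed absolute response are assumptions of this sketch):

```python
import numpy as np

# Same 3x3 Laplacian kernel as in step S141.
LAPLACIAN = np.array([[0, -1, 0],
                      [-1, 4, -1],
                      [0, -1, 0]], dtype=np.float32)

def convolve3x3(image, kernel):
    """Correlate a 2-D map with a 3x3 kernel, replicating edge pixels."""
    padded = np.pad(image, 1, mode="edge")
    out = np.zeros_like(image, dtype=np.float32)
    h, w = image.shape
    for dy in range(3):
        for dx in range(3):
            out += kernel[dy, dx] * padded[dy:dy + h, dx:dx + w]
    return out

def label_edge_mask(label_map, upper=0.1):
    """label_map: H x W x 3 region label colors. The color change rate is
    taken as the absolute Laplacian response summed over channels; the
    interior of a uniformly colored region responds with zero, so only
    region boundaries survive the threshold."""
    rate = np.zeros(label_map.shape[:2], dtype=np.float32)
    for c in range(label_map.shape[2]):
        rate += np.abs(convolve3x3(label_map[..., c], LAPLACIAN))
    return rate > upper
```
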
Step S170: trace the edges of the model to be traced based on the first edge region, the second edge region, and the third edge region.
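Step S170 can be sketched as a union of the three edge masks followed by writing an outline color into the rendered image. The compositing details (flat outline color, one-pixel width) are illustrative assumptions; a production renderer would typically do this in a post-process shader:

```python
import numpy as np

def outline_composite(base_image, edge1, edge2, edge3,
                      outline_color=(0.0, 0.0, 0.0)):
    """base_image: H x W x 3 rendered frame; edge1..edge3: H x W boolean
    masks from steps S140, S150, and S160. Pixels in the union of the
    three edge regions are replaced by the outline color."""
    mask = edge1 | edge2 | edge3
    out = base_image.copy()
    out[mask] = outline_color
    return out
```
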
In some embodiments of the present application, the model edge tracing method may be performed in real time as the model moves and the scene changes. In some implementations, the method is executed only for the models (and parts of models) visible on the device screen, reducing the system's computational load; in other implementations, other models within a set distance of the on-screen models may also be fully traced, to avoid outlines lagging behind fast scene motion. Further, in scenes with a player-controlled (master) object, such as games, the scene-change speed may be estimated from a prediction of that object's behavior. For example, in a non-combat scene the scene usually changes slowly, and the method can be applied only to on-screen models and model parts; in a combat scene the scene usually changes quickly, and other models within a set distance of the master object can be fully traced in advance.
In some implementations, since more than one model may appear at the same time and the models have a front-to-back order in screen depth space, the on-screen positions and mutual occlusion relationships of the models can be determined from the view direction; when a model is occluded, the occluded portion need not be traced, again reducing the system's computational load.
In the model edge tracing method provided by the invention, a depth-based edge detection algorithm, an image-processing boundary-search algorithm, and a normal-change boundary-search algorithm are combined with an additional, user-definable way of locating object edges via the region label map. Together they form a high-quality, high-precision hybrid rendering method that can find object edges quickly and accurately and render cartoon-style outlines under limited computing power.
The above are merely a plurality of specific implementations of the model delineation method of the present invention, and each implementation may be implemented independently or in combination, and the present invention is not limited thereto. Furthermore, the flow charts of the present invention are merely schematic, the execution sequence between the steps is not limited thereto, and the steps can be split, combined, exchanged sequentially, or executed synchronously or asynchronously in other ways within the protection scope of the present invention.
Referring now to Fig. 5, which shows a block diagram of a model edge tracing apparatus according to an embodiment of the present invention: the model edge tracing apparatus 200 includes a screen-space depth map acquisition module 210, a screen-space normal map acquisition module 220, a region label map acquisition module 230, a first edge region determination module 240, a second edge region determination module 250, a third edge region determination module 260, and an edge tracing module 270.
The screen-space depth map acquisition module 210 is configured to perform depth-channel rendering on the model to be traced to obtain a screen-space depth map;
the screen-space normal map acquisition module 220 is configured to perform normal-channel rendering on the model to be traced to obtain a screen-space normal map;
the region label map acquisition module 230 is configured to perform region label mapping on the model to be traced to obtain a region label map;
the first edge region determination module 240 is configured to determine a first edge region of the model to be traced based on the depth change rate of each pixel in the screen-space depth map;
the second edge region determination module 250 is configured to determine a second edge region of the model to be traced based on the normal change rate of each pixel in the screen-space normal map;
the third edge region determination module 260 is configured to determine a third edge region of the model to be traced based on the color change rate of each pixel in the region label map;
the edge tracing module 270 is configured to trace the edges of the model to be traced based on the first edge region, the second edge region, and the third edge region.
In the model edge tracing apparatus of this exemplary embodiment, on the basis of the depth change rate and the normal change rate, the region label map is additionally used to enhance the accuracy and controllability of edge region detection, so that the edge region of an object can be rendered in high quality (i.e. edge-traced) under limited performance.
Fig. 5 is merely a schematic diagram of the model edge tracing apparatus 200 provided by the invention; splitting, combining, or adding modules remains within the scope of the invention without departing from its spirit. The model edge tracing apparatus 200 provided by the invention may be implemented in software, hardware, firmware, a plug-in, or any combination thereof, and the invention is not limited in this respect.
In an exemplary embodiment of the present invention, there is also provided a computer-readable storage medium on which a computer program is stored; when executed by, for example, a processor, the program may implement the steps of the model edge tracing method of any of the above embodiments. In some possible embodiments, aspects of the invention may also be implemented as a program product comprising program code which, when run on a terminal device, causes the terminal device to carry out the steps described in the model edge tracing method section of this specification according to the various exemplary embodiments of the invention.
Referring to fig. 6, a program product 700 for implementing the above method according to an embodiment of the present invention is described, which may employ a portable compact disc read only memory (CD-ROM) and include program code, and may be run on a terminal device, such as a personal computer. However, the program product of the present invention is not limited in this respect, and in this document, a readable storage medium may be any tangible medium that can contain, or store a program for use by or in connection with an instruction execution system, apparatus, or device.
The program product may employ any combination of one or more readable media. The readable medium may be a readable signal medium or a readable storage medium. The readable storage medium may be, for example, but not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any combination of the foregoing. More specific examples (a non-exhaustive list) of the readable storage medium include: an electrical connection having one or more wires, a portable diskette, a hard disk, a Random Access Memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing.
A readable signal medium may include a propagated data signal with readable program code embodied therein, for example in baseband or as part of a carrier wave. Such a propagated signal may take any of a variety of forms, including but not limited to electromagnetic or optical forms, or any suitable combination thereof. A readable signal medium may also be any readable medium, other than a readable storage medium, that can communicate, propagate, or transport a program for use by or in connection with an instruction execution system, apparatus, or device. Program code embodied on a readable medium may be transmitted using any appropriate medium, including but not limited to wireless, wireline, optical fiber cable, RF, etc., or any suitable combination of the foregoing.
Program code for carrying out operations of the present invention may be written in any combination of one or more programming languages, including object-oriented languages such as Java and C++ as well as conventional procedural languages such as the "C" language or similar. The program code may execute entirely on the user's computing device, partly on the user's device, as a stand-alone software package, partly on the user's computing device and partly on a remote computing device, or entirely on the remote computing device or server. Where a remote computing device is involved, it may be connected to the user's computing device through any kind of network, including a local area network (LAN) or wide area network (WAN), or may connect to an external computing device (for example, over the Internet through an Internet service provider).
In an exemplary embodiment of the invention, there is also provided an electronic device that may include a processor and a memory for storing executable instructions of the processor. Wherein the processor is configured to perform the steps of the model delineation method of any of the above embodiments via execution of the executable instructions.
As will be appreciated by one skilled in the art, aspects of the present invention may be embodied as a system, method or program product. Thus, various aspects of the invention may be embodied in the form of: an entirely hardware embodiment, an entirely software embodiment (including firmware, microcode, etc.), or an embodiment combining hardware and software aspects that may all generally be referred to herein as a "circuit," module "or" system.
An electronic device 500 according to this embodiment of the invention is described below with reference to fig. 7. The electronic device 500 shown in fig. 7 is only an example, and should not bring any limitation to the functions and the scope of use of the embodiments of the present invention.
As shown in fig. 7, the electronic device 500 is embodied in the form of a general purpose computing device. The components of the electronic device 500 may include, but are not limited to: at least one processing unit 510, at least one memory unit 520, a bus 530 that couples various system components including the memory unit 520 and the processing unit 510, a display unit 540, and the like.
Wherein the storage unit stores program code executable by the processing unit 510 to cause the processing unit 510 to perform steps according to various exemplary embodiments of the present invention described in the model delineation method section above in this specification. For example, the processing unit 510 may perform the steps shown in fig. 1.
The memory unit 520 may include a readable medium in the form of a volatile memory unit, such as a random access memory unit (RAM) 5201 and/or a cache memory unit 5202, and may further include a read only memory unit (ROM) 5203.
The memory unit 520 may also include a program/utility 5204 having a set (at least one) of program modules 5205, such program modules 5205 including, but not limited to: an operating system, one or more application programs, other program modules, and program data, each of which or some combination thereof may comprise an implementation of a network environment.
Bus 530 may be a local bus representing one or more of several types of bus structures, including a memory unit bus or memory unit controller, a peripheral bus, an accelerated graphics port, a processing unit, or any of a variety of bus architectures.
The electronic device 500 may also communicate with one or more external devices 600 (e.g., a keyboard, pointing device, or Bluetooth device), with one or more devices that enable a user to interact with the electronic device 500, and/or with any device (e.g., a router or modem) that enables the electronic device 500 to communicate with one or more other computing devices. Such communication may occur via input/output (I/O) interfaces 550. Also, the electronic device 500 may communicate with one or more networks (e.g., a local area network (LAN), a wide area network (WAN), and/or a public network such as the Internet) via the network adapter 560. The network adapter 560 may communicate with other modules of the electronic device 500 via the bus 530. It should be appreciated that although not shown in the figures, other hardware and/or software modules may be used in conjunction with the electronic device 500, including but not limited to: microcode, device drivers, redundant processing units, external disk drive arrays, RAID systems, tape drives, and data backup storage systems, to name a few.
Through the above description of the embodiments, those skilled in the art will readily understand that the exemplary embodiments described herein may be implemented by software, or by software in combination with necessary hardware. Therefore, the technical solution according to the embodiment of the present invention can be embodied in the form of a software product, which can be stored in a non-volatile storage medium (which can be a CD-ROM, a usb disk, a removable hard disk, etc.) or on a network, and includes several instructions to enable a computing device (which can be a personal computer, a server, or a network device, etc.) to execute the above-mentioned model delineation method according to the embodiment of the present invention.
Compared with the prior art, the invention has the advantages that:
on the basis of the depth change rate and the normal change rate, the region label map is further combined to enhance the accuracy and controllability of edge area detection, so that the effect of rendering the edge area of an object in high quality, namely edge tracing, can be achieved under the condition of limited performance.
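As a concrete illustration of the combination described above, each of the three per-pixel change rates can be thresholded into an edge mask and the three masks merged by union. The NumPy sketch below is illustrative only: the kernel choice, the threshold values, and the treatment of the normal map as a single scalar channel are assumptions, not values from the patent (the claims use a Sobel operator for the normal channel).

```python
import numpy as np

LAPLACIAN = np.array([[0, 1, 0], [1, -4, 1], [0, 1, 0]], dtype=float)

def conv3x3(img, kernel):
    # Convolve a single-channel image with a 3x3 kernel (edge-replicated padding).
    p = np.pad(img, 1, mode="edge")
    out = np.zeros(img.shape, dtype=float)
    for dy in range(3):
        for dx in range(3):
            out += kernel[dy, dx] * p[dy:dy + img.shape[0], dx:dx + img.shape[1]]
    return out

def edge_area(rate, lo, hi):
    # A pixel joins an edge area when its change rate leaves the [lo, hi] band.
    return (rate > hi) | (rate < lo)

def outline_mask(depth, normal_mag, labels, lo=-0.5, hi=0.5):
    m1 = edge_area(conv3x3(depth, LAPLACIAN), lo, hi)       # first edge area
    m2 = edge_area(conv3x3(normal_mag, LAPLACIAN), lo, hi)  # second edge area
    m3 = edge_area(conv3x3(labels, LAPLACIAN), lo, hi)      # third edge area
    # Stroke wherever any of the three edge areas marks a pixel.
    return m1 | m2 | m3
```

A depth discontinuity alone is then enough to mark a pixel for stroking, even where the normal map and region label map are flat.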
Other embodiments of the invention will be apparent to those skilled in the art from consideration of the specification and practice of the invention disclosed herein. This application is intended to cover any variations, uses, or adaptations of the invention following, in general, the principles of the invention and including such departures from the present disclosure as come within known or customary practice within the art to which the invention pertains. It is intended that the specification and examples be considered as exemplary only, with a true scope and spirit of the invention being indicated by the following claims.

Claims (10)

1. A model delineation method, comprising:
performing depth channel rendering on the model to be traced to obtain a screen space depth map;
performing normal channel rendering on the model to be traced to obtain a screen space normal map;
performing region label mapping on the model to be traced to obtain a region label map;
determining a first edge area of the model to be traced based on the depth change rate of each pixel in the screen space depth map;
determining a second edge area of the model to be traced based on the normal change rate of each pixel in the screen space normal map;
determining a third edge area of the model to be traced based on the color change rate of each pixel of the region label map;
and performing edge tracing on the model to be traced based on the first edge area, the second edge area and the third edge area.
2. The model delineation method of claim 1, wherein said determining a first edge area of the model to be traced based on the depth change rate of each pixel in the screen space depth map comprises:
convolving the screen space depth map with a Laplacian operator to obtain the depth change rate of each pixel in the screen space depth map;
and adding the pixels whose depth change rate is greater than the depth change rate upper threshold or less than the depth change rate lower threshold to the first edge area.
3. The model delineation method of claim 2, wherein said Laplacian operator is subjected to a smoothing process.
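Claims 2 and 3 can be read as a Laplacian-of-Gaussian style filter: smooth the depth map, then take its Laplacian as the depth change rate. The sketch below is one possible reading; the kernels and the threshold values are illustrative assumptions, not figures from the patent.

```python
import numpy as np

LAPLACIAN = np.array([[0, 1, 0], [1, -4, 1], [0, 1, 0]], dtype=float)
GAUSSIAN = np.array([[1, 2, 1], [2, 4, 2], [1, 2, 1]], dtype=float) / 16.0

def conv3x3(img, kernel):
    # 3x3 convolution with edge-replicated padding.
    p = np.pad(img, 1, mode="edge")
    out = np.zeros(img.shape, dtype=float)
    for dy in range(3):
        for dx in range(3):
            out += kernel[dy, dx] * p[dy:dy + img.shape[0], dx:dx + img.shape[1]]
    return out

def first_edge_area(depth, lo=-0.2, hi=0.2):
    # Smooth first (claim 3), then take the Laplacian as the depth change
    # rate (claim 2); pixels whose rate leaves [lo, hi] join the first edge area.
    rate = conv3x3(conv3x3(depth, GAUSSIAN), LAPLACIAN)
    return (rate > hi) | (rate < lo)
```

Smoothing before differentiating keeps single-pixel depth noise from being mistaken for a silhouette.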
4. The model delineation method of claim 1, wherein said determining a second edge area of the model to be traced based on the normal change rate of each pixel in the screen space normal map comprises:
convolving the screen space normal map with a Sobel operator to obtain the normal change rate of each pixel in the screen space normal map;
and adding the pixels whose normal change rate is greater than the normal change rate upper threshold or less than the normal change rate lower threshold to the second edge area.
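One possible reading of claim 4, with the per-pixel normal change rate taken as the Sobel gradient magnitude accumulated over the three normal channels. The kernels, thresholds, and magnitude definition are illustrative assumptions; since a magnitude is non-negative, the claim's lower-bound test is shown with a disabled default here.

```python
import numpy as np

SOBEL_X = np.array([[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]], dtype=float)
SOBEL_Y = SOBEL_X.T

def conv3x3(img, kernel):
    # 3x3 convolution with edge-replicated padding.
    p = np.pad(img, 1, mode="edge")
    out = np.zeros(img.shape, dtype=float)
    for dy in range(3):
        for dx in range(3):
            out += kernel[dy, dx] * p[dy:dy + img.shape[0], dx:dx + img.shape[1]]
    return out

def second_edge_area(normals, lo=-1.0, hi=2.0):
    # normals: H x W x 3 screen-space normal map.
    gx = sum(conv3x3(normals[..., c], SOBEL_X) for c in range(3))
    gy = sum(conv3x3(normals[..., c], SOBEL_Y) for c in range(3))
    rate = np.hypot(gx, gy)  # normal change rate per pixel
    # Claim 4: a rate outside the (lo, hi) band marks an edge pixel; lo < 0
    # disables the lower bound here because the magnitude cannot be negative.
    return (rate > hi) | (rate < lo)
```

This channel catches sharp creases (normal discontinuities) that the depth test misses, such as the edge of a cube facing the camera.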
5. The model delineation method of claim 1, wherein said performing region label mapping on the model to be traced comprises:
marking different regions of the model to be traced with different colors to obtain the region label map.
6. The model delineation method of claim 5, wherein said determining a third edge area of the model to be traced based on the color change rate of each pixel of the region label map comprises:
convolving the region label map with a Laplacian operator to obtain the color change rate of each pixel in the region label map;
and adding the pixels whose color change rate is greater than the color change rate upper threshold or less than the color change rate lower threshold to the third edge area.
7. The model delineation method of claim 6, wherein said color change rate is a color contrast change rate.
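Claims 5 through 7 rely on flat, per-region colors: inside a region the Laplacian response is zero, so the color change rate is non-zero only where two regions meet. A minimal sketch under assumed palette and threshold values, with the per-channel maximum standing in for the "color contrast change rate" of claim 7:

```python
import numpy as np

LAPLACIAN = np.array([[0, 1, 0], [1, -4, 1], [0, 1, 0]], dtype=float)

def conv3x3(img, kernel):
    # 3x3 convolution with edge-replicated padding.
    p = np.pad(img, 1, mode="edge")
    out = np.zeros(img.shape, dtype=float)
    for dy in range(3):
        for dx in range(3):
            out += kernel[dy, dx] * p[dy:dy + img.shape[0], dx:dx + img.shape[1]]
    return out

def third_edge_area(region_ids, palette, hi=0.1):
    # Claim 5: paint each region of the model a distinct flat color.
    label_map = palette[region_ids]  # H x W x 3 region label map
    # Claim 6: the color change rate is the Laplacian response per channel;
    # the max over channels is an illustrative stand-in for claim 7's
    # color contrast change rate.
    rate = np.max([np.abs(conv3x3(label_map[..., c], LAPLACIAN))
                   for c in range(3)], axis=0)
    return rate > hi
```

Because the artist assigns the region colors, this channel gives explicit control over where strokes appear, independent of depth or normal variation.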
8. A model delineation apparatus, comprising:
a screen space depth map acquisition module, configured to perform depth channel rendering on the model to be traced to obtain a screen space depth map;
a screen space normal map acquisition module, configured to perform normal channel rendering on the model to be traced to obtain a screen space normal map;
a region label map acquisition module, configured to perform region label mapping on the model to be traced to obtain a region label map;
a first edge area determination module, configured to determine a first edge area of the model to be traced based on the depth change rate of each pixel in the screen space depth map;
a second edge area determination module, configured to determine a second edge area of the model to be traced based on the normal change rate of each pixel in the screen space normal map;
a third edge area determination module, configured to determine a third edge area of the model to be traced based on the color change rate of each pixel of the region label map;
and an edge tracing module, configured to perform edge tracing on the model to be traced based on the first edge area, the second edge area and the third edge area.
9. An electronic device, characterized in that the electronic device comprises:
a processor;
a memory having stored thereon a computer program that, when executed by the processor, performs:
a model delineation method as claimed in any one of claims 1 to 7.
10. A storage medium having a computer program stored thereon, the computer program when executed by a processor performing:
a model delineation method as claimed in any one of claims 1 to 7.
CN202211406239.XA 2022-11-10 2022-11-10 Model edge tracing method, device, equipment and storage medium Pending CN115797533A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202211406239.XA CN115797533A (en) 2022-11-10 2022-11-10 Model edge tracing method, device, equipment and storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202211406239.XA CN115797533A (en) 2022-11-10 2022-11-10 Model edge tracing method, device, equipment and storage medium

Publications (1)

Publication Number Publication Date
CN115797533A (en) 2023-03-14

Family

ID=85436689

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202211406239.XA Pending CN115797533A (en) 2022-11-10 2022-11-10 Model edge tracing method, device, equipment and storage medium

Country Status (1)

Country Link
CN (1) CN115797533A (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN117435110A (en) * 2023-10-11 2024-01-23 书行科技(北京)有限公司 Picture processing method and device, electronic equipment and storage medium

Similar Documents

Publication Publication Date Title
US11880977B2 (en) Interactive image matting using neural networks
US9607391B2 (en) Image object segmentation using examples
US8873835B2 (en) Methods and apparatus for correcting disparity maps using statistical analysis on local neighborhoods
CN108830780B (en) Image processing method and device, electronic device and storage medium
CN110189336B (en) Image generation method, system, server and storage medium
CN107895377B (en) Foreground target extraction method, device, equipment and storage medium
EP2866196A1 (en) An apparatus, a method and a computer program for image segmentation
US20190057532A1 (en) Realistic augmentation of images and videos with graphics
CN106997613B (en) 3D model generation from 2D images
CN110060205B (en) Image processing method and device, storage medium and electronic equipment
CN115063618B (en) Defect positioning method, system, equipment and medium based on template matching
JP2022168167A (en) Image processing method, device, electronic apparatus, and storage medium
CN115797533A (en) Model edge tracing method, device, equipment and storage medium
CN111724396A (en) Image segmentation method and device, computer-readable storage medium and electronic device
CN103837135B (en) Workpiece inspection method and system thereof
CN112598687B (en) Image segmentation method and device, storage medium and electronic equipment
CN107833185B (en) Image defogging method and device, storage medium and electronic equipment
CN110874170A (en) Image area correction method, image segmentation method and device
CN114627438A (en) Target detection model generation method, target detection method, device and medium
US20230048643A1 (en) High-Precision Map Construction Method, Apparatus and Electronic Device
CN116091481A (en) Spike counting method, device, equipment and storage medium
CN115359008A (en) Display interface testing method and device, storage medium and electronic equipment
EP4207745A1 (en) Method for embedding image in video, and method and apparatus for acquiring planar prediction model
US11392806B2 (en) Differentiable rasterizer for vector font generation and editing
CN111583376B (en) Method and device for eliminating black edge in illumination map, storage medium and electronic equipment

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination