CN116363675A - Sensitive word replacement method and device for three-dimensional model, electronic equipment and storage medium - Google Patents
- Publication number
- CN116363675A CN116363675A CN202310166272.8A CN202310166272A CN116363675A CN 116363675 A CN116363675 A CN 116363675A CN 202310166272 A CN202310166272 A CN 202310166272A CN 116363675 A CN116363675 A CN 116363675A
- Authority
- CN
- China
- Prior art keywords
- plane
- model
- dimensional model
- sensitive word
- initial
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V30/00—Character recognition; Recognising digital ink; Document-oriented image-based pattern recognition
- G06V30/40—Document-oriented image-based pattern recognition
- G06V30/41—Analysis of document content
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F21/00—Security arrangements for protecting computers, components thereof, programs or data against unauthorised activity
- G06F21/60—Protecting data
- G06F21/62—Protecting access to data via a platform, e.g. using keys or access control rules
- G06F21/6218—Protecting access to data via a platform, e.g. using keys or access control rules to a system of files or objects, e.g. local or distributed file system or database
- G06F21/6245—Protecting personal data, e.g. for financial or medical purposes
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T15/00—3D [Three Dimensional] image rendering
- G06T15/04—Texture mapping
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T17/00—Three dimensional [3D] modelling, e.g. data description of 3D objects
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T19/00—Manipulating 3D models or images for computer graphics
- G06T19/20—Editing of 3D images, e.g. changing shapes or colours, aligning objects or positioning parts
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/10—Segmentation; Edge detection
- G06T7/11—Region-based segmentation
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/60—Analysis of geometric attributes
- G06T7/66—Analysis of geometric attributes of image moments or centre of gravity
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V30/00—Character recognition; Recognising digital ink; Document-oriented image-based pattern recognition
- G06V30/10—Character recognition
- G06V30/19—Recognition using electronic means
- G06V30/19007—Matching; Proximity measures
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V30/00—Character recognition; Recognising digital ink; Document-oriented image-based pattern recognition
- G06V30/40—Document-oriented image-based pattern recognition
- G06V30/41—Analysis of document content
- G06V30/418—Document matching, e.g. of document images
Abstract
The application provides a method and device for detecting and replacing sensitive words in a three-dimensional model, together with electronic equipment and a storage medium, in the technical field of sensitive word detection. The method comprises the following steps: acquiring an initial three-dimensional model, determining effective monomer models from the initial three-dimensional model, and performing plane extraction on the effective monomer models to obtain target planes; performing plane texture rendering on each target plane to obtain a re-rendered plane map; performing text extraction and keyword matching on the plane map to determine sensitive word areas; and performing texture replacement on the texture images of the initial three-dimensional model that contain a sensitive word area, to obtain a desensitized three-dimensional model in which the sensitive words have been replaced. The method and device improve the efficiency and quality of sensitive word detection and desensitization for three-dimensional models.
Description
Technical Field
The application relates to the technical field of sensitive word detection, in particular to a method, a device, electronic equipment and a storage medium for detecting and replacing a sensitive word of a three-dimensional model.
Background
Currently, existing sensitive word detection usually operates on text information and two-dimensional pictures: text is extracted from the pictures, directly or indirectly, for language recognition and understanding, so as to judge whether sensitive words are present in the words and sentences.
However, obtaining complete textures from an initial three-dimensional model and modifying them synchronously still relies on manual selection and replacement. In large-scale three-dimensional application scenarios this is difficult to complete by hand, demands a great deal of labor and time, and replacement through manual inspection is extremely inefficient.
Disclosure of Invention
The aim of the invention is to provide a sensitive word replacement method and device for a three-dimensional model, together with electronic equipment and a storage medium, so as to improve the efficiency and quality of sensitive word detection and desensitization for three-dimensional models.
In a first aspect, the present invention provides a method for detecting and replacing sensitive words of a three-dimensional model, the method comprising:
acquiring an initial three-dimensional model, determining an effective monomer model based on the initial three-dimensional model, and performing plane extraction on the effective monomer model to obtain a target plane;
performing plane texture rendering on the target plane to obtain a re-rendered plane map;
text extraction and keyword matching are carried out on the plane map, and a sensitive word area is determined;
and performing texture replacement on the texture image containing the sensitive word area in the initial three-dimensional model to obtain a desensitized three-dimensional model after the sensitive word replacement.
In an alternative embodiment, the initial three-dimensional model comprises a three-dimensional city model; determining an effective monomer model based on the initial three-dimensional model, and carrying out plane extraction on the effective monomer model to obtain a target plane, wherein the method comprises the following steps:
performing point cloud segmentation on the three-dimensional city model through a point cloud neural network to obtain an effective monomer model corresponding to the three-dimensional city model; wherein the effective monomer model comprises one or more of the following monomer models: a building monomer model and a road monomer model;
and carrying out plane fitting treatment on the monomer model based on a preset distance threshold and a preset normal vector included angle threshold, and extracting a target plane contained in the effective monomer model.
In an alternative embodiment, performing a plane fitting process on each plane included in the monomer model based on a preset distance threshold and a preset normal vector included angle threshold, and extracting a target plane included in the effective monomer model includes:
performing plane fitting processing on the monomer model based on a preset distance threshold and a preset normal vector included angle threshold, and marking the plane if the plane is extracted;
if an unfitted plane remains, adjusting the preset distance threshold and the preset normal vector angle threshold, and performing plane fitting processing based on the adjusted distance threshold and the adjusted normal vector angle threshold to obtain a preliminary fitting plane;
and carrying out recombination processing based on the normal vector of the preliminary fitting plane and the plane distance to obtain each plane contained in the effective monomer model.
In an alternative embodiment, performing planar texture rendering on a target plane to obtain a re-rendered planar map, including:
calculating the center point and normal vector of each plane contained in the effective monomer model;
and carrying out plane texture rendering on the corresponding target plane based on the center point and the normal vector corresponding to each plane by using a parallel projection camera to obtain a re-rendered plane map.
In an alternative embodiment, text extraction and keyword matching are performed on the planar map, and determining the sensitive word area includes:
text extraction is carried out on the plane map based on a pre-trained text detection model, text information is extracted through a text recognition model, and an image bounding box corresponding to the text information is extracted;
and matching the text information against a pre-configured sensitive word library to determine sensitive word information, and determining the corresponding image bounding box as a sensitive word area.
In an alternative embodiment, the method further comprises:
and carrying out image blurring processing on the sensitive word area to obtain a blurred image.
In an alternative embodiment, performing texture replacement on a texture image containing a sensitive word area in an initial three-dimensional model to obtain a desensitized three-dimensional model after the sensitive word replacement, including:
mapping the text bounding box on a corresponding target plane through parallel projection to obtain a target bounding box;
matching an initial bounding box corresponding to the initial three-dimensional model based on the target bounding box;
and replacing the texture map corresponding to the initial bounding box with the texture map corresponding to the blurred image to obtain the desensitized three-dimensional model with the replaced sensitive word.
In a second aspect, the present invention provides a device for detecting and replacing sensitive words of a three-dimensional model, the device comprising:
the model plane extraction module is used for acquiring an initial three-dimensional model, determining an effective monomer model based on the initial three-dimensional model, and carrying out plane extraction on the effective monomer model to obtain a target plane;
the texture rendering module is used for performing plane texture rendering on the target plane to obtain a re-rendered plane map;
the text extraction and matching module is used for carrying out text extraction and keyword matching on the plane map and determining a sensitive word area;
and the sensitive word replacement module is used for carrying out texture replacement on the texture image containing the sensitive word area in the initial three-dimensional model to obtain a desensitized three-dimensional model after the sensitive word replacement.
In a third aspect, the invention provides an electronic device comprising a processor and a memory, the memory storing computer executable instructions executable by the processor, the processor executing the computer executable instructions to implement the method of sensitive word replacement for a three-dimensional model of any of the preceding embodiments.
In a fourth aspect, the present invention provides a computer-readable storage medium storing computer-executable instructions that, when invoked and executed by a processor, cause the processor to implement the method of sensitive word replacement for a three-dimensional model of any of the preceding embodiments.
According to the method of the application, an initial three-dimensional model is first obtained; effective monomer models are determined from the initial three-dimensional model, and plane extraction is performed on them to obtain target planes. Plane texture rendering is then performed on each target plane to obtain a re-rendered plane map; text extraction and keyword matching are performed on the plane map to determine sensitive word areas; and texture replacement is performed on the texture images of the initial three-dimensional model that contain a sensitive word area, yielding a desensitized three-dimensional model in which the sensitive words have been replaced. By extracting the planes of the three-dimensional model, locating the sensitive words through the sensitive word areas in the extracted planes, and performing texture replacement on those areas, a desensitized three-dimensional model containing no sensitive words is obtained. This realizes sensitive word detection and desensitization for three-dimensional models and improves the efficiency and quality of that processing.
Drawings
In order to more clearly illustrate the embodiments of the present application or the technical solutions in the prior art, the drawings that are needed in the description of the embodiments or the prior art will be briefly described below, and it is obvious that the drawings in the following description are some embodiments of the present application, and other drawings may be obtained according to these drawings without inventive effort for a person skilled in the art.
FIG. 1 is a flowchart of a method for detecting and replacing sensitive words of a three-dimensional model according to an embodiment of the present application;
FIG. 2 is a flowchart of another method for sensitive word replacement of a three-dimensional model according to an embodiment of the present disclosure;
FIG. 3 is a block diagram of a specific alternative method for detecting sensitive words according to an embodiment of the present application;
FIG. 4 is a diagram of a processing effect corresponding to a "clinic" determined as a sensitive word according to an embodiment of the present application;
FIG. 5 is a diagram of processing effects corresponding to a sensitive word determined by "Chinese hamburger" according to an embodiment of the present application;
FIG. 6 is a block diagram of a three-dimensional model sensitive word replacement device according to an embodiment of the present application;
fig. 7 is a block diagram of an electronic device according to an embodiment of the present application.
Detailed Description
For the purposes of making the objects, technical solutions and advantages of the embodiments of the present application more clear, the technical solutions of the embodiments of the present application will be clearly and completely described below with reference to the drawings in the embodiments of the present application, and it is apparent that the described embodiments are some embodiments of the present application, but not all embodiments. The components of the embodiments of the present application, which are generally described and illustrated in the figures herein, may be arranged and designed in a wide variety of different configurations.
Thus, the following detailed description of the embodiments of the present application, as provided in the accompanying drawings, is not intended to limit the scope of the application, as claimed, but is merely representative of selected embodiments of the application. All other embodiments, which can be made by one of ordinary skill in the art based on the embodiments herein without making any inventive effort, are intended to be within the scope of the present application.
It should be noted that: like reference numerals and letters denote like items in the following figures, and thus once an item is defined in one figure, no further definition or explanation thereof is necessary in the following figures.
Currently, existing sensitive word detection is still performed on text information and two-dimensional pictures: text is extracted from the pictures, directly or indirectly, for language recognition and understanding, so as to judge whether sensitive words are present in the words and sentences.
However, with new conceptual products such as the metaverse and digital twins, demands for sensitive-content detection, replacement and confidentiality of data such as three-dimensional models are becoming more prominent and important. The geographic information and image information carried by a three-dimensional model have great practical value, but combining images with geographic information can also leak sensitive information, so desensitizing and reprocessing such data is both valuable and necessary. Most existing technology stops at two-dimensional picture input: the fragmented textures of a three-dimensional model cannot be applied to it directly, and because of rendering requirements those textures often lack the complete semantics of ordinary photos. Obtaining usable complete textures from a three-dimensional model and modifying them synchronously is still at the stage of manual selection and replacement, which is difficult to complete by hand in large-scale application scenarios such as urban digital twins.
Based on the above, the embodiment of the application provides a method, a device, electronic equipment and a storage medium for detecting and replacing sensitive words of a three-dimensional model, which realize the detection and desensitization modes of the sensitive words of the three-dimensional model and improve the processing efficiency of the detection and desensitization of the sensitive words of the three-dimensional model.
Referring to fig. 1, an embodiment of the present application provides a method for detecting a sensitive word of a three-dimensional model, which mainly includes the following steps:
step S102, an initial three-dimensional model is obtained, an effective monomer model is determined based on the initial three-dimensional model, and plane extraction is performed on the effective monomer model to obtain a target plane.
The initial three-dimensional model may be generated from acquired remote sensing images, may be a three-dimensional point cloud model, or may be a three-dimensional scene model in a virtual scene (such as a metaverse model). In one embodiment, the initial three-dimensional model may be a three-dimensional model of an actual city or of a virtual city.
In one embodiment, first, the obtained initial three-dimensional model is subjected to point cloud segmentation to segment out an effective monomer model which can be used for performing sensitive word detection. For example, when the three-dimensional model is a city model, the tree model contained in the city model generally does not have a need for sensitive words to be processed, so the tree model can be divided and deleted, and the model with possibly sensitive information of buildings, roads and the like is reserved.
The sensitive information referred to in this embodiment may include bad information that does not conform to relevant regulations, ethical specifications, etc., and may also include information that needs to be kept secret, such as some key geographic information.
When plane extraction is performed on the initial three-dimensional model, the extracted target planes may include the outer surface of a building, the surface of a road, a shop's billboard, and so on. In another embodiment, where the initial three-dimensional model is a model of a virtual city, the extracted target planes may likewise include the outer surfaces of buildings, road surfaces and shop billboards in the virtual city.
And step S104, performing plane texture rendering on the target plane to obtain a re-rendered plane map.
In one embodiment, performing plane texture rendering on a target plane may comprise computing a camera view angle for the three-dimensional model and rendering a texture picture. By arranging a parallel projection camera, a corresponding photo, namely the re-rendered plane map, can be obtained.
And S106, extracting texts and matching keywords from the plane map, and determining a sensitive word area.
In one embodiment, a neural network model may be pre-trained for text extraction from the plane map, the model being trained with labeled sensitive words (such as words that violate legal or ethical requirements, confidential words, or other custom words) so that it can extract such text. In one embodiment, the sensitive words may be detected based on OCR recognition of the picture.
And S108, performing texture replacement on the texture image containing the sensitive word area in the initial three-dimensional model to obtain a desensitized three-dimensional model after the sensitive word replacement.
In one embodiment, the texture image containing the sensitive word area can be replaced with a pre-configured texture, so that the resulting desensitized three-dimensional model no longer contains sensitive words, preventing the spread of bad information and the leakage of key geographic information.
For easy understanding, the following describes in detail the method for detecting and replacing a sensitive word of the three-dimensional model provided in the embodiment of the present application.
In an optional embodiment, the initial three-dimensional model includes a three-dimensional city model, an effective monomer model is determined based on the initial three-dimensional model, and plane extraction is performed on the effective monomer model to obtain a target plane, which may include the following steps 1.1) and 1.2):
step 1.1), performing point cloud segmentation on the three-dimensional city model through a point cloud neural network to obtain an effective monomer model corresponding to the three-dimensional city model; wherein the effective monomer model comprises at least one or more of the following monomer models: building a monomer model and a road monomer model;
step 1.2), carrying out plane fitting treatment on the monomer model based on a preset distance threshold and a preset normal vector included angle threshold, and extracting a target plane contained in the effective monomer model.
For step 1.1) above, in one embodiment, a point cloud semantic segmentation model can be trained through RandlNet to segment the three-dimensional model. Because tree models basically have no need for sensitive word replacement, the monomer models extracted in this embodiment may be buildings, road signs and the like that may carry sensitive information; point cloud segmentation can therefore remove the useless information of clutter such as trees. After these useless interference meshes are removed, each individual building and road sign can also be screened, filtered and extracted as a monomer.
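As a rough illustration of this filtering step, the sketch below keeps only monomer classes that can plausibly carry text and discards vegetation. This is a minimal pure-Python sketch; the class names and the `(class_label, point_count)` tuple representation are illustrative assumptions, not the patent's data structures.

```python
# Hypothetical classes a segmentation network might emit; only some can
# carry sensitive text and survive the filter.
SENSITIVE_CAPABLE = {"building", "road_sign", "road"}

def filter_monomers(monomers):
    """monomers: list of (class_label, point_count) tuples produced by a
    point-cloud semantic segmentation. Returns the 'effective' subset that
    should proceed to plane extraction."""
    return [m for m in monomers if m[0] in SENSITIVE_CAPABLE]

segmented = [("building", 12000), ("tree", 3000),
             ("road_sign", 400), ("tree", 2800)]
effective = filter_monomers(segmented)
print([m[0] for m in effective])  # → ['building', 'road_sign']
```

In a real pipeline the filter would run on the network's per-point labels before the mesh is re-grouped into monomers; the principle, discarding classes with no text, is the same.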
For step 1.2) above, performing plane fitting processing on each plane contained in the monomer model based on a preset distance threshold and a preset normal vector angle threshold, and extracting the target planes contained in the effective monomer model, may when implemented further include the following steps 1.2.1) to 1.2.3):
step 1.2.1), carrying out plane fitting treatment on the monomer model based on a preset distance threshold and a preset normal vector included angle threshold, and marking the plane if the plane is extracted;
step 1.2.2), if a non-fitting plane exists, adjusting a preset distance threshold and a preset normal vector included angle threshold, and performing plane fitting processing based on the adjusted distance threshold and the adjusted normal vector included angle threshold to obtain a preliminary fitting plane;
step 1.2.3), performing recombination processing based on the normal vectors and plane distances of the preliminary fitting planes to obtain each plane contained in the effective monomer model. For the remaining unfitted planes, the preset distance threshold and normal vector angle threshold are enlarged and plane fitting is performed again, iterating until plane fitting of the whole monomer model S is complete and each plane P is extracted.
Further, the above-mentioned planar texture rendering is performed on the target plane to obtain a re-rendered planar map, which may include the following steps 2.1) and 2.2):
step 2.1), calculating the center point and the normal vector of each plane contained in the effective monomer model;
and 2.2), carrying out plane texture rendering on the corresponding target plane based on the center point and the normal vector corresponding to each plane by using a parallel projection camera to obtain a re-rendered plane map.
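Steps 2.1) and 2.2) amount to placing a parallel-projection camera in front of each plane using its center point and normal. The sketch below shows one conventional way to build the camera pose (eye position plus right/up basis vectors); the helper-axis trick for constructing the basis is an assumption, not necessarily the patent's construction.

```python
import math

def cross(u, v):
    return (u[1] * v[2] - u[2] * v[1],
            u[2] * v[0] - u[0] * v[2],
            u[0] * v[1] - u[1] * v[0])

def normalize(v):
    n = math.sqrt(sum(x * x for x in v))
    return tuple(x / n for x in v)

def ortho_camera(center, normal, distance=10.0):
    """Place a parallel-projection camera at center + distance * normal,
    looking back at the plane; returns (eye, right, up)."""
    n = normalize(normal)
    eye = tuple(c + distance * ni for c, ni in zip(center, n))
    # Pick a world axis not parallel to the normal to seed the basis.
    helper = (0.0, 0.0, 1.0) if abs(n[2]) < 0.9 else (0.0, 1.0, 0.0)
    right = normalize(cross(helper, n))
    up = cross(n, right)
    return eye, right, up
```

For example, a wall plane with normal (1, 0, 0) centered at the origin yields a camera at (10, 0, 0) looking along -x, with the image axes spanning the wall.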
Further, the text extraction and keyword matching are performed on the planar map, and the determination of the sensitive word area may include the following steps 3.1) and 3.2):
step 3.1), carrying out text extraction on the plane map based on a pre-trained text detection model, and extracting text information and an image bounding box corresponding to the text information through a text recognition model;
and 3.2), determining sensitive word information which is determined by text information matched with a pre-configured sensitive word stock, and determining a corresponding image bounding box as a sensitive word area.
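Steps 3.1) and 3.2) reduce, once OCR has run, to checking each recognized text string against the sensitive word library and keeping the bounding boxes of the hits. A minimal sketch, with the OCR result format and the example lexicon entries assumed for illustration:

```python
def find_sensitive_regions(ocr_results, lexicon):
    """ocr_results: list of (text, bbox) where bbox = (x, y, w, h) in
    pixels of the rendered plane map. Returns the entries whose text
    contains any word from the sensitive word lexicon."""
    hits = []
    for text, bbox in ocr_results:
        if any(word in text for word in lexicon):
            hits.append((text, bbox))
    return hits

lexicon = {"clinic", "secret"}
ocr = [("city clinic no.3", (40, 10, 120, 30)),
       ("bakery", (200, 10, 80, 30))]
print(find_sensitive_regions(ocr, lexicon))
# → [('city clinic no.3', (40, 10, 120, 30))]
```

Substring matching is the simplest possible keyword match; a real library would likely use an Aho-Corasick automaton or similar for large lexicons.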
In addition, in order to prevent the spread of bad information and the leakage of key geographic information, image blurring processing can be performed on the sensitive word area to obtain a blurred image.
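The blurring itself can be as simple as a mean filter applied only inside the sensitive-word bounding box. A pure-Python sketch on a grayscale image represented as a list of rows; a real implementation would more likely use a library Gaussian blur on the region of interest:

```python
def blur_region(img, x, y, w, h, k=1):
    """Mean-blur the rectangle [x, x+w) x [y, y+h) of a 2D grayscale
    image (list of rows of ints); k is the neighbourhood radius.
    Returns a new image; the input is left untouched."""
    out = [row[:] for row in img]
    H, W = len(img), len(img[0])
    for j in range(y, min(y + h, H)):
        for i in range(x, min(x + w, W)):
            vals = [img[jj][ii]
                    for jj in range(max(0, j - k), min(H, j + k + 1))
                    for ii in range(max(0, i - k), min(W, i + k + 1))]
            out[j][i] = sum(vals) // len(vals)
    return out

img = [[0, 0, 0], [0, 90, 0], [0, 0, 0]]
out = blur_region(img, 0, 0, 3, 3, k=1)
print(out[1][1])  # → 10
```

Reading neighbourhood values from the untouched input (not from `out`) keeps the result a true mean filter rather than a directional smear.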
Further, the foregoing performing texture replacement on the texture image containing the sensitive word region in the initial three-dimensional model to obtain a desensitized three-dimensional model after the sensitive word replacement may include the following steps 4.1) to 4.3) when implemented:
step 4.1), mapping the text bounding box on a corresponding target plane through parallel projection to obtain a target bounding box;
step 4.2), matching an initial bounding box corresponding to the initial three-dimensional model based on the target bounding box;
and 4.3), replacing the texture map corresponding to the initial bounding box with the texture map corresponding to the blurred image to obtain the desensitized three-dimensional model with the replaced sensitive word.
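For step 4.1), when the parallel-projection render covers exactly the plane's bounding rectangle, mapping a pixel-space bounding box back to plane coordinates is a linear transform. The sketch below makes that assumption explicit; the axis-aligned render extent is a simplification of the general camera-pose inverse mapping.

```python
def pixel_bbox_to_plane(bbox_px, img_size, plane_min, plane_max):
    """Map a pixel-space bbox (x, y, w, h) on the rendered plane map back
    to 2D plane coordinates, assuming the render covered exactly the
    plane's bounding rectangle [plane_min, plane_max]."""
    (x, y, w, h), (W, H) = bbox_px, img_size
    sx = (plane_max[0] - plane_min[0]) / W   # plane units per pixel, x
    sy = (plane_max[1] - plane_min[1]) / H   # plane units per pixel, y
    return (plane_min[0] + x * sx, plane_min[1] + y * sy, w * sx, h * sy)

# A 1000x500 render of a 10m x 5m facade: a 200x100-pixel box at (100, 50)
# maps to a 2m x 1m rectangle starting at (1.0, 0.5) on the facade.
print(pixel_bbox_to_plane((100, 50, 200, 100), (1000, 500),
                          (0.0, 0.0), (10.0, 5.0)))
```

Step 4.2) would then intersect the resulting plane-space rectangle with the UV rectangles recorded in the original model to find the texture regions to replace.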
FIG. 2 shows a flow chart of another method for sensitive word replacement of a three-dimensional model, which may mainly include the following process flows:
the first step: the three-dimensional model is used for dividing buildings, trees and road marks.
And a second step of: and carrying out plane extraction on the monomer grid model of the building and the road mark.
And a third step of: and calculating a camera view angle and rendering texture pictures.
Fourth step: and based on OCR recognition of the picture, detecting the sensitive words, and carrying out fuzzy processing on the sensitive part of the rendered picture.
Fifth step: and carrying out the same blurring on all corresponding textures in the original three-dimensional model.
The first step is a preprocessing stage of the three-dimensional model. This embodiment focuses on desensitizing and protecting text, preventing the spread of bad information and the leakage of key geographic information. Since such information does not appear in tree models, a point cloud semantic segmentation model trained with RandlNet segments the three-dimensional model, extracting the buildings, road signs and the like that may contain such information and removing the useless information of clutter such as trees. After the useless interference meshes are removed, each individual building and road sign can be screened, filtered and extracted as a monomer.
The second and third steps are performed on each monomer model to be processed, according to the monomer segmentation result of the first step. Plane segmentation is performed through RANSAC fitting, using the position and normal of the point cloud as thresholds; the center point and normal of each fitted plane are then computed independently, and a parallel projection camera is set up to obtain the corresponding photo.
In the fourth step, based on the photos obtained in the previous step, character recognition is carried out through a neural network model (RCNN) and text detection through DBNet; sensitive words are extracted via keyword matching, and texture blurring is applied at the corresponding positions of the rendered picture.
In the fifth step, for each texture used by a plane containing a sensitive word, the region at the same position as the texture blur of the fourth step is computed from the texture UV information recorded in the original three-dimensional model and the pose of the parallel projection camera of the third step, and the texture is replaced. This completes sensitive word detection and replacement for the whole three-dimensional model.
The framework of the whole texture sensitive word detection and replacement method for the three-dimensional model is shown in fig. 3; the implementation steps are as follows:
a) For the input three-dimensional city model M, a point cloud neural network model is used for segmentation to obtain the monomer model of each building and road sign;
b) For each monomer model S, compute the normal vector of each of its faces. Establish a distance threshold d = 0.2 and a normal vector angle threshold a = 10, and perform RANSAC plane fitting. For the remaining unfitted faces, increase the thresholds d and a and perform RANSAC plane fitting again, until the plane fitting of the whole monomer model S is completed and each plane P is extracted.
The specific algorithm is as follows:
Step1: establish the distance threshold d = 0.2 and the angle threshold a = 10; record the total number of faces, denote the remaining unclassified faces as the working set, and initialize the plane label i = 0;
Step2: with the angle threshold a and distance threshold d, perform RANSAC plane fitting; if a plane is extracted, record its label as i and go to Step3, otherwise go to Step4;
Step3: i = i + 1, go to Step2;
Step4: a = a + 5, d = d + 0.03; if a < 35, go to Step2, otherwise go to Step5;
Step5: j = 0, l = 0;
Step6: if the normal vectors and plane distances of planes j and l agree within the thresholds, merge them into one plane;
Step7: l = l + 1; if l < i, go to Step6;
Step8: l = j, j = j + 1; if j < i, go to Step6;
Step9: end.
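The threshold-relaxing fitting loop above can be sketched in pure Python. This is a simplified illustration, not the patented implementation: it fits labeled points with per-point normals rather than mesh faces, and the sampling count and minimum-inlier cutoff are assumptions.

```python
import math
import random

def plane_from_points(p1, p2, p3):
    """Fit a plane (unit normal, offset) through three points; None if collinear."""
    u = [p2[i] - p1[i] for i in range(3)]
    v = [p3[i] - p1[i] for i in range(3)]
    n = [u[1]*v[2] - u[2]*v[1], u[2]*v[0] - u[0]*v[2], u[0]*v[1] - u[1]*v[0]]
    norm = math.sqrt(sum(c * c for c in n))
    if norm < 1e-9:
        return None
    n = [c / norm for c in n]
    return n, -sum(n[i] * p1[i] for i in range(3))

def angle_deg(n1, n2):
    """Unsigned angle between two unit normals, in degrees."""
    return math.degrees(math.acos(min(1.0, abs(sum(a * b for a, b in zip(n1, n2))))))

def ransac_planes(points, normals, d=0.2, a=10.0, d_step=0.03, a_step=5.0,
                  a_max=35.0, iters=200, min_inliers=20, rng=None):
    """Repeated RANSAC plane extraction with threshold relaxation (Step2/Step4)."""
    rng = rng or random.Random(0)
    remaining = list(range(len(points)))
    planes = []
    while a < a_max and len(remaining) >= min_inliers:
        best, best_plane = [], None
        for _ in range(iters):
            i1, i2, i3 = rng.sample(remaining, 3)
            fit = plane_from_points(points[i1], points[i2], points[i3])
            if fit is None:
                continue
            n, off = fit
            # inliers: close to the plane AND normal within the angle threshold
            inliers = [j for j in remaining
                       if abs(sum(n[k] * points[j][k] for k in range(3)) + off) < d
                       and angle_deg(n, normals[j]) < a]
            if len(inliers) > len(best):
                best, best_plane = inliers, (n, off)
        if len(best) >= min_inliers:
            planes.append((best_plane, best))
            taken = set(best)
            remaining = [j for j in remaining if j not in taken]
        else:
            a += a_step  # Step4: relax thresholds and retry
            d += d_step
    return planes, remaining
```

The thresholds are not reset after a successful extraction here; whether the original method resets them between planes is not specified in the text.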
c) For each plane, calculate its center point and normal vector, and render the plane texture with a parallel projection camera to obtain an image.
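Step c) relies on a parallel projection (orthographic) camera derived from each plane's center point and normal vector. Below is a minimal sketch of building the camera's in-plane basis and projecting points into it; the helper-axis construction is an assumption, and a real renderer would derive an equivalent orthographic view matrix.

```python
import math

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def cross(a, b):
    return [a[1]*b[2] - a[2]*b[1], a[2]*b[0] - a[0]*b[2], a[0]*b[1] - a[1]*b[0]]

def normalize(v):
    n = math.sqrt(dot(v, v))
    return [x / n for x in v]

def plane_basis(normal):
    """Build an orthonormal (u, v, n) frame for a parallel projection camera
    looking along the plane normal. The helper-axis choice is an assumption."""
    n = normalize(normal)
    helper = [0.0, 0.0, 1.0] if abs(n[2]) < 0.9 else [1.0, 0.0, 0.0]
    u = normalize(cross(helper, n))
    v = cross(n, u)
    return u, v, n

def project_to_plane(p, center, u, v):
    """Orthographically project a 3D point into 2D in-plane coordinates."""
    d = [p[i] - center[i] for i in range(3)]
    return dot(d, u), dot(d, v)
```

Because the projection is parallel rather than perspective, in-plane distances are preserved, which is what makes the later pixel-to-UV mapping of step e) a simple affine correspondence.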
d) For the image of each plane, use an OCR text detection model to detect and extract text regions, and use a text recognition model to obtain the text and the bounding box of the corresponding image region. Each recognized text is searched against the user-defined sensitive word library W; for images containing sensitive words, the corresponding text bounding box is recorded and the image region within the bounding box is blurred, yielding the blurred image.
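Step d)'s keyword search and bounding-box blur can be illustrated as follows. The OCR models themselves are external; this sketch assumes their output is a list of (text, box) pairs, and substitutes a crude mean blur for the Gaussian-style blur a production pipeline would use.

```python
def find_sensitive(ocr_results, lexicon):
    """ocr_results: list of (text, (x0, y0, x1, y1)) pairs from OCR
    detection + recognition. Returns the pairs whose text contains any
    word from the sensitive word library W (substring match)."""
    return [(t, box) for t, box in ocr_results if any(w in t for w in lexicon)]

def blur_region(img, box):
    """Crude box blur: replace the bounded region with its mean value.
    img is a 2D list of grayscale values; illustrative only."""
    x0, y0, x1, y1 = box
    vals = [img[y][x] for y in range(y0, y1) for x in range(x0, x1)]
    mean = sum(vals) / len(vals)
    for y in range(y0, y1):
        for x in range(x0, x1):
            img[y][x] = mean
    return img
```

Substring matching is the simplest form of the keyword matching mentioned in the text; a real lexicon lookup might use an Aho-Corasick automaton for many keywords.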
e) Based on the blurred image and text bounding box obtained in step d) and the plane obtained in step b), the bounding box is mapped onto the plane by parallel projection and marked as the target bounding box; then, through the texture coordinates (UV) corresponding to that region, the textures of the original model are replaced synchronously.
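Step e)'s mapping from the rendered image back to the original texture can be illustrated with a normalized-UV round trip. This assumes the rendered image and the texture cover the same plane rectangle; the actual method uses the per-face UV coordinates recorded in the model together with the camera pose from step c).

```python
def render_box_to_uv(box, render_size):
    """Map a pixel bounding box in the rendered image to normalized UV in [0, 1]."""
    w, h = render_size
    x0, y0, x1, y1 = box
    return (x0 / w, y0 / h, x1 / w, y1 / h)

def uv_box_to_texture_pixels(uv_box, tex_size):
    """Map a normalized UV box to pixel coordinates in the original texture,
    so the blurred region can be copied over the same area of the texture."""
    tw, th = tex_size
    u0, v0, u1, v1 = uv_box
    return (round(u0 * tw), round(v0 * th), round(u1 * tw), round(v1 * th))
```

Because the camera is a parallel projection, the render-to-texture correspondence is affine per plane, so a box in the rendered image maps to a box in UV space without perspective correction.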
Fig. 4 and 5 show the detection and replacement effects for different sensitive words in the three-dimensional model: fig. 4 shows the processing effect when "clinic" is determined as the sensitive word, and fig. 5 the effect when "Chinese hamburger" is so determined. These are examples only; in actual applications, the types of sensitive words may be determined according to actual requirements.
In summary, the sensitive word detection and replacement method for three-dimensional models provided by the embodiments of the present application desensitizes the texture maps of a three-dimensional model by processing the three-dimensional textures directly, rather than being restricted to two-dimensional images; compared with current image-oriented means, it can be effectively applied to three-dimensional textures, improving the processing efficiency and effect of sensitive word detection and desensitization for three-dimensional models.
Based on the above method embodiment, the embodiment of the present application further provides a device for detecting a sensitive word of a three-dimensional model, as shown in fig. 6, where the device mainly includes the following parts:
the model plane extraction module 62 is configured to obtain an initial three-dimensional model, determine an effective monomer model based on the initial three-dimensional model, and perform plane extraction on the effective monomer model to obtain a target plane;
the texture rendering module 64 is configured to perform planar texture rendering on the target plane to obtain a re-rendered planar map;
the text extraction and matching module 66 is used for performing text extraction and keyword matching on the plane map to determine a sensitive word area;
the sensitive word replacement module 68 is configured to perform texture replacement on a texture image including a sensitive word region in the initial three-dimensional model, so as to obtain a desensitized three-dimensional model after the sensitive word replacement.
The embodiment of the application provides a sensitive word replacement device for a three-dimensional model. Planes are extracted from the three-dimensional model, sensitive word areas in the planes are identified, and texture replacement is performed on those areas, yielding a desensitized three-dimensional model that no longer contains the sensitive words. This realizes sensitive word detection and desensitization for three-dimensional models and improves the processing efficiency of both.
In some implementations, the initial three-dimensional model includes a three-dimensional city model;
the model plane extraction module 62 is further configured to:
performing point cloud segmentation on the three-dimensional city model through a point cloud neural network to obtain an effective monomer model corresponding to the three-dimensional city model; wherein the effective monomer model comprises at least one of the following monomer models: a building monomer model and a road monomer model;
and carrying out plane fitting treatment on the monomer model based on a preset distance threshold and a preset normal vector included angle threshold, and extracting a target plane contained in the effective monomer model.
In some embodiments, the model plane extraction module 62 is further configured to:
performing plane fitting processing on the monomer model based on a preset distance threshold and a preset normal vector included angle threshold, and marking the plane if the plane is extracted;
if an unfitted plane exists, adjusting the preset distance threshold and the preset normal vector included angle threshold, and performing plane fitting processing based on the adjusted distance threshold and the adjusted normal vector included angle threshold to obtain a preliminary fitting plane;
and carrying out recombination processing based on the normal vector of the preliminary fitting plane and the plane distance to obtain each plane contained in the effective monomer model.
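The recombination step — merging preliminarily fitted planes whose normal vectors and plane distances agree — can be sketched as a greedy grouping. The thresholds mirror the preset angle and distance thresholds; representing each plane as a (unit normal, offset) pair is an assumption.

```python
import math

def angle_deg(n1, n2):
    """Unsigned angle between two unit normals, in degrees."""
    return math.degrees(math.acos(min(1.0, abs(sum(a * b for a, b in zip(n1, n2))))))

def merge_planes(planes, angle_thr=10.0, dist_thr=0.2):
    """planes: list of (unit_normal, offset) pairs from preliminary fitting.
    Greedily groups planes whose normals and offsets agree within the
    thresholds, so each group represents one recombined physical plane."""
    groups = []
    for n, d in planes:
        for group in groups:
            gn, gd = group[0]
            if angle_deg(n, gn) < angle_thr and abs(d - gd) < dist_thr:
                group.append((n, d))
                break
        else:
            groups.append([(n, d)])
    return groups
```

Comparing each plane only against a group's first member keeps the sketch simple; comparing against a group average would be more robust but is beyond what the text specifies.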
In some embodiments, the texture rendering module 64 is further configured to:
calculating the center point and normal vector of each plane contained in the effective monomer model;
and carrying out plane texture rendering on the corresponding target plane based on the center point and the normal vector corresponding to each plane by using a parallel projection camera to obtain a re-rendered plane map.
In some embodiments, the text extraction and matching module 66 is further configured to:
text extraction is carried out on the plane map based on a pre-trained text detection model, text information is extracted through a text recognition model, and an image bounding box corresponding to the text information is extracted;
and determining the sensitive word information which is determined by the text information matched with the pre-configured sensitive word library, and determining the corresponding image bounding box as a sensitive word area.
In some embodiments, the apparatus further comprises: the fuzzy processing module is used for:
and carrying out image blurring processing on the sensitive word area to obtain a blurred image.
In some implementations, the sensitive word replacement module 68 is further configured to:
mapping the text bounding box on a corresponding target plane through parallel projection to obtain a target bounding box;
matching an initial bounding box corresponding to the initial three-dimensional model based on the target bounding box;
and replacing the texture map corresponding to the initial bounding box with the texture map corresponding to the blurred image to obtain the desensitized three-dimensional model with the replaced sensitive word.
The implementation principle and technical effects of the sensitive word replacement device for the three-dimensional model provided by the embodiment of the present application are the same as those of the foregoing method embodiment; for the sake of brevity, where the device embodiment is not mentioned, reference can be made to the corresponding content in the embodiment of the sensitive word replacement method for the three-dimensional model.
The embodiment of the present application further provides an electronic device, as shown in fig. 7, which is a schematic structural diagram of the electronic device. The electronic device 100 includes a processor 71 and a memory 70; the memory 70 stores computer executable instructions that can be executed by the processor 71, and the processor 71 executes the computer executable instructions to implement the sensitive word replacement method of the three-dimensional model of any of the foregoing embodiments.
In the embodiment shown in fig. 7, the electronic device further comprises a bus 72 and a communication interface 73, wherein the processor 71, the communication interface 73 and the memory 70 are connected by the bus 72.
The memory 70 may include a high-speed random access memory (RAM, Random Access Memory), and may further include a non-volatile memory, such as at least one magnetic disk memory. The communication connection between the system network element and at least one other network element is achieved via at least one communication interface 73 (which may be wired or wireless), which may use the Internet, a wide area network, a local area network, a metropolitan area network, etc. The bus 72 may be an ISA (Industry Standard Architecture) bus, a PCI (Peripheral Component Interconnect) bus, or an EISA (Extended Industry Standard Architecture) bus, among others. The bus 72 may be classified as an address bus, a data bus, a control bus, or the like. For ease of illustration, only one bi-directional arrow is shown in FIG. 7, but this does not mean that there is only one bus or one type of bus.
The processor 71 may be an integrated circuit chip with signal processing capabilities. In implementation, the steps of the above method may be performed by integrated logic circuitry in hardware or by software instructions in the processor 71. The processor 71 may be a general-purpose processor, including a central processing unit (Central Processing Unit, CPU), a network processor (Network Processor, NP), etc.; it may also be a digital signal processor (Digital Signal Processor, DSP), an application specific integrated circuit (Application Specific Integrated Circuit, ASIC), a field-programmable gate array (Field-Programmable Gate Array, FPGA) or other programmable logic device, a discrete gate or transistor logic device, or a discrete hardware component. A general-purpose processor may be a microprocessor, or the processor may be any conventional processor. The steps of the method disclosed in connection with the embodiments of the present application may be embodied directly as execution by a hardware decoding processor, or by a combination of hardware and software modules in a decoding processor. The software modules may be located in a random access memory, flash memory, read-only memory, programmable read-only memory, electrically erasable programmable memory, registers, or other storage media well known in the art. The storage medium is located in the memory; the processor 71 reads the information in the memory and, in combination with its hardware, performs the steps of the sensitive word replacement method of the three-dimensional model of the foregoing embodiments.
The embodiment of the application further provides a computer readable storage medium, where the computer readable storage medium stores computer executable instructions, where the computer executable instructions, when being called and executed by a processor, cause the processor to implement the method for detecting a sensitive word of the three-dimensional model, and the specific implementation can refer to the foregoing method embodiment and will not be repeated herein.
The computer program product of the method, the apparatus, the electronic device and the storage medium for detecting the sensitive word of the three-dimensional model provided in the embodiments of the present application includes a computer readable storage medium storing program codes, and the instructions included in the program codes may be used to execute the method described in the foregoing method embodiment, and specific implementation may refer to the method embodiment and will not be described herein.
The relative steps, numerical expressions and numerical values of the components and steps set forth in these embodiments do not limit the scope of the present application unless specifically stated otherwise.
The functions, if implemented in the form of software functional units and sold or used as a stand-alone product, may be stored in a non-volatile computer readable storage medium executable by a processor. Based on such understanding, the technical solution of the present application may be embodied essentially or in a part contributing to the prior art or in a part of the technical solution, in the form of a software product stored in a storage medium, including several instructions for causing a computer device (which may be a personal computer, a server, or a network device, etc.) to perform all or part of the steps of the methods described in the embodiments of the present application. And the aforementioned storage medium includes: a U-disk, a removable hard disk, a Read-Only Memory (ROM), a random access Memory (RAM, random Access Memory), a magnetic disk, or an optical disk, or other various media capable of storing program codes.
In the description of the present application, it should be noted that, directions or positional relationships indicated by terms such as "center", "upper", "lower", "left", "right", "vertical", "horizontal", "inner", "outer", etc., are directions or positional relationships based on those shown in the drawings, or are directions or positional relationships that are conventionally put in use of the inventive product, are merely for convenience of description of the present application and simplification of description, and do not indicate or imply that the apparatus or element to be referred to must have a specific direction, be configured and operated in a specific direction, and thus should not be construed as limiting the present application.
In the description of the present application, it should also be noted that, unless explicitly specified and limited otherwise, the terms "disposed," "mounted," "connected," and "connected" are to be construed broadly, and may be, for example, fixedly connected, detachably connected, or integrally connected; can be mechanically or electrically connected; can be directly connected or indirectly connected through an intermediate medium, and can be communication between two elements. The specific meaning of the terms in this application will be understood by those of ordinary skill in the art in a specific context.
Finally, it should be noted that the above embodiments are only for illustrating the technical solutions of the present application, not for limiting them. Although the present application has been described in detail with reference to the foregoing embodiments, those of ordinary skill in the art should understand that the technical solutions described in the foregoing embodiments can still be modified, or some or all of their technical features can be replaced by equivalents; such modifications and substitutions do not cause the essence of the corresponding technical solutions to depart from the scope of the technical solutions of the embodiments of the present application.
Claims (10)
1. A method for sensitive word-replacement of a three-dimensional model, the method comprising:
acquiring an initial three-dimensional model, determining an effective monomer model based on the initial three-dimensional model, and carrying out plane extraction on the effective monomer model to obtain a target plane;
performing plane texture rendering on the target plane to obtain a re-rendered plane map;
text extraction and keyword matching are carried out on the plane map, and a sensitive word area is determined;
and performing texture replacement on the texture image containing the sensitive word area in the initial three-dimensional model to obtain a desensitized three-dimensional model after the sensitive word is replaced.
2. The method of claim 1, wherein the initial three-dimensional model comprises a three-dimensional city model; determining an effective monomer model based on the initial three-dimensional model, and carrying out plane extraction on the effective monomer model to obtain a target plane, wherein the method comprises the following steps:
performing point cloud segmentation on the three-dimensional city model through a point cloud neural network to obtain an effective monomer model corresponding to the three-dimensional city model; wherein the effective monomer model comprises at least one of the following monomer models: a building monomer model and a road monomer model;
and carrying out plane fitting treatment on the monomer model based on a preset distance threshold and a preset normal vector included angle threshold, and extracting a target plane contained in the effective monomer model.
3. The method for detecting sensitive words of a three-dimensional model according to claim 2, wherein performing a plane fitting process on each plane included in the monomer model based on a preset distance threshold and a preset normal vector angle threshold, and extracting a target plane included in the effective monomer model includes:
performing plane fitting processing on the effective monomer model based on a preset distance threshold and a preset normal vector included angle threshold, and marking the plane if the plane is obtained through extraction;
if an unfitted plane exists, adjusting the preset distance threshold and the preset normal vector included angle threshold, and performing plane fitting processing based on the adjusted distance threshold and the adjusted normal vector included angle threshold to obtain a preliminary fitting plane;
and carrying out recombination processing based on the normal vector and the plane distance of the preliminary fitting plane to obtain each plane contained in the effective monomer model.
4. The method for detecting and replacing sensitive words of a three-dimensional model according to claim 3, wherein performing planar texture rendering on the target plane to obtain a re-rendered planar map comprises:
calculating the center point and the normal vector of each plane contained in the effective monomer model;
and carrying out plane texture rendering on the corresponding target plane based on the center point and the normal vector corresponding to each plane by using a parallel projection camera to obtain a re-rendered plane map.
5. The method for detecting and replacing sensitive words in a three-dimensional model according to claim 4, wherein the steps of extracting text and matching keywords from the planar map, and determining a sensitive word area include:
text extraction is carried out on the plane map based on a pre-trained text detection model, text information is extracted through a text recognition model, and an image bounding box corresponding to the text information is extracted;
and determining the sensitive word information which is determined by the text information matched with the pre-configured sensitive word library, and determining the corresponding image bounding box as a sensitive word area.
6. The method of claim 5, further comprising:
and carrying out image blurring processing on the sensitive word area to obtain a blurred image.
7. The method for detecting and replacing a sensitive word in a three-dimensional model according to claim 6, wherein performing texture replacement on a texture image containing the sensitive word region in the initial three-dimensional model to obtain a desensitized three-dimensional model after the sensitive word replacement, comprises:
mapping the text bounding box on a corresponding target plane through parallel projection to obtain a target bounding box;
matching an initial bounding box corresponding to the initial three-dimensional model based on the target bounding box;
and replacing the texture map corresponding to the initial bounding box with the texture map corresponding to the blurred image to obtain the desensitized three-dimensional model with the replaced sensitive word.
8. A sensitive word-finding device for a three-dimensional model, the device comprising:
the model plane extraction module is used for acquiring an initial three-dimensional model, determining an effective monomer model based on the initial three-dimensional model, and carrying out plane extraction on the effective monomer model to obtain a target plane;
the texture rendering module is used for performing plane texture rendering on the target plane to obtain a re-rendered plane map;
the text extraction and matching module is used for carrying out text extraction and keyword matching on the plane map and determining a sensitive word area;
and the sensitive word replacement module is used for carrying out texture replacement on the texture image containing the sensitive word area in the initial three-dimensional model to obtain a desensitized three-dimensional model after the sensitive word replacement.
9. An electronic device comprising a processor and a memory, the memory storing computer-executable instructions executable by the processor, the processor executing the computer-executable instructions to implement the method of sensitive word-replacement of a three-dimensional model of any one of claims 1 to 7.
10. A computer readable storage medium storing computer executable instructions which, when invoked and executed by a processor, cause the processor to implement the method of sensitive word replacement of a three-dimensional model according to any one of claims 1 to 7.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202310166272.8A CN116363675A (en) | 2023-02-27 | 2023-02-27 | Sensitive word replacement method and device for three-dimensional model, electronic equipment and storage medium |
Publications (1)
Publication Number | Publication Date |
---|---|
CN116363675A true CN116363675A (en) | 2023-06-30 |
Family
ID=86912460
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202310166272.8A Pending CN116363675A (en) | 2023-02-27 | 2023-02-27 | Sensitive word replacement method and device for three-dimensional model, electronic equipment and storage medium |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN116363675A (en) |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||