CN114067063A - Method, system, electronic device and storage medium for generating scene map - Google Patents


Info

Publication number
CN114067063A
Authority
CN
China
Prior art keywords: image, feature, scene, illegal, determining
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202111211645.6A
Other languages
Chinese (zh)
Inventor
李泽学
张双力
丛林
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Hangzhou Yixian Advanced Technology Co ltd
Original Assignee
Hangzhou Yixian Advanced Technology Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Hangzhou Yixian Advanced Technology Co ltd filed Critical Hangzhou Yixian Advanced Technology Co ltd
Priority to CN202111211645.6A priority Critical patent/CN114067063A/en
Publication of CN114067063A publication Critical patent/CN114067063A/en
Pending legal-status Critical Current

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T17/00 Three dimensional [3D] modelling, e.g. data description of 3D objects
    • G06T17/05 Geographic models
    • G06T11/00 2D [Two Dimensional] image generation
    • G06T11/001 Texturing; Colouring; Generation of texture or colour

Landscapes

  • Physics & Mathematics (AREA)
  • Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Geometry (AREA)
  • Software Systems (AREA)
  • Remote Sensing (AREA)
  • Computer Graphics (AREA)
  • Image Analysis (AREA)

Abstract

The application relates to a method, a system, an electronic device, and a storage medium for generating a scene map. The method comprises: acquiring a repeated texture image and determining repeated texture features in a scene image set according to the repeated texture image; for each image in the scene image set, determining whether the repeated texture features exist and, if so, determining the region of the repeated texture features on the image; and either removing the image content in the region to obtain a target scene image set and performing visual reconstruction according to the target scene image set to obtain the scene map, or acquiring the image features outside the region to obtain a target feature set and performing visual reconstruction according to the target feature set to obtain the scene map.

Description

Method, system, electronic device and storage medium for generating scene map
Technical Field
The present application relates to the field of scene three-dimensional reconstruction technologies, and in particular, to a method, a system, an electronic device, and a storage medium for generating a scene map.
Background
Scene reconstruction refers to reconstructing the spatial information of a real scene. The scene may be indoor, such as a museum, a living room, or an office, or outdoor, such as a house or a plaza building, and its size may range from tens of square meters to thousands of square meters.
With the rise of technical concepts such as augmented reality, robots, digital twins and the like, the requirement for constructing spatial information of a specific scene is continuously expanded, and a high-quality scene model can provide accurate spatial reference and reliable digital archive, which is an important basis for implementing and applying the technology.
In the related art, larger-scale scene reconstruction is generally realized either by laser scanning or by visual shooting. A laser-scanning sensor, however, costs tens or hundreds of thousands, the equipment must be maintained by dedicated personnel, and the scanning itself must also be operated by dedicated personnel, so the cost is high. Where cost is relatively limited, larger-scale scene reconstruction is therefore generally realized by visual shooting: only RGB pictures need to be captured, the equipment is simple, and the operation threshold is low, making it currently the lowest-cost scheme.
However, a scene may contain repeated textures. For example, a scene may have multiple identical decorative posters distributed at different positions, and visually, the spatial ambiguity caused by such repeated textures cannot be resolved. As a result, for scenes with many repeated textures, reconstruction by visual shooting has low robustness, and the reconstructed map may contain serious spatial errors.
No effective solution has yet been proposed for the problem that, when a scene with repeated textures is reconstructed in the related art, the reconstructed scene map contains spatial errors.
Disclosure of Invention
The embodiment of the application provides a method, a system, an electronic device and a storage medium for generating a scene map, so as to at least solve the problem that when a scene with repeated textures is reconstructed in the related art, the reconstructed scene map has a spatial error.
In a first aspect, an embodiment of the present application provides a method for generating a scene map, where the method includes:
acquiring a repeated texture image, and determining repeated texture features in a scene image set according to the repeated texture image;
for each image in the scene image set, determining whether the repeated texture feature exists, if so, determining the area of the repeated texture feature on the image;
removing the image content in the region to obtain a target scene image set, performing visual reconstruction according to the target scene image set to obtain a scene map,
or acquiring the image features outside the region to obtain a target feature set, and executing visual reconstruction according to the target feature set to obtain a scene map.
In some embodiments, the repeated texture images are manually selected or are selected from the scene image set by an algorithm.
In some embodiments, the determining, for each image in the scene image set, whether the repeated texture feature exists includes:
for each illegal feature pattern in the set of illegal feature patterns, determining whether the illegal feature pattern exists in each image in the scene image set, if so, determining a region of the illegal feature pattern in the image and covering the region with a mask,
wherein the process of creating the illegal feature pattern set comprises: determining the region of the repeated texture features on the repeated texture image, covering the region with a mask to generate one or more masks, and defining the combination of each mask with the repeated texture image as an illegal feature pattern, thereby obtaining the illegal feature pattern set.
In some of these embodiments, determining whether the illegal feature pattern is present in the image comprises:
for each illegal feature pattern, extracting feature points in the mask coverage area to obtain a corresponding first feature point set; for each image in the scene image set, extracting feature points of the whole image to obtain a corresponding second feature point set;
determining whether any feature points in the first feature point set corresponding to the illegal feature pattern also exist in the second feature point set corresponding to the image;
and if so, the illegal feature pattern is present in the image.
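The presence check above reduces to nearest-neighbour matching between the two descriptor sets. Below is a minimal sketch under stated assumptions: the helper name `pattern_present` and the fixed distance threshold are illustrative, and a real system would match SIFT descriptors with a ratio test rather than an absolute cutoff.

```python
import numpy as np

def pattern_present(pattern_desc, image_desc, max_dist=0.7):
    """Return True if any descriptor of the illegal feature pattern
    (first feature point set) has a close match among the whole-image
    descriptors (second feature point set).

    pattern_desc: (M, D) descriptors extracted under the pattern's mask
    image_desc:   (N, D) descriptors extracted from the whole scene image
    """
    if len(pattern_desc) == 0 or len(image_desc) == 0:
        return False
    # Pairwise Euclidean distances between the two descriptor sets.
    d = np.linalg.norm(pattern_desc[:, None, :] - image_desc[None, :, :], axis=2)
    # A pattern descriptor "exists in the image" if its nearest image
    # descriptor is closer than the threshold.
    return bool((d.min(axis=1) < max_dist).any())
```

If the check returns True, the image would then proceed to region localization and masking as described above.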
In some embodiments, the determining the area of the illegal feature pattern in the image comprises:
determining the positions of all the illegal feature points on the image to obtain a set of illegal feature points of the image, wherein the illegal feature points are the feature points common to the second feature point set and the first feature point set;
clustering the illegal feature points in the set, and calculating a convex hull of each type of illegal feature points to obtain a plurality of convex hulls; and determining the area covered by each convex hull to obtain the area of the illegal feature pattern in the image.
In some embodiments, acquiring the image features outside the region to obtain the target feature set includes:
extracting, using a feature extraction algorithm, the image features outside the region on the image;
or removing the image features falling inside the region from the second feature point set to obtain the image features outside the region.
In a second aspect, an embodiment of the present application provides a system for generating a scene map, where the system includes:
the determining module is used for acquiring a repeated texture image and determining repeated texture features in a scene image set according to the repeated texture image; for each image in the scene image set, determining whether the repeated texture feature exists, if so, determining the area of the repeated texture feature on the image;
and the reconstruction module is used for removing the image content in the region to obtain a target scene image set, performing visual reconstruction according to the target scene image set to obtain a scene map, or is used for acquiring the image characteristics outside the region to obtain a target characteristic set, and performing visual reconstruction according to the target characteristic set to obtain the scene map.
In some embodiments, the determining, for each image in the scene image set, whether the repeated texture feature exists includes:
for each illegal feature pattern in the set of illegal feature patterns, determining whether the illegal feature pattern exists in each image in the scene image set, if so, determining a region of the illegal feature pattern in the image and covering the region with a mask,
wherein the process of creating the illegal feature pattern set comprises: determining the region of the repeated texture features on the repeated texture image, covering the region with a mask to generate one or more masks, and defining the combination of each mask with the repeated texture image as an illegal feature pattern, thereby obtaining the illegal feature pattern set.
In a third aspect, an embodiment of the present application provides an electronic apparatus, which includes a memory and a processor, where the memory stores a computer program, and the processor is configured to execute the computer program to perform the method for generating a scene map.
In a fourth aspect, the present application provides a storage medium, in which a computer program is stored, where the computer program is configured to execute the method for generating a scene map when running.
Compared with the related art, the method for generating a scene map provided by the embodiments of the application acquires a repeated texture image and determines repeated texture features in a scene image set according to it; for each image in the scene image set, it determines whether the repeated texture features exist and, if so, determines their region on the image; it then either removes the image content in the region to obtain a target scene image set and performs visual reconstruction according to that set, or acquires the image features outside the region to obtain a target feature set and performs visual reconstruction according to that set, obtaining the scene map. This solves the problem that, when a scene with repeated textures is reconstructed in the related art, the reconstructed scene map contains spatial errors, and yields a correct three-dimensional scene map.
Drawings
The accompanying drawings, which are included to provide a further understanding of the application and are incorporated in and constitute a part of this application, illustrate embodiment(s) of the application and together with the description serve to explain the application and not to limit the application. In the drawings:
fig. 1 is a schematic application environment diagram of a method for generating a scene map according to an embodiment of the present application;
FIG. 2 is a flow chart of a method of generating a scene map according to a first embodiment of the present application;
FIG. 3 is a flow chart of a method of generating a scene map according to a second embodiment of the present application;
FIG. 4 is a schematic diagram illustrating an illegal feature pattern according to a second embodiment of the present application;
FIG. 5 is a flow chart of a method of illegal feature pattern retrieval according to a second embodiment of the present application;
FIG. 6 is a schematic diagram of an intermediate result of generating an illegal feature mask according to a second embodiment of the present application;
fig. 7 is an internal structural diagram of an electronic device according to an embodiment of the present application.
Detailed Description
In order to make the objects, technical solutions and advantages of the present application more apparent, the present application will be described and illustrated below with reference to the accompanying drawings and embodiments. It should be understood that the specific embodiments described herein are merely illustrative of the present application and are not intended to limit the present application. All other embodiments obtained by a person of ordinary skill in the art based on the embodiments provided in the present application without any inventive step are within the scope of protection of the present application.
It is obvious that the drawings in the following description are only examples or embodiments of the present application, and that it is also possible for a person skilled in the art to apply the present application to other similar contexts on the basis of these drawings without inventive effort. Moreover, it should be appreciated that in the development of any such actual implementation, as in any engineering or design project, numerous implementation-specific decisions must be made to achieve the developers' specific goals, such as compliance with system-related and business-related constraints, which may vary from one implementation to another.
Reference in the specification to "an embodiment" means that a particular feature, structure, or characteristic described in connection with the embodiment can be included in at least one embodiment of the specification. The appearances of the phrase in various places in the specification are not necessarily all referring to the same embodiment, nor are separate or alternative embodiments mutually exclusive of other embodiments. Those of ordinary skill in the art will explicitly and implicitly appreciate that the embodiments described herein may be combined with other embodiments without conflict.
Unless defined otherwise, technical or scientific terms referred to herein shall have the ordinary meaning as understood by those of ordinary skill in the art to which this application belongs. Reference to "a," "an," "the," and similar words throughout this application are not to be construed as limiting in number, and may refer to the singular or the plural. The present application is directed to the use of the terms "including," "comprising," "having," and any variations thereof, which are intended to cover non-exclusive inclusions; for example, a process, method, system, article, or apparatus that comprises a list of steps or modules (elements) is not limited to the listed steps or elements, but may include other steps or elements not expressly listed or inherent to such process, method, article, or apparatus. Reference to "connected," "coupled," and the like in this application is not intended to be limited to physical or mechanical connections, but may include electrical connections, whether direct or indirect. The term "plurality" as referred to herein means two or more. "and/or" describes an association relationship of associated objects, meaning that three relationships may exist, for example, "A and/or B" may mean: a exists alone, A and B exist simultaneously, and B exists alone. The character "/" generally indicates that the former and latter associated objects are in an "or" relationship. Reference herein to the terms "first," "second," "third," and the like, are merely to distinguish similar objects and do not denote a particular ordering for the objects.
The method for generating a scene map provided by the present application may be applied to an application environment shown in fig. 1, where fig. 1 is an application environment schematic diagram of the method for generating a scene map according to the embodiment of the present application, and as shown in fig. 1, a server 101 obtains a repeated texture image, and determines a repeated texture feature in a scene image set according to the repeated texture image; the server 101 determines whether the repeated texture feature exists in each image in the scene image set, and if so, determines the area of the repeated texture feature on the image; the server 101 removes image content in the area to obtain a target scene image set, and performs visual reconstruction according to the target scene image set to obtain a scene map, or the server 101 obtains image features outside the area to obtain a target feature set, and performs visual reconstruction according to the target feature set to obtain the scene map, wherein the server 101 may be implemented by an independent server or a server cluster composed of a plurality of servers.
The present embodiment provides a method for generating a scene map, and fig. 2 is a flowchart of a method for generating a scene map according to a first embodiment of the present application, and as shown in fig. 2, the flowchart includes the following steps:
step S201, obtaining a repeated texture image, and determining repeated texture features in a scene image set according to the repeated texture image, wherein the scene image set is a set of original images shot for a scene;
step S202, determining whether the repeated texture feature exists in each image in the scene image set, and if so, determining the area of the repeated texture feature on the image;
step S203, removing the image content in the region to obtain a target scene image set, performing visual reconstruction according to the target scene image set to obtain a scene map,
or acquiring the image characteristics outside the region to obtain a target characteristic set, and executing visual reconstruction according to the target characteristic set to obtain a scene map.
Through steps S201 to S203, repeated texture features with spatial ambiguity are filtered out of the original images before the map is reconstructed, after which the conventional visual mapping process continues as usual. A correct three-dimensional scene map is thereby obtained, solving the problem in the related art that the reconstructed scene map contains spatial errors when a scene with repeated textures is reconstructed.
Optionally, fig. 3 is a flowchart of a method for generating a scene map according to a second embodiment of the present application, and as shown in fig. 3, the flowchart includes the following steps:
step S301, generating an illegal feature pattern; finding out a repeated texture Pattern in a scene according to image data (hereinafter referred to as scene data) shot in the scene, wherein the repeated texture Pattern is used as an illegal Feature Pattern (Invalid Feature Pattern) set of the scene;
step S302, searching illegal feature patterns; for each illegal feature pattern, retrieving the pattern in each image of the scene data; if the pattern is retrieved, recording the position and range of the pattern in the image (represented by the mask of the image);
step S303, filtering illegal features; after step S302 is completed, a mask of an illegal feature pattern of each image in the scene data is obtained; according to the mask, all image features falling in the mask range can be filtered out;
step S304, three-dimensional reconstruction; and performing three-dimensional reconstruction on the scene map by using the scene image data with the filtered illegal features.
Each step of step S301 to step S304 is explained in detail below.
(1) Description of illegal feature pattern generation:
an illegal feature pattern is defined as a piece of image content, and the content belongs to repeated textures in a scene and has serious negative influence on three-dimensional reconstruction; in order to represent the image content, an image content can be specified by combining an original image and a mask image, for example, a poster, which appears many times at different positions of a scene and belongs to a repeated texture, and the original image where the repeated texture is located and the corresponding mask image of the repeated texture in the original image form an expression of an illegal feature pattern; fig. 4 is a schematic diagram illustrating an illegal feature pattern according to a second embodiment of the present application, where as shown in fig. 4, a white area is a position of a poster, the white area is covered by a mask image, image contents covered by the mask image in an original image all belong to repeated textures, and a combination of the original image and the mask image is the illegal feature pattern;
it should be noted that the scene image data may contain many images in which such posters were shot, at inconsistent distances and angles; only one or several representative images are selected as the illegal feature pattern for the poster texture. A robust feature pattern retrieval algorithm can then retrieve the same pattern shot at different distances and viewing angles from the representative pattern alone. On the other hand, the image of an illegal feature pattern need not come directly from the scene image data: a picture of a given repeated texture may be taken separately to generate the illegal feature pattern corresponding to that texture;
in addition, a scene may contain several kinds of illegal feature patterns, in which case there are several groups of illegal feature pattern data. A single image may even contain multiple illegal feature patterns simultaneously, so that one mask image contains multiple white areas. Where the scene contains other illegal feature patterns, those patterns together with the poster pattern form the set of illegal feature patterns;
finally, besides manual selection or shooting, these illegal feature patterns can also be obtained automatically by an algorithm, for example by identifying representative illegal feature patterns from the scene image data with a deep learning network.
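Under the definition above, an illegal feature pattern is simply a representative image paired with a binary mask. A minimal sketch of constructing such a pair from hand-marked rectangles follows; the helper name and the rectangle format are assumptions for illustration, not part of the disclosure.

```python
import numpy as np

def make_illegal_pattern(image, boxes):
    """Combine a representative image with a binary mask covering the
    repeated-texture regions (e.g. each poster's bounding box).

    image: H x W (x C) array for the representative image
    boxes: list of (x0, y0, x1, y1) rectangles, one per repeated-texture region
    """
    mask = np.zeros(image.shape[:2], dtype=np.uint8)
    for x0, y0, x1, y1 in boxes:
        mask[y0:y1, x0:x1] = 255  # white area marks the repeated texture
    return {"image": image, "mask": mask}

# The illegal feature pattern set is then just a list of such pairs,
# one entry per repeated texture in the scene.
```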
(2) Description of illegal feature pattern retrieval:
the purpose of this step is to search every image of the scene data for each pattern in the illegal feature pattern set; if a pattern exists in an image, the mask range of that pattern in the image is recorded. In the end, each image of the scene data has a mask image of illegal feature patterns, and some images may contain several illegal feature patterns; for a scene image without any illegal feature pattern, no mask image need be generated, indicating that the image contains none. Fig. 5 is a flowchart of a method of illegal feature pattern retrieval according to a second embodiment of the present application; as shown in fig. 5, the flow includes the following steps:
step S501, extracting the features of the whole image for each scene image, wherein the features can be SIFT features, for example;
step S502, for each illegal feature pattern image, extracting features of the illegal feature region by combining the mask image of the pattern, wherein the features can be SIFT features, for example;
step S503, matching the characteristics of each scene image with the characteristics of each illegal characteristic pattern image; taking each scene image as a unit, recording the positions of the feature points matched by the illegal features as a set of the illegal feature points of the scene image;
step S504, clustering the illegal feature points in the illegal feature point set of each scene image, and calculating a convex hull for each cluster of feature points; the area covered by each convex hull is the illegal feature area of the image, from which a mask of the illegal feature pattern is generated. Optionally, appropriate post-processing, such as image dilation, may be applied to the obtained mask to obtain a better illegal feature pattern mask;
FIG. 6 is a schematic diagram of an intermediate result of generating an illegal feature mask according to the second embodiment of the present application. As shown in FIG. 6, each dot represents a retrieved illegal feature point, and each polygon represents the convex hull of one cluster after clustering; the black area represents the normal feature area, and the non-black areas are the generated illegal feature areas.
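Steps S503 and S504 can be sketched without any vision library: greedy single-linkage clustering of the matched points, then Andrew's monotone-chain convex hull per cluster. Both helpers below are illustrative assumptions; a production implementation might instead use DBSCAN and `cv2.convexHull`.

```python
import numpy as np

def cluster_points(pts, eps=30.0):
    """Greedy single-linkage clustering: a point joins (and merges)
    every existing cluster that has a member closer than `eps` pixels."""
    clusters = []
    for p in pts:
        hits = [c for c in clusters
                if any(np.hypot(p[0] - q[0], p[1] - q[1]) < eps for q in c)]
        merged = [p] + [q for c in hits for q in c]
        clusters = [c for c in clusters if c not in hits] + [merged]
    return clusters

def convex_hull(points):
    """Andrew's monotone chain; returns hull vertices of one cluster."""
    pts = sorted(set(points))
    if len(pts) <= 2:
        return pts

    def cross(o, a, b):
        return (a[0] - o[0]) * (b[1] - o[1]) - (a[1] - o[1]) * (b[0] - o[0])

    hull = []
    for seq in (pts, list(reversed(pts))):   # lower hull, then upper hull
        part = []
        for p in seq:
            while len(part) >= 2 and cross(part[-2], part[-1], p) <= 0:
                part.pop()
            part.append(p)
        hull.extend(part[:-1])               # drop duplicated endpoints
    return hull
```

Each hull would then be rasterized into the mask image (optionally dilated, per step S504) to form the illegal feature region.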
(3) Description on illegal feature filtering:
the purpose of illegal feature filtering is to prevent the image content corresponding to the illegal feature patterns from participating in the three-dimensional reconstruction; otherwise, because that content is repeated texture, the reconstruction algorithm may output a wrong three-dimensional map. The filtering can be achieved in a number of ways, two of which are listed below:
the first method: directly delete the image content corresponding to the illegal feature pattern. Since the illegal feature mask of each scene image was obtained during illegal feature pattern retrieval (the non-black regions are all illegal feature regions), the image content of those regions can be deleted according to the mask, for example by directly setting the pixel values falling in the illegal feature regions to 0, after which the processed scene image data is sent to the subsequent three-dimensional reconstruction stage;
the second method comprises the following steps: deleting image feature points corresponding to the illegal feature modes; generally, a three-dimensional reconstruction algorithm (e.g., an SFM algorithm) based on pure vision firstly extracts image feature points (e.g., SIFT feature points) from an original image, and then performs three-dimensional reconstruction based on the feature points; in this case, the original image and the illegal feature mask can be simultaneously input during feature extraction, and at this time, the feature extraction algorithm cannot extract feature points in the illegal feature region, that is, the feature points corresponding to the illegal feature mode are deleted; and inputting the filtered characteristic points into a subsequent three-dimensional reconstruction algorithm.
(4) Description on three-dimensional reconstruction:
the scene data filtered of illegal features can be sent to a common pure-vision three-dimensional reconstruction algorithm (such as an SFM algorithm) to reconstruct the scene map: if the features were filtered in the first way, the modified scene images are sent to the SFM algorithm; if in the second way, the filtered scene feature point data are sent instead, and the SFM algorithm does not execute its feature extraction step. Since SFM is relatively common and mature, its reconstruction process is not described in detail here.
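Putting the steps together, the pipeline for the first filtering variant is a thin wrapper around whatever SFM backend is used. In this sketch, `detect_mask` and `sfm_reconstruct` are placeholder callables standing in for the retrieval step and a real SFM library call; neither name comes from the disclosure.

```python
import numpy as np

def build_scene_map(scene_images, detect_mask, sfm_reconstruct):
    """End-to-end sketch of filtering variant 1 followed by reconstruction.

    detect_mask(image)      -> uint8 mask, non-zero where illegal features lie
    sfm_reconstruct(images) -> scene map (e.g. a call into an SFM library)
    """
    cleaned = []
    for img in scene_images:
        mask = detect_mask(img)
        out = img.copy()
        out[mask > 0] = 0            # remove repeated-texture content
        cleaned.append(out)
    return sfm_reconstruct(cleaned)  # SFM sees only the filtered images
```

The second variant would instead pass filtered feature points to the backend and disable its internal feature extraction.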
In addition, in combination with the method for generating a scene map in the foregoing embodiment, the embodiment of the present application may provide a storage medium to implement. The storage medium having stored thereon a computer program; the computer program, when executed by a processor, implements any of the above-described embodiments of the method of generating a scene map.
In one embodiment, a computer device is provided, which may be a terminal. The computer device includes a processor, a memory, a network interface, a display screen, and an input device connected by a system bus. Wherein the processor of the computer device is configured to provide computing and control capabilities. The memory of the computer device comprises a nonvolatile storage medium and an internal memory. The non-volatile storage medium stores an operating system and a computer program. The internal memory provides an environment for the operation of an operating system and computer programs in the non-volatile storage medium. The network interface of the computer device is used for communicating with an external terminal through a network connection. The computer program is executed by a processor to implement a method of generating a scene map. The display screen of the computer equipment can be a liquid crystal display screen or an electronic ink display screen, and the input device of the computer equipment can be a touch layer covered on the display screen, a key, a track ball or a touch pad arranged on the shell of the computer equipment, an external keyboard, a touch pad or a mouse and the like.
In one embodiment, fig. 7 is a schematic diagram of an internal structure of an electronic device according to an embodiment of the present application, and as shown in fig. 7, there is provided an electronic device, which may be a server, and an internal structure diagram of which may be as shown in fig. 7. The electronic device comprises a processor, a network interface, an internal memory and a non-volatile memory connected by an internal bus, wherein the non-volatile memory stores an operating system, a computer program and a database. The processor is used for providing calculation and control capability, the network interface is used for communicating with an external terminal through network connection, the internal memory is used for providing an environment for an operating system and the running of a computer program, the computer program is executed by the processor to realize a method for generating a scene map, and the database is used for storing data.
Those skilled in the art will appreciate that the architecture shown in fig. 7 is a block diagram of only a portion of the architecture associated with the subject application, and does not constitute a limitation on the electronic devices to which the subject application may be applied, and that a particular electronic device may include more or less components than those shown, or may combine certain components, or have a different arrangement of components.
It will be understood by those skilled in the art that all or part of the processes of the methods of the embodiments described above can be implemented by a computer program instructing the relevant hardware; the computer program can be stored in a non-volatile computer-readable storage medium and, when executed, can include the processes of the embodiments of the methods described above. Any reference to memory, storage, database, or other medium used in the embodiments provided herein may include non-volatile and/or volatile memory. Non-volatile memory can include read-only memory (ROM), programmable ROM (PROM), electrically programmable ROM (EPROM), electrically erasable programmable ROM (EEPROM), or flash memory. Volatile memory can include random access memory (RAM) or external cache memory. By way of illustration and not limitation, RAM is available in many forms, such as static RAM (SRAM), dynamic RAM (DRAM), synchronous DRAM (SDRAM), double data rate SDRAM (DDR SDRAM), enhanced SDRAM (ESDRAM), synchronous link DRAM (SLDRAM), Rambus direct RAM (RDRAM), direct Rambus dynamic RAM (DRDRAM), and Rambus dynamic RAM (RDRAM).
It should be understood by those skilled in the art that the technical features of the above embodiments can be combined arbitrarily. For brevity, not all possible combinations of these features are described; nevertheless, as long as a combination of features contains no contradiction, it should be considered within the scope of the present disclosure.
The above examples express only several embodiments of the present application, and their description is relatively specific and detailed, but they should not therefore be construed as limiting the scope of the invention patent. It should be noted that a person skilled in the art can make several variations and improvements without departing from the concept of the present application, and these all fall within the scope of protection of the present application. Therefore, the scope of protection of this patent shall be subject to the appended claims.

Claims (10)

1. A method of generating a scene map, the method comprising:
acquiring a repeated texture image, and determining repeated texture features in a scene image set according to the repeated texture image;
for each image in the scene image set, determining whether the repeated texture features are present and, if so, determining the region of the repeated texture features on the image;
removing the image content within the region to obtain a target scene image set, and performing visual reconstruction on the target scene image set to obtain a scene map;
or acquiring the image features outside the region to obtain a target feature set, and performing visual reconstruction on the target feature set to obtain a scene map.
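The first branch of claim 1 (blank out the repeated-texture region, then reconstruct) can be sketched as follows. This is an illustrative reading only: the function names, the zero-fill convention, and the use of NumPy arrays with boolean masks are assumptions, and the visual reconstruction backend (e.g. a structure-from-motion tool) is not shown.

```python
import numpy as np

def mask_repeated_texture(image: np.ndarray, region_mask: np.ndarray) -> np.ndarray:
    """Remove the image content inside the repeated-texture region.

    image:       H x W (x C) array of pixel values.
    region_mask: H x W boolean array, True where repeated texture was detected.
    """
    cleaned = image.copy()
    cleaned[region_mask] = 0  # blank out the offending region before reconstruction
    return cleaned

def build_target_scene_set(images, masks):
    """Mask every image in the scene image set, yielding the target scene image
    set that would then be fed to visual reconstruction to obtain the scene map."""
    return [mask_repeated_texture(img, m) for img, m in zip(images, masks)]
```

The second branch of the claim skips pixel editing entirely and instead filters the extracted features, which avoids re-running feature extraction on modified images.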
2. The method of claim 1, wherein the repeated texture image is selected from the scene image set, either manually or by an algorithm.
3. The method according to claim 1 or 2, wherein determining, for each image in the scene image set, whether the repeated texture features are present comprises:
for each illegal feature pattern in an illegal feature pattern set, determining whether the illegal feature pattern is present in each image in the scene image set and, if so, determining the region of the illegal feature pattern in the image and covering the region with a mask,
wherein the illegal feature pattern set is created by: determining the region of the repeated texture features on the repeated texture image and covering it to generate one or more masks; and defining the combination of each mask with the repeated texture image as an illegal feature pattern, thereby obtaining the illegal feature pattern set.
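The creation of the illegal feature pattern set can be sketched as below. The rectangular region format and the dict representation of a pattern are hypothetical choices; the claim only requires that each mask be paired with the repeated texture image.

```python
import numpy as np

def create_illegal_feature_patterns(repeated_texture_image, regions):
    """Build the illegal feature pattern set: one (mask, image) pair per
    repeated-texture region.

    regions: hypothetical list of (row_start, row_end, col_start, col_end)
             boxes; the patent does not fix how regions are delimited.
    """
    h, w = repeated_texture_image.shape[:2]
    patterns = []
    for r0, r1, c0, c1 in regions:
        mask = np.zeros((h, w), dtype=bool)
        mask[r0:r1, c0:c1] = True  # cover the repeated-texture area with a mask
        patterns.append({"mask": mask, "image": repeated_texture_image})
    return patterns
```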
4. The method of claim 3, wherein determining whether the illegal feature pattern is present in the image comprises:
for each illegal feature pattern, extracting feature points within the mask-covered region to obtain a first feature point set corresponding to that pattern; and, for each image in the scene image set, extracting feature points over the whole image to obtain a second feature point set corresponding to that image;
determining whether the second feature point set corresponding to the image contains feature points that are the same as those in the first feature point set corresponding to the illegal feature pattern;
and, if so, concluding that the illegal feature pattern is present in the image.
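A minimal sketch of the presence test in claim 4, assuming feature points are represented as hashable descriptors so that "the same feature points" reduces to a set intersection; a real system would instead match ORB or SIFT descriptors under a distance threshold.

```python
def shared_features(first_feature_set, second_feature_set):
    """Return the feature points common to the pattern's masked-region set
    (first set) and the image's whole-image set (second set)."""
    return set(first_feature_set) & set(second_feature_set)

def pattern_present_in_image(first_feature_set, second_feature_set):
    """The illegal feature pattern is present in the image iff at least one
    feature point is shared between the two sets."""
    return bool(shared_features(first_feature_set, second_feature_set))
```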
5. The method of claim 4, wherein determining the region of the illegal feature pattern in the image comprises:
determining the positions of all illegal feature points on the image to obtain the image's set of illegal feature points, an illegal feature point being a feature point common to the second feature point set and the first feature point set;
clustering the illegal feature points in the set and computing a convex hull for each cluster to obtain a plurality of convex hulls; and determining the area covered by each convex hull to obtain the region of the illegal feature pattern in the image.
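The cluster-then-convex-hull step of claim 5 can be sketched as follows. The greedy single-link clustering and the `radius` threshold are illustrative stand-ins (the claim does not prescribe a clustering algorithm; DBSCAN would be a natural choice), and the hull uses Andrew's monotone-chain construction.

```python
import math

def cluster_points(points, radius=10.0):
    """Greedy single-link clustering of illegal feature points: a point joins
    the first cluster containing a point within `radius`, else starts a new one."""
    clusters = []
    for p in points:
        for c in clusters:
            if any(math.dist(p, q) <= radius for q in c):
                c.append(p)
                break
        else:
            clusters.append([p])
    return clusters

def convex_hull(points):
    """Convex hull of one cluster (Andrew's monotone chain, CCW order).
    The union of the areas covered by these hulls gives the region of the
    illegal feature pattern in the image."""
    pts = sorted(set(points))
    if len(pts) <= 2:
        return pts
    def cross(o, a, b):
        return (a[0]-o[0]) * (b[1]-o[1]) - (a[1]-o[1]) * (b[0]-o[0])
    lower, upper = [], []
    for p in pts:
        while len(lower) >= 2 and cross(lower[-2], lower[-1], p) <= 0:
            lower.pop()
        lower.append(p)
    for p in reversed(pts):
        while len(upper) >= 2 and cross(upper[-2], upper[-1], p) <= 0:
            upper.pop()
        upper.append(p)
    return lower[:-1] + upper[:-1]
```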
6. The method of claim 4, wherein acquiring the image features outside the region to obtain the target feature set comprises:
extracting, with a feature extraction algorithm, the image features outside the region on the image;
or removing, from the second feature point set, the image features that fall within the region, so as to obtain the image features outside the region.
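The second branch of claim 6 (filtering the already-extracted whole-image feature set) can be sketched as below; the (row, col) keypoint layout and the boolean region mask are assumed representations.

```python
def features_outside_region(feature_points, region_mask):
    """Drop from the whole-image feature set every feature point that falls
    inside the masked repeated-texture region, keeping only features that
    are safe to use for visual reconstruction.

    feature_points: list of (row, col) keypoint positions.
    region_mask:    H x W boolean grid, True inside the masked region.
    """
    return [(r, c) for r, c in feature_points if not region_mask[r][c]]
```

This branch avoids re-running feature extraction: the second feature point set from claim 4 is simply filtered in place.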
7. A system for generating a scene map, the system comprising:
a determining module configured to acquire a repeated texture image and determine repeated texture features in a scene image set according to the repeated texture image, and, for each image in the scene image set, to determine whether the repeated texture features are present and, if so, the region of the repeated texture features on the image;
and a reconstruction module configured to remove the image content within the region to obtain a target scene image set and perform visual reconstruction on the target scene image set to obtain a scene map, or to acquire the image features outside the region to obtain a target feature set and perform visual reconstruction on the target feature set to obtain the scene map.
8. The system according to claim 7, wherein determining, for each image in the scene image set, whether the repeated texture features are present comprises:
for each illegal feature pattern in an illegal feature pattern set, determining whether the illegal feature pattern is present in each image in the scene image set and, if so, determining the region of the illegal feature pattern in the image and covering the region with a mask,
wherein the illegal feature pattern set is created by: determining the region of the repeated texture features on the repeated texture image and covering it to generate one or more masks; and defining the combination of each mask with the repeated texture image as an illegal feature pattern, thereby obtaining the illegal feature pattern set.
9. An electronic device comprising a memory and a processor, wherein the memory stores a computer program and the processor is configured to run the computer program so as to perform the method of generating a scene map according to any one of claims 1 to 6.
10. A storage medium storing a computer program, wherein the computer program is configured to perform, when run, the method of generating a scene map according to any one of claims 1 to 6.
CN202111211645.6A 2021-10-18 2021-10-18 Method, system, electronic device and storage medium for generating scene map Pending CN114067063A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202111211645.6A CN114067063A (en) 2021-10-18 2021-10-18 Method, system, electronic device and storage medium for generating scene map


Publications (1)

Publication Number Publication Date
CN114067063A 2022-02-18

Family

ID=80235050

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202111211645.6A Pending CN114067063A (en) 2021-10-18 2021-10-18 Method, system, electronic device and storage medium for generating scene map

Country Status (1)

Country Link
CN (1) CN114067063A (en)

Similar Documents

Publication Publication Date Title
CN111369681B (en) Three-dimensional model reconstruction method, device, equipment and storage medium
CN111598993B (en) Three-dimensional data reconstruction method and device based on multi-view imaging technology
CN111652974B (en) Method, device, equipment and storage medium for constructing three-dimensional face model
CN110648397B (en) Scene map generation method and device, storage medium and electronic equipment
CN111667001B (en) Target re-identification method, device, computer equipment and storage medium
CN108765315B (en) Image completion method and device, computer equipment and storage medium
CN111080776B (en) Human body action three-dimensional data acquisition and reproduction processing method and system
CN109886223B (en) Face recognition method, bottom library input method and device and electronic equipment
CN109842811B (en) Method and device for implanting push information into video and electronic equipment
CN113657357B (en) Image processing method, image processing device, electronic equipment and storage medium
CN113469092B (en) Character recognition model generation method, device, computer equipment and storage medium
CN113870401A (en) Expression generation method, device, equipment, medium and computer program product
CN112243518A (en) Method and device for acquiring depth map and computer storage medium
CN112085835A (en) Three-dimensional cartoon face generation method and device, electronic equipment and storage medium
CN111353965A (en) Image restoration method, device, terminal and storage medium
CN110719415B (en) Video image processing method and device, electronic equipment and computer readable medium
CN114067063A (en) Method, system, electronic device and storage medium for generating scene map
CN115564639A (en) Background blurring method and device, computer equipment and storage medium
Li et al. Reference-guided landmark image inpainting with deep feature matching
US20150278636A1 (en) Image processing apparatus, image processing method, and recording medium
US20230237778A1 (en) Real time face swapping system and methods thereof
CN112819928B (en) Model reconstruction method and device, electronic equipment and storage medium
CN112700481B (en) Texture map automatic generation method and device based on deep learning, computer equipment and storage medium
TWI757965B (en) Deep learning method for augmented reality somatosensory game machine
CN114565872A (en) Video data processing method, device, equipment and computer readable storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination