CN108492360B - Digital model generation method and device - Google Patents

Info

Publication number
CN108492360B
CN108492360B (application CN201810239790.7A)
Authority
CN
China
Prior art keywords
image
transparent plate
determining
model
digital model
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201810239790.7A
Other languages
Chinese (zh)
Other versions
CN108492360A (en)
Inventor
叶毓平
曾启文
张�杰
雒冬梅
张龙
刘彬
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Beijing Softcom Smart City Technology Co ltd
Original Assignee
Beijing Softong Intelligent Technology Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Beijing Softong Intelligent Technology Co ltd filed Critical Beijing Softong Intelligent Technology Co ltd
Priority to CN201810239790.7A priority Critical patent/CN108492360B/en
Publication of CN108492360A publication Critical patent/CN108492360A/en
Application granted granted Critical
Publication of CN108492360B publication Critical patent/CN108492360B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T17/00: Three dimensional [3D] modelling, e.g. data description of 3D objects
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06F: ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00: Pattern recognition
    • G06F18/20: Analysing
    • G06F18/22: Matching criteria, e.g. proximity measures

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Data Mining & Analysis (AREA)
  • General Physics & Mathematics (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Evolutionary Biology (AREA)
  • Evolutionary Computation (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • General Engineering & Computer Science (AREA)
  • Artificial Intelligence (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Computer Graphics (AREA)
  • Geometry (AREA)
  • Software Systems (AREA)
  • Processing Or Creating Images (AREA)

Abstract

An embodiment of the present application provides a digital model generation method and device. The method acquires an image of the bottom of a transparent plate on which solid models are laid out; determines, from the image of the bottom of the transparent plate, the image of the bottom of each solid model laid out on the plate and the position at which each solid model is laid out; determines a digital model unit matching the image of the bottom of each solid model; and generates a digital model from the digital model units and the layout positions. Because the transparent plate need not be divided into grids, and the solid models need not be laid out in units of a grid, the layout positions of the solid models are more flexible, the bottom shapes of the solid models can be more varied, and the accuracy of the generated digital model is improved. This solves the prior-art problem that laying out solid models in units of a grid yields an inaccurate digital model that cannot truly reflect the real world.

Description

Digital model generation method and device
Technical Field
The invention relates to the technical field of computer application, in particular to a digital model generation method and device.
Background
With the development of science and technology, digital model generation technology is applied ever more widely. In the field of building display in particular, showing a customer a digital model (generally a complete digital building model built in advance according to a building plan) allows the user to understand, to a great extent, the architectural effect of a building.
In the prior art, a transparent plate is generally divided into a plurality of grids, solid models are laid out on the grids, and a digital model is generated by identifying the solid models laid out on the grids. Although the prior art can generate a digital model, laying out the solid models in units of a grid often makes the generated digital model inaccurate and unable to truly represent the real world. For example, the digital model cannot truly represent the architectural effect of a building in the real world (e.g., the position and shape of the building).
In view of this, there is an urgent need for a digital model generation method and apparatus that improve the accuracy of digital model generation.
Disclosure of Invention
In view of this, embodiments of the present invention provide a digital model generation method and apparatus, so as to improve accuracy of digital model generation.
In order to achieve the above purpose, the embodiments of the present invention provide the following technical solutions:
a digital model generation method, comprising:
acquiring an image of the bottom of a transparent plate provided with a solid model;
determining an image of the bottom of a solid model laid on the transparent plate and a laying position of the solid model on the transparent plate from the image of the bottom of the transparent plate;
determining a digital model cell matching an image of a bottom of the solid model;
and generating a digital model according to the digital model unit and the layout position.
Preferably, the determining, from the image of the bottom of the transparent plate, the image of the bottom of each solid model laid out on the transparent plate and the layout position of each solid model on the transparent plate includes:
carrying out image detection on the image of the bottom of the transparent plate, and determining each graphic image included in the image of the bottom of the transparent plate;
and determining the graphic image as an image of the bottom of the solid model arranged on the transparent plate, and determining the position of the graphic image in the image of the bottom of the transparent plate as the arrangement position of the solid model on the transparent plate.
Preferably, the image detection of the image of the bottom of the transparent plate to determine each graphic image included in the image of the bottom of the transparent plate includes:
and performing image detection on the image of the bottom of the transparent plate, and determining each graphic image included in the image of the bottom of the transparent plate based on a non-maximum suppression algorithm.
Preferably, the determining a digital model unit matching with the image of the bottom of the solid model includes:
determining a target image feature of an image of a bottom of the solid model;
searching image features matched with the target image features from at least one preset image feature;
and determining a preset digital model unit corresponding to the found image feature as the digital model unit matched with the image of the bottom of the solid model.
Preferably, the searching for an image feature matching the target image feature from at least one preset image feature includes:
calculating, for each image feature of the at least one preset image feature, the similarity between the image feature and the target image feature;
determining whether the similarity is not less than a preset similarity threshold;
if the similarity is determined to be not smaller than a preset similarity threshold, determining the image feature as the searched image feature matched with the target image feature;
and if the similarity is determined to be smaller than a preset similarity threshold, determining that the image feature is not the searched image feature matched with the target image feature.
Preferably, the generating a digital model according to the digital model unit and the layout position includes:
and laying out, according to the layout position of the solid model on the transparent plate, a digital model unit matched with the image of the bottom of the solid model to generate a digital model.
A digital model generation apparatus comprising:
the image acquisition unit is used for acquiring an image of the bottom of a transparent plate provided with a solid model;
the solid model determining unit is used for determining the image of the bottom of the solid model arranged on the transparent plate and the arrangement position of the solid model on the transparent plate from the image of the bottom of the transparent plate;
a digital model unit determination unit for determining a digital model unit that matches an image of the bottom of the solid model;
and the digital model generating unit is used for generating a digital model according to the digital model unit and the layout position.
Preferably, the solid model determining unit includes:
the graphic image determining unit is used for performing image detection on the image of the bottom of the transparent plate and determining each graphic image included in the image of the bottom of the transparent plate;
and the solid model determining subunit is used for determining the graphic image as an image of the bottom of a solid model laid out on the transparent plate, and determining the position of the graphic image in the image of the bottom of the transparent plate as the layout position of the solid model on the transparent plate.
Preferably, the graphic image determination unit is specifically configured to:
and performing image detection on the image of the bottom of the transparent plate, and determining each graphic image included in the image of the bottom of the transparent plate based on a non-maximum suppression algorithm.
Preferably, the digital model unit determining unit includes:
a target image feature determination unit for determining a target image feature of an image of the bottom of the solid model;
the searching unit is used for searching image characteristics matched with the target image characteristics from at least one preset image characteristic;
and the digital model unit determining subunit is used for determining a preset digital model unit corresponding to the found image feature as the digital model unit matched with the image of the bottom of the solid model.
An embodiment of the present application provides a digital model generation method and a digital model generation device. The method acquires an image of the bottom of a transparent plate on which solid models are laid out; determines, from the image of the bottom of the transparent plate, the image of the bottom of each solid model laid out on the plate and the position at which each solid model is laid out; determines a digital model unit matching the image of the bottom of each solid model; and generates a digital model from the digital model units and the layout positions. Because the transparent plate need not be divided into grids, and the solid models need not be laid out in units of a grid, the layout positions of the solid models are more flexible, the bottom shapes of the solid models can be more varied, and the accuracy of the generated digital model is improved. This solves the prior-art problem that laying out solid models in units of a grid yields an inaccurate digital model that cannot truly embody the real world.
Drawings
To illustrate the embodiments of the present invention or the technical solutions in the prior art more clearly, the drawings used in the description of the embodiments or the prior art are briefly introduced below. Obviously, the drawings in the following description show only embodiments of the present invention; those skilled in the art can obtain other drawings from the provided drawings without creative effort.
Fig. 1 is a flowchart of a digital model generation method provided in an embodiment of the present application;
fig. 2 is a schematic diagram of a prior art solid model layout method according to an embodiment of the present application;
fig. 3 is a schematic diagram of a solid model layout method according to an embodiment of the present application;
FIG. 4 is an image of the bottom of a transparent sheet with a solid model laid thereon according to an embodiment of the present application;
FIG. 5 is a flowchart of a method for determining, from an image of the bottom of a transparent plate, the image of the bottom of each solid model laid out on the plate and the position at which each solid model is laid out, according to an embodiment of the present disclosure;
FIG. 6 is a flowchart of a method for determining a digital model unit that matches an image of the bottom of a solid model according to an embodiment of the present application;
FIG. 7 is a diagram illustrating a digital model according to an embodiment of the present application;
fig. 8 is a schematic structural diagram of a digital model generation apparatus according to an embodiment of the present application.
Detailed Description
The technical solutions in the embodiments of the present invention are described below clearly and completely with reference to the drawings in the embodiments. Obviously, the described embodiments are only some, not all, of the embodiments of the present invention. All other embodiments derived by a person skilled in the art from the embodiments given herein without creative effort shall fall within the protection scope of the present invention.
An embodiment is described below:
fig. 1 is a flowchart of a digital model generation method according to an embodiment of the present disclosure.
As shown in fig. 1, the method includes:
s101, obtaining an image of the bottom of a transparent plate provided with a solid model;
Optionally, the solid model used in the present invention may take various forms; for example, it may be a building-block (Lego-type) model.
Specifically, as shown in fig. 2, in the prior art a solid model 21 may be laid out on a transparent plate 23 on which a grid 22 is drawn. The prior art lays out solid models on the transparent plate in units of grids 22 (e.g., any grid on the transparent plate may be selected to hold a solid model) and generates a digital model by identifying the solid models laid out on the plate. However, because a grid is drawn on the transparent plate and the solid models are laid out in units of the grid, the flexibility of the layout is limited; consequently, the digital model generated from the solid models laid out on the plate is inaccurate and cannot truly reflect the real world.
To avoid the above problems in the prior art, an embodiment of the present application provides a digital model generation method, shown in fig. 1. This method does not restrict the way solid models are laid out on the transparent plate; that is, the solid models are not laid out in units of a grid. As a result, the positions at which solid models can be laid out are more flexible, and the bottom shapes of the solid models laid out on the plate can be more varied. Optionally, a solid model is laid out on the transparent plate by placing it so that its bottom is in contact with the plate.
Optionally, the digital model generation method provided in the embodiment of the present application does not limit the layout of solid models on the transparent plate. That is, the method does not require drawing a grid on the transparent plate and laying out solid models in units of the grid; instead, solid models can be laid out directly on a grid-free transparent plate. Therefore, neither the position nor the bottom shape of a laid-out solid model needs to be restricted, and the layout can be more flexible.
Optionally, referring to fig. 3, which is a schematic diagram of an entity model layout method provided in the embodiment of the present application, the digital model generation method shown in fig. 1 may be implemented based on the entity model layout method shown in fig. 3 provided in the embodiment of the present application. The transparent plate 31 shown in fig. 3 is provided with a solid model 32 (the solid model 32 is a cube), a solid model 33 (the solid model 33 is a cylinder), and a solid model 34 (the solid model 34 is a cuboid).
Correspondingly, the digital model generation method provided by the embodiment of the application can acquire the image of the bottom of the transparent plate provided with the entity model.
Optionally, the method of obtaining the image of the bottom of the transparent plate on which the solid model is disposed may be: a support rod 35 is arranged at the bottom of the transparent plate provided with the solid model to support the transparent plate provided with the solid model to leave the ground; an image capturing device 36 is placed below the transparent plate and captures an image of the bottom of the transparent plate.
The above is only one preferred way of obtaining the image of the bottom of the transparent plate on which the solid models are laid out; the specific way may be chosen as needed and is not limited herein.
S102, determining the image of the bottom of the solid model distributed on the transparent plate and the distribution position of the solid model on the transparent plate from the image of the bottom of the transparent plate;
Optionally, fig. 4 shows the image of the bottom of the transparent plate with the solid models of fig. 3. As shown in fig. 4, the acquired image 41 of the bottom of the transparent plate shows a graphic image 42 (the image of the bottom of solid model 32 laid out on the plate), a graphic image 43 (the image of the bottom of solid model 33), and a graphic image 44 (the image of the bottom of solid model 34). The position of graphic image 42 in image 41 is the layout position of solid model 32 on the transparent plate; the position of graphic image 43 in image 41 is the layout position of solid model 33; and the position of graphic image 44 in image 41 is the layout position of solid model 34.
Fig. 4 may also show the color and shape of each of the graphic images 42, 43, and 44.
S103, determining a digital model unit matched with the image of the bottom of the solid model;
Optionally, after the image of the bottom of each solid model laid out on the transparent plate is determined, a digital model unit matching that image is determined.
And S104, generating a digital model according to the digital model unit and the layout position.
In this embodiment of the present application, preferably, generating a digital model according to the digital model unit and the layout position includes: and laying a digital model unit matched with the image at the bottom of the solid model according to the laying position of the solid model on the transparent plate to generate the digital model.
Optionally, according to the layout position of each solid model on the transparent plate, the digital model units matched with the images of the bottoms of the solid models are laid out to generate the digital model. Fig. 7 shows the digital model generated for the transparent plate populated with the solid models of fig. 3. The digital model includes a digital model region 71 corresponding to the transparent plate 31; a digital model unit 72 displayed in region 71 and matching the image of the bottom of solid model 32 (the position of unit 72 in region 71 may be the layout position of solid model 32 on the transparent plate); a digital model unit 73 matching the image of the bottom of solid model 33 (the position of unit 73 in region 71 may be the layout position of solid model 33); and a digital model unit 74 matching the image of the bottom of solid model 34 (the position of unit 74 in region 71 may be the layout position of solid model 34).
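The assembly in step S104 can be illustrated with a minimal sketch. The snippet below is a hypothetical illustration, not the patented implementation: the unit names and (x, y) coordinates are invented for the example, and it assumes each detected bottom image has already been matched to a digital model unit and a layout position.

```python
# Hypothetical sketch of step S104: placing matched digital model
# units into a digital model region at the detected layout positions.
# Unit names and coordinates below are illustrative assumptions.

def generate_digital_model(matches):
    """matches: list of (unit_name, (x, y)) pairs, one per solid model
    detected on the transparent plate. Returns the digital model as a
    list of placed units mirroring the layout on the plate."""
    model = []
    for unit_name, position in matches:
        model.append({"unit": unit_name, "position": position})
    return model

matches = [("cube_unit", (120, 80)),       # e.g. solid model 32
           ("cylinder_unit", (300, 150)),  # e.g. solid model 33
           ("cuboid_unit", (60, 240))]     # e.g. solid model 34
digital_model = generate_digital_model(matches)
```

Each placed unit keeps the layout position of its solid model, so the generated model reproduces the arrangement on the transparent plate.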
Optionally, an embodiment of the present application provides a method for determining, from the image of the bottom of the transparent plate, the image of the bottom of each solid model laid out on the plate and the layout position of each solid model; see the flowchart in fig. 5.
As shown in fig. 5, the method includes:
s501, detecting images of the bottom of the transparent plate, and determining each graphic image included in the images of the bottom of the transparent plate;
Optionally, image detection is performed on the image of the bottom of the transparent plate, and each graphic image displayed in that image is determined. In the embodiment of the present application, the image detection technique preferably detects each graphic image included in the image; for specific implementations of image detection, refer to the prior art, which is not limited herein.
For example, by performing image detection on the image of the bottom of the transparent plate as shown in fig. 4, it can be determined that the image of the bottom of the transparent plate includes 3 graphic images (graphic image 42, graphic image 43, and graphic image 44, respectively).
In the embodiment of the present application, preferably, performing image detection on the image of the bottom of the transparent plate and determining each graphic image included in it comprises: performing image detection on the image of the bottom of the transparent plate, and determining each graphic image included in it based on a non-maximum suppression algorithm. Determining the graphic images based on non-maximum suppression makes the acquired graphic images more accurate.
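Non-maximum suppression keeps, among overlapping candidate detections, only the highest-scoring one, so each graphic image is reported once. The following is a minimal sketch under stated assumptions (axis-aligned boxes, an IoU threshold of 0.5); the boxes and scores are illustrative, not taken from the patent:

```python
def iou(a, b):
    """Intersection-over-union of two boxes given as (x1, y1, x2, y2)."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / (area_a + area_b - inter)

def non_max_suppression(boxes, scores, iou_threshold=0.5):
    """Keep the highest-scoring box of each overlapping cluster."""
    order = sorted(range(len(boxes)), key=lambda i: scores[i], reverse=True)
    kept = []
    for i in order:
        if all(iou(boxes[i], boxes[j]) < iou_threshold for j in kept):
            kept.append(i)
    return [boxes[i] for i in kept]

# Two heavily overlapping candidates for one graphic image, plus one distinct:
boxes = [(10, 10, 50, 50), (12, 12, 52, 52), (100, 100, 140, 140)]
scores = [0.9, 0.7, 0.8]
detections = non_max_suppression(boxes, scores)
```

Here the two overlapping candidates collapse to the higher-scoring one, leaving one detection per graphic image.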
S502, determining the graphic image as an image of the bottom of the solid model arranged on the transparent plate, and determining the position of the graphic image in the image of the bottom of the transparent plate as the arrangement position of the solid model on the transparent plate.
In the embodiment of the present application, preferably, each graphic image determined in step S501 is determined to be the image of the bottom of one solid model laid out on the transparent plate, and the position of that graphic image in the image of the bottom of the transparent plate is determined to be the layout position of that solid model on the plate. That is, for each graphic image determined in step S501, it is determined that a corresponding solid model is laid out on the transparent plate, that the image of the bottom of that solid model is the graphic image, and that its layout position on the plate is the position of the graphic image in the image of the bottom of the plate.
To facilitate understanding of the digital model generation method provided in the embodiment of the present application, the method for determining a digital model unit matching the image of the bottom of a solid model will now be described in detail; see fig. 6.
As shown in fig. 6, the method includes:
s601, determining the target image characteristics of the image at the bottom of the solid model;
optionally, image feature recognition is performed on the image at the bottom of the solid model, so as to obtain an image feature (referred to as a target image feature herein) of the image at the bottom of the solid model.
Optionally, the image feature recognition performed on the image of the bottom of the solid model to obtain its target image feature is: performing image feature recognition on the graphic image determined to be the image of the bottom of the solid model, to obtain the target image feature of that image.
S602, searching image features matched with the target image features from at least one preset image feature;
In the embodiment of the present application, preferably, at least one image feature is preset, and for each of the preset image features a corresponding digital model unit is preset. Optionally, searching the at least one preset image feature for an image feature matching the target image feature includes: calculating, for each preset image feature, the similarity between that image feature and the target image feature; determining whether the similarity is not less than a preset similarity threshold; if the similarity is not less than the threshold, determining the image feature to be the sought image feature matching the target image feature; and if the similarity is less than the threshold, determining that the image feature is not the sought match.
In the embodiment of the present application, the specific value of the similarity threshold may preferably be set as needed and is not limited herein.
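The matching in steps S602 and S603 can be sketched as follows. The feature vectors, the threshold value, and the use of cosine similarity are illustrative assumptions; the patent does not prescribe a particular feature representation or similarity measure:

```python
import math

def cosine_similarity(u, v):
    """Cosine similarity between two feature vectors."""
    dot = sum(x * y for x, y in zip(u, v))
    nu = math.sqrt(sum(x * x for x in u))
    nv = math.sqrt(sum(x * x for x in v))
    return dot / (nu * nv)

def find_matching_feature(target, preset_features, threshold=0.9):
    """Return the name of the first preset feature whose similarity to
    the target is not less than the threshold, or None if none matches."""
    for name, feature in preset_features.items():
        if cosine_similarity(target, feature) >= threshold:
            return name
    return None

preset_features = {                       # hypothetical preset features,
    "square_bottom": [1.0, 0.0, 0.5],     # each mapped to a digital model unit
    "circle_bottom": [0.0, 1.0, 0.5],
}
target = [0.98, 0.05, 0.52]               # feature of a detected bottom image
match = find_matching_feature(target, preset_features)
```

The returned name would then index the preset digital model unit, as in step S603; a `None` result means no preset feature matches the detected bottom image.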
S603, determining the preset digital model unit corresponding to the found image feature as the digital model unit matched with the image of the bottom of the solid model.
In this embodiment of the application, preferably, after the image feature matching the target image feature is found, the preset digital model unit corresponding to the found image feature is obtained and determined to be the digital model unit matched with the image of the bottom of the solid model (the solid model of step S601, whose bottom image has the target image feature).
An embodiment of the present application provides a digital model generation method. The method acquires an image of the bottom of a transparent plate on which solid models are laid out; determines, from the image of the bottom of the transparent plate, the image of the bottom of each solid model laid out on the plate and the position at which each solid model is laid out; determines a digital model unit matching the image of the bottom of each solid model; and generates a digital model from the digital model units and the layout positions. Because the transparent plate need not be divided into grids, and the solid models need not be laid out in units of a grid, the layout positions of the solid models are more flexible, the bottom shapes of the solid models can be more varied, and the accuracy of the generated digital model is improved. This solves the prior-art problem that laying out solid models in units of a grid yields an inaccurate digital model that cannot truly reflect the real world.
Fig. 8 is a schematic structural diagram of a digital model generation apparatus according to an embodiment of the present application.
As shown in fig. 8, the apparatus includes:
an image acquisition unit 81 for acquiring an image of the bottom of a transparent plate on which a solid model is laid;
a solid model determining unit 82 for determining an image of the bottom of the solid model laid on the transparent plate and a laying position of the solid model on the transparent plate from the image of the bottom of the transparent plate;
a digital model unit determining unit 83 for determining a digital model unit matching the image of the bottom of the solid model;
and a digital model generating unit 84 for generating a digital model according to the digital model unit and the layout position.
In this embodiment, preferably, the solid model determining unit includes: a graphic image determining unit for performing image detection on the image of the bottom of the transparent plate and determining each graphic image included in it; and a solid model determining subunit for determining the graphic image as an image of the bottom of a solid model laid out on the transparent plate, and determining the position of the graphic image in the image of the bottom of the transparent plate as the layout position of the solid model on the plate.
In this embodiment, preferably, the graphic image determining unit is specifically configured to: and performing image detection on the image of the bottom of the transparent plate, and determining each graphic image included in the image of the bottom of the transparent plate based on a non-maximum suppression algorithm.
In the embodiment of the present application, preferably, the digital model unit determining unit includes: a target image feature determining unit for determining a target image feature of the image of the bottom of the solid model; a searching unit for searching at least one preset image feature for an image feature matched with the target image feature; and a digital model unit determining subunit for determining the preset digital model unit corresponding to the found image feature as the digital model unit matched with the image of the bottom of the solid model.
In this embodiment, preferably, the searching unit includes: a similarity calculating unit for calculating, for each image feature of the at least one preset image feature, the similarity between the image feature and the target image feature; a comparing unit for determining whether the similarity is not less than a preset similarity threshold; a first determining unit for determining the image feature as the found image feature matching the target image feature if the similarity is not less than the preset similarity threshold; and a second determining unit for determining that the image feature is not the found image feature matching the target image feature if the similarity is less than the preset similarity threshold.
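The patent leaves the similarity measure unspecified. As one hedged illustration, assuming image features are fixed-length numeric vectors and cosine similarity is used, the threshold comparison above could be sketched as:

```python
import math

def cosine_similarity(a, b):
    """Cosine similarity of two equal-length feature vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb) if na > 0 and nb > 0 else 0.0

def find_matching_feature(target, preset_features, threshold=0.9):
    """Return the index of the first preset feature whose similarity to
    the target is not less than the threshold, or None if none matches."""
    for i, feat in enumerate(preset_features):
        if cosine_similarity(target, feat) >= threshold:
            return i
    return None
```

The returned index would identify the preset image feature, and thereby the preset digital model unit, matched to the solid model's bottom image; the feature representation and the value of the threshold are assumptions, not stated in the patent.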
In this embodiment of the present application, preferably, the digital model generating unit is specifically configured to: lay out the digital model unit matching the image of the bottom of the solid model according to the arrangement position of the solid model on the transparent plate, so as to generate the digital model.
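The assembly step can be summarized as placing, at each detected position, the digital model unit matched to that solid model. A minimal sketch, assuming detections are (unit_id, position) pairs and the unit library is a simple lookup table (both representations are hypothetical, not specified by the patent):

```python
def generate_digital_model(detections, unit_library):
    """Assemble a digital model by placing, at each detected position,
    the library unit matched to that solid model's bottom image.

    detections: list of (unit_id, (x, y)) pairs, one per solid model.
    unit_library: dict mapping unit_id -> digital model unit data.
    """
    model = []
    for unit_id, position in detections:
        unit = unit_library[unit_id]
        model.append({"unit": unit, "position": position})
    return model
```

Because positions are taken directly from the detected image coordinates rather than from grid cells, units can be placed anywhere on the plate, which is the flexibility the embodiment emphasizes.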
The embodiment of the application provides a digital model generation device, which can acquire an image of the bottom of a transparent plate on which a solid model is arranged; determine, from the image of the bottom of the transparent plate, the image of the bottom of the solid model and the arrangement position of the solid model on the transparent plate; determine a digital model unit matching the image of the bottom of the solid model; and generate a digital model according to the digital model unit and the arrangement position. According to the invention, there is no need to divide the transparent plate into a plurality of grids and to arrange the solid model on the transparent plate grid by grid, so the arrangement position of the solid model is more flexible, the bottom graphics of the solid models are more diversified, and the accuracy of the generated digital model is improved. This solves the problem in the prior art that a digital model generated by arranging solid models in units of grids is inaccurate and cannot truly reflect the real world.
The embodiments in the present description are described in a progressive manner, each embodiment focuses on differences from other embodiments, and the same and similar parts among the embodiments are referred to each other. The device disclosed by the embodiment corresponds to the method disclosed by the embodiment, so that the description is simple, and the relevant points can be referred to the method part for description.
Those of skill would further appreciate that the various illustrative elements and algorithm steps described in connection with the embodiments disclosed herein may be implemented as electronic hardware, computer software, or combinations of both, and that the various illustrative components and steps have been described above generally in terms of their functionality in order to clearly illustrate this interchangeability of hardware and software. Whether such functionality is implemented as hardware or software depends upon the particular application and design constraints imposed on the implementation. Skilled artisans may implement the described functionality in varying ways for each particular application, but such implementation decisions should not be interpreted as causing a departure from the scope of the present invention.
The steps of a method or algorithm described in connection with the embodiments disclosed herein may be embodied directly in hardware, in a software module executed by a processor, or in a combination of the two. A software module may reside in random access memory (RAM), flash memory, read-only memory (ROM), electrically programmable ROM, electrically erasable programmable ROM, registers, a hard disk, a removable disk, a CD-ROM, or any other form of storage medium known in the art.
The previous description of the disclosed embodiments is provided to enable any person skilled in the art to make or use the present invention. Various modifications to these embodiments will be readily apparent to those skilled in the art, and the generic principles defined herein may be applied to other embodiments without departing from the spirit or scope of the invention. Thus, the present invention is not intended to be limited to the embodiments shown herein but is to be accorded the widest scope consistent with the principles and novel features disclosed herein.

Claims (8)

1. A method for generating a digital model, comprising:
acquiring an image of the bottom of a transparent plate provided with a solid model;
determining an image of the bottom of a solid model arranged on the transparent plate and the arrangement position of the solid model on the transparent plate from the image of the bottom of the transparent plate;
determining a digital model element that matches an image of a bottom of the solid model;
generating a digital model according to the digital model unit and the layout position;
the determining a digital model element that matches an image of a bottom of the solid model includes:
determining a target image feature of an image of a bottom of the solid model;
searching for an image feature matched with the target image feature from at least one preset image feature;
and determining a preset digital model unit corresponding to the found image feature as the digital model unit matching the image of the bottom of the solid model.
2. The method as claimed in claim 1, wherein said determining, from the image of the bottom of the transparent plate, the image of the bottom of the solid model arranged on the transparent plate and the position where the solid model is arranged on the transparent plate comprises:
carrying out image detection on the image of the bottom of the transparent plate, and determining each graphic image included in the image of the bottom of the transparent plate;
and determining the graphic image as an image of the bottom of the solid model arranged on the transparent plate, and determining the position of the graphic image in the image of the bottom of the transparent plate as the arrangement position of the solid model on the transparent plate.
3. The method of claim 2, wherein the image detecting the image of the bottom of the transparent sheet, determining the respective graphic images included in the image of the bottom of the transparent sheet, comprises:
and performing image detection on the image of the bottom of the transparent plate, and determining each graphic image included in the image of the bottom of the transparent plate based on a non-maximum suppression algorithm.
4. The method according to claim 1, wherein the searching for the image feature matching the target image feature from the preset at least one image feature comprises:
for each image feature of the at least one preset image feature, calculating the similarity between the image feature and the target image feature;
determining whether the similarity is not less than a preset similarity threshold;
if the similarity is determined to be not smaller than a preset similarity threshold, determining the image feature as the searched image feature matched with the target image feature;
and if the similarity is smaller than a preset similarity threshold value, determining that the image feature is not the searched image feature matched with the target image feature.
5. The method of claim 1, wherein generating a digital model from the digital model elements and the deployment locations comprises:
and laying digital model units matched with the images at the bottom of the solid model according to the laying position of the solid model on the transparent plate to generate the digital model.
6. A digital model generation apparatus, comprising:
the image acquisition unit is used for acquiring an image of the bottom of a transparent plate provided with a solid model;
the solid model determining unit is used for determining the image of the bottom of the solid model arranged on the transparent plate and the arrangement position of the solid model on the transparent plate from the image of the bottom of the transparent plate;
a digital model unit determining unit for determining a digital model unit matched with the image of the bottom of the solid model;
the digital model generating unit is used for generating a digital model according to the digital model unit and the layout position;
the digital model unit determination unit includes:
a target image feature determination unit for determining a target image feature of an image of the bottom of the solid model;
the searching unit is used for searching image characteristics matched with the target image characteristics from at least one preset image characteristic;
and the digital model unit determining subunit is used for determining a preset digital model unit corresponding to the found image feature as the digital model unit matched with the image of the bottom of the solid model.
7. The apparatus of claim 6, wherein the solid model determining unit comprises:
a graphic image determination unit for performing image detection on the image of the bottom of the transparent plate and determining each graphic image included in the image of the bottom of the transparent plate;
and the solid model determining subunit is used for determining the graphic image as an image of the bottom of the solid model arranged on the transparent plate, and determining the position of the graphic image in the image of the bottom of the transparent plate as the arrangement position of the solid model on the transparent plate.
8. The apparatus according to claim 7, wherein the graphical image determination unit is specifically configured to:
and performing image detection on the image of the bottom of the transparent plate, and determining each graphic image included in the image of the bottom of the transparent plate based on a non-maximum suppression algorithm.
CN201810239790.7A 2018-03-22 2018-03-22 Digital model generation method and device Active CN108492360B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201810239790.7A CN108492360B (en) 2018-03-22 2018-03-22 Digital model generation method and device

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201810239790.7A CN108492360B (en) 2018-03-22 2018-03-22 Digital model generation method and device

Publications (2)

Publication Number Publication Date
CN108492360A CN108492360A (en) 2018-09-04
CN108492360B true CN108492360B (en) 2022-07-26

Family

ID=63319182

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201810239790.7A Active CN108492360B (en) 2018-03-22 2018-03-22 Digital model generation method and device

Country Status (1)

Country Link
CN (1) CN108492360B (en)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112017129A (en) * 2020-08-28 2020-12-01 湖南尚珂伊针纺有限公司 High efficiency socks digital model apparatus for producing

Family Cites Families (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US9375195B2 (en) * 2012-05-31 2016-06-28 Siemens Medical Solutions Usa, Inc. System and method for real-time ultrasound guided prostate needle biopsy based on biomechanical model of the prostate from magnetic resonance imaging data
CN103366610B (en) * 2013-07-03 2015-07-22 央数文化(上海)股份有限公司 Augmented-reality-based three-dimensional interactive learning system and method
CN103440682B (en) * 2013-08-13 2016-08-10 北京农业信息技术研究中心 A kind of quick three-dimensional drawing methods and system
CN105913485B (en) * 2016-04-06 2019-02-12 北京小小牛创意科技有限公司 A kind of generation method and device of three-dimensional virtual scene
CN106023302B (en) * 2016-05-06 2020-06-09 武汉雄楚高晶科技有限公司 Mobile communication terminal, server and method for realizing three-dimensional reconstruction
CN106334317A (en) * 2016-08-27 2017-01-18 厦门市朗星节能照明股份有限公司 Dice game device
CN107274335B (en) * 2017-05-23 2020-11-10 首汇焦点(北京)科技有限公司 Method and device for quickly establishing high-precision digital model
CN107767454A (en) * 2017-11-10 2018-03-06 泰瑞数创科技(北京)有限公司 A kind of three-dimensional mobile fast modeling method of outdoor scene, apparatus and system

Also Published As

Publication number Publication date
CN108492360A (en) 2018-09-04

Similar Documents

Publication Publication Date Title
TWI619080B (en) Method for calculating fingerprint overlapping region and electronic device
WO2021095693A1 (en) Information processing device, information processing method, and program
CN109376256B (en) Image searching method and device
CN108875522A (en) Face cluster methods, devices and systems and storage medium
US20060126944A1 (en) Variance-based event clustering
JP5240082B2 (en) Biometric authentication apparatus, authentication accuracy evaluation apparatus, and biometric authentication method
CN105453132B (en) The information processing equipment and image processing method of real-time image processing
CN104182962A (en) Picture definition evaluation method and device
WO2021136386A1 (en) Data processing method, terminal, and server
CN113378789B (en) Cell position detection method and device and electronic equipment
US20130308837A1 (en) Information processor, information processing method, and program
JP6975177B2 (en) Detection of biological objects
CN111623782A (en) Navigation route display method and three-dimensional scene model generation method and device
CN108647264A (en) A kind of image automatic annotation method and device based on support vector machines
JP6017343B2 (en) Database generation device, camera posture estimation device, database generation method, camera posture estimation method, and program
CN108492360B (en) Digital model generation method and device
CN110276348B (en) Image positioning method, device, server and storage medium
KR101635309B1 (en) Apparatus and method of textrue filtering using patch shift
JP2013195243A (en) Color sample, color measurement program, and color measuring method
CN113888619A (en) Distance determination method and device, electronic equipment and storage medium
CN112241502A (en) Page loading detection method and device
CN113055603A (en) Image processing method and electronic equipment
CN109064393B (en) Face special effect processing method and device
JP6399808B2 (en) Image processing apparatus, image processing method, and program
CN108038514A (en) A kind of method, equipment and computer program product for being used to identify image

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
TA01 Transfer of patent application right

Effective date of registration: 20210201

Address after: 100094 room 106, 1st floor, building 16, east yard, No.10, xibeiwangdong Road, Haidian District, Beijing

Applicant after: Beijing Softcom Smart City Technology Co.,Ltd.

Address before: 100193 506, 5 / F, building 16, east yard, No. 10, xibeiwangdong Road, Haidian District, Beijing

Applicant before: BEIJING ISOFTSTONE ZHICHENG TECHNOLOGY Co.,Ltd.

CB02 Change of applicant information

Address after: Room 301-1, floor 3, building 10, Zhongguancun Software Park, No. 8, Dongbeiwang West Road, Haidian District, Beijing 100193

Applicant after: Beijing softong Intelligent Technology Co.,Ltd.

Address before: 100094 room 106, 1st floor, building 16, east yard, No.10, xibeiwangdong Road, Haidian District, Beijing

Applicant before: Beijing Softcom Smart City Technology Co.,Ltd.

GR01 Patent grant