CN117063462A - System and method for optimal camera placement and configuration using sparse voxel octree


Info

Publication number: CN117063462A
Application number: CN202080108316.6A
Authority: CN (China)
Prior art keywords: computer, model, input, recording devices, volume
Legal status: Pending (the legal status is an assumption and is not a legal conclusion)
Other languages: Chinese (zh)
Inventors: Chen Zhenyu (陈震宇), C. K. Lu (陆)
Original and current assignee: Luoyong Technology Development Co., Ltd.
Filing date: 2020-12-06
Publication date: 2023-11-14

Classifications

    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N7/00 Television systems
    • H04N7/18 Closed-circuit television [CCTV] systems, i.e. systems in which the video signal is not broadcast
    • H04N7/181 Closed-circuit television [CCTV] systems, i.e. systems in which the video signal is not broadcast, for receiving images from a plurality of remote sources

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Alarm Systems (AREA)
  • Image Analysis (AREA)

Abstract

A closed circuit monitoring system includes one or more recording devices available for use. A processor is configured to execute computer-executable instructions for receiving input from the one or more recording devices. The input includes image or video information defining a target volume. The processor may further convert the image or video information into a three-dimensional (3D) model. A cube representation of the 3D model is constructed and includes a voxel model. The processor may store the information of the voxel model in the form of a sparse voxel octree in a data storage accessible to the processor. The processor may determine a coverage volume within the target volume and the number of recording devices required for that coverage volume.

Description

System and method for optimal camera placement and configuration using sparse voxel octree
Technical Field
The present description relates to a system and method for optimally placing and configuring cameras using a sparse voxel octree.
Background
Video surveillance systems, such as closed-circuit television (CCTV), are commonly used to observe or monitor activity within an area, to deter threats or misbehavior, or to provide a record of activity for later investigation. Over the last few decades, advances in visual sensor technology and wireless communications have reduced the difficulty and cost of deploying such monitoring systems. Because cameras are still placed and configured manually, however, optimizing system performance in terms of coverage and cost remains an open problem.
Traditionally, the process of manually placing and configuring surveillance cameras can be very time consuming. A technician typically needs a good initial guess to place and configure the cameras, and then goes through multiple cycles of trial and error, adjusting camera positions and orientations so that the coverage of the overall monitoring system improves iteratively. This effort is cumbersome and prone to human error, such as poor coverage and unnoticed blind spots.
To alleviate the above problems, optimal camera placement has been studied for decades in academic and commercial research. The task is typically formulated as minimizing the number of cameras needed to fully cover a region of interest (ROI), i.e., the "volume to be monitored", or, conversely, as maximizing coverage of the ROI with a limited number of cameras. The configuration of each camera, such as its position, orientation, field of view (FOV), and depth of coverage, may be fixed or left to be optimized, depending on the mathematical model behind the optimization method. However, many existing approaches cannot handle large inputs, such as a large building like a mall or an airport, because memory limitations and data access speed make the optimization very expensive.
Accordingly, aspects of the present application provide a new approach to address the shortcomings of previous approaches.
Disclosure of Invention
Aspects of the present application provide a system and method that overcome the above-described challenges by first converting the input 3D model into a Sparse Voxel Octree (SVO). Compared to conventional representations such as graph structures or polygon-based rasterization, aspects of the present application may greatly reduce the required memory usage and improve access speed for both Central Processing Units (CPUs) and Graphics Processing Units (GPUs), allowing large-scale models to be used as inputs.
Drawings
Those of ordinary skill in the art will appreciate that the elements in the figures are illustrated for simplicity and clarity, and therefore not all connections and options are shown. For example, common but well-understood elements that are useful or necessary in a commercially feasible embodiment may not be depicted in order to provide a less obstructed view of the various embodiments of the present disclosure. It will further be appreciated that certain actions and/or steps may be described or depicted in a particular order of occurrence, while those skilled in the art will understand that such specificity with respect to sequence is not actually required. It will also be understood that the terms and expressions used herein have the ordinary meanings accorded to them in their corresponding respective areas of inquiry and study, except where specific meanings have otherwise been set forth herein.
Fig. 1 is a diagram illustrating a monitored or surveilled space 100 according to some embodiments.
Fig. 2 is a diagram illustrating a sparse voxel octree according to some embodiments.
Fig. 3 is a flow chart illustrating a method of optimally placing and configuring cameras using a sparse voxel octree, in accordance with some embodiments.
FIG. 4 is a diagram illustrating a portable computing device according to one embodiment.
FIG. 5 is a diagram illustrating a computing device according to one embodiment.
Detailed Description
Embodiments will now be described more fully with reference to the accompanying drawings, which form a part hereof, and which show, by way of illustration, specific exemplary embodiments that may be practiced. These illustrations and example embodiments are presented with the understanding that the present disclosure is an explanation of the principles of one or more embodiments and is not intended to limit any one embodiment illustrated. Embodiments may be embodied in many different forms and should not be construed as limited to the embodiments set forth herein; rather, these embodiments are provided so that this disclosure will be thorough and complete, and will fully convey the scope of the embodiments to those skilled in the art. Among other things, the application may be embodied as a method, system, computer-readable medium, apparatus, or device. Accordingly, the present application may take the form of an entirely hardware embodiment, an entirely software embodiment, or an embodiment combining software and hardware aspects. The following detailed description is, therefore, not to be taken in a limiting sense.
Aspects of the present application provide an improved way to plan CCTV camera coverage in situations where the input is large, for example, a large building such as a mall or an airport. Referring to fig. 1, a monitored or surveilled space 100 is illustrated in accordance with some embodiments. In one example, recording device 102 may be positioned or set at location 104. Recording device 102 may be a video camera, webcam, or similar video or image capturing device. Recording device 102 may be a portable computing device 801 connected to a remote computing device 841. Recording device 102 may be strategically positioned to capture as much volume or space as possible. Space 100 may include at least walls 106 and 108. Within space 100, there may be individuals 110 and objects 112 to be monitored, and these individuals and objects may move around within space 100.
As discussed above, the systems and methods may overcome the above-described challenges by first converting the input 3D model into a Sparse Voxel Octree (SVO). Various aspects of the present application greatly reduce the required memory usage and improve access speeds of both Central Processing Units (CPUs) and Graphics Processing Units (GPUs) compared to conventional rendering methods, such as graph structures or polygon-based rasterization, allowing the use of large-scale models as inputs.
Fig. 2 is a diagram of an octree 300 according to some embodiments. An octree is a tree data structure in which each internal node has exactly eight children. Octree 300 may include a parent node 302 with eight child nodes, including nodes 304 and 308. Each of nodes 304 and 308 in turn has eight child nodes of its own, shown at 306 and 310, respectively. In one aspect, octrees are most commonly used to partition a three-dimensional space by recursively subdividing it into eight octants. As shown in fig. 2, cube 312 illustrates the first cube, and each node of tree 300 may store an explicit three-dimensional point, the "center" of that node's subdivision. That point defines one of the corners for each of the node's eight children in tree 300, such as nodes 304 and 308. The octree is the three-dimensional analogue of the quadtree. Since the subdivision is recursive, cube 312 is further subdivided, as shown at 314 and 316.
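To make the recursive subdivision concrete, the following minimal Python sketch (illustrative only and not taken from the patent; all names are hypothetical) models an octree node as a cube center plus eight optional children, one per octant:

```python
from dataclasses import dataclass, field
from typing import List, Optional

@dataclass
class OctreeNode:
    center: tuple      # the explicit (x, y, z) "center" point of this node's cube
    half_size: float   # half the edge length of the cube
    children: List[Optional["OctreeNode"]] = field(
        default_factory=lambda: [None] * 8
    )

    def child_index(self, point) -> int:
        # One bit per axis selects the half-space containing `point`, giving 0..7.
        idx = 0
        for axis in range(3):
            if point[axis] >= self.center[axis]:
                idx |= 1 << axis
        return idx

    def subdivide(self) -> None:
        # Split this cube into eight equal-size child cubes (octants).
        q = self.half_size / 2
        for i in range(8):
            child_center = tuple(
                self.center[a] + (q if (i >> a) & 1 else -q) for a in range(3)
            )
            self.children[i] = OctreeNode(center=child_center, half_size=q)
```

Calling subdivide() on a node replaces its empty octants with eight equal cubes, mirroring the repeated subdivision shown at 314 and 316.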
In another embodiment, the SVO may include a 3D computer graphics rendering technique that uses ray casting, or sometimes ray tracing, into an octree data representation, as shown in FIG. 2.
Referring now to fig. 3, a flow chart illustrates a method 300 of optimally placing and configuring cameras using a Sparse Voxel Octree (SVO), in accordance with some embodiments. In one embodiment, method 300 may be performed by a portable computing device 801 or a remote computing device 841. In one example, recording device 102 may be connected to portable computing device 801 or remote computing device 841 so that captured images or video may be processed more efficiently. In one embodiment, method 300 may be a computerized method performed by a computing device having at least one processor or microprocessor.
In one embodiment, at 302, a processor of portable computing device 801 or remote computing device 841 may obtain as input a 3D model, such as a Building Information Model (BIM) of a facility or space (e.g., space 100) to be monitored or surveilled. At 304, the 3D model may be voxelized into a cubic representation of the volume in 3D space, referred to as a voxel model, analogous to the way pixels are square representations of areas in 2D space. In one embodiment, a cube representation of the volume is constructed.
In one example, voxelization is accomplished by programming a processor (such as a CPU, a GPU, or both) in a computing device (such as portable computing device 801 or remote computing device 841). To store the voxel model efficiently, it may be kept in a sparse octree data structure, called a sparse voxel octree. An octree is a tree data structure in which each internal node has exactly eight children. By adopting this data structure, each voxel is subdivided into eight sub-voxels of equal size, each of which may be further subdivided into eight sub-voxels, and so on. Since the data structure is a tree, a depth-first search can be used, so the time to access a particular voxel can be significantly reduced compared to a conventional voxel model. Moreover, since the data structure is sparse, nodes whose corresponding space is empty are simply omitted, which reduces the memory required to store the voxel model. A sketch of both properties follows below.
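The two properties just described, depth-first access and the omission of empty space, can be seen in the following sketch, which reuses the hypothetical OctreeNode above (insert_voxel and is_occupied are illustrative names, not the patent's API):

```python
def insert_voxel(node, point, depth, max_depth):
    # Insert an occupied point, creating nodes only along its path;
    # sibling octants stay None, which is where the sparsity comes from.
    if depth == max_depth:
        return
    i = node.child_index(point)
    if node.children[i] is None:
        q = node.half_size / 2
        c = tuple(node.center[a] + (q if (i >> a) & 1 else -q) for a in range(3))
        node.children[i] = OctreeNode(center=c, half_size=q)
    insert_voxel(node.children[i], point, depth + 1, max_depth)

def is_occupied(node, point, depth, max_depth):
    # Depth-first descent to the voxel containing `point`: O(depth)
    # instead of scanning a dense 3D grid.
    if depth == max_depth:
        return True
    child = node.children[node.child_index(point)]
    return child is not None and is_occupied(child, point, depth + 1, max_depth)

# Usage: voxelize two occupied points of a 10-unit cube to depth 5.
root = OctreeNode(center=(0.0, 0.0, 0.0), half_size=5.0)
insert_voxel(root, (1.0, 2.0, -3.0), 0, 5)
insert_voxel(root, (-4.0, 0.5, 2.0), 0, 5)
print(is_occupied(root, (1.0, 2.0, -3.0), 0, 5))  # True
print(is_occupied(root, (4.0, 4.0, 4.0), 0, 5))   # False: that octant was never created
```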
After the model is voxelized, the method may then provide a user interface at 306 to receive optimization constraints from the user. For example, the user may define a region of interest (ROI) and recording device or camera placement constraints, such as candidate mounting points, possible camera orientations, and fields of view (FOV). In another embodiment, the user may add a constraint on the maximum depth of sensing coverage to ensure a minimum resolution of the monitored space. The user may also define a stopping criterion indicating when the optimization stops, e.g., a number of iterations to run or a target objective value to reach. One way to represent such constraints is sketched below.
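A minimal sketch of such user-supplied constraints follows; all field names are illustrative assumptions, since the patent does not prescribe a schema:

```python
from dataclasses import dataclass
from typing import List, Tuple

Point = Tuple[float, float, float]

@dataclass
class PlacementConstraints:
    roi: List[Point]                    # points or voxels marking the region of interest
    mount_points: List[Point]           # candidate camera positions
    pan_range_deg: Tuple[float, float]  # allowed camera orientations
    fov_deg: float                      # field of view
    max_sensing_depth: float            # cap on coverage depth to keep a minimum resolution

@dataclass
class StoppingCriteria:
    max_iterations: int = 1000          # stop after this many iterations...
    target_score: float = 0.95          # ...or once the objective reaches this value
```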
In one embodiment, to converge to the optimal configuration faster, the user may initialize the camera positions and orientations before the optimization begins at 308. At 310, method 300 may determine whether the stopping criterion is met. If so, at 314, the optimization result is returned. If not, at 312, the results are recorded, an update to each parameter is determined, and the loop continues.
In another embodiment, the user may also define an objective that balances maximum coverage against the number of cameras. Consider, for example, the case where a user is searching for maximum coverage with a given number of cameras. As one example, the optimization results show 60% ROI coverage with 5 cameras, 85% with 6 cameras, 95% with 7 cameras, 98% with 8 cameras, and 100% with 9 or more cameras. To guarantee full coverage, 9 cameras must be installed. However, the user may instead choose to optimize the efficiency of each camera, in which case 85% with 6 cameras (14.17% coverage per camera) may be chosen as the best configuration, rather than 100% with 9 cameras (11.11% per camera), as the quick check below illustrates.
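The per-camera efficiency figures quoted above follow directly from dividing coverage by camera count:

```python
# Reproduce the example: coverage divided by camera count.
coverage = {5: 0.60, 6: 0.85, 7: 0.95, 8: 0.98, 9: 1.00}
for n, c in coverage.items():
    print(f"{n} cameras: {c:.0%} coverage, {c / n:.2%} per camera")
# 6 cameras score 14.17% per camera versus 11.11% for 9 cameras,
# so a maximum-efficiency objective selects the 6-camera configuration.
```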
In one embodiment, the optimization may then be carried out with various mathematical models and computational algorithms. For example, the optimization may use a Monte Carlo method that randomly samples a large set of configurations and keeps the setting that scores highest on the user-defined objective; the stopping criterion may be defined as a number of trials. Statistically, by the law of large numbers, if the number of trials is large enough, the best configuration of the cameras will eventually be found. This, however, demands substantial computing power and time. A sketch of such a sampler follows.
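In the minimal sketch below, score stands in for the user-defined objective (e.g., ROI coverage evaluated against the sparse voxel octree) and is an illustrative placeholder, not the patent's API:

```python
import random

def monte_carlo_placement(mount_points, n_cameras, score, n_trials=10_000):
    # Randomly sample configurations and keep the best-scoring one.
    best_cfg, best_val = None, float("-inf")
    for _ in range(n_trials):                # stopping criterion: trial count
        cfg = [(random.choice(mount_points),  # random candidate position
                random.uniform(0.0, 360.0))   # random pan angle in degrees
               for _ in range(n_cameras)]
        val = score(cfg)
        if val > best_val:
            best_cfg, best_val = cfg, val
    return best_cfg, best_val
```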
In one embodiment, many publicly known techniques (such as simulated annealing, Markov chain Monte Carlo, or the Metropolis-Hastings algorithm) can alleviate this cost. It is worth noting that the cost is also reduced by storing the voxel model as a sparse voxel octree, since access time is reduced as described above.
As another example, the optimization may additionally or alternatively use Bayesian optimization. It consists of a Bayesian statistical model of the objective function and an acquisition function used to decide on the next sampling point. The optimization may start by taking several random, evenly distributed samples of all variables and observing the performance of these initial settings. From this data, a posterior probability distribution and a corresponding acquisition function can be computed, so that the next sample can be selected by maximizing the acquisition function, or chosen manually by the user. Once the performance of the newly selected samples has been observed, an updated posterior distribution and acquisition function can be computed. The best configuration is eventually found by iterating these steps until a stopping criterion is reached, as in the sketch below.
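A minimal sketch of this loop, using scikit-optimize's gp_minimize as off-the-shelf surrogate-plus-acquisition machinery; both the library choice and the toy objective are assumptions, since the patent names neither:

```python
from skopt import gp_minimize  # pip install scikit-optimize

def coverage_score(x, y, z, pan_deg):
    # Toy stand-in for the real objective (e.g., ROI coverage computed
    # against the sparse voxel octree); peaks at (25, 15, 3.0, 180).
    return -((x - 25) ** 2 + (y - 15) ** 2
             + 100 * (z - 3.0) ** 2 + 0.01 * (pan_deg - 180) ** 2)

def negative_coverage(params):
    return -coverage_score(*params)  # gp_minimize minimizes, so negate

result = gp_minimize(
    negative_coverage,
    dimensions=[(0.0, 50.0), (0.0, 30.0), (2.5, 3.5), (0.0, 360.0)],
    n_initial_points=10,  # the evenly distributed random initial samples
    n_calls=60,           # stopping criterion: total evaluation budget
    random_state=0,
)
print("best configuration:", result.x, "coverage score:", -result.fun)
```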
In another embodiment, a machine learning algorithm may be applied, based on previous recordings or models of the monitored or surveilled volumes, so that the optimization can be applied according to the size of the volume and/or the number of recording devices or cameras contemplated.
Fig. 4 may be a high-level illustration of portable computing device 801 of fig. 5 in communication with remote computing device 841, but application programs may be stored and accessed in a variety of ways. Further, applications may be obtained in various ways, such as from an application store, from a website, from an in-store Wi-Fi system, and so forth. Various versions of an application may exist to take advantage of different computing devices, different languages, and different API platforms.
In one embodiment, the portable computing device 801 may be a mobile device 108 that operates using a portable power source 855 (such as a battery). The portable computing device 801 may also have a display 802, which may or may not be a touch-sensitive display. More specifically, the display 802 may have a capacitive sensor that may be used to provide input data to the portable computing device 801, for example. In other embodiments, an input pad 804 (such as an arrow, a scroll wheel, a keyboard, etc.) may be used to provide input to the portable computing device 801. In addition, the portable computing device 801 may have a microphone 806 that may accept and store verbal data, a camera 808 that accepts images, and a speaker 810 that transmits sound.
The portable computing device 801 may be capable of communicating with a computing device 841 or multiple computing devices 841 that make up a cloud of computing devices 811. The portable computing device 801 may be capable of communicating in a variety of ways. In some embodiments, the communication may be wired, such as through an Ethernet cable, a USB cable, or an RJ6 cable. In other embodiments, the communication may be wireless, such as by Wi-Fi (802.11 standard), Bluetooth, cellular communication, or near field communication. The communication may be direct to the computing device 841, or may pass through a network or communication module 880 over a communication network such as a cellular service, the Internet, a private network, Bluetooth, or the like.
Fig. 4 may be a sample portable computing device 801 that is physically configured as part of the system. The portable computing device 801 may have a processor 850 physically configured according to computer-executable instructions. It may have a portable power source 855, such as a rechargeable battery. It may also have a sound and video module 860 that aids in displaying video and sound, and may be turned off when not in use to conserve power and battery life. The portable computing device 801 may also have non-volatile memory 870 and volatile memory 865. The network or communication module 880 may have GPS, Bluetooth, NFC, cellular, or other communication capabilities. In one embodiment, some or all of the network or communication capabilities may be separate circuits or may be part of processor 850. There may also be an input/output bus 875 that shuttles data to and from the various user input devices, such as microphone 806, camera 808, and other inputs such as input pad 804, display 802, speaker 810, and the like. It may also control communication with the network through wireless or wired devices. Of course, this is but one embodiment of a portable computing device 801, and the number and type of portable computing devices 801 are limited only by imagination.
The physical elements making up remote computing device 841 may be further illustrated in fig. 5. At a high level, computing device 841 may include digital storage such as magnetic disks, optical disks, flash memory, nonvolatile storage, and the like. The structured data may be stored in a digital storage device, such as a database. The server 841 may have a processor 1000 that is physically configured according to computer-executable instructions. It may also have a sound and video module 1005 that aids in displaying video and sound, and may be turned off when not in use to save power and battery life. The server 841 may also have volatile memory 1010 and nonvolatile memory 1015.
Database 1025 may be stored in memory 1010 or 1015, or may be separate. Database 1025 may also be part of a cloud of computing devices 841 and may be stored in a distributed manner across multiple computing devices 841. There may also be an input/output bus 1020 that shuttles data to and from various user input devices, such as microphone 806, camera 808, input pad 804, display 802, and speaker 810, or other peripheral devices. The input/output bus 1020 may also interface with a network or communication module 1030 to control communication with other devices or computer networks via wireless or wired devices. In some embodiments, the application may be on the local computing device 801, and in other embodiments, the application may be on the remote computing device 841. Of course, this is but one embodiment of server 841, and the number and type of servers 841 are limited only by imagination.
The user devices, computers, and servers (e.g., 801 or 841) described herein may be computers that have, among other elements, a microprocessor; volatile and nonvolatile memory; one or more mass storage devices (e.g., a hard disk); various user input devices (such as a mouse, keyboard, or microphone); and a video display system. The user devices, computers, and servers described herein may run any of a number of operating systems, and it is contemplated that any suitable operating system may be used with the present application. The servers may be a cluster of web servers, each supported by a load balancer that decides which server in the cluster should handle a request based on the current request load of the available server(s).
The user devices, computers, and servers described herein may communicate via a network, including the Internet, a Wide Area Network (WAN), a Local Area Network (LAN), other computer networks (now known or later devised), and/or any combination of the above. Those skilled in the art having the present specification, drawings, and claims before them will appreciate that the network may connect the various components through any combination of wired and wireless channels, including copper, fiber optics, microwave and other forms of radio frequency, and electrical and/or optical communication technologies. It should also be understood that any network may be connected to any other network in different ways. The interconnections between computers and servers in a system are examples; any of the devices described herein may communicate with any other device via one or more networks.
Example embodiments may include additional devices and networks beyond those shown. Further, functions described as being performed by one device may be allocated and performed by two or more devices. Multiple devices may also be combined into a single device that may perform the functions of the combined device.
The various participants and elements described herein may operate one or more computer devices to facilitate the functionality described herein. Any of the elements in the figures described above (including any server, user device, or database) may use any suitable number of subsystems to facilitate the functions described herein.
Any of the software components or functions described in this application may be implemented as software code or computer-readable instructions executed by at least one processor using any suitable computer language, such as, for example, Java, C++, or Perl, using, for example, conventional or object-oriented techniques.
The software code may be stored as a series of instructions or commands on a non-transitory computer readable medium, such as Random Access Memory (RAM), read Only Memory (ROM), magnetic media (such as a hard or floppy disk), or optical media (such as a CD-ROM). Any such computer-readable medium may reside on or within a single computing device and may reside on or within different computing devices within a system or network.
It will be appreciated that the application as described above may be implemented in the form of control logic using computer software in a modular or integrated manner. Based on the disclosure and teachings provided herein, one of ordinary skill in the art will know and appreciate other ways and/or methods to implement the present application using hardware, software, or a combination of hardware and software.
The above description is illustrative and not restrictive. Many variations of the embodiments will become apparent to those skilled in the art upon review of the present disclosure. Accordingly, the scope of the embodiments should be determined not with reference to the above description, but instead with reference to the appended claims, along with their full scope of equivalents.
One or more features from any embodiment may be combined with one or more features of any other embodiment without departing from the scope of the embodiments. Recitation of "a", "an", or "the" is intended to mean "one or more", unless expressly specified to the contrary. Recitation of "and/or" is intended to mean the maximum inclusion of that term, unless specifically indicated to the contrary.
One or more elements of the present system may be claimed as a means for performing a particular function. Where such means-plus-function elements are used to describe certain elements of the claimed systems, those of ordinary skill in the art having the present specification, figures, and claims before them will understand that the corresponding structure comprises a computer, processor, or microprocessor (as the case may be) programmed to perform the specifically recited functions, and/or one or more algorithms implementing the functions recited in the claims or steps above. As will be appreciated by one of ordinary skill in the art, an algorithm may be expressed in this disclosure as a mathematical formula, a flow chart, a narrative, and/or in any other manner that provides one of ordinary skill with sufficient structure to implement the described process and its equivalents.
While this disclosure may be embodied in many different forms, the drawings and discussion are presented with the understanding that the present disclosure is an explanation of the principles of one or more applications and is not intended to limit any one embodiment to the one illustrated.
The present disclosure provides a solution to the long-felt needs described above. In particular, the systems and methods according to aspects of the present application overcome the shortcomings of the prior art by converting the input 3D model into a sparse voxel octree, so that camera placement and configuration can be optimized even for large-scale inputs within practical memory and time budgets. These features enable better coverage analysis and the detection of blind spots in the monitored space.
Further advantages and modifications of the above-described system and method will readily occur to those skilled in the art.
The disclosure, in its broader aspects, is therefore not limited to the specific details, the representative systems and methods, and illustrative examples shown and described above. Various modifications and changes may be made to the above description without departing from the scope or spirit of the disclosure, and it is intended that the disclosure encompass all such modifications and changes as fall within the scope of the appended claims and their equivalents.

Claims (20)

1. A closed circuit monitoring system, comprising:
one or more recording devices available for use;
a processor configured to execute computer-executable instructions for:
receiving input from the one or more recording devices, wherein the input comprises image or video information defining a target volume;
converting the image or video information into a three-dimensional (3D) model;
constructing a cube representation of the 3D model, wherein the cube representation comprises a voxel model;
storing information of the voxel model in the form of a sparse voxel octree in a data storage accessible to the processor;
determining a coverage volume within the target volume; and
determining the number of the one or more recording devices required for the coverage volume.
2. The closed circuit monitoring system of claim 1, further comprising: a Graphical User Interface (GUI) that receives input from a user to configure constraints on the model; and wherein the processor is configured to determine the coverage volume and the number of the one or more recording devices from the received input.
3. The closed-circuit monitoring system of claim 1, wherein the 3D model comprises one or more Building Information Models (BIMs).
4. The closed circuit monitoring system of claim 1, wherein the target volume comprises a facility or space to be monitored or surveilled.
5. The closed circuit monitoring system of claim 2, wherein the constraints received from the user include at least one of: a region of interest (ROI) and placement constraints.
6. The closed-circuit monitoring system of claim 5, wherein the placement constraints include one or more of: a location of the recording device, an orientation of the recording device, a field of view (FOV) of the recording device.
7. The closed circuit monitoring system of claim 2, wherein the GUI further receives a stopping criteria input from the user, and wherein the processor is configured to apply the stopping criteria input in determining the coverage volume and the number of the one or more recording devices.
8. A computer-implemented method for processing a closed circuit monitoring system, the method comprising:
receiving input from one or more recording devices, wherein the input comprises image or video information defining a target volume;
converting the image or video information into a three-dimensional (3D) model;
constructing a cube representation of the 3D model, wherein the cube representation comprises a voxel model;
storing information of the voxel model in the form of a sparse voxel octree in a data storage accessible to the processor;
determining a coverage volume within the target volume;
determining the number of the one or more recording devices required for the coverage volume; and
receiving input from a user via a Graphical User Interface (GUI) to configure constraints on the model.
9. The computer-implemented method of claim 8, wherein determining the coverage volume and the number of the one or more recording devices is based on the received input.
10. The computer-implemented method of claim 8, wherein the 3D model comprises one or more Building Information Models (BIMs).
11. The computer-implemented method of claim 8, wherein the target volume comprises a facility or space to be monitored or surveilled.
12. The computer-implemented method of claim 8, wherein the constraints received from the user comprise at least one of: a region of interest (ROI) and placement constraints.
13. The computer-implemented method of claim 12, wherein the placement constraints include one or more of: a location of the recording device, an orientation of the recording device, a field of view (FOV) of the recording device.
14. The computer-implemented method of claim 8, further comprising: receiving a stopping criteria input from the user; and applying the stopping criteria input to determine the coverage volume and the number of the one or more recording devices.
15. A tangible, non-transitory, computer-readable medium having stored thereon computer-executable instructions for processing a closed circuit monitoring system, wherein the computer-executable instructions comprise:
receiving input from one or more recording devices, wherein the input comprises image or video information defining a target volume, wherein the target volume comprises a facility or space to be monitored or surveilled;
converting the image or video information into a three-dimensional (3D) model;
constructing a cube representation of the 3D model, wherein the cube representation comprises a voxel model;
storing information of the voxel model in the form of a sparse voxel octree in a data storage accessible to the processor;
determining a coverage volume within the target volume;
determining the number of the one or more recording devices required for the coverage volume; and
receiving input from a user via a Graphical User Interface (GUI) to configure constraints on the model.
16. The tangible, non-transitory, computer-readable medium of claim 15, wherein determining the coverage volume and the number of the one or more recording devices is based on the received input.
17. The tangible, non-transitory computer-readable medium of claim 15, wherein the 3D model comprises one or more Building Information Models (BIMs).
18. The tangible, non-transitory, computer-readable medium of claim 15, wherein the constraints received from the user comprise at least one of: a region of interest (ROI) and placement constraints.
19. The tangible, non-transitory, computer-readable medium of claim 18, wherein the placement constraints include one or more of: a location of the recording device, an orientation of the recording device, a field of view (FOV) of the recording device.
20. The tangible, non-transitory computer-readable medium of claim 15, wherein the computer-executable instructions further comprise: receiving a stopping criteria input from the user; and applying the stopping criteria input to determine the coverage volume and the number of the one or more recording devices.
CN202080108316.6A 2020-12-06 2020-12-06 System and method for optimal camera placement and configuration using sparse voxel octree Pending CN117063462A (en)

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
PCT/IB2020/061568 WO2022118064A1 (en) 2020-12-06 2020-12-06 System and method of optimal cameras placement and configuration using sparse voxel octree

Publications (1)

Publication Number Publication Date
CN117063462A (en) 2023-11-14

Family

ID=81853859

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202080108316.6A Pending CN117063462A (en) 2020-12-06 2020-12-06 System and method for optimal camera placement and configuration using sparse voxel octree

Country Status (2)

Country Link
CN (1) CN117063462A (en)
WO (1) WO2022118064A1 (en)


Also Published As

Publication number Publication date
WO2022118064A1 (en) 2022-06-09


Legal Events

Date Code Title Description
PB01 Publication