WO2023045726A1 - A camera parameter determination method, apparatus, electronic device and program product - Google Patents

A camera parameter determination method, apparatus, electronic device and program product

Info

Publication number
WO2023045726A1
Authority
WO
WIPO (PCT)
Prior art keywords
virtual
camera
parameters
configuration parameters
scene
Prior art date
Application number
PCT/CN2022/116506
Other languages
English (en)
French (fr)
Inventor
刘春辉
陈俊维
冯中坚
Original Assignee
杭州海康威视数字技术股份有限公司
Priority date: 2021-09-27 (claimed from Chinese patent application No. 202111138970.4)
Filing date
Publication date
Application filed by 杭州海康威视数字技术股份有限公司
Publication of WO2023045726A1 publication Critical patent/WO2023045726A1/zh

Classifications

    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04N: PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N 23/00: Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N 23/60: Control of cameras or camera modules
    • H04N 23/64: Computer-aided capture of images, e.g. transfer from script file into camera, check of taken image quality, advice or proposal for image composition or decision on when to take image
    • H04N 23/90: Arrangement of cameras or camera modules, e.g. multiple cameras in TV studios or sports stadiums
    • H04N 7/00: Television systems
    • H04N 7/18: Closed-circuit television [CCTV] systems, i.e. systems in which the video signal is not broadcast

Definitions

  • the present application relates to the field of computer technology, in particular to a camera parameter determination method, device, electronic equipment and program product.
  • In the related art, the configuration parameters of the camera to be deployed, such as the horizontal field of view, the vertical field of view and the focal length, are usually determined manually based on experience, and the deployment parameters of the camera to be deployed, such as the installation position and installation angle, are also determined manually based on experience.
  • the purpose of the embodiments of the present application is to provide a camera parameter determination method, device, electronic equipment and program product, so as to improve the accuracy of the determined camera parameters.
  • The specific technical solutions are as follows:
  • the embodiment of the present application provides a method for determining camera parameters, the method including:
  • the virtual deployment parameters and/or virtual configuration parameters of the virtual camera are determined as reference deployment parameters and/or reference configuration parameters of the cameras to be deployed in the target scene .
  • In an embodiment, simulating deployment of the virtual camera in the scene model according to the virtual parameters of the virtual camera to obtain a simulated monitoring image includes: determining the field-of-view direction of the virtual camera according to its virtual deployment parameters, projecting the scene model along the field-of-view direction to obtain a projected image, determining the field-of-view range of the virtual camera according to its virtual deployment parameters and virtual configuration parameters, and cropping the projected image according to the field-of-view range; or determining the field-of-view range and field-of-view direction of the virtual camera, segmenting the scene model along the field-of-view range to obtain a local scene model, and projecting the local scene model along the field-of-view direction to obtain the simulated monitoring image corresponding to the virtual camera.
  • the method further includes:
  • In an embodiment, after the virtual configuration parameters are determined as the reference configuration parameters, a recommended camera type whose configuration parameters match the reference configuration parameters is determined from camera types of different models obtained in advance.
  • the method further includes:
  • the method further includes:
  • the recommended camera type is determined to be the target camera type.
  • the scene model and the status of the virtual camera deployed in the scene model are displayed in the first window of the first interface
  • the simulated monitoring image is displayed in the second window of the first interface
  • the method further includes:
  • the second interface further includes a fourth window, the fourth window displays the scene model and the deployment and control status of the virtual camera in the scene model.
  • the method also includes:
  • In a case where the simulated monitoring image does not meet the monitoring requirements, the virtual deployment parameters and/or virtual configuration parameters of the virtual camera are adjusted, and the step of simulating deployment of the virtual camera in the scene model according to the virtual parameters of the virtual camera to obtain a simulated monitoring image is performed again, until the simulated monitoring image meets the monitoring requirements.
  • Multiple virtual cameras are deployed in the scene model, and the method also includes:
  • the scene model obtained for simulating the target scene includes:
  • a 3D engine is used to construct a 3D model to obtain a scene model.
  • the embodiment of the present application provides a camera parameter determination device, the device includes:
  • a scene model obtaining module configured to obtain a scene model for simulating a target scene
  • a virtual camera deployment module configured to deploy a virtual camera in the scene model
  • An image acquisition module configured to simulate the deployment and control of the virtual camera in the scene model according to the virtual parameters of the virtual camera, and obtain a simulated monitoring image, wherein the virtual parameters include virtual deployment parameters and virtual configuration parameters;
  • a parameter determination module configured to determine the virtual deployment parameters and/or virtual configuration parameters of the virtual camera as the reference deployment parameters of the cameras to be deployed in the target scene when the simulated monitoring image meets the monitoring requirements and/or refer to configuration parameters.
  • the image acquisition module is specifically used for:
  • the local scene model is projected along the field of view direction to obtain a simulated monitoring image corresponding to the virtual camera.
  • the device further includes a type recommendation module, configured to:
  • the device further includes a parameter updating module, configured to:
  • the parameter update module is also used for:
  • the recommended camera type is determined to be the target camera type.
  • the scene model and the status of the virtual camera deployed in the scene model are displayed in the first window of the first interface
  • the simulated monitoring image is displayed in the second window of the first interface
  • the device also includes an interface switching module for:
  • After the virtual configuration parameter of the virtual camera is determined as the reference configuration parameter of the camera to be deployed in the target scene, the display switches from the first interface to a second interface, and recommended camera types whose configuration parameters match the reference configuration parameters are displayed in a third window of the second interface;
  • the second interface further includes a fourth window, the fourth window displays the scene model and the deployment and control status of the virtual camera in the scene model.
  • the device further includes a parameter adjustment module, configured to:
  • the device also includes a blind area elimination module for:
  • determining monitoring blind areas between the field-of-view ranges of different virtual cameras; adjusting the virtual deployment parameters and/or virtual configuration parameters of at least one virtual camera among the plurality of virtual cameras according to the determined monitoring blind areas; and/or
  • the scene model obtaining module is specifically used for:
  • a 3D engine is used to construct a 3D model to obtain a scene model.
  • the embodiment of the present application further provides an electronic device, including a processor and a memory;
  • the processor is used to implement the method described in any one of the first aspect when executing the program stored in the memory.
  • the embodiment of the present application further provides a computer program product containing instructions, which when run on a computer, causes the computer to execute the method described in any one of the first aspect.
  • An embodiment of the present application also provides a computer-readable storage medium, where a computer program is stored in the computer-readable storage medium, and when the computer program is executed by a processor, the method described in any one of the first aspect is implemented.
  • With the above solution, a scene model used to simulate the target scene can be obtained, and a virtual camera is deployed in the scene model; deployment of the virtual camera in the scene model is simulated according to the virtual parameters of the virtual camera to obtain a simulated monitoring image, wherein the virtual parameters include virtual deployment parameters and virtual configuration parameters; and, when the simulated monitoring image meets the monitoring requirements, the virtual deployment parameters and/or virtual configuration parameters of the virtual camera are determined as the reference deployment parameters and/or reference configuration parameters of the camera to be deployed in the target scene. In this way, the virtual camera can be deployed in the scene model and the monitoring image can be obtained by simulation based on the virtual parameters of the virtual camera; according to the simulated monitoring image, it can be judged whether the virtual parameters of the virtual camera meet the monitoring requirements, and if so, the virtual deployment parameters and/or virtual configuration parameters are determined as the reference deployment parameters and/or reference configuration parameters of the camera to be deployed in the target scene. The deployment parameters and/or configuration parameters of the camera actually deployed can subsequently be determined based on these reference parameters, ensuring that the determined camera parameters match the real target scene. It can be seen that the accuracy of the determined camera parameters can be improved by applying the solution provided by the embodiments of the present application.
  • FIG. 1 is a schematic flowchart of a camera parameter determination method provided in an embodiment of the present application
  • FIG. 2 is a schematic diagram of a first interface provided by an embodiment of the present application.
  • FIG. 3 is a schematic diagram of another first interface provided by an embodiment of the present application.
  • FIG. 4 is a schematic diagram of a second interface provided by an embodiment of the present application.
  • FIG. 5 is a schematic diagram of a first viewing mode provided by an embodiment of the present application.
  • FIG. 6 is a schematic flowchart of another camera parameter determination method provided in the embodiment of the present application.
  • FIG. 7 is a schematic structural diagram of a camera parameter determination device provided in an embodiment of the present application.
  • FIG. 8a and FIG. 8b are schematic structural diagrams of an electronic device provided by an embodiment of the present application.
  • embodiments of the present application provide a camera parameter determination method, device, electronic equipment, and program product, which will be introduced respectively below.
  • the embodiment of the present application provides a camera parameter determination method, which can be applied to computers, servers, mobile phones and other electronic devices, including:
  • the virtual deployment parameters and/or virtual configuration parameters of the virtual camera are determined as reference deployment parameters and/or reference configuration parameters of the cameras to be deployed in the target scene.
  • In this way, the virtual camera can be deployed in the scene model, and the monitoring image can be obtained by simulation based on the virtual parameters of the virtual camera. According to the simulated monitoring image, it can be judged whether the virtual parameters of the virtual camera meet the monitoring requirements; if so, the virtual deployment parameters and/or virtual configuration parameters are determined as the reference deployment parameters and/or reference configuration parameters of the camera to be deployed in the target scene, and the deployment parameters and/or configuration parameters of the camera actually deployed can subsequently be determined based on these reference parameters, so as to ensure that the determined camera parameters match the real target scene. It can be seen that, by applying the solutions provided by the foregoing embodiments, the accuracy of the determined camera parameters can be improved.
  • FIG. 1 is a schematic flow chart of a camera parameter determination method provided in an embodiment of the present application, and the method includes the following steps S101-S103:
  • the above-mentioned target scene refers to: a real scene where the camera is to be deployed, for example, the above-mentioned target scene may be any of the following scenes: a residential area, a construction site, a traffic intersection, a factory, a shopping mall, and the like.
  • The aforementioned virtual camera refers to a model for simulating a real camera, and the aforementioned virtual camera may include one or more of the following virtual cameras: bullet camera, dome camera, wide-angle camera, fisheye camera, and the like.
  • a 3D modeling technique may be used to obtain a 3D model of the target scene as a scene model, and deploy one or more virtual cameras in the above scene model, thereby simulating the deployment of real cameras in the actual target scene.
  • the simulated virtual camera is deployed and controlled in the scene model to obtain a simulated monitoring image.
  • the virtual parameters include virtual deployment parameters and virtual configuration parameters.
  • The above-mentioned virtual deployment parameters refer to at least one of the following parameters: the installation position, height, orientation, installation method, etc. of the virtual camera in the scene model, and the above-mentioned installation method includes any of the following methods: wall mounting, ceiling mounting (hoisting), pole mounting, and the like.
  • the aforementioned virtual configuration parameters refer to the parameters of the camera itself, including at least one of the following parameters: focal length, monitoring mode, vertical viewing angle, horizontal viewing angle, and the like.
  • the aforementioned monitoring modes include at least a corridor mode and a normal mode.
  • The above-mentioned corridor mode refers to: rotating the image collected by the camera by 90° to increase the monitoring range of the camera in the vertical direction. For example, if an image collected in normal mode has a width and height of 1920*1080, the width and height of the image collected in corridor mode become 1080*1920.
  • The above-mentioned normal mode refers to: the image collected by the camera is not rotated.
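  • As an illustration of the rotation just described, a minimal NumPy sketch (the 1920×1080 resolution is only the example above, not a parameter defined by the application):

```python
import numpy as np

frame = np.zeros((1080, 1920, 3), dtype=np.uint8)   # normal mode: height 1080, width 1920
corridor_frame = np.rot90(frame)                     # corridor mode: rotate the frame by 90 degrees
print(corridor_frame.shape[:2])                      # (1920, 1080): taller coverage in the vertical direction
```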
  • the aforementioned simulated monitoring image refers to: the monitoring image in the scene model that can be captured by the simulated virtual camera.
  • Based on the configuration parameters of the virtual camera itself and its deployment parameters in the scene model, it is possible to simulate the deployment of the virtual camera in the scene model and thereby simulate the image collected by the virtual camera as the simulated monitoring image, which reflects the monitoring range of the virtual camera in the scene model.
  • The above-mentioned monitoring requirement refers to a requirement for the monitoring range of the camera, for example, at least one of the following requirements: the target object is located within the monitoring range, the target passage is located within the monitoring range, etc. The above-mentioned target object can be an object that needs to be monitored, for example, at least one of the following objects: cash registers, turnstiles, valuables counters, etc.
  • The above-mentioned target passage can be a passage that the user pays attention to and that needs to be monitored, such as at least one of the following passages: fire exits, passenger passages, and the like.
  • The simulated monitoring image can reflect the monitoring range of the virtual camera. According to the above simulated monitoring image, it can be judged whether the monitoring range of the virtual camera meets the monitoring requirements. If so, it means that the virtual deployment parameters and virtual configuration parameters of the current virtual camera are applicable to the scene model. Therefore, on the one hand, the virtual deployment parameters of the current virtual camera can be determined as the reference deployment parameters of the camera to be deployed in the target scene, so that the camera selected subsequently can be deployed based on the above reference deployment parameters;
  • on the other hand, the virtual configuration parameters of the current virtual camera can be determined as reference configuration parameters of the camera to be deployed in the target scene, so that the type of the camera to be deployed in the target scene can be selected subsequently based on the reference configuration parameters.
  • the virtual deployment parameters can be used to guide the actual installation of the camera in the target scene, ensuring that the camera meets the monitoring requirements after installation.
  • the virtual configuration parameters of the virtual cameras can be determined as the reference configuration parameters of the cameras to be deployed in the target scene under the condition that the simulated monitoring images meet the monitoring requirements.
  • Subsequently, camera recommendation can be performed according to the above reference configuration parameters. For example, in a camera procurement scenario, in order to recommend the type of camera to be purchased to the user, the reference configuration parameters may be determined according to the above solution, so that the user may purchase a camera matching the reference configuration parameters.
  • the virtual deployment parameters of the virtual camera can be determined as the reference deployment parameters of the cameras to be deployed in the target scene under the condition that the simulated monitoring images meet the monitoring requirements.
  • the camera to be deployed can be deployed in the scene according to the above reference deployment parameters.
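  • As a rough, non-authoritative sketch of the overall flow described above, the following Python outline shows how reference parameters might be derived from the simulation loop; all names (DeployParams, ConfigParams, simulate_monitoring_image, meets_requirements, adjust) are hypothetical placeholders, not anything defined by the application:

```python
from dataclasses import dataclass

@dataclass
class DeployParams:          # virtual deployment parameters (position, orientation, mounting, ...)
    position: tuple          # (x, y, z) in scene-model coordinates
    pan_deg: float
    tilt_deg: float
    mount: str               # e.g. "wall", "ceiling", "pole"

@dataclass
class ConfigParams:          # virtual configuration parameters of the camera itself
    focal_length_mm: float
    h_fov_deg: float
    v_fov_deg: float
    mode: str                # "normal" or "corridor"

def determine_reference_parameters(scene_model, deploy: DeployParams, config: ConfigParams,
                                   simulate_monitoring_image, meets_requirements, adjust):
    """S101-S103 sketch: simulate the virtual camera in the scene model and, once the
    simulated monitoring image satisfies the monitoring requirement, return the current
    virtual parameters as reference parameters for the camera to be deployed."""
    while True:
        image = simulate_monitoring_image(scene_model, deploy, config)   # S102
        if meets_requirements(image):                                    # S103 (may be a user judgment)
            return deploy, config            # reference deployment / configuration parameters
        deploy, config = adjust(deploy, config)   # otherwise adjust and re-simulate
```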
  • When judging whether the simulated monitoring image corresponding to the virtual camera meets the monitoring requirements, the above-mentioned simulated monitoring image can be displayed, and an instruction input by the user through an external input device can be received; if the instruction indicates that the user considers the simulated monitoring image to meet the monitoring requirements, it is determined that the simulated monitoring image meets the monitoring requirements.
  • the above-mentioned external input device may be any one of the following devices: keyboard, mouse, touch pad, microphone.
  • the above-mentioned target object is a preset object to be monitored, for example, the above-mentioned target object may be an entrance, a cash register, a shelf, and the like.
  • In the above solution, the scene model for simulating the target scene can be obtained, and the virtual camera is deployed in the scene model; deployment of the virtual camera in the scene model is simulated according to the virtual parameters of the virtual camera to obtain a simulated monitoring image, wherein the virtual parameters include virtual deployment parameters and virtual configuration parameters; when the simulated monitoring image meets the monitoring requirements, the virtual deployment parameters and/or virtual configuration parameters of the virtual camera are determined as the reference deployment parameters and/or reference configuration parameters of the camera to be deployed in the target scene.
  • In this way, the virtual camera can be deployed in the scene model, and the monitoring image can be obtained by simulation based on the virtual parameters of the virtual camera. According to the simulated monitoring image, it can be judged whether the virtual parameters meet the monitoring requirements; if so, the virtual deployment parameters and/or virtual configuration parameters are determined as the reference deployment parameters and/or reference configuration parameters of the camera to be deployed in the target scene, and the deployment parameters and/or configuration parameters of the camera actually deployed can subsequently be determined based on these reference parameters, so as to ensure that the determined camera parameters match the real target scene. It can be seen that, by applying the solutions provided by the foregoing embodiments, the accuracy of the determined camera parameters can be improved.
  • the simulated monitoring image can reflect the monitoring range of the virtual camera. According to the above simulated monitoring image, it can be judged whether the monitoring range of the virtual camera meets the monitoring requirements. If not, the virtual deployment parameters and/or virtual configuration of the virtual camera can be adjusted. Parameters, based on the adjusted parameters, reacquire the simulated monitoring image corresponding to the virtual camera after parameter adjustment, and judge again whether the simulated monitoring image of the virtual camera after parameter adjustment meets the monitoring requirements, until the obtained simulated monitoring image meets the monitoring requirements.
  • an adjustment instruction input by the user through an external input device may be received, and the virtual deployment parameters and/or virtual configuration parameters of the virtual camera may be adjusted according to the adjustment instruction.
  • Alternatively, the virtual deployment parameters and/or virtual configuration parameters of the virtual camera may be adjusted according to a preset adjustment step; for example, the parameter to be adjusted may be the focal length, and the preset adjustment step may be 1 meter.
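  • A minimal sketch of such step-based adjustment, assuming hypothetical simulate and meets_requirements helpers; the 1-meter step mirrors the example above:

```python
def adjust_until_satisfied(param_value, step, max_value, simulate, meets_requirements):
    """Increase one virtual parameter (e.g. the focal length) by a fixed step and
    re-simulate until the monitoring requirement is met or the search range is exhausted."""
    while param_value <= max_value:
        if meets_requirements(simulate(param_value)):
            return param_value
        param_value += step          # preset adjustment step, e.g. 1 meter
    return None                      # no value in the range satisfies the requirement

# Example (illustrative): focal length adjusted in 1-meter steps up to 50 meters
# best = adjust_until_satisfied(5.0, 1.0, 50.0, simulate, meets_requirements)
```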
  • multiple virtual cameras can be deployed in the scene model.
  • In this case, the monitoring blind areas between the field-of-view ranges of different virtual cameras can be determined, and the virtual deployment parameters and/or virtual configuration parameters of at least one virtual camera among the plurality of virtual cameras can be adjusted according to the determined blind areas.
  • The virtual deployment parameters and/or virtual configuration parameters of at least one virtual camera are adjusted according to the blind areas between different virtual cameras, so that no monitoring blind area remains between the different virtual cameras after the parameter adjustment. This ensures that the reference deployment parameters and/or reference configuration parameters of the cameras to be deployed, obtained subsequently based on the virtual deployment parameters and/or virtual configuration parameters of the virtual cameras, are more consistent with the actual target scene, and improves the monitoring effect of the cameras actually deployed later.
  • The monitoring blind areas between the field-of-view ranges of different virtual cameras can be determined according to the simulated monitoring images corresponding to the above-mentioned multiple virtual cameras.
  • Since the simulated monitoring image corresponding to each virtual camera can reflect the monitoring range of that virtual camera, the monitoring area of each virtual camera can be determined directly, and the blind areas between the monitoring areas of the virtual cameras can then be obtained as the monitoring blind areas between the field-of-view ranges of the different virtual cameras.
  • Alternatively, a monitoring blind area input by the user through an external input device may be received directly, which is not limited in this embodiment of the present application.
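  • One possible way to compute such blind areas, assuming each simulated monitoring image has already been converted into a boolean coverage mask over a top-view grid of the scene model (the mask representation is an assumption, not part of the application):

```python
import numpy as np

def monitoring_blind_area(coverage_masks, monitored_region):
    """coverage_masks: list of HxW boolean arrays, one per virtual camera,
    True where that camera's field of view covers the top-view grid cell.
    monitored_region: HxW boolean array marking cells that must be monitored.
    Returns a mask of required cells that no virtual camera covers (the blind area)."""
    covered = np.zeros_like(monitored_region, dtype=bool)
    for mask in coverage_masks:
        covered |= mask                       # union of all cameras' fields of view
    return monitored_region & ~covered        # required cells left uncovered

# If the returned mask contains any True cell, the virtual deployment and/or
# configuration parameters of at least one virtual camera can be adjusted
# and the simulation repeated.
```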
  • In one implementation, the field-of-view direction of the virtual camera can be determined according to the virtual deployment parameters of the virtual camera, and the scene model can be projected along the field-of-view direction to obtain a projected image; the field-of-view range of the virtual camera is then determined from the virtual deployment parameters and virtual configuration parameters, and the projected image is cropped according to the field-of-view range to obtain the cropped simulated monitoring image.
  • Specifically, the installation position and orientation of the virtual camera can be determined from its virtual deployment parameters, so as to obtain the field-of-view direction of the virtual camera, and the scene model can be projected along the field-of-view direction to obtain a projected image; then, according to the virtual deployment parameters and virtual configuration parameters of the virtual camera, the field-of-view range of the virtual camera is obtained, and the above projected image is cropped according to the field-of-view range. The cropped image is the simulated monitoring image captured by the simulated virtual camera.
  • In another implementation, the field-of-view range and field-of-view direction of the virtual camera can be determined directly according to the virtual deployment parameters and virtual configuration parameters of the virtual camera; the scene model is then segmented along the field-of-view range to obtain a local scene model within the field-of-view range of the virtual camera, and the local scene model is projected along the field-of-view direction of the virtual camera to obtain the simulated monitoring image corresponding to the virtual camera.
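  • The following is a very rough sketch of the first approach (project along the viewing direction, then crop to the field of view), assuming the scene model has been sampled into an N×3 point array and using a simplified pinhole model; the axis conventions and helper names are illustrative assumptions only:

```python
import numpy as np

def simulate_monitoring_points(scene_points, cam_pos, yaw_deg, pitch_deg,
                               h_fov_deg, v_fov_deg, width=1920, height=1080):
    """Project scene-model points (Nx3) into a virtual pinhole camera whose pose comes
    from the virtual deployment parameters and whose field of view comes from the virtual
    configuration parameters; points outside the field of view are discarded, which
    corresponds to cropping the projection to the field-of-view range."""
    yaw, pitch = np.radians(yaw_deg), np.radians(pitch_deg)
    # world -> camera rotation (yaw about z, then pitch about x); a simplification
    Rz = np.array([[np.cos(yaw), np.sin(yaw), 0], [-np.sin(yaw), np.cos(yaw), 0], [0, 0, 1]])
    Rx = np.array([[1, 0, 0], [0, np.cos(pitch), np.sin(pitch)], [0, -np.sin(pitch), np.cos(pitch)]])
    pts_cam = (Rx @ Rz @ (scene_points - np.asarray(cam_pos)).T).T
    pts_cam = pts_cam[pts_cam[:, 1] > 0]               # keep points in front of the camera (+y forward)
    # focal lengths in pixels derived from the horizontal / vertical viewing angles
    fx = (width / 2) / np.tan(np.radians(h_fov_deg) / 2)
    fy = (height / 2) / np.tan(np.radians(v_fov_deg) / 2)
    u = fx * pts_cam[:, 0] / pts_cam[:, 1] + width / 2
    v = fy * pts_cam[:, 2] / pts_cam[:, 1] + height / 2
    inside = (u >= 0) & (u < width) & (v >= 0) & (v < height)   # crop to the field of view
    return np.stack([u[inside], v[inside]], axis=1)             # pixel coordinates of visible points
```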
  • After the virtual configuration parameter of the virtual camera is determined in step S103 as the reference configuration parameter of the camera to be deployed in the target scene, the following can also be performed:
  • In order to recommend a suitable camera type to the user, the camera types of cameras of different models and the configuration parameters of the camera of each camera type can be obtained in advance, and the camera type whose configuration parameters match the reference configuration parameters can then be determined as the recommended camera type of the camera recommended to be installed in the target scene. In this way, it is convenient for the user to select the camera actually deployed in the target scene according to the recommended camera type.
  • When determining a match, the camera type with the smallest difference between its configuration parameters and the reference configuration parameters may be selected as the recommended camera type.
  • Alternatively, a camera type whose configuration parameters cover the reference configuration parameters can be selected as the recommended camera type. For example, assuming that the horizontal field of view in the reference configuration parameters is 60°, a camera type with a horizontal field of view greater than or equal to 60° can be selected as the recommended camera type.
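  • A small sketch of both matching strategies just mentioned (smallest parameter difference, or parameters that cover the reference values); the catalogue structure and the "greater-or-equal means covers" rule are simplifying assumptions that would not hold for every parameter:

```python
def recommend_camera_types(catalogue, reference, strategy="cover"):
    """catalogue: {camera_type: {"h_fov_deg": ..., "v_fov_deg": ..., ...}} (assumed structure)
    reference: reference configuration parameters determined from the virtual camera.
    strategy "cover": keep types whose parameters cover the reference values;
    strategy "closest": rank types by total absolute difference from the reference."""
    if strategy == "cover":
        return [t for t, cfg in catalogue.items()
                if all(cfg.get(k, 0) >= v for k, v in reference.items())]
    scored = sorted(catalogue.items(),
                    key=lambda item: sum(abs(item[1].get(k, 0) - v) for k, v in reference.items()))
    return [t for t, _ in scored[:3]]      # e.g. the three closest types

# Example: reference = {"h_fov_deg": 60, "v_fov_deg": 40}
# recommend_camera_types(catalogue, reference) -> types with h_fov >= 60 and v_fov >= 40
```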
  • In addition, multiple recommended camera types that match the reference configuration parameters may be determined according to the reference configuration parameters of the virtual camera, a selection operation input by the user on the above-mentioned multiple recommended camera types may be received, and the camera type selected by the user may be used as the actually required target camera type. In this way, at least one recommended camera type can be determined by the electronic device, and the user can then select the actually required target camera type from the above-mentioned multiple recommended camera types.
  • the virtual configuration parameters of the virtual camera are updated according to the configuration parameters of the camera corresponding to the recommended camera type, and step S102 is executed.
  • Specifically, the configuration parameters of the camera corresponding to the recommended camera type can be obtained, the virtual configuration parameters of the virtual camera are then updated to the configuration parameters of the camera corresponding to the recommended camera type, and the process returns to step S102, thereby obtaining a new simulated monitoring image. This new simulated monitoring image can reflect the monitoring range of the camera corresponding to the recommended camera type in actual applications, which is convenient for the user to adjust the recommendation and avoids the situation where the configuration parameters of the camera corresponding to the recommended camera type do not match the reference configuration parameters, which would result in a mismatch between the recommended camera type and the target scene.
  • For example, if the vertical field of view in the determined reference configuration parameters is 90° but the vertical field of view of the camera corresponding to the recommended camera type cannot reach 90°, then when the camera corresponding to the recommended camera type is used for deployment, the simulated monitoring image of that camera may not meet the monitoring requirements.
  • Therefore, the virtual configuration parameters of the virtual camera are updated based on the configuration parameters of the camera corresponding to the recommended camera type, and a new simulated monitoring image is then obtained. This simulated monitoring image makes it convenient for the user to judge whether the camera corresponding to the recommended camera type satisfies the monitoring requirements, so that the recommended camera type can be adjusted and the reliability of the recommended camera type is improved.
  • When updating the virtual configuration parameters of the virtual camera based on the configuration parameters of the camera corresponding to the recommended camera type, in one case, only the virtual configuration parameters of the virtual camera are adjusted, that is, the virtual configuration parameters of the virtual camera are updated to the configuration parameters of the camera corresponding to the recommended camera type without changing the virtual deployment parameters of the virtual camera. In another case, the virtual configuration parameters and the virtual deployment parameters of the virtual camera are adjusted synchronously, that is, the virtual configuration parameters of the virtual camera are updated to the configuration parameters of the camera corresponding to the recommended camera type and the virtual deployment parameters of the virtual camera are also adjusted, until the obtained simulated monitoring image meets the monitoring requirements, so that the adjusted configuration parameters and deployment parameters of the virtual camera match each other better, thereby improving the monitoring effect of the camera.
  • the recommended camera type is determined to be the target camera type.
  • After updating the virtual configuration parameters of the virtual camera to the configuration parameters of the camera corresponding to the recommended camera type, the process returns to the above step S102 to obtain the simulated monitoring image again and determine whether the obtained simulated monitoring image meets the monitoring requirements. If it does, it means that the recommended camera type matches the target scene, and the recommended camera type can be used as the target camera type of the camera actually deployed in the target scene.
  • In a case where the simulated monitoring image obtained by taking the configuration parameters of any recommended camera type as the current virtual configuration parameters meets the monitoring requirements, it is also possible to determine the virtual deployment parameters of the current virtual camera as the target deployment parameters.
  • That is, the recommended camera type can be determined as the target camera type, and the virtual deployment parameters of the current virtual camera can be determined as the target deployment parameters, so that the camera corresponding to the determined target camera type can be deployed in the target scene according to the target deployment parameters. In this way, it can be ensured that the type of the deployed camera matches the deployment parameters, thereby improving the monitoring effect of the camera in the target scene.
  • In addition, the virtual configuration parameters of the current virtual camera can also be used as the target configuration parameters.
  • That is, the recommended camera type can be determined as the target camera type, the virtual configuration parameters of the current virtual camera can be used as the target configuration parameters, and the virtual deployment parameters of the current virtual camera can be used as the target deployment parameters, so that the camera corresponding to the determined target camera type can be deployed in the target scene according to the target deployment parameters and the parameters of the deployed camera can be configured according to the target configuration parameters.
  • For example, if the focal length in the target configuration parameters is 10 meters, the focal length of the camera corresponding to the target camera type is configured to be 10 meters. In this way, the deployed camera can better match the target scene, and the monitoring effect of the camera can be improved.
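  • A hedged sketch of this validate-then-select step, with hypothetical simulate and meets_requirements helpers (not anything defined by the application):

```python
def select_target_camera_type(recommended, scene_model, deploy, simulate, meets_requirements):
    """For each recommended camera type, substitute its real configuration parameters for the
    virtual configuration parameters, re-simulate the monitoring image, and return the first
    type (together with the current deployment parameters) that still meets the requirement."""
    for camera_type, real_config in recommended.items():
        image = simulate(scene_model, deploy, real_config)   # re-run the S102 simulation
        if meets_requirements(image):
            # target camera type, target deployment parameters, target configuration parameters
            return camera_type, deploy, real_config
    return None   # none of the recommended types satisfies the requirement as deployed
```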
  • When obtaining the scene model in the above step S101, the user can customize a model of the target scene, or a model of the target scene imported from outside can be obtained, which will be described in detail below.
  • the aforementioned objects refer to: objects used to form the target scene.
  • the aforementioned objects may be at least one of the following objects: toll gates, turnstiles, diversion lines, parking signs, and the like.
  • the preset attribute information of each object may include general attributes and private attributes of the object
  • the general attributes mentioned above refer to the attributes of each object, for example, may include at least one of the following attributes: name, type, material, identification, position, rotation angle, status information, and so on.
  • the aforementioned types may include at least one of the following types: buildings, outdoor walls, cars, trees, roads, cameras, custom cubes, custom cylinders, custom prompt information, floors, roofs, windows, doors, and the like.
  • For example, if the type of an object is a building, its private attributes may include at least one of the following attributes: floor height, number of floors, floor list, etc.; if the type is a car, the private attributes may include at least one of the following attributes: vehicle type, body length, body height, etc.
  • The above state information includes a locked state or an unlocked state.
  • The locked state indicates whether the object can be changed: when the object is in the locked state, the object cannot be changed; when the object is in the unlocked state, the object can be changed.
  • The above state information may also include a hidden state or a non-hidden state.
  • In the hidden state, the object is invisible; in the non-hidden state, the object is visible.
  • In the user-customized modeling approach, each object can be drawn on the 2D canvas with reference to the distribution of the objects in the target scene, and the attribute information of each drawn object can be set. Then, based on the attribute information of each object, the drawn 2D objects are rendered in 3D to obtain 3D objects, and the scene model of the target scene can be composed of the above-mentioned 3D objects.
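  • As a sketch of how such object attributes might be organized (the field names and the extrusion comment are illustrative assumptions, not the application's data model):

```python
from dataclasses import dataclass, field

@dataclass
class SceneObject:
    # general attributes shared by every object drawn on the 2D canvas
    name: str
    type: str                 # e.g. "building", "car", "road", "camera"
    material: str = ""
    position: tuple = (0.0, 0.0)
    rotation_deg: float = 0.0
    locked: bool = False      # locked objects cannot be changed
    hidden: bool = False      # hidden objects are not rendered
    # private attributes that depend on the object type
    private: dict = field(default_factory=dict)

# Example: a building with type-specific private attributes
hall = SceneObject(name="hall-1", type="building",
                   private={"floor_height_m": 3.0, "floor_count": 5})
# A 3D renderer could consume these attributes, e.g. extruding the 2D footprint
# by floor_height_m * floor_count (illustrative only).
```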
  • In addition, custom 3D entity objects can be added to the scene model; for example, 3D entity objects such as walls, lawns, and street lights can be added to the scene model.
  • The above-mentioned 3D entity objects are pre-customized 3D modeling templates.
  • A 3D entity object can be added at a specified position by clicking or dragging the mouse, or the horizontal position of the object to be added can be picked by clicking the mouse, and the custom 3D entity object is then added according to the horizontal position.
  • Another way to obtain the scene model is: obtain the scene design information of the target scene; parse the scene design information according to a preset modeling protocol to obtain scene analysis information; and, with reference to the scene analysis information, use a 3D engine to construct a 3D model to obtain the scene model.
  • the above-mentioned scene design information may be a CAD file of the target scene, and the format of the above-mentioned CAD file may be a dwg format or a dxf format.
  • the above-mentioned modeling protocol refers to: a protocol suitable for 3D modeling, for example, parsing preset attribute information of each object described in a CAD file, and the like.
  • Specifically, the scene design information of the target scene, which describes the position, shape, attributes, etc. of each object in the target scene, can be obtained and parsed according to the modeling protocol to obtain the scene analysis information; the 3D engine then renders the scene analysis information, so as to obtain a 3D model of the target scene as the scene model.
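  • A minimal sketch of such parsing for the DXF case, assuming the open-source ezdxf package is used to read the design file; the mapping from DXF entities and layers to scene objects is an illustrative assumption:

```python
import ezdxf   # assumption: ezdxf is used to read .dxf scene-design files

def parse_scene_design(path):
    """Parse a DXF scene-design file into simple 'scene analysis information':
    a list of primitives that a 3D engine could turn into walls, roads, etc."""
    doc = ezdxf.readfile(path)
    entities = []
    for e in doc.modelspace():
        if e.dxftype() == "LINE":
            entities.append({"kind": "line", "layer": e.dxf.layer,
                             "start": tuple(e.dxf.start), "end": tuple(e.dxf.end)})
        elif e.dxftype() == "LWPOLYLINE":
            entities.append({"kind": "polyline", "layer": e.dxf.layer,
                             "points": [tuple(p[:2]) for p in e.get_points()]})
    return entities

# scene_info = parse_scene_design("site_plan.dxf")
# A 3D engine would then extrude or instantiate these primitives to obtain the scene model.
```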
  • an editing operation on an object drawn in the 2D canvas can be received, and then 3D rendering can be performed based on the edited object, thereby realizing synchronous updating of the 3D scene model;
  • Editing operations on the 3D scene model may also be received, and the objects drawn in the 2D canvas may then be updated synchronously based on the edited 3D scene model.
  • the obtained scene model supports import and export. After the above scene model is exported, it can be used as a template for subsequent use when obtaining a new scene model.
  • the parameter determination method provided in this application can be applied to the client, and the interface of the client will be introduced below.
  • the scene model and the status of the virtual camera deployed in the scene model are displayed in the first window of the first interface
  • the simulated monitoring image is displayed in the second window of the first interface.
  • the first interface may include a first window and a second window, wherein the first window may be used to display the scene model and the deployed virtual camera, and the second window may be used to display simulated monitoring images, etc.;
  • the scene model can be displayed in the first window of the first interface of the client;
  • the scene model after the deployment of the virtual camera can also be displayed in the first window of the first interface
  • the simulated surveillance image may be displayed in the second window.
  • The visible area of the virtual camera can also be determined according to the virtual deployment parameters and virtual configuration parameters of the virtual camera, and the visible area can then be displayed in the first window, which makes it convenient for the user to observe the monitoring range of the virtual camera intuitively and accurately from the first window.
  • the second window may also display options of the virtual camera to be deployed in the scene model and virtual parameters of the deployed virtual camera. In this way, it is convenient for the user to select a virtual camera from the second window to deploy in the scene model, and it is also convenient for the subsequent user to adjust the virtual parameters of the deployed virtual camera through the second window.
  • FIG. 2 is a schematic diagram of a first interface provided by an embodiment of the present application.
  • the left side of the first interface is a first window, and the first window displays a scene model of the target scene; the lower right side of the first interface is a second window, and the second window displays a list of virtual cameras that can be added.
  • the user can select the virtual camera to be added from the second window through the external input device, and then deploy the selected virtual camera to the scene model of the first window, so as to realize the deployment of the virtual camera in the scene model.
  • FIG. 2 is only an example of different window layouts in the first interface, and the text in FIG. 2 has no substantial impact on the solution of the present application.
  • FIG. 3 is a schematic diagram of another first interface provided by an embodiment of the present application.
  • the left side of the first interface is the first window, and the first window shows the scene model of the target scene and the virtual camera deployed in the above scene model;
  • The right side of the first interface is the second window, which displays the identification, installation method, distance to the target, target height, installation height, and horizontal field of view of the virtual camera, and also displays the simulated monitoring image collected by the virtual camera, which is convenient for the user to view information about the virtual camera.
  • FIG. 3 is only an example of different window layouts in the first interface, and the text in FIG. 3 has no substantial impact on the solution of the present application.
  • The second interface includes a third window, and the third window can display multiple recommended camera types, which is convenient for the user to select the camera type to be actually deployed from the multiple recommended camera types.
  • the second interface further includes a fourth window, which is used to display the scene model and the status of the virtual camera being deployed and controlled in the scene model.
  • the above second interface may include a third window and a fourth window
  • the third window may be used to display the recommended camera type
  • The fourth window may be used to display the scene model of the target scene and the virtual camera corresponding to the selected camera type as deployed in the scene model. In this way, it is convenient for the user to view the recommended camera types and the deployment effect of a recommended camera type in the scene model on the second interface, and it is convenient for the user to select a recommended camera type.
  • the configuration parameters of the camera corresponding to the recommended camera type can also be displayed in the third window.
  • FIG. 4 is a schematic diagram of a second interface provided by an embodiment of the present application.
  • the left side of the second interface is the fourth window
  • the fourth window shows the scene model of the target scene
  • The right side of the second interface is the third window, which displays the recommended camera types and the parameter information of the cameras corresponding to the recommended camera types.
  • FIG. 4 is only an example of different window layouts in the second interface, and the text in FIG. 4 has no substantial impact on the solution of the present application.
  • the above-mentioned fourth window supports multiple viewing modes, namely the first viewing mode, the second viewing mode, the third viewing mode, and the fourth viewing mode, wherein:
  • the first viewing mode refers to viewing the scene model and deployed cameras along the overlooking direction; see FIG. 5, which is a schematic diagram of a first viewing mode provided by the embodiment of the present application.
  • the second viewing mode refers to viewing the scene model and deployed cameras along the side view direction
  • the third viewing mode refers to viewing the scene model along the field of view direction of the deployed camera
  • the fourth viewing mode refers to viewing the scene model around the position of the deployed camera.
  • the user can control the viewing direction through the mouse, keyboard, touch screen, etc.
  • FIG. 6 is a schematic flowchart of another camera parameter determination method provided in the embodiment of the present application. The method includes the following steps:
  • S602. Determine the virtual camera selected by the user from the second window of the first interface, and deploy the selected virtual camera in the scene model
  • The virtual configuration parameters of the virtual camera can then be used as the reference configuration parameters of the camera to be deployed in the target scene, camera types of different actual models can be displayed on the second interface, and the camera type whose configuration parameters match the reference configuration parameters can be determined from the multiple camera types displayed on the second interface as the recommended camera type.
  • The virtual deployment parameters of the virtual camera can be used as the deployment parameters of the camera to be installed in the target scene, so that the camera corresponding to the above recommended camera type can be installed in the target scene according to the deployment parameters.
  • In the above solution, the scene model for simulating the target scene can be obtained, and the virtual camera is deployed in the scene model; deployment of the virtual camera in the scene model is simulated according to the virtual parameters of the virtual camera to obtain a simulated monitoring image, wherein the virtual parameters include virtual deployment parameters and virtual configuration parameters; when the simulated monitoring image meets the monitoring requirements, the virtual deployment parameters and/or virtual configuration parameters of the virtual camera are determined as the reference deployment parameters and/or reference configuration parameters of the camera to be deployed in the target scene.
  • In this way, the virtual camera can be deployed in the scene model, and the monitoring image can be obtained by simulation based on the virtual parameters of the virtual camera. According to the simulated monitoring image, it can be judged whether the virtual parameters meet the monitoring requirements; if so, the virtual deployment parameters and/or virtual configuration parameters are determined as the reference deployment parameters and/or reference configuration parameters of the camera to be deployed in the target scene, and the deployment parameters and/or configuration parameters of the camera actually deployed can subsequently be determined based on these reference parameters, so as to ensure that the determined camera parameters match the real target scene. It can be seen that, by applying the solutions provided by the foregoing embodiments, the accuracy of the determined camera parameters can be improved.
  • Corresponding to the above method, the present application also provides a camera parameter determination device, which will be described in detail below.
  • FIG. 7 is a schematic structural diagram of a camera parameter determination device provided in an embodiment of the present application, and the device includes:
  • a scene model obtaining module 701, configured to obtain a scene model for simulating a target scene;
  • a virtual camera deployment module 702, configured to deploy a virtual camera in the scene model;
  • the image obtaining module 703 is configured to simulate the deployment and control of the virtual camera in the scene model according to the virtual parameters of the virtual camera, and obtain a simulated monitoring image, wherein the virtual parameters include virtual deployment parameters and virtual configuration parameters;
  • a parameter determination module 704 configured to determine the virtual deployment parameters and/or virtual configuration parameters of the virtual camera as the reference deployment of the camera to be deployed in the target scene when the simulated monitoring image meets the monitoring requirements parameters and/or reference configuration parameters.
  • the image obtaining module 703 is specifically used for:
  • the local scene model is projected along the field of view direction to obtain a simulated monitoring image corresponding to the virtual camera.
  • the device further includes a type recommendation module, configured to:
  • the device further includes a parameter updating module, configured to:
  • the parameter updating module is also used for:
  • the recommended camera type is determined to be the target camera type.
  • the scene model and the status of the virtual camera deployed in the scene model are displayed in the first window of the first interface
  • the simulated monitoring image is displayed in the second window of the first interface
  • the device also includes an interface switching module for:
  • after the virtual configuration parameter of the virtual camera is determined as the reference configuration parameter of the camera to be deployed in the target scene, switch from the first interface to the second interface, and display, in a third window of the second interface, the recommended camera types whose configuration parameters match the reference configuration parameters;
  • the second interface further includes a fourth window, the fourth window displays the scene model and the deployment and control status of the virtual camera in the scene model.
  • the device further includes a parameter adjustment module, configured to:
  • the device also includes a blind area elimination module for:
  • determining monitoring blind areas between the field-of-view ranges of different virtual cameras; adjusting the virtual deployment parameters and/or virtual configuration parameters of at least one virtual camera among the plurality of virtual cameras according to the determined monitoring blind areas; and/or
  • the scene model obtaining module 701 is specifically used for:
  • a 3D engine is used to construct a 3D model to obtain a scene model.
  • With the above device, the scene model for simulating the target scene can be obtained, and the virtual camera is deployed in the scene model; deployment of the virtual camera in the scene model is simulated according to the virtual parameters of the virtual camera to obtain a simulated monitoring image, wherein the virtual parameters include virtual deployment parameters and virtual configuration parameters; when the simulated monitoring image meets the monitoring requirements, the virtual deployment parameters and/or virtual configuration parameters of the virtual camera are determined as the reference deployment parameters and/or reference configuration parameters of the camera to be deployed in the target scene.
  • In this way, the virtual camera can be deployed in the scene model, and the monitoring image can be obtained by simulation based on the virtual parameters of the virtual camera. According to the simulated monitoring image, it can be judged whether the virtual parameters meet the monitoring requirements; if so, the virtual deployment parameters and/or virtual configuration parameters are determined as the reference deployment parameters and/or reference configuration parameters of the camera to be deployed in the target scene, and the deployment parameters and/or configuration parameters of the camera actually deployed can subsequently be determined based on these reference parameters, so as to ensure that the determined camera parameters match the real target scene. It can be seen that, by applying the solutions provided by the foregoing embodiments, the accuracy of the determined camera parameters can be improved.
  • the embodiment of the present application also provides an electronic device, as shown in FIG. 8a, including a processor 801 and a memory 803,
  • the memory 803 is used to store computer programs; the processor 801 is used to implement the above camera parameter determination method when executing the programs stored in the memory 803 .
  • the electronic device further includes a communication interface 802 and a communication bus 804 , where the processor 801 , the communication interface 802 , and the memory 803 communicate with each other through the communication bus 804 .
  • the communication bus mentioned above for the electronic device may be a Peripheral Component Interconnect (PCI) bus or an Extended Industry Standard Architecture (EISA) bus or the like.
  • the communication bus can be divided into an address bus, a data bus, a control bus, and the like. For ease of representation, only one thick line is used in the figure, but it does not mean that there is only one bus or one type of bus.
  • the communication interface is used for communication between the electronic device and other devices.
  • the memory may include a random access memory (Random Access Memory, RAM), and may also include a non-volatile memory (Non-Volatile Memory, NVM), such as at least one disk memory.
  • the memory may also be at least one storage device located far away from the aforementioned processor.
  • the above-mentioned processor can be a general-purpose processor, including a central processing unit (Central Processing Unit, CPU), a network processor (Network Processor, NP), etc.; it can also be a digital signal processor (Digital Signal Processor, DSP), a dedicated integrated Circuit (Application Specific Integrated Circuit, ASIC), Field-Programmable Gate Array (Field-Programmable Gate Array, FPGA) or other programmable logic devices, discrete gate or transistor logic devices, discrete hardware components.
  • In yet another embodiment of the present application, a computer-readable storage medium is also provided, and a computer program is stored in the computer-readable storage medium. When the computer program is executed by a processor, the steps of any one of the above camera parameter determination methods are implemented.
  • a computer program product including instructions is also provided, which, when run on a computer, causes the computer to execute the method for determining parameters of any camera in the above embodiments.
  • With the above solutions, the scene model for simulating the target scene can be obtained, and the virtual camera is deployed in the scene model; deployment of the virtual camera in the scene model is simulated according to the virtual parameters of the virtual camera to obtain a simulated monitoring image, wherein the virtual parameters include virtual deployment parameters and virtual configuration parameters; when the simulated monitoring image meets the monitoring requirements, the virtual deployment parameters and/or virtual configuration parameters of the virtual camera are determined as the reference deployment parameters and/or reference configuration parameters of the camera to be deployed in the target scene.
  • In this way, the virtual camera can be deployed in the scene model, and the monitoring image can be obtained by simulation based on the virtual parameters of the virtual camera. According to the simulated monitoring image, it can be judged whether the virtual parameters meet the monitoring requirements; if so, the virtual deployment parameters and/or virtual configuration parameters are determined as the reference deployment parameters and/or reference configuration parameters of the camera to be deployed in the target scene, and the deployment parameters and/or configuration parameters of the camera actually deployed can subsequently be determined based on these reference parameters, so as to ensure that the determined camera parameters match the real target scene. It can be seen that, by applying the solutions provided by the foregoing embodiments, the accuracy of the determined camera parameters can be improved.
  • all or part of them may be implemented by software, hardware, firmware or any combination thereof.
  • When implemented using software, it may be implemented in whole or in part in the form of a computer program product.
  • the computer program product includes one or more computer instructions. When the computer program instructions are loaded and executed on the computer, the processes or functions according to the embodiments of the present application will be generated in whole or in part.
  • the computer can be a general purpose computer, a special purpose computer, a computer network, or other programmable devices.
  • The computer instructions may be stored in a computer-readable storage medium or transmitted from one computer-readable storage medium to another computer-readable storage medium; for example, the computer instructions may be transmitted from a website, computer, server, or data center to another website, computer, server, or data center in a wired manner (for example, coaxial cable, optical fiber, digital subscriber line (DSL)) or a wireless manner (for example, infrared, radio, microwave, etc.).
  • the computer-readable storage medium may be any available medium that can be accessed by a computer, or a data storage device such as a server or a data center integrated with one or more available media.
  • the available medium may be a magnetic medium (such as a floppy disk, a hard disk, or a magnetic tape), an optical medium (such as a DVD), or a semiconductor medium (such as a Solid State Disk (SSD)).

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Studio Devices (AREA)

Abstract

Embodiments of the present application provide a camera parameter determination method, apparatus, electronic device and program product, relating to the field of computer technology, including: obtaining a scene model for simulating a target scene, and deploying a virtual camera in the scene model; simulating, according to virtual parameters of the virtual camera, deployment and control of the virtual camera in the scene model to obtain a simulated monitoring image, wherein the virtual parameters include virtual deployment parameters and virtual configuration parameters; and, when the simulated monitoring image meets the monitoring requirement, determining the virtual deployment parameters and/or virtual configuration parameters of the virtual camera as reference deployment parameters and/or reference configuration parameters of a camera to be deployed in the target scene. Applying the solutions provided by the embodiments of the present application can improve the accuracy of the determined camera parameters.

Description

一种相机的参数确定方法、装置、电子设备及程序产品
本申请要求于2021年9月27日提交中国专利局、申请号为202111138970.4申请名称为“一种相机的参数确定方法、装置、电子设备及程序产品”的中国专利申请的优先权,其全部内容通过引用结合在本申请中。
技术领域
本申请涉及计算机技术领域,特别是涉及一种相机的参数确定方法、装置、电子设备及程序产品。
背景技术
在小区、施工现场、交通路口等场景中,通常需要部署相机,利用相机对上述场景进行布控。
相关技术中,在选择用于监控的相机时,通常由人工根据经验确定所要部署的相机的配置参数,如水平视场角、垂直视场角、焦距,并由人工根据经验确定所要部署的相机的部署参数,如安装位置、安装角度。
这样由人工根据经验确定相机的参数,可能导致所确定的相机的参数与场景不匹配,进而导致后续所部署的相机的监控效果较差。
发明内容
本申请实施例的目的在于提供一种相机的参数确定方法、装置、电子设备及程序产品,以提高所确定的相机参数的准确度。具体技术方案如下:
第一方面,本申请实施例提供了一种相机的参数确定方法,所述方法包括:
获得用于模拟目标场景的场景模型,在所述场景模型中部署虚拟相机;
根据所述虚拟相机的虚拟参数模拟所述虚拟相机在所述场景模型中进行布控,得到模拟监控图像,其中,所述虚拟参数包括虚拟部署参数和虚拟配置参数;
在所述模拟监控图像满足监控需求的情况下,将所述虚拟相机的虚拟部署参数和/或虚拟配置参数,确定为所述目标场景中待部署的相机的参考部署参数和/或参考配置参数。
本申请的一个实施例中,所述根据所述虚拟相机的虚拟参数模拟所述虚拟相机在所述场景模型中进行布控,得到模拟监控图像,包括:
根据所述虚拟相机的虚拟部署参数,确定所述虚拟相机的视场方向,沿所述视场方向对所述场景模型进行投影,得到投影图像;
根据所述虚拟相机的虚拟部署参数和虚拟配置参数,确定所述虚拟相机的视场范围,根据所述视场范围对所述投影图像进行裁剪,得到裁剪后的模拟监控图像;
根据所述虚拟相机的虚拟部署参数和虚拟配置参数,确定所述虚拟相机的视场范围及视场方向;
沿所述视场范围对所述场景模型进行分割,得到处于所述虚拟相机的视场范围的局部场景模型;
沿所述视场方向对所述局部场景模型进行投影,得到所述虚拟相机对应的模拟监控图像。
本申请的一个实施例中,在将所述虚拟相机的虚拟配置参数,确定为所述目标场景中待部署的相机的参考配置参数之后,所述方法还包括:
从预先获得的不同型号的相机类型中,确定配置参数与所述参考配置参数相匹配的推荐相机类型。
本申请的一个实施例中,在确定配置参数与所述参考配置参数相匹配的推荐相机类型之后,所述方法还包括:
针对任一推荐相机类型,根据该推荐相机类型对应的相机的配置参数,更新所述虚拟相机的虚拟配置参数,执行所述根据所述虚拟相机的虚拟参数模拟所述虚拟相机在所述场景模型中进行布控、得到模拟监控图像的步骤。
本申请一个实施例中,所述方法还包括:
在由任一推荐相机类型对应的相机的配置参数作为虚拟配置参数得到的模拟监控图像满足所述监控需求的情况下,确定该推荐相机类型为目标相机类型。
本申请的一个实施例中,所述场景模型以及所述虚拟相机在所述场景模型中进行布控的状态展示在第一界面的第一窗口中;
所述模拟监控图像展示在所述第一界面的第二窗口中;
在所述将所述虚拟相机的虚拟配置参数,确定为所述目标场景中待部署的相机的参考配置参数之后,所述方法还包括:
从所述第一界面切换至第二界面,在所述第二界面的第三窗口中展示配置参数与所述参考配置参数相匹配的推荐相机类型;
所述第二界面还包括第四窗口,所述第四窗口中显示所述场景模型以及所述虚拟相机在所述场景模型中进行布控的状态。
本申请的一个实施例中,所述方法还包括:
在所述模拟监控图像不满足所述监控需求的情况下,调整所述虚拟相机的虚拟部署参数和/或虚拟配置参数,返回所述根据所述虚拟相机的虚拟参数模拟所述虚拟相机在所述场景模型中进行布控、得到模拟监控图像的步骤,直至所述模拟监控图像满足所述监控需求;和/或
所述场景模型中部署有多个虚拟相机,所述方法还包括:
确定不同虚拟相机的视场范围之间的监控盲区;
根据所确定的监控盲区,调整所述多个虚拟相机中至少一虚拟相机的虚拟部署参数和/或虚拟配置参数;和/或
所述获得用于模拟目标场景的场景模型,包括:
获得目标场景的场景设计信息;
按照预设的建模协议对所述场景设计信息进行解析,得到场景解析信息;
参照所述场景解析信息,利用3D引擎构建3D模型,得到场景模型。
第二方面,本申请实施例提供了一种相机的参数确定装置,所述装置包括:
场景模型获得模块,用于获得用于模拟目标场景的场景模型;
虚拟相机部署模块,用于在所述场景模型中部署虚拟相机;
图像获得模块,用于根据所述虚拟相机的虚拟参数模拟所述虚拟相机在所述场景模型中进行布控,得到模拟监控图像,其中,所述虚拟参数包括虚拟部署参数和虚拟配置参数;
参数确定模块,用于在所述模拟监控图像满足监控需求的情况下,将所述虚拟相机的虚拟部署参数和/或虚拟配置参数,确定为所述目标场景中待部署的相机的参考部署参数和/或参考配置参数。
本申请的一个实施例中,所述图像获得模块,具体用于:
根据所述虚拟相机的虚拟部署参数,确定所述虚拟相机的视场方向,沿所述视场方向对所述场景模型进行投影,得到投影图像;
根据所述虚拟相机的虚拟部署参数和虚拟配置参数,确定所述虚拟相机的视场范围,根据所述视场范围对所述投影图像进行裁剪,得到裁剪后的模拟监控图像;
根据所述虚拟相机的虚拟部署参数和虚拟配置参数,确定所述虚拟相机的视场范围及视场方向;
沿所述视场范围对所述场景模型进行分割,得到处于所述虚拟相机的视场范围的局部场景模型;
沿所述视场方向对所述局部场景模型进行投影,得到所述虚拟相机对应的模拟监控图像。
本申请的一个实施例中,所述装置还包括类型推荐模块,用于:
在将所述虚拟相机的虚拟配置参数,确定为所述目标场景中待部署的相机的参考配置参数之后,从预先获得的不同型号的相机类型中,确定配置参数与所述参考配置参数相匹配的推荐相机类型。
本申请的一个实施例中,所述装置还包括参数更新模块,用于:
在确定配置参数与所述参考配置参数相匹配的推荐相机类型之后,针对任一推荐相机类型,根据该推荐相机类型对应的相机的配置参数,更新所述虚拟相机的虚拟配置参数,触发所述图像获得模块。
本申请一个实施例中,所述参数更新模块,还用于:
在由任一推荐相机类型对应的相机的配置参数作为虚拟配置参数得到的模拟监控图像满足所述监控需求的情况下,确定该推荐相机类型为目标相机类型。
本申请的一个实施例中,所述场景模型以及所述虚拟相机在所述场景模型中进行布控的状态展示在第一界面的第一窗口中;
所述模拟监控图像展示在所述第一界面的第二窗口中;
所述装置还包括界面切换模块,用于:
在所述将所述虚拟相机的虚拟配置参数,确定为所述目标场景中待部署的相机的参考配置参数之后,从所述第一界面切换至第二界面,在所述第二界面的第三窗口中展示配置参数与所述参考配置参数相匹配的推荐相机类型;
所述第二界面还包括第四窗口,所述第四窗口中显示所述场景模型以及所述虚拟相机在所述场景模型中进行布控的状态。
本申请的一个实施例中,所述装置还包括参数调整模块,用于:
在所述模拟监控图像不满足所述监控需求的情况下,调整所述虚拟相机的虚拟部署参数和/或虚拟配置参数,触发所述图像获得模块,直至所述模拟监控图像满足所述监控需求;和/或
所述场景模型中部署有多个虚拟相机,所述装置还包括盲区消除模块,用于:
确定不同虚拟相机的视场范围之间的监控盲区;根据所确定的监控盲区,调整所述多个虚拟相机中至少一虚拟相机的虚拟部署参数和/或虚拟配置参数;和/或
所述场景模型获得模块,具体用于:
获得目标场景的场景设计信息;
按照预设的建模协议对所述场景设计信息进行解析,得到场景解析信息;
参照所述场景解析信息,利用3D引擎构建3D模型,得到场景模型。
第三方面,本申请实施例还提供了一种电子设备,包括处理器和存储器;
存储器,用于存放计算机程序;
处理器,用于执行存储器上所存放的程序时,实现第一方面任一所述的方法。
第四方面,本申请实施例还提供了一种包含指令的计算机程序产品,当其在计算机上运行时,使得所述计算机执行第一方面任一项所述的方法。
第五方面,本申请实施例还提供了一种计算机可读存储介质,该计算机可读存储介质内存储有计算机程序,所述计算机程序被处理器执行时执行第一方面任一项所述的方法。
本申请实施例有益效果:
本申请实施例提供的参数确定方案中,可以获得用于模拟目标场景的场景模型,在场景模型中部署虚拟相机;根据虚拟相机的虚拟参数模拟虚拟相机在场景模型中进行布控,得到模拟监控图像,其中,虚拟参数包括虚拟部署参数和虚拟配置参数;在模拟监控图像满足监控需求的情况下,将虚拟相机的虚拟部署参数和/或虚拟配置参数,确定为目标场景中待部署的相机的参考部署参数和/或参考配置参数。这样可以在场景模型中部署虚拟相机,根据虚拟相机的虚拟参数模拟得到监控图像,根据所模拟的监控图像判断虚拟相机的虚拟参数是否满足监控需求,若为是,则可以将该虚拟相机的虚拟部署参数和/或虚拟配置参数确定为实际在目标场景中所要部署的相机的参考部署参数和/或参考配置参数,后续可以基于该参考部署参数和/或参考配置参数确定实际在目标场景中所要部署的相机的部署参数和/或配置参数,从而保证所确定的相机的参数与真实的目标场景相匹配。由此可见,应用本申请实施例提供的方案,可以提高所确定的相机参数的准确度。
附图说明
为了更清楚地说明本申请实施例或现有技术中的技术方案,下面将对实施例或现有技术描述中所需要使用的附图作简单地介绍,显而易见地,下面描述中的附图仅仅是本申请的一些实施例,对于本领域普通技术人员来讲,还可以根据这些附图获得其他的实施例。
图1为本申请实施例提供的一种相机的参数确定方法的流程示意图;
图2为本申请实施例提供的一种第一界面的示意图;
图3为本申请实施例提供的另一种第一界面的示意图;
图4为本申请实施例提供的一种第二界面的示意图;
图5为本申请实施例提供的一种第一查看模式的示意图;
图6为本申请实施例提供的另一种相机的参数确定方法的流程示意图;
图7为本申请实施例提供的一种相机的参数确定装置的结构示意图;
图8a、图8b为本申请实施例提供的一种电子设备的结构示意图。
具体实施方式
下面将结合本申请实施例中的附图,对本申请实施例中的技术方案进行清楚、完整地描述,显然,所描述的实施例仅仅是本申请一部分实施例,而不是全部的实施例。基于本申请中的实施例,本领域普通技术人员基于本申请所获得的所有其他实施例,都属于本申请保护的范围。
为了提高所确定的相机参数的准确度,本申请实施例提供了一种相机的参数确定方法、装置、电子设备及程序产品,下面分别进行介绍。
本申请实施例提供了一种相机的参数确定方法,该方法可以应用于计算机、服务器、手机等电子设备,包括:
获得用于模拟目标场景的场景模型,在场景模型中部署虚拟相机;
根据虚拟相机的虚拟参数模拟虚拟相机在场景模型中进行布控,得到模拟监控图像,其中,虚拟参数包括虚拟部署参数和虚拟配置参数;
在模拟监控图像满足监控需求的情况下,将虚拟相机的虚拟部署参数和/或虚拟配置参数,确定为目标场景中待部署的相机的参考部署参数和/或参考配置参数。
这样可以在场景模型中部署虚拟相机,根据虚拟相机的虚拟参数模拟得到监控图像,根据所模拟的监控图像判断虚拟相机的虚拟参数是否满足监控需求,若为是,则可以将该虚拟相机的虚拟部署参数和/或虚拟配置参数确定为实际在目标场景中所要部署的相机的参考部署参数和/或参考配置参数,后续可以基于该参考部署参数和/或参考配置参数确定实际在目标场景中所要部署的相机的部署参数和/或配置参数,从而保证所确定的相机的参数与真实的目标场景相匹配。由此可见,应用上述实施例提供的方案,可以提高所确定的相机参数的准确度。
下面对上述相机的参数确定方法进行详细介绍。
参见图1,图1为本申请实施例提供的一种相机的参数确定方法的流程示意图,该方法包括如下步骤S101-S103:
S101,获得用于模拟目标场景的场景模型,在场景模型中部署虚拟相机。
其中,上述目标场景指的是:待部署相机的、真实的场景,例如,上述目标场景可以是以下场景中的任一种:小区、施工现场、交通路口、工厂、商场等。
上述虚拟相机指的是:用于模拟真实相机的模型,上述虚拟相机可以包括以下虚拟的相机中的一种或多种:枪机、球机、广角相机、鱼眼相机等。
具体的,可以利用三维建模技术,获得目标场景的三维模型,作为场景模型,并在上述场景模型中部署一个或多个虚拟相机,从而模拟在实际的目标场景中部署真实的相机。
S102，根据虚拟相机的虚拟参数模拟虚拟相机在场景模型中进行布控，得到模拟监控图像。
其中,虚拟参数包括虚拟部署参数和虚拟配置参数。
上述虚拟部署参数指的是以下参数中的至少一个:虚拟相机在场景模型中的安装位置、高度、朝向、安装方式等,上述安装方式包括以下方式中的任一种:壁装、吊装、安装在立杆上等。
The above virtual configuration parameters refer to parameters of the camera itself, and include at least one of the following: focal length, monitoring mode, vertical field of view, horizontal field of view, and so on. The monitoring mode includes at least a corridor mode and a normal mode.
The corridor mode refers to rotating the image captured by the camera by 90° so as to enlarge the camera's monitoring range in the vertical direction. For example, if the width and height of the captured image are 1920*1080, then after the 90° rotation the width and height of the captured image become 1080*1920.
The normal mode refers to not rotating the image captured by the camera.
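For illustration only (this sketch is not part of the original disclosure), the corridor-mode behaviour described above can be reproduced in a few lines of Python; the frame array, function name and use of NumPy are assumptions made for the example.

```python
import numpy as np

def apply_monitoring_mode(frame: np.ndarray, mode: str) -> np.ndarray:
    """Return the frame as it would be output under the given monitoring mode:
    "normal" leaves it unchanged, "corridor" rotates it by 90 degrees, trading
    horizontal for vertical coverage."""
    if mode == "corridor":
        return np.rot90(frame)
    return frame

frame = np.zeros((1080, 1920, 3), dtype=np.uint8)      # a hypothetical 1920x1080 frame
print(apply_monitoring_mode(frame, "normal").shape)     # (1080, 1920, 3)
print(apply_monitoring_mode(frame, "corridor").shape)   # (1920, 1080, 3)
```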
上述模拟监控图像指的是:模拟虚拟相机能拍摄到的场景模型中的监控图像。
具体的,可以根据虚拟相机自身的配置参数、以及在场景模型中的部署参数,模拟虚拟相机在场景模型中布控,从而模拟得到虚拟相机采集的图像,作为模拟监控图像,该图像可以反映虚拟相机在场景模型中的监控范围。
S103,在模拟监控图像满足监控需求的情况下,将虚拟相机的虚拟部署参数和/或虚拟配置参数,确定为目标场景中待部署的相机的参考部署参数和/或参考配置参数。
其中,上述监控需求指的是:针对相机监控范围的需求,例如,可以是以下需求中的至少一种:目标物位于监控范围内、目标通道位于监控范围内等,上述目标物可以是用户关注的、需要进行监控的物体,例如,可以是以下物体中的至少一种:收银机、卡口闸机、贵重物品柜台等,同样地,上述目标通道可以是用户关注的、需要进行监控的通道,例如是以下通道中的至少一种:消防通道、客流通行通道等。
具体的,模拟监控图像可以反映虚拟相机的监控范围,根据上述模拟监控图像,可以判断虚拟相机的监控范围是否满足监控需求,若为是,则说明当前虚拟相机的虚拟部署参数、虚拟配置参数适用于该场景模型,因此,一方面可以将当前虚拟相机的虚拟部署参数确定为目标场景中待部署的相机的参考部署参数,从而后续可以基于上述参考部署参数,将所选择的相机部署在场景模型中;
另一方面,可以将当前虚拟相机的虚拟配置参数确定为目标场景中待部署的相机的参考配置参数,从而后续可以基于该参考配置参数选择目标场景中待部署的相机的类型。这样利用虚拟部署参数可以指导实际在目标场景中安装相机,保证安装后相机满足监控需求。
本申请的一个实施例中,当仅需要推荐相机时,可以在模拟监控图像满足监控需求的情况下,将虚拟相机的虚拟配置参数,确定为目标场景中待部署的相机的参考配置参数,后续可以按照上述参考配置参数进行相机推荐。例如,在相机采购的场景中,为了向用户推荐所需采购的相机的类型,可以按照上述方案,确定参考配置参数,以便于用户采购与该参考配置参数相匹配的相机。
相应地,当仅需要对相机的部署进行指导时,可以在模拟监控图像满足监控需求的情况下,将虚拟相机的虚拟部署参数,确定为目标场景中待部署的相机的参考部署参数。例 如,在相机部署的场景中,可以按照上述参考部署参数,将待部署的相机部署在场景中。
In an embodiment of the present application, when judging whether the simulated monitoring image corresponding to the virtual camera meets the monitoring requirement, the simulated monitoring image may be displayed, and an instruction input by the user through an external input device may be received. The instruction carries information indicating either that the monitoring requirement is met, meaning the user considers the simulated monitoring image to meet the requirement, or that it is not met, meaning the user considers it not to. Whether the simulated monitoring image meets the monitoring requirement can then be judged from this instruction. The external input device may be any of the following: a keyboard, a mouse, a touchpad, or a microphone.
Alternatively, target detection may be performed on the simulated monitoring image, and whether the image content contains a complete target object is judged from the detection result. If it does, the virtual camera's monitoring range covers the complete target object and the monitoring requirement is considered to be met; otherwise, the monitoring range fails to cover the complete target object and the requirement is considered not to be met. Here the target object is a preset object that needs to be monitored, for example an entrance/exit, a cash register, or a shelf.
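As a rough illustration of the detection-based check described above, the following Python sketch assumes a hypothetical detector whose output is a list of (label, bounding box) pairs, and treats a target object as "complete" when its box is not clipped at an image border; the detector, labels and margin value are assumptions, not part of the original disclosure.

```python
from typing import List, Set, Tuple

BBox = Tuple[float, float, float, float]   # (x_min, y_min, x_max, y_max) in pixels

def meets_monitoring_requirement(detections: List[Tuple[str, BBox]],
                                 frame_w: int, frame_h: int,
                                 required_labels: Set[str],
                                 margin: float = 2.0) -> bool:
    """Return True if every required target object is detected and lies fully inside
    the simulated monitoring image, i.e. its box is not clipped at an image border."""
    found = set()
    for label, (x1, y1, x2, y2) in detections:
        if label not in required_labels:
            continue
        if x1 > margin and y1 > margin and x2 < frame_w - margin and y2 < frame_h - margin:
            found.add(label)
    return required_labels <= found

# Hypothetical detector output for a simulated image of a store entrance
detections = [("cash_register", (420.0, 310.0, 780.0, 640.0)),
              ("entrance", (0.0, 200.0, 150.0, 900.0))]    # clipped at the left edge
print(meets_monitoring_requirement(detections, 1920, 1080, {"cash_register"}))  # True
print(meets_monitoring_requirement(detections, 1920, 1080, {"entrance"}))       # False
```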
上述实施例提供的方案中,可以获得用于模拟目标场景的场景模型,在场景模型中部署虚拟相机;根据虚拟相机的虚拟参数模拟虚拟相机在场景模型中进行布控,得到模拟监控图像,其中,虚拟参数包括虚拟部署参数和虚拟配置参数;在模拟监控图像满足监控需求的情况下,将虚拟相机的虚拟部署参数和/或虚拟配置参数,确定为目标场景中待部署的相机的参考部署参数和/或参考配置参数。这样可以在场景模型中部署虚拟相机,根据虚拟相机的虚拟参数模拟得到监控图像,根据所模拟的监控图像判断虚拟相机的虚拟参数是否满足监控需求,若为是,则可以将该虚拟相机的虚拟部署参数和/或虚拟配置参数确定为实际在目标场景中所要部署的相机的参考部署参数和/或参考配置参数,后续可以基于该参考部署参数和/或参考配置参数确定实际在目标场景中所要部署的相机的部署参数和/或配置参数,从而保证所确定的相机的参数与真实的目标场景相匹配。由此可见,应用上述实施例提供的方案,可以提高所确定的相机参数的准确度。
本申请的一个实施例中,在模拟监控图像不满足监控需求的情况下,调整虚拟相机的虚拟部署参数和/或虚拟配置参数,返回步骤S102,直至模拟监控图像满足监控需求。
具体的,模拟监控图像可以反映虚拟相机的监控范围,根据上述模拟监控图像,可以判断虚拟相机的监控范围是否满足监控需求,若为否,则可以调整虚拟相机的虚拟部署参数和/或虚拟配置参数,基于调整后的参数,重新获得参数调整后的虚拟相机对应的模拟监控图像,再次判断参数调整后虚拟相机的模拟监控图像是否满足监控需求,直至所得到的模拟监控图像满足监控需求。
In an embodiment of the present application, when adjusting the virtual parameters of the virtual camera, an adjustment instruction input by the user through an external input device may be received, and the virtual deployment parameters and/or virtual configuration parameters of the virtual camera are adjusted according to that instruction.
Alternatively, the virtual deployment parameters and/or virtual configuration parameters of the virtual camera may be adjusted by a preset adjustment step. For example, if the parameter to be adjusted is the focal length and the preset step is 1 meter, then each time the virtual camera's focal length is adjusted it is increased by 1 meter.
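A minimal sketch of the step-wise adjustment described above is given below; the parameter being tuned, the step size and the simulate_and_check callback are placeholders for the simulation of step S102 and are assumptions made for illustration.

```python
from typing import Callable, Optional

def tune_parameter(value: float, step: float, max_value: float,
                   simulate_and_check: Callable[[float], bool]) -> Optional[float]:
    """Increase one virtual parameter by a fixed step until the simulated monitoring
    image meets the requirement, or give up once max_value is exceeded.

    simulate_and_check(value) stands in for re-running the simulation of step S102
    and evaluating the resulting image against the monitoring requirement."""
    while value <= max_value:
        if simulate_and_check(value):
            return value
        value += step
    return None   # no setting in the allowed range satisfied the requirement

# Toy example: the requirement is first satisfied when the tuned value reaches 8
print(tune_parameter(value=6.0, step=1.0, max_value=12.0,
                     simulate_and_check=lambda v: v >= 8.0))   # 8.0
```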
本申请的一个实施例中，场景模型中可以部署有多个虚拟相机，这种情况下，可以确定不同虚拟相机的视场范围之间的监控盲区；根据所确定的监控盲区，调整多个虚拟相机中至少一虚拟相机的虚拟部署参数和/或虚拟配置参数。
具体的,可以检测上述多个虚拟相机的监控区域中是否存在盲区,若为是,则可以根据所确定的盲区,对上述多个虚拟相机中的至少一个虚拟相机的虚拟参数进行调整。这样根据不同虚拟相机之间的盲区对至少一虚拟相机的虚拟部署参数和/或虚拟配置参数进行调整,以使得参数调整后不同虚拟相机之间不存在监控盲区,进而保证后续基于虚拟相机的虚拟部署参数和/或虚拟配置参数所得到的待部署的相机的参考部署参数和/或参考配置参数与实际的目标场景更加匹配,提高后续实际部署的相机的监控效果。
本申请的一个实施例中,在确定不同虚拟相机的视场范围之间的监控盲区时,可以根据上述多个虚拟相机对应的模拟监控图像,确定不同虚拟相机的视场范围之间的监控盲区。
具体的,每一虚拟相机对应的模拟监控图像可以反映该虚拟相机的范围,在存在多个虚拟相机的情况下,可以根据上述多个虚拟相机对应的模拟监控图像所反映的监控区域,确定上述多个虚拟相机的监控区域之间的监控盲区。
In addition, the monitoring area of each virtual camera may be determined directly from its virtual deployment parameters and virtual configuration parameters; the blind zones between the monitoring areas of the cameras are then obtained, giving the monitoring blind zones between the fields of view of the different virtual cameras.
Alternatively, the monitoring blind zones may be input directly by the user through an external input device; the embodiments of the present application do not limit this.
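One possible way to compute such blind zones from the deployment and configuration parameters, sketched here for illustration and not taken from the original disclosure, is to sample the ground plane on a grid and mark the cells that fall outside every camera's horizontal field of view and range; the 2D simplification and all numeric values are assumptions.

```python
import math
from typing import List, Tuple

# (x, y, yaw_deg, hfov_deg, range_m) -- a flattened 2D stand-in for the virtual
# deployment parameters (position, orientation) and configuration parameters (FOV, reach)
Camera = Tuple[float, float, float, float, float]

def covered(cam: Camera, px: float, py: float) -> bool:
    """Return True if ground point (px, py) lies inside this camera's horizontal FOV and range."""
    x, y, yaw, hfov, rng = cam
    dx, dy = px - x, py - y
    if math.hypot(dx, dy) > rng:
        return False
    bearing = math.degrees(math.atan2(dy, dx))
    diff = (bearing - yaw + 180.0) % 360.0 - 180.0   # signed angle to the optical axis
    return abs(diff) <= hfov / 2.0

def blind_cells(cams: List[Camera], width: float, height: float,
                cell: float) -> List[Tuple[float, float]]:
    """Return the centres of ground-grid cells inside a width x height area covered by no camera."""
    blind = []
    for iy in range(int(height / cell)):
        for ix in range(int(width / cell)):
            px, py = (ix + 0.5) * cell, (iy + 0.5) * cell
            if not any(covered(c, px, py) for c in cams):
                blind.append((px, py))
    return blind

# Two hypothetical cameras watching opposite corners of a 20 m x 10 m area
cams = [(0.0, 0.0, 45.0, 90.0, 15.0), (20.0, 10.0, 225.0, 90.0, 15.0)]
print(len(blind_cells(cams, 20.0, 10.0, cell=1.0)), "uncovered cells out of", 20 * 10)
```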
In an embodiment of the present application, when obtaining the simulated monitoring image, the field-of-view direction of the virtual camera may be determined from its virtual deployment parameters, and the scene model is projected along that direction to obtain a projection image; the field-of-view range of the virtual camera is determined from its virtual deployment parameters and virtual configuration parameters, and the projection image is cropped according to that range to obtain the cropped simulated monitoring image.
Specifically, the installation position and orientation of the virtual camera can be determined from its virtual deployment parameters, which gives the field-of-view direction; projecting the scene model along this direction yields a projection image. The field-of-view range is then obtained from the virtual deployment parameters and virtual configuration parameters, and the projection image is cropped according to this range. The cropped image is the simulated monitoring image captured by the simulated virtual camera.
Alternatively, in an embodiment of the present application, the field-of-view range and direction of the virtual camera may be determined directly from its virtual deployment parameters and virtual configuration parameters; the scene model is then segmented along the field-of-view range to obtain the local scene model lying within the virtual camera's field of view, and the local scene model is projected along the field-of-view direction to obtain the simulated monitoring image corresponding to the virtual camera.
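The projection-and-crop idea can be sketched with a simple pinhole model: points sampled from the scene model are transformed into the virtual camera's coordinate frame (from the virtual deployment parameters) and culled against the horizontal and vertical fields of view (from the virtual configuration parameters). The sketch below is a simplified, assumed implementation for illustration only; a real 3D engine would rasterise the full model rather than project sample points.

```python
import numpy as np

def simulate_view(points_w, cam_pos, forward, up, hfov_deg, vfov_deg):
    """Project scene-model sample points into a virtual camera and keep only those
    inside its field of view; returns normalised image-plane coordinates in [-1, 1]^2.

    cam_pos / forward come from the virtual deployment parameters (position, orientation);
    hfov_deg / vfov_deg come from the virtual configuration parameters."""
    f = np.asarray(forward, dtype=float); f /= np.linalg.norm(f)
    u = np.asarray(up, dtype=float)
    r = np.cross(f, u); r /= np.linalg.norm(r)          # camera "right" axis
    u = np.cross(r, f)                                  # re-orthogonalised "up" axis
    rel = np.asarray(points_w, dtype=float) - np.asarray(cam_pos, dtype=float)
    x, y, z = rel @ r, rel @ u, rel @ f                 # camera-space coordinates
    tan_h = np.tan(np.radians(hfov_deg) / 2.0)
    tan_v = np.tan(np.radians(vfov_deg) / 2.0)
    in_front = z > 1e-6
    px = np.where(in_front, x / np.maximum(z, 1e-6), np.inf) / tan_h
    py = np.where(in_front, y / np.maximum(z, 1e-6), np.inf) / tan_v
    keep = in_front & (np.abs(px) <= 1.0) & (np.abs(py) <= 1.0)   # crop to the FOV
    return np.stack([px[keep], py[keep]], axis=1)

# Toy scene: three points seen by a camera mounted 3 m high, looking along +X
scene = np.array([[5.0, 0.0, 1.0], [5.0, 4.0, 1.0], [-2.0, 0.0, 1.0]])
print(simulate_view(scene, cam_pos=[0, 0, 3], forward=[1, 0, 0], up=[0, 0, 1],
                    hfov_deg=90, vfov_deg=60))   # only the points inside the FOV remain
```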
本申请的一个实施例中,在步骤S103确定虚拟相机的虚拟配置参数为目标场景中待部署的相机的参考配置参数之后,还可以:
从预先获得的不同型号的相机类型中,确定配置参数与参考配置参数相匹配的推荐相机类型。
Specifically, among the camera models that actually exist, there may be no camera whose configuration parameters are exactly identical to the above reference configuration parameters. In view of this, in order to recommend a suitable camera type to the user, the camera types of different models and the configuration parameters of cameras of each type may be obtained in advance, and a camera type whose configuration parameters match the reference configuration parameters is then selected from them as the recommended camera type for installation in the target scene. This makes it convenient for the user to choose the camera actually deployed in the target scene according to the recommended camera type.
In an embodiment of the present application, when determining a recommended camera type whose configuration parameters match the reference configuration parameters, the camera type whose configuration parameters differ least from the reference configuration parameters may be selected as the recommended camera type;
Alternatively, a camera type whose configuration parameters cover the reference configuration parameters may be selected as the recommended camera type. For example, if the horizontal field of view in the reference configuration parameters is 60°, a camera type with a horizontal field of view of 60° or more may be selected as the recommended camera type.
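Both matching strategies described above can be sketched as follows; the catalogue, the model and parameter names, and the "larger value covers the reference" rule are assumptions made for illustration and are not taken from the original disclosure.

```python
from typing import Dict, List

Catalog = Dict[str, Dict[str, float]]   # model name -> configuration parameters

def recommend_types(reference: Dict[str, float], catalog: Catalog,
                    strategy: str = "cover") -> List[str]:
    """Pick recommended camera types for a set of reference configuration parameters.

    "cover":   keep models whose every listed parameter is at least the reference value
               (e.g. a horizontal FOV of 60 degrees or more); treating larger as better
               fits FOV-style parameters but is an assumption in general.
    "closest": rank models by the summed absolute difference to the reference parameters."""
    if strategy == "cover":
        return [m for m, cfg in catalog.items()
                if all(cfg.get(k, 0.0) >= v for k, v in reference.items())]
    ranked = sorted(catalog.items(),
                    key=lambda kv: sum(abs(kv[1].get(k, 0.0) - v) for k, v in reference.items()))
    return [m for m, _ in ranked]

# Hypothetical catalogue; model and parameter names are illustrative only
catalog = {"bullet-A":  {"hfov_deg": 75.0,  "vfov_deg": 42.0},
           "dome-B":    {"hfov_deg": 58.0,  "vfov_deg": 44.0},
           "fisheye-C": {"hfov_deg": 180.0, "vfov_deg": 180.0}}
reference = {"hfov_deg": 60.0, "vfov_deg": 40.0}
print(recommend_types(reference, catalog, "cover"))     # ['bullet-A', 'fisheye-C']
print(recommend_types(reference, catalog, "closest"))   # nearest-first ranking
```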
本申请的一个实施例中,在确定推荐相机类型时,可以根据虚拟相机的参考配置参数,确定与该参考配置参数相匹配的多个推荐相机类型,接收用户对上述多个推荐相机类型中的选中操作,将用户所选中的相机类型作为实际需要的目标相机类型。这样可以由电子设备确定出至少一个推荐相机类型,然后由用户从上述多个推荐的相机类型中选择实际需要的目标相机类型。
本申请的一个实施例中,在得到推荐相机类型之后,还可以:
针对任一推荐相机类型,根据该推荐相机类型对应的相机的配置参数,更新虚拟相机的虚拟配置参数,执行步骤S102。
具体的,针对任一推荐相机类型,可以得到该推荐相机类型对应的相机的配置参数,然后将虚拟相机的虚拟配置参数更新为该推荐相机类型对应的相机的配置参数,返回步骤S102,从而得到新的模拟监控图像,该模拟监控图像可以反映所推荐的相机类型对应的相机在实际应用中的监控范围,便于用户对所推荐的相机进行调整,避免由于所推荐的相机类型对应的相机的配置参数与参考配置参数不匹配,导致推荐的相机类型与目标场景不匹配。
例如,假设所确定的参考配置参数中垂直视场角为90°,而由于已有的相机中最大的垂直视场角为75°,这样使得所推荐的相机类型对应的相机的垂直视场角无法达到90°,从而采用所推荐的相机类型对应的相机进行布控时,可能导致相机的模拟监控图像不满足监控需求。在上述方案中,基于推荐相机类型对应的相机的配置参数对虚拟相机的虚拟配置参数进行更新,进而获得新的模拟监控图像,通过该模拟监控图像便于用户判断所推荐的相机类型对应的相机是否满足监控需求,以便于对所推荐的相机类型进行调整,提高所推荐的相机类型的可靠度。
本申请的一个实施例中,在基于推荐相机类型对应的相机的配置参数对虚拟相机的虚拟配置参数进行更新时,一种情况下,可以仅调整虚拟相机的虚拟配置参数,即仅将虚拟相机的虚拟配置参数更新为推荐相机类型对应的相机的配置参数,而不改变虚拟相机的虚拟部署参数;另一种情况下,可以对虚拟相机的虚拟配置参数、虚拟部署参数进行同步调整,即不仅将虚拟相机的虚拟配置参数更新为推荐相机类型对应的相机的配置参数,而且调整虚拟相机的虚拟部署参数,直至所得到的模拟监控图像满足监控需求,这样可以使得调整后虚拟相机的配置参数与部署参数更加匹配,进而提高相机的监控效果。
本申请的一个实施例中,在由任一推荐相机类型对应的相机的配置参数作为当前虚拟 配置参数得到的模拟监控图像满足监控需求的情况下,确定该推荐相机类型为目标相机类型。
具体的,在将虚拟相机的虚拟配置参数更新为所推荐的推荐相机类型对应的相机的配置参数后,返回上述步骤S102,重新得到模拟监控图像,判断所得到的模拟监控图像是否满足监控需求,若满足,则说明所推荐的推荐相机类型与目标场景相匹配,则可以将该推荐相机类型作为实际在目标场景中部署的相机的目标相机类型。
本申请的一个实施例中,在由任一推荐相机类型对应的相机的配置参数作为当前虚拟配置参数得到的模拟监控图像满足监控需求的情况下,还可以确定当前虚拟相机的虚拟部署参数为目标部署参数。
具体的,在由任一推荐相机类型对应的相机的配置参数作为当前虚拟配置参数得到的模拟监控图像满足监控需求的情况下,可以确定该推荐相机类型为目标相机类型,并确定当前虚拟相机的虚拟部署参数为目标部署参数,后续可以按照该目标部署参数,将所确定的目标相机类型对应的相机部署在目标场景中。这样可以保证所部署的相机的类型和部署参数相匹配,从而提高相机在目标场景中的监控效果。
In addition, on the basis of the above solution, when the simulated monitoring image obtained with the configuration parameters of a recommended camera type as the current virtual configuration parameters meets the monitoring requirement, the current virtual configuration parameters of the virtual camera may also be taken as the target configuration parameters.
Specifically, in that case the recommended camera type can be determined as the target camera type, the current virtual configuration parameters of the virtual camera taken as the target configuration parameters, and the current virtual deployment parameters taken as the target deployment parameters. The camera of the determined target camera type can subsequently be deployed in the target scene according to the target deployment parameters and configured according to the target configuration parameters. For example, if the camera of the target camera type supports a focal length of 6-12 meters and the focal length in the determined target configuration parameters is 10 meters, the camera's focal length can be configured as 10 meters. This makes the deployed camera better match the target scene and improves the camera's monitoring effect.
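A minimal sketch of applying the target configuration parameters within the supported range of the chosen model, assuming each supported parameter is given as a (min, max) pair, is shown below; parameter names are illustrative only.

```python
from typing import Dict, Tuple

def configure_camera(target_cfg: Dict[str, float],
                     supported: Dict[str, Tuple[float, float]]) -> Dict[str, float]:
    """Apply the target configuration parameters to the deployed camera, clamping each
    value into the (min, max) range supported by the chosen camera model."""
    applied = {}
    for name, value in target_cfg.items():
        lo, hi = supported.get(name, (value, value))
        applied[name] = min(max(value, lo), hi)
    return applied

# Example from the text: the model supports 6-12 for this parameter and the target is 10
print(configure_camera({"focal_length": 10.0}, {"focal_length": (6.0, 12.0)}))
# {'focal_length': 10.0}
```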
本申请的一个实施例中,对于上述步骤S101在获得场景模型时,可以由用户自定义创建目标场景的模型,也可以获得外部导入的目标场景的模型,下面分别进行详细介绍。
本申请的一个实施例中,针对用于组成目标场景的每一对象,根据该对象在目标场景中的实际位置,在2D画布中与该实际位置对应的画布位置处,绘制该对象,编辑该对象的预设属性信息,然后对所绘制的各个对象进行3D渲染,得到目标场景的场景模型。
其中,上述对象指的是:用于构成目标场景的物体。例如,假设目标场景为停车场,则上述对象可以为以下对象中的至少一种:收费口、闸机、分流线、停车标志牌等。
每一对象的预设属性信息可以包括该对象的通用属性和私有属性;
上述通用属性指的是各个对象均具有的属性,例如,可以包括以下属性中的至少一种:名称、类型、材质、标识、位置、旋转角度、状态信息等。上述类型可以包括以下类型中的至少一种:建筑物、室外墙、车、树、路、相机、自定义立方体、自定义圆柱体、自定义提示信息、地板、屋顶、窗、门等。
针对不同类型的对象,所具有的私有属性不同,例如,假设对象的类型为建筑物,则私有属性可以包括以下属性中的至少一种:楼层高度、楼层层数、楼层列表等;假设对象的类型为车,则私有属性可以包括以下属性中的至少一种:车辆类型、车身长度、车身高度等。
上述状态信息包括锁定状态或非锁定状态,上述锁定状态指的是对象是否可更改的状态,当对象处于锁定状态时,说明该对象不可更改;当对象处于非锁定状态时,说明该对象可更改。
上述状态信息还可以包括隐藏状态或非隐藏状态,在隐藏状态下,对象处于不可见状态,在非隐藏状态下,对象处于可见状态。
具体的,可以参照目标场景中各个对象的分布,在2D画布中绘制各个对象的形状,并设置所绘制的各个对象的属性信息,然后以各个对象的属性信息为基准,对所绘制的各个2D对象进行3D渲染,得到各个3D对象,由上述3D对象可以组成目标场景。
本申请的一个实施例中,在对场景模型进行编辑时,可以在场景模型中增加自定义的3D实体对象,例如,可以在场景模型中增加围墙、草坪、路灯等3D实体对象。其中,上述3D实体对象为用于预先自定义的3D建模模板。在确定所要增加的3D实体对象的位置时,可以通过点击鼠标或拖拽的方式,将3D实体对象添加至指定位置,也可以通过点击鼠标拾取待添加对象的水平位置,然后按照该水平位置添加自定义的3D实体对象。
本申请的一个实施例中,在获得用于模拟目标场景的场景模型时,还可以:
Scene design information of the target scene is obtained; the scene design information is parsed according to a preset modeling protocol to obtain scene parsing information; and, with reference to the scene parsing information, a 3D model is built with a 3D engine to obtain the scene model.
The scene design information may be a CAD file of the target scene, for example in dwg or dxf format.
The modeling protocol refers to a protocol suitable for 3D modeling, for example one that parses the preset attribute information of each object described in the CAD file.
Specifically, scene design information of the target scene can be obtained, describing the position, shape, attributes, etc. of each object in the target scene. Parsing this information according to the modeling protocol yields scene parsing information, and rendering the scene parsing information with a 3D engine yields the 3D model of the target scene, which serves as the scene model.
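As a simplified stand-in for this pipeline (an assumption for illustration, not the original implementation), the sketch below treats the scene design information as JSON records of named objects with positions and sizes, parses them according to a minimal assumed modelling protocol, and converts each object into an axis-aligned box that a 3D engine could render; parsing real dwg/dxf CAD files would require a CAD library instead.

```python
import json
from dataclasses import dataclass
from typing import List, Tuple

@dataclass
class SceneObject:
    name: str
    kind: str                         # e.g. "building", "wall", "road"
    position: Tuple[float, float]     # footprint origin (x, y) in metres
    size: Tuple[float, float, float]  # (width, depth, height) in metres

def parse_scene_design(design_json: str) -> List[SceneObject]:
    """Parse scene design information into scene-parsing records according to a simple,
    assumed modelling protocol (name / type / position / size per object)."""
    return [SceneObject(name=item["name"], kind=item["type"],
                        position=tuple(item["position"]), size=tuple(item["size"]))
            for item in json.loads(design_json)]

def build_scene_model(objects: List[SceneObject]) -> list:
    """Stand-in for the 3D-engine step: turn each parsed object into an axis-aligned box
    (min corner, max corner) that a 3D engine could render as part of the scene model."""
    boxes = []
    for o in objects:
        (x, y), (w, d, h) = o.position, o.size
        boxes.append((o.name, (x, y, 0.0), (x + w, y + d, h)))
    return boxes

design = json.dumps([
    {"name": "gate", "type": "building", "position": [0, 0], "size": [4, 2, 3]},
    {"name": "lane", "type": "road", "position": [4, 0], "size": [30, 6, 0]},
])
print(build_scene_model(parse_scene_design(design)))
```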
本申请的一个实施例中,可以接收对2D画布中所绘制的对象的编辑操作,然后基于编辑后的对象进行3D渲染,从而实现同步对3D的场景模型进行更新;
另外,也可以接收对3D的场景模型的编辑操作,然后基于编辑后的同步对2D画布中所绘制的对象进行更新。
本申请的一个实施例中,所获得的场景模型支持导入和导出,上述场景模型在导出后,可以作为模板,供后续获得新的场景模型时使用。
本申请所提供的参数确定方法可以应用于客户端上,下面对客户端的界面进行介绍。
本申请的一个实施例中,场景模型以及虚拟相机在场景模型中进行布控的状态展示在第一界面的第一窗口中;
模拟监控图像展示在第一界面的第二窗口中。
具体的,第一界面中可以包括第一窗口和第二窗口,其中,第一窗口可以用于展示场景模型及所部署的虚拟相机,第二窗口可以用于展示模拟监控图像等;
在步骤S101得到目标场景的场景模型后,可以将该场景模型展示在客户端的第一界面的第一窗口中;
相对应地,在上述场景模型中部署虚拟相机之后,也可以将部署虚拟相机后的场景模型展示在第一界面的第一窗口中;
在上述步骤S102得到模拟监控图像后,则可以将该模拟监控图像展示在第二窗口中。
本申请的一个实施例中,还可以根据虚拟相机的虚拟部署参数、虚拟配置参数,确定虚拟相机的可视区域,然后将该可视区域展示在第一窗口中,便于用户从第一窗口直观地观察虚拟相机的监控范围。
本申请的一个实施例中,第二窗口还可以展示待在场景模型中部署的虚拟相机的选项以及已部署的虚拟相机的虚拟参数。这样便于用户从第二窗口中选择虚拟相机部署于场景模型中,也便于后续用户通过第二窗口调整所部署的虚拟相机的虚拟参数。
参见图2,图2为本申请实施例提供的一种第一界面的示意图。该第一界面的左侧为第一窗口,第一窗口展示有目标场景的场景模型;该第一界面的右侧下方为第二窗口,第二窗口展示有可添加的虚拟相机列表。用户可以通过外部输入设备从第二窗口中选中所要添加的虚拟相机,然后将所选中的虚拟相机部署到第一窗口的场景模型中,实现在场景模型中部署虚拟相机。需要说明的是,图2仅仅是第一界面中不同窗口布局的一种示例,图2中的文字并不对本申请的方案产生实质影响。
参见图3,图3为本申请实施例提供的另一种第一界面的示意图。该第一界面的左侧为第一窗口,第一窗口展示有目标场景的场景模型、以及在上述场景模型中所部署的虚拟相机;该第一界面的右侧为第二窗口,第二窗口展示有虚拟相机的标识、安装方式、到目标的距离、目标高度、安装高度、水平视野,还展示有模拟该虚拟相机所采集的模拟监控图像,作为该虚拟相机的模拟监控图像,便于用户查看该虚拟相机的信息。需要说明的是,图3仅仅是第一界面中不同窗口布局的一种示例,图3中的文字并不对本申请的方案产生实质影响。
本申请的一个实施例中,在确定参考配置参数之后,可以从第一界面切换至第二界面,在第二界面的第三窗口中展示配置参数与参考配置参数相匹配的推荐相机类型。
具体的,在步骤S103确定参考配置参数之后,可以从上述第一界面中切换至第二界面,第二界面包括第三窗口,该第三窗口中可以展示多个与参考配置参数相匹配的、待推荐的推荐相机类型,便于用户从该多种推荐相机类型中选择实际待部署的相机的相机类型。
本申请的一个实施例中,第二界面还包括第四窗口,第四窗口中用于显示场景模型以及虚拟相机在场景模型中进行布控的状态。
具体的,上述第二界面可以包括第三窗口和第四窗口,第三窗口可以用于展示推荐相机类型,第四窗口可以用于展示目标场景的场景模型、以及在上述场景模型中所部署的、所选择的相机类型对应的虚拟相机。这样便于用户在第二界面中查看所推荐的相机类型、 以及所推荐的相机类型在场景模型中部署的效果,便于用户对所推荐的相机类型进行选择。
除此之外,第三窗口中还可以展示所推荐的相机类型对应的相机的配置参数。
参见图4,图4为本申请实施例提供的一种第二界面的示意图。该第二界面的左侧为第四窗口,第四窗口展示有目标场景的场景模型,以及在上述场景模型中所部署的、所选择的相机类型对应的虚拟相机,第二界面的右侧为第三窗口,第三窗口展示有所推荐的相机类型、以及所推荐的的相机类型对应的相机的参数信息。需要说明的是,图4仅仅是第二界面中不同窗口布局的一种示例,图4中的文字并不对本申请的方案产生实质影响。
本申请的一个实施例中,上述第四窗口支持多种查看模式,分别为第一查看模式、第二查看模式、第三查看模式、第四查看模式,其中:
第一查看模式指的是沿俯视方向查看场景模型及所部署的相机;参见图5,图5为本申请实施例提供的一种第一查看模式的示意图,在第一查看模式下,可以从第四窗口中沿俯视方向查看场景模型;需要说明的是,图5仅仅是第一查看模式下第二界面布局的一种示例,图5中的文字并不对本申请的方案产生实质影响。
第二查看模式指的是沿侧视方向查看场景模型及所部署的相机;
第三查看模式指的是沿所部署的相机的视场方向查看场景模型;
第四查看模式指的是:以所部署的相机的位置为基准,环绕查看场景模型,该模式下,用户可以通过鼠标、键盘、触摸屏等控制查看方向。
参见图6,图6为本申请实施例提供的另一种相机的参数确定方法的流程示意图,该方法包括如下步骤:
S601,获得用于模拟目标场景的场景模型,在第一界面的第一窗口中展示场景模型;
S602,确定用户从第一界面的第二窗口中选中的虚拟相机,将所选中的虚拟相机部署在场景模型中;
S603,根据虚拟相机的虚拟参数模拟虚拟相机在场景模型中进行布控,得到模拟监控图像,将模拟监控图像、及虚拟相机的虚拟参数展示在第二窗口中;
S604,在模拟监控图像不满足预设的监控需求的情况下,调整虚拟相机的虚拟参数,返回步骤S603,直至得到的模拟监控图像满足监控需求;
S605,从第一界面切换至第二界面,在第二界面中选择与虚拟相机的虚拟配置参数相匹配的推荐相机类型;
具体的,在得到的模拟监控图像满足监控需求的情况下,可以将虚拟相机的虚拟配置参数作为目标场景中待部署的相机的参考配置参数,而第二界面中可以展示有实际的、不同型号的相机类型,可以从第二界面所展示的多种相机类型中确定配置参数与该参考配置参数相匹配的相机类型,作为推荐相机类型。
S606,获得调整后虚拟相机的虚拟参数中的虚拟部署参数,作为所选择的推荐相机类型对应的相机在目标场景中安装的部署参数。
具体的,在基于调整后虚拟相机的虚拟参数得到的模拟监控图像满足监控需求的情况下,可以将虚拟相机的虚拟部署参数作为目标场景中待安装的相机的部署参数,从而可以按照该部署参数,将上述推荐相机类型对应的相机安装在目标场景中。
上述实施例提供的方案中,可以获得用于模拟目标场景的场景模型,在场景模型中部署虚拟相机;根据虚拟相机的虚拟参数模拟虚拟相机在场景模型中进行布控,得到模拟监控图像,其中,虚拟参数包括虚拟部署参数和虚拟配置参数;在模拟监控图像满足监控需求的情况下,将虚拟相机的虚拟部署参数和/或虚拟配置参数,确定为目标场景中待部署的相机的参考部署参数和/或参考配置参数。这样可以在场景模型中部署虚拟相机,根据虚拟相机的虚拟参数模拟得到监控图像,根据所模拟的监控图像判断虚拟相机的虚拟参数是否满足监控需求,若为是,则可以将该虚拟相机的虚拟部署参数和/或虚拟配置参数确定为实际在目标场景中所要部署的相机的参考部署参数和/或参考配置参数,后续可以基于该参考部署参数和/或参考配置参数确定实际在目标场景中所要部署的相机的部署参数和/或配置参数,从而保证所确定的相机的参数与真实的目标场景相匹配。由此可见,应用上述实施例提供的方案,可以提高所确定的相机参数的准确度。
本申请还提供了一种参数确定装置,下面进行详细介绍。
参见图7,图7为本申请实施例提供的一种相机的参数确定装置的结构示意图,所述装置包括:
场景模型获得模块701,用于获得用于模拟目标场景的场景模型;
虚拟相机部署模块702,用于在所述场景模型中部署虚拟相机;
图像获得模块703,用于根据所述虚拟相机的虚拟参数模拟所述虚拟相机在所述场景模型中进行布控,得到模拟监控图像,其中,所述虚拟参数包括虚拟部署参数和虚拟配置参数;
参数确定模块704,用于在所述模拟监控图像满足监控需求的情况下,将所述虚拟相机的虚拟部署参数和/或虚拟配置参数,确定为所述目标场景中待部署的相机的参考部署参数和/或参考配置参数。
本申请的一个实施例中,所述图像获得模块703,具体用于:
根据所述虚拟相机的虚拟部署参数,确定所述虚拟相机的视场方向,沿所述视场方向对所述场景模型进行投影,得到投影图像;
根据所述虚拟相机的虚拟部署参数和虚拟配置参数,确定所述虚拟相机的视场范围,根据所述视场范围对所述投影图像进行裁剪,得到裁剪后的模拟监控图像;
根据所述虚拟相机的虚拟部署参数和虚拟配置参数,确定所述虚拟相机的视场范围及视场方向;
沿所述视场范围对所述场景模型进行分割,得到处于所述虚拟相机的视场范围的局部场景模型;
沿所述视场方向对所述局部场景模型进行投影,得到所述虚拟相机对应的模拟监控图像。
本申请的一个实施例中,所述装置还包括类型推荐模块,用于:
在将所述虚拟相机的虚拟配置参数，确定为所述目标场景中待部署的相机的参考配置参数之后，从预先获得的不同型号的相机类型中，确定配置参数与所述参考配置参数相匹配的推荐相机类型。
本申请的一个实施例中,所述装置还包括参数更新模块,用于:
在确定配置参数与所述参考配置参数相匹配的推荐相机类型之后,针对任一推荐相机类型,根据该推荐相机类型对应的相机的配置参数,更新所述虚拟相机的虚拟配置参数,触发所述图像获得模块703。
本申请的一个实施例中,所述参数更新模块,还用于:
在由任一推荐相机类型对应的相机的配置参数作为虚拟配置参数得到的模拟监控图像满足所述监控需求的情况下,确定该推荐相机类型为目标相机类型。
本申请的一个实施例中,所述场景模型以及所述虚拟相机在所述场景模型中进行布控的状态展示在第一界面的第一窗口中;
所述模拟监控图像展示在所述第一界面的第二窗口中;
所述装置还包括界面切换模块,用于:
在所述将所述虚拟相机的虚拟配置参数,确定为所述目标场景中待部署的相机的参考配置参数之后,从所述第一界面切换至第二界面,在所述第二界面的第三窗口中展示配置参数与所述参考配置参数相匹配的推荐相机类型;
所述第二界面还包括第四窗口,所述第四窗口中显示所述场景模型以及所述虚拟相机在所述场景模型中进行布控的状态。
本申请的一个实施例中,所述装置还包括参数调整模块,用于:
在所述模拟监控图像不满足所述监控需求的情况下,调整所述虚拟相机的虚拟部署参数和/或虚拟配置参数,触发所述图像获得模块703,直至所述模拟监控图像满足所述监控需求;和/或
所述场景模型中部署有多个虚拟相机,所述装置还包括盲区消除模块,用于:
确定不同虚拟相机的视场范围之间的监控盲区;根据所确定的监控盲区,调整所述多个虚拟相机中至少一虚拟相机的虚拟部署参数和/或虚拟配置参数;和/或
所述场景模型获得模块701,具体用于:
获得目标场景的场景设计信息;
按照预设的建模协议对所述场景设计信息进行解析,得到场景解析信息;
参照所述场景解析信息,利用3D引擎构建3D模型,得到场景模型。
上述实施例提供的方案中,可以获得用于模拟目标场景的场景模型,在场景模型中部署虚拟相机;根据虚拟相机的虚拟参数模拟虚拟相机在场景模型中进行布控,得到模拟监控图像,其中,虚拟参数包括虚拟部署参数和虚拟配置参数;在模拟监控图像满足监控需求的情况下,将虚拟相机的虚拟部署参数和/或虚拟配置参数,确定为目标场景中待部署的相机的参考部署参数和/或参考配置参数。这样可以在场景模型中部署虚拟相机,根据虚拟相机的虚拟参数模拟得到监控图像,根据所模拟的监控图像判断虚拟相机的虚拟参数是否满足监控需求,若为是,则可以将该虚拟相机的虚拟部署参数和/或虚拟配置参数确定为实际在目标场景中所要部署的相机的参考部署参数和/或参考配置参数,后续可以基于该参考部署参数和/或参考配置参数确定实际在目标场景中所要部署的相机的部署参数和/或配置 参数,从而保证所确定的相机的参数与真实的目标场景相匹配。由此可见,应用上述实施例提供的方案,可以提高所确定的相机参数的准确度。
本申请实施例还提供了一种电子设备,如图8a所示,包括处理器801、和存储器803,
存储器803,用于存放计算机程序;处理器801,用于执行存储器803上所存放的程序时,实现上述相机的参数确定方法。
可选的,如图8b所示,该电子设备还包括通信接口802和通信总线804,其中,处理器801,通信接口802,存储器803通过通信总线804完成相互间的通信。上述电子设备提到的通信总线可以是外设部件互连标准(Peripheral Component Interconnect,PCI)总线或扩展工业标准结构(Extended Industry Standard Architecture,EISA)总线等。该通信总线可以分为地址总线、数据总线、控制总线等。为便于表示,图中仅用一条粗线表示,但并不表示仅有一根总线或一种类型的总线。
通信接口用于上述电子设备与其他设备之间的通信。
存储器可以包括随机存取存储器(Random Access Memory,RAM),也可以包括非易失性存储器(Non-Volatile Memory,NVM),例如至少一个磁盘存储器。可选的,存储器还可以是至少一个位于远离前述处理器的存储装置。
上述的处理器可以是通用处理器,包括中央处理器(Central Processing Unit,CPU)、网络处理器(Network Processor,NP)等;还可以是数字信号处理器(Digital Signal Processor,DSP)、专用集成电路(Application Specific Integrated Circuit,ASIC)、现场可编程门阵列(Field-Programmable Gate Array,FPGA)或者其他可编程逻辑器件、分立门或者晶体管逻辑器件、分立硬件组件。
在本申请提供的又一实施例中,还提供了一种计算机可读存储介质,该计算机可读存储介质内存储有计算机程序,所述计算机程序被处理器执行时实现上述任一相机的参数确定方法的步骤。
在本申请提供的又一实施例中,还提供了一种包含指令的计算机程序产品,当其在计算机上运行时,使得计算机执行上述实施例中任一相机的参数确定方法。
上述实施例提供的方案中,可以获得用于模拟目标场景的场景模型,在场景模型中部署虚拟相机;根据虚拟相机的虚拟参数模拟虚拟相机在场景模型中进行布控,得到模拟监控图像,其中,虚拟参数包括虚拟部署参数和虚拟配置参数;在模拟监控图像满足监控需求的情况下,将虚拟相机的虚拟部署参数和/或虚拟配置参数,确定为目标场景中待部署的相机的参考部署参数和/或参考配置参数。这样可以在场景模型中部署虚拟相机,根据虚拟相机的虚拟参数模拟得到监控图像,根据所模拟的监控图像判断虚拟相机的虚拟参数是否满足监控需求,若为是,则可以将该虚拟相机的虚拟部署参数和/或虚拟配置参数确定为实际在目标场景中所要部署的相机的参考部署参数和/或参考配置参数,后续可以基于该参考部署参数和/或参考配置参数确定实际在目标场景中所要部署的相机的部署参数和/或配置参数,从而保证所确定的相机的参数与真实的目标场景相匹配。由此可见,应用上述实施例提供的方案,可以提高所确定的相机参数的准确度。
在上述实施例中,可以全部或部分地通过软件、硬件、固件或者其任意组合来实现。 当使用软件实现时,可以全部或部分地以计算机程序产品的形式实现。所述计算机程序产品包括一个或多个计算机指令。在计算机上加载和执行所述计算机程序指令时,全部或部分地产生按照本申请实施例所述的流程或功能。所述计算机可以是通用计算机、专用计算机、计算机网络、或者其他可编程装置。所述计算机指令可以存储在计算机可读存储介质中,或者从一个计算机可读存储介质向另一个计算机可读存储介质传输,例如,所述计算机指令可以从一个网站站点、计算机、服务器或数据中心通过有线(例如同轴电缆、光纤、数字用户线(DSL))或无线(例如红外、无线、微波等)方式向另一个网站站点、计算机、服务器或数据中心进行传输。所述计算机可读存储介质可以是计算机能够存取的任何可用介质或者是包含一个或多个可用介质集成的服务器、数据中心等数据存储设备。所述可用介质可以是磁性介质,(例如,软盘、硬盘、磁带)、光介质(例如,DVD)、或者半导体介质(例如固态硬盘Solid State Disk(SSD))等。
需要说明的是,在本文中,诸如第一和第二等之类的关系术语仅仅用来将一个实体或者操作与另一个实体或操作区分开来,而不一定要求或者暗示这些实体或操作之间存在任何这种实际的关系或者顺序。而且,术语“包括”、“包含”或者其任何其他变体意在涵盖非排他性的包含,从而使得包括一系列要素的过程、方法、物品或者设备不仅包括那些要素,而且还包括没有明确列出的其他要素,或者是还包括为这种过程、方法、物品或者设备所固有的要素。在没有更多限制的情况下,由语句“包括一个……”限定的要素,并不排除在包括所述要素的过程、方法、物品或者设备中还存在另外的相同要素。
本说明书中的各个实施例均采用相关的方式描述,各个实施例之间相同相似的部分互相参见即可,每个实施例重点说明的都是与其他实施例的不同之处。尤其,对于装置实施例、电子设备实施例、计算机可读存储介质实施例、计算机程序产品实施例而言,由于其基本相似于方法实施例,所以描述的比较简单,相关之处参见方法实施例的部分说明即可。
以上所述仅为本申请的较佳实施例,并非用于限定本申请的保护范围。凡在本申请的精神和原则之内所作的任何修改、等同替换、改进等,均包含在本申请的保护范围内。

Claims (16)

  1. 一种相机的参数确定方法,其特征在于,所述方法包括:
    获得用于模拟目标场景的场景模型,在所述场景模型中部署虚拟相机;
    根据所述虚拟相机的虚拟参数模拟所述虚拟相机在所述场景模型中进行布控,得到模拟监控图像,其中,所述虚拟参数包括虚拟部署参数和虚拟配置参数;
    在所述模拟监控图像满足监控需求的情况下,将所述虚拟相机的虚拟部署参数和/或虚拟配置参数,确定为所述目标场景中待部署的相机的参考部署参数和/或参考配置参数。
  2. 根据权利要求1所述的方法,其特征在于,所述根据所述虚拟相机的虚拟参数模拟所述虚拟相机在所述场景模型中进行布控,得到模拟监控图像,包括:
    根据所述虚拟相机的虚拟部署参数,确定所述虚拟相机的视场方向,沿所述视场方向对所述场景模型进行投影,得到投影图像;
    根据所述虚拟相机的虚拟部署参数和虚拟配置参数,确定所述虚拟相机的视场范围,根据所述视场范围对所述投影图像进行裁剪,得到裁剪后的模拟监控图像;
    根据所述虚拟相机的虚拟部署参数和虚拟配置参数,确定所述虚拟相机的视场范围及视场方向;
    沿所述视场范围对所述场景模型进行分割,得到处于所述虚拟相机的视场范围的局部场景模型;
    沿所述视场方向对所述局部场景模型进行投影,得到所述虚拟相机对应的模拟监控图像。
  3. 根据权利要求1所述的方法,其特征在于,在将所述虚拟相机的虚拟配置参数,确定为所述目标场景中待部署的相机的参考配置参数之后,所述方法还包括:
    从预先获得的不同型号的相机类型中,确定配置参数与所述参考配置参数相匹配的推荐相机类型。
  4. 根据权利要求3所述的方法,其特征在于,在确定配置参数与所述参考配置参数相匹配的推荐相机类型之后,所述方法还包括:
    针对任一推荐相机类型,根据该推荐相机类型对应的相机的配置参数,更新所述虚拟相机的虚拟配置参数,执行所述根据所述虚拟相机的虚拟参数模拟所述虚拟相机在所述场景模型中进行布控、得到模拟监控图像的步骤。
  5. 根据权利要求4所述的方法,其特征在于,所述方法还包括:
    在由任一推荐相机类型对应的相机的配置参数作为虚拟配置参数得到的模拟监控图像满足所述监控需求的情况下,确定该推荐相机类型为目标相机类型。
  6. 根据权利要求3所述的方法,其特征在于,
    所述场景模型以及所述虚拟相机在所述场景模型中进行布控的状态展示在第一界面的第一窗口中;
    所述模拟监控图像展示在所述第一界面的第二窗口中;
    在所述将所述虚拟相机的虚拟配置参数，确定为所述目标场景中待部署的相机的参考配置参数之后，所述方法还包括：
    从所述第一界面切换至第二界面,在所述第二界面的第三窗口中展示配置参数与所述参考配置参数相匹配的推荐相机类型;
    所述第二界面还包括第四窗口,所述第四窗口中显示所述场景模型以及所述虚拟相机在所述场景模型中进行布控的状态。
  7. 根据权利要求1-6中任一项所述的方法,其特征在于,所述方法还包括:
    在所述模拟监控图像不满足所述监控需求的情况下,调整所述虚拟相机的虚拟部署参数和/或虚拟配置参数,返回所述根据所述虚拟相机的虚拟参数模拟所述虚拟相机在所述场景模型中进行布控、得到模拟监控图像的步骤,直至所述模拟监控图像满足所述监控需求;和/或
    所述场景模型中部署有多个虚拟相机,所述方法还包括:
    确定不同虚拟相机的视场范围之间的监控盲区;
    根据所确定的监控盲区,调整所述多个虚拟相机中至少一虚拟相机的虚拟部署参数和/或虚拟配置参数;和/或
    所述获得用于模拟目标场景的场景模型,包括:
    获得目标场景的场景设计信息;
    按照预设的建模协议对所述场景设计信息进行解析,得到场景解析信息;
    参照所述场景解析信息,利用3D引擎构建3D模型,得到场景模型。
  8. 一种相机的参数确定装置,其特征在于,所述装置包括:
    场景模型获得模块,用于获得用于模拟目标场景的场景模型;
    虚拟相机部署模块,用于在所述场景模型中部署虚拟相机;
    图像获得模块,用于根据所述虚拟相机的虚拟参数模拟所述虚拟相机在所述场景模型中进行布控,得到模拟监控图像,其中,所述虚拟参数包括虚拟部署参数和虚拟配置参数;
    参数确定模块,用于在所述模拟监控图像满足监控需求的情况下,将所述虚拟相机的虚拟部署参数和/或虚拟配置参数,确定为所述目标场景中待部署的相机的参考部署参数和/或参考配置参数。
  9. 根据权利要求8所述的装置,其特征在于,所述图像获得模块,具体用于:
    根据所述虚拟相机的虚拟部署参数,确定所述虚拟相机的视场方向,沿所述视场方向对所述场景模型进行投影,得到投影图像;
    根据所述虚拟相机的虚拟部署参数和虚拟配置参数,确定所述虚拟相机的视场范围,根据所述视场范围对所述投影图像进行裁剪,得到裁剪后的模拟监控图像;
    根据所述虚拟相机的虚拟部署参数和虚拟配置参数,确定所述虚拟相机的视场范围及视场方向;
    沿所述视场范围对所述场景模型进行分割,得到处于所述虚拟相机的视场范围的局部场景模型;
    沿所述视场方向对所述局部场景模型进行投影，得到所述虚拟相机对应的模拟监控图像。
  10. 根据权利要求8所述的装置,其特征在于,所述装置还包括类型推荐模块,用于:
    在将所述虚拟相机的虚拟配置参数,确定为所述目标场景中待部署的相机的参考配置参数之后,从预先获得的不同型号的相机类型中,确定配置参数与所述参考配置参数相匹配的推荐相机类型。
  11. 根据权利要求10所述的装置,其特征在于,所述装置还包括参数更新模块,用于:
    在确定配置参数与所述参考配置参数相匹配的推荐相机类型之后,针对任一推荐相机类型,根据该推荐相机类型对应的相机的配置参数,更新所述虚拟相机的虚拟配置参数,触发所述图像获得模块。
  12. 根据权利要求11所述的装置,其特征在于,所述参数更新模块,还用于:
    在由任一推荐相机类型对应的相机的配置参数作为虚拟配置参数得到的模拟监控图像满足所述监控需求的情况下,确定该推荐相机类型为目标相机类型。
  13. 根据权利要求10所述的装置,其特征在于,所述场景模型以及所述虚拟相机在所述场景模型中进行布控的状态展示在第一界面的第一窗口中;
    所述模拟监控图像展示在所述第一界面的第二窗口中;
    所述装置还包括界面切换模块,用于:
    在所述将所述虚拟相机的虚拟配置参数,确定为所述目标场景中待部署的相机的参考配置参数之后,从所述第一界面切换至第二界面,在所述第二界面的第三窗口中展示配置参数与所述参考配置参数相匹配的推荐相机类型;
    所述第二界面还包括第四窗口,所述第四窗口中显示所述场景模型以及所述虚拟相机在所述场景模型中进行布控的状态。
  14. 根据权利要求8-13中任一项所述的装置,其特征在于,所述装置还包括参数调整模块,用于:
    在所述模拟监控图像不满足所述监控需求的情况下,调整所述虚拟相机的虚拟部署参数和/或虚拟配置参数,触发所述图像获得模块,直至所述模拟监控图像满足所述监控需求;和/或
    所述场景模型中部署有多个虚拟相机,所述装置还包括盲区消除模块,用于:
    确定不同虚拟相机的视场范围之间的监控盲区;根据所确定的监控盲区,调整所述多个虚拟相机中至少一虚拟相机的虚拟部署参数和/或虚拟配置参数;和/或
    所述场景模型获得模块,具体用于:
    获得目标场景的场景设计信息;按照预设的建模协议对所述场景设计信息进行解析,得到场景解析信息;参照所述场景解析信息,利用3D引擎构建3D模型,得到场景模型。
  15. 一种电子设备,其特征在于,包括处理器和存储器;存储器,用于存放计算机程序;处理器,用于执行存储器上所存放的程序时,实现权利要求1-7任一所述的方法。
  16. 一种包含指令的计算机程序产品,其特征在于,当其在计算机上运行时,使得所述计算机执行权利要求1-7中任一项所述的方法。
PCT/CN2022/116506 2021-09-27 2022-09-01 一种相机的参数确定方法、装置、电子设备及程序产品 WO2023045726A1 (zh)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN202111138970.4 2021-09-27
CN202111138970.4A CN113824882A (zh) 2021-09-27 2021-09-27 一种相机的参数确定方法、装置、电子设备及程序产品

Publications (1)

Publication Number Publication Date
WO2023045726A1 true WO2023045726A1 (zh) 2023-03-30

Family

ID=78921371

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2022/116506 WO2023045726A1 (zh) 2021-09-27 2022-09-01 一种相机的参数确定方法、装置、电子设备及程序产品

Country Status (2)

Country Link
CN (1) CN113824882A (zh)
WO (1) WO2023045726A1 (zh)

Families Citing this family (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113824882A (zh) * 2021-09-27 2021-12-21 杭州海康威视数字技术股份有限公司 一种相机的参数确定方法、装置、电子设备及程序产品
US11914837B2 (en) 2022-07-08 2024-02-27 Shanghai Lilith Technology Corporation Video acquisition method, electronic device, and storage medium

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20050002662A1 (en) * 2003-07-01 2005-01-06 Sarnoff Corporation Method and apparatus for placing sensors using 3D models
CN102685394A (zh) * 2011-01-31 2012-09-19 霍尼韦尔国际公司 利用虚拟环境的传感器布置和分析
US9665800B1 (en) * 2012-10-21 2017-05-30 Google Inc. Rendering virtual views of three-dimensional (3D) objects
CN108076320A (zh) * 2016-11-17 2018-05-25 天津凯溢华升科技发展有限公司 一种城市技术服务用安防方法
CN113824882A (zh) * 2021-09-27 2021-12-21 杭州海康威视数字技术股份有限公司 一种相机的参数确定方法、装置、电子设备及程序产品

Family Cites Families (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102867086B (zh) * 2012-09-10 2014-06-25 安科智慧城市技术(中国)有限公司 一种监控摄像机的自动部署方法、系统及电子设备
CN113438469B (zh) * 2021-05-31 2022-03-15 深圳市大工创新技术有限公司 安防摄像机自动化测试方法及系统

Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20050002662A1 (en) * 2003-07-01 2005-01-06 Sarnoff Corporation Method and apparatus for placing sensors using 3D models
CN102685394A (zh) * 2011-01-31 2012-09-19 霍尼韦尔国际公司 利用虚拟环境的传感器布置和分析
US9665800B1 (en) * 2012-10-21 2017-05-30 Google Inc. Rendering virtual views of three-dimensional (3D) objects
CN108076320A (zh) * 2016-11-17 2018-05-25 天津凯溢华升科技发展有限公司 一种城市技术服务用安防方法
CN113824882A (zh) * 2021-09-27 2021-12-21 杭州海康威视数字技术股份有限公司 一种相机的参数确定方法、装置、电子设备及程序产品

Also Published As

Publication number Publication date
CN113824882A (zh) 2021-12-21

Similar Documents

Publication Publication Date Title
WO2023045726A1 (zh) 一种相机的参数确定方法、装置、电子设备及程序产品
US10854013B2 (en) Systems and methods for presenting building information
AU2017204181B2 (en) Video camera scene translation
US20190371055A1 (en) 3d monitoring server using 3d bim object model and 3d monitoring system comprising it
US9898862B2 (en) System and method for modeling buildings and building products
Chen et al. Visualization of CCTV coverage in public building space using BIM technology
CN111937051A (zh) 使用增强现实可视化的智能家居设备放置和安装
WO2019242057A1 (zh) 远程全景看房方法、装置、用户终端、服务器及存储介质
CN104166657A (zh) 电子地图搜索方法以及服务器
CA3203691A1 (en) System and method to process and display information related to real estate by developing and presenting a photogrammetric reality mesh
US20190266653A1 (en) Graphical user interface for creating building product literature
US20220130004A1 (en) Interface for uncompleted homes planning
CN110675505A (zh) 基于全景虚实无缝融合的室内外看房系统
WO2021115322A1 (zh) 核电厂实物保护系统的三维布设方法、装置及设备
CN111222190A (zh) 一种古建筑管理系统
US20160085831A1 (en) Method and apparatus for map classification and restructuring
CN111161413A (zh) 一种基于gis的三维虚拟机场平台的构建方法
US11094121B2 (en) Mobile application for signage design solution using augmented reality
JP6362233B1 (ja) 建築設備に関する情報の表示システム、表示装置およびプログラム
US20220269397A1 (en) Systems and methods for interactive maps
KR102640497B1 (ko) 경량화된 3차원 모델링 데이터의 공간정보 설정 및 영상 관제 장치
US10922546B2 (en) Real-time location tagging
CN113297652A (zh) 施工图的生成方法、装置及设备
US20240112420A1 (en) Augmented reality enhanced building model viewer
CN115063530A (zh) 广告位展示方法、装置、电子设备及存储介质

Legal Events

Date Code Title Description
WD Withdrawal of designations after international publication
NENP Non-entry into the national phase

Ref country code: DE