WO2022217877A1 - Map generation method and apparatus, electronic device, and storage medium

Map generation method and apparatus, electronic device, and storage medium

Info

Publication number
WO2022217877A1
WO2022217877A1 · PCT/CN2021/125027 · CN2021125027W
Authority
WO
WIPO (PCT)
Prior art keywords
shooting
area
acquisition
information
devices
Prior art date
Application number
PCT/CN2021/125027
Other languages
English (en)
Chinese (zh)
Other versions
WO2022217877A9 (fr)
Inventor
许文航
吴佳飞
张广程
闫俊杰
Original Assignee
浙江商汤科技开发有限公司
Priority date
Filing date
Publication date
Application filed by 浙江商汤科技开发有限公司
Publication of WO2022217877A1 publication Critical patent/WO2022217877A1/fr
Publication of WO2022217877A9 publication Critical patent/WO2022217877A9/fr

Classifications

    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N7/00Television systems
    • H04N7/18Closed-circuit television [CCTV] systems, i.e. systems in which the video signal is not broadcast
    • H04N7/181Closed-circuit television [CCTV] systems, i.e. systems in which the video signal is not broadcast for receiving images from a plurality of remote sources
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/70Determining position or orientation of objects or cameras
    • G06T7/73Determining position or orientation of objects or cameras using feature-based methods
    • GPHYSICS
    • G09EDUCATION; CRYPTOGRAPHY; DISPLAY; ADVERTISING; SEALS
    • G09BEDUCATIONAL OR DEMONSTRATION APPLIANCES; APPLIANCES FOR TEACHING, OR COMMUNICATING WITH, THE BLIND, DEAF OR MUTE; MODELS; PLANETARIA; GLOBES; MAPS; DIAGRAMS
    • G09B29/00Maps; Plans; Charts; Diagrams, e.g. route diagram
    • G09B29/003Maps
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04LTRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L67/00Network arrangements or protocols for supporting network services or applications
    • H04L67/01Protocols
    • H04L67/12Protocols specially adapted for proprietary or special-purpose networking environments, e.g. medical networks, sensor networks, networks in vehicles or remote metering networks
    • H04L67/125Protocols specially adapted for proprietary or special-purpose networking environments, e.g. medical networks, sensor networks, networks in vehicles or remote metering networks involving control of end-device applications over a network
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N23/00Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N23/60Control of cameras or camera modules
    • H04N23/66Remote control of cameras or camera parts, e.g. by remote control devices
    • H04N23/661Transmitting camera control signals through networks, e.g. control via the Internet
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N23/00Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N23/60Control of cameras or camera modules
    • H04N23/695Control of camera direction for changing a field of view, e.g. pan, tilt or based on tracking of objects
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N23/00Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N23/90Arrangement of cameras or camera modules, e.g. multiple cameras in TV studios or sports stadiums

Definitions

  • the present disclosure relates to the field of security technologies, and in particular, to a map generation method and device, an electronic device and a storage medium.
  • multiple acquisition devices can be set in the current scene, so that images of the current scene can be acquired in multiple directions.
  • multiple cameras can be arranged in parks and streets, and images can be taken through multiple cameras, so as to realize the security protection of parks and streets.
  • the present disclosure provides a technical solution for map generation.
  • a map generation method comprising:
  • the method further includes: sending an acquisition request in a broadcast manner, so that the plurality of acquisition devices return the pose information and the shooting field of view information based on the acquisition request; or, sending an acquisition request to the multiple acquisition devices, so that the multiple acquisition devices return the pose information and the shooting field of view information based on the acquisition request.
  • the pose information includes a geographic location and an orientation
  • the shooting field of view information includes a field of view
  • determining the shooting area of each acquisition device includes: determining the shooting angle range of the acquisition device according to the orientation and the angle of view of the acquisition device; and determining the shooting area of each acquisition device according to the geographic location of the acquisition device and the shooting angle range.
  • the shooting field of view information further includes an optimal shooting distance, and determining the shooting area of the acquisition device according to the geographic location of the acquisition device and the shooting angle range includes: determining, according to the geographic location of the acquisition device, the shooting angle range, and the optimal shooting distance, a fan-shaped area formed with the geographic location of the acquisition device as a vertex; and determining the fan-shaped area as the shooting area of the acquisition device.
  • the method further includes: determining at least one of a shooting blind area and a non-optimal shooting area of the target scene according to the shooting area of each acquisition device, where the non-optimal shooting area is an area in the target scene that exceeds the optimal shooting distances of the multiple acquisition devices; and prompting at least one of the shooting blind area and the non-optimal shooting area in the deployment map.
  • the method further includes: in the case where it is determined that the shooting blind area exists, generating a rotation instruction based on the position information of the shooting blind area; and sending the rotation instruction to at least one acquisition device among the multiple acquisition devices, so that the at least one acquisition device rotates toward the shooting blind area.
  • the method further includes: in the case where it is determined that the non-optimal shooting area exists, generating a parameter adjustment instruction; and sending the parameter adjustment instruction to at least one acquisition device among the multiple acquisition devices, where the parameter adjustment instruction is used to expand the shooting area of the at least one acquisition device.
  • a method for generating a map is provided, which is applied to a collection device, including:
  • acquiring current pose information and shooting field of view information; and sending the pose information and the shooting field of view information to the server device, where the server device is used to determine the shooting area of each collection device according to the pose information and shooting field of view information of each collection device, draw the shooting area of each collection device on the electronic map of the target scene, and generate a deployment map of the target scene.
  • the method further includes: receiving a rotation instruction sent by a server device; acquiring position information of a shooting blind area in the target scene according to the rotation instruction; and rotating toward the shooting blind area according to the position information.
  • the method further includes: receiving a parameter adjustment instruction sent by a server device; and adjusting camera parameters according to the parameter adjustment instruction to expand the shooting area.
  • a map generating apparatus comprising:
  • an acquisition part configured to acquire the pose information and the shooting field of view information corresponding to the multiple acquisition devices in the target scene respectively;
  • a determining part configured to determine the shooting area of each of the collection devices according to the pose information and the shooting field of view information of each of the collection devices;
  • the generating part is configured to draw the shooting area of each of the collection devices on the electronic map of the target scene, and generate a deployment map of the target scene.
  • the apparatus further includes a first sending part configured to send an acquisition request in a broadcast manner, so that the plurality of acquisition devices return the pose information and the shooting field of view information based on the acquisition request; or, to send an acquisition request to the multiple acquisition devices, so that the multiple acquisition devices return the pose information and the shooting field of view information based on the acquisition request.
  • the pose information includes a geographic location and an orientation
  • the shooting field of view information includes a viewing angle
  • the determining part is configured to determine the shooting angle range of the acquisition device based on the orientation and the viewing angle of the acquisition device, and to determine the shooting area of each acquisition device according to the geographic location of the acquisition device and the shooting angle range.
  • the photographing field of view information further includes an optimal photographing distance
  • the determining part is configured to determine, based on the geographic location of the acquisition device, the shooting angle range, and the optimal shooting distance, a fan-shaped area formed by taking the geographic location of the acquisition device as a vertex, and to determine the fan-shaped area as the shooting area of the acquisition device.
  • the determining part is further configured to determine at least one of a shooting blind area and a non-optimal shooting area of the target scene according to the shooting area of each acquisition device, where the non-optimal shooting area is an area in the target scene that exceeds the optimal shooting distances of the multiple acquisition devices, and to prompt at least one of the shooting blind area and the non-optimal shooting area in the deployment map.
  • the apparatus further includes: a second sending part, configured to generate a rotation instruction based on the position information of the shooting blind area when it is determined that the shooting blind area exists, and to send the rotation instruction to at least one acquisition device among the plurality of acquisition devices, so that the at least one acquisition device rotates toward the shooting blind area.
  • the apparatus further includes: a third sending part, configured to generate a parameter adjustment instruction when it is determined that the non-optimal shooting area exists, and to send the parameter adjustment instruction to at least one acquisition device among the plurality of acquisition devices to expand the shooting area of the at least one acquisition device.
  • a map generating apparatus comprising:
  • the acquisition part is configured to acquire the current pose information and the shooting field of view information
  • the sending part is configured to send the pose information and the shooting field of view information to the server device, where the server device is configured to determine the shooting area of each acquisition device according to the pose information and shooting field of view information of each acquisition device, draw the shooting area of each acquisition device on the electronic map of the target scene, and generate a deployment map of the target scene.
  • the apparatus further includes: a rotating part, configured to receive a rotation instruction sent by the server device, obtain the position information of the shooting blind area in the target scene according to the rotation instruction, and rotate toward the shooting blind area according to the position information.
  • the apparatus further includes: an adjustment part, configured to receive a parameter adjustment instruction sent by a server device, and adjust camera parameters according to the parameter adjustment instruction to expand the shooting area.
  • an electronic device comprising: a processor; a memory for storing instructions executable by the processor; wherein the processor is configured to invoke the instructions stored in the memory to execute the above method.
  • a computer-readable storage medium having computer program instructions stored thereon, the computer program instructions implementing the above method when executed by a processor.
  • a computer program comprising computer-readable code, which, when executed in an electronic device by a processor in the electronic device, implements the above-mentioned map generation method.
  • a computer program product which, when run on a computer, causes the computer to execute the above-described map generation method.
  • the pose information and shooting field of view information corresponding to multiple collection devices in the target scene may be obtained, and then the shooting area of each collection device is determined according to the pose information and shooting field of view information of each collection device.
  • the shooting area of each acquisition device can be drawn on the electronic map of the target scene, and a deployment map of the target scene can be generated.
  • the information of multiple collection devices in the target scene can be integrated and effectively associated, and the deployment situation of the target scene can be provided to the user in real time and intuitively through the deployment map, saving the human resources of security personnel and providing effective support for the security protection of the target scene.
  • FIG. 1 shows a flowchart of a map generation method according to an embodiment of the present disclosure.
  • FIG. 2 shows a scene diagram of interaction between a server device and multiple collection devices according to an embodiment of the present disclosure.
  • FIG. 3 shows a flowchart of a map generation method according to an embodiment of the present disclosure.
  • FIG. 4 shows a flowchart of an example of a map generation method according to an embodiment of the present disclosure.
  • FIG. 5 shows a schematic diagram of an example of a deployment map according to an embodiment of the present disclosure.
  • FIG. 6 shows a block diagram of a map generating apparatus according to an embodiment of the present disclosure.
  • FIG. 7 shows a block diagram of a map generating apparatus according to an embodiment of the present disclosure.
  • FIG. 8 shows a block diagram of an electronic device according to an embodiment of the present disclosure.
  • FIG. 9 shows a block diagram of an electronic device according to an embodiment of the present disclosure.
  • The phrase "at least one of A and B" herein merely describes an association relationship between the associated objects: it can mean that A exists alone, that A and B both exist, or that B exists alone.
  • "at least one" herein refers to any one of a plurality or any combination of at least two of a plurality; for example, including at least one of A, B, and C may mean including any one or more elements selected from the set consisting of A, B, and C.
  • In edge node products, the concept of the Internet of Things is introduced, and a cloud device/central device is used to connect the edge nodes.
  • However, due to the lack of perception capabilities of the edge devices themselves, the various edge intelligent terminal devices in fact remain in a state of information islands, so that the information of the various edge devices cannot be effectively correlated for data mining and utilization.
  • The second aspect is that, in the design of current park/city street monitoring systems, the selection of camera points and the evaluation of the monitoring effect after installation depend heavily on the monitoring plan made during the construction phase and on the experience of the construction personnel. With an insufficient understanding of the park/monitoring area, the following problems are prone to occur: 1) the installation angle or installation point of a surveillance camera is missed, resulting in a monitoring dead angle; 2) the installation distance of a monitoring camera is unsuitable, so that the shooting target exceeds the optimal monitoring distance and the target image is unclear.
  • In addition, security personnel cannot dynamically, intuitively, and quickly understand the monitoring coverage of the current park from the monitoring system, and still need to rely on long-term experience to make judgments or to increase manual patrol coverage.
  • the embodiments of the present disclosure provide a map generation solution: by acquiring the pose information and shooting field of view information corresponding to multiple collection devices in the target scene, and then determining the shooting area of each collection device according to the pose information and shooting field of view information of each collection device, the shooting area of each collection device can be drawn on the electronic map of the target scene, and a deployment map of the target scene can be generated.
  • It can be used in security systems, multi-camera networking, edge nodes, and other scenarios. For example, in open scenes such as squares, parks, and classrooms, multiple cameras can be set up, and by obtaining the pose information and shooting field of view information of the multiple cameras, a deployment map drawing the shooting areas of each camera in the scene can be generated in real time.
  • the map generation method provided by the embodiments of the present disclosure may be executed by a terminal device, a server, or another type of electronic device, where the terminal device may be a user equipment (User Equipment, UE), a mobile device, a user terminal, a terminal, a cellular phone, a cordless phone, a Personal Digital Assistant (PDA), a handheld device, a computing device, an in-vehicle device, a wearable device, etc.
  • the map generation method may be implemented by a processor invoking computer-readable instructions stored in a memory.
  • the method may be performed by a server.
  • FIG. 1 shows a flowchart of a map generation method according to an embodiment of the present disclosure. As shown in FIG. 1 , the map generation method can be applied to a server device, including:
  • Step S11: acquiring the pose information and the shooting field of view information corresponding to the plurality of collecting devices in the target scene respectively.
  • multiple collection devices may be set in the target scene.
  • the collection device may be a device with an image collection function, for example, the collection device may be a terminal device, a server, etc. with a shooting function, and each collection device may shoot a target scene.
  • Multiple acquisition devices can communicate with each other to form a camera network, and information from different acquisition devices can be shared.
  • the server device can obtain the pose information and shooting field information corresponding to the multiple collection devices in the target scene respectively, that is, obtain the pose information and shooting field information of each collection device in the target scene.
  • the server device may obtain the pose information and shooting field information of each acquisition device from each acquisition device through the network.
  • the server device may pre-store the pose information of each collection device and at least part of the shooting field of view information. For example, the server device may pre-store the geographic location of each capture device, and then obtain from each capture device the remaining information in the pose information and shooting field of view information other than the geographic location.
  • the pose information may include geographic location and orientation, wherein the geographic location may indicate the location of the collection device in the electronic map of the target scene, and the geographic location may be latitude and longitude coordinates or location coordinates in the coordinate system of the electronic map.
  • the orientation can indicate the direction the collection device faces. Each collection device can rotate within a preset angle range, and with different orientations, the pictures captured by the collection device also differ. The orientation can be expressed as a geographic direction or as a direction in the coordinate system of the electronic map.
  • the shooting field of view information may indicate the shooting field of view of the collection device, and the shooting field of view information may include a field of view angle, which may be relative to the orientation of the collection device.
  • the field of view may indicate a deviation from the orientation. For example, if the azimuth of the acquisition device is 0 and the field of view is (-30°, 30°), it can indicate that the field of view of the acquisition device is within an angular range of ⁇ 30° from the azimuth of the acquisition device.
  • the server device may be a control device used to manage multiple collection devices, for example, a server, a control terminal, etc.
  • the server device may summarize the information of the multiple collection devices and issue control commands to the multiple collection devices.
  • the server device may be any one of the multiple collection devices, so that one of the multiple collection devices can aggregate the information of the multiple collection devices and control other collection devices. In this way, it can be applied to various application scenarios.
  • FIG. 2 shows a scene diagram of interaction between a server device and multiple collection devices according to an embodiment of the present disclosure.
  • the connection of the edge devices (collection device 1 to collection device 5) can be realized through the cloud device/central device (server device), which can aggregate the information of the multiple edge devices and issue control commands, such as rotation commands or parameter adjustment commands, to them, so as to reduce the influence of the edge devices' lack of their own perception ability and realize the effective association of information.
  • Step S12: determining the shooting area of each of the collection devices according to the pose information and the shooting field of view information of each of the collection devices.
  • the server device can determine, according to the pose information and the shooting field of view information of each capture device, the area corresponding to the field of view that each capture device can capture, and this area can be taken as the shooting area of the capture device.
  • the shooting area can be a general area.
  • the geographic location of the capture device can be used as the center point, and a plurality of straight lines can be set through the center point, evenly dividing the target scene into several areas. According to the orientation and viewing angle of the capture device, at least one of these areas that may fall within its shooting field of view is then roughly determined, and those areas can be used as the shooting area of the capture device.
  • the shooting area corresponding to the shooting field of view of each collecting device may also be accurately determined according to the pose information and shooting field information of each collecting device.
  • the server device can determine the shooting angle range of the acquisition device according to the azimuth and field of view of the acquisition device.
  • the field of view can be transformed using the azimuth of the acquisition device, for example by increasing or decreasing the azimuth of the acquisition device by the field-of-view offsets.
  • the shooting angle range of the acquisition device can thus be obtained. Assuming that the azimuth of the acquisition device is 90° (with true north as 0°) and the viewing angle is (-30°, 30°), the shooting angle range is (60°, 120°).
  • the shooting angle range may be the range of the shooting field of view of the acquisition device that corresponds to the geographic azimuth or the range that corresponds to the azimuth under the target scene coordinate system.
  • the shooting area of each collecting device can be determined according to the geographic location of the collecting device and the shooting angle range. For example, two rays can be drawn from the geographic location of the collecting device along the two boundary azimuths of the shooting angle range, and the area enclosed between the rays can be determined as the shooting area of the acquisition device. In this way, the shooting area corresponding to each collecting device can be determined more accurately.
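  • As an illustration of this calculation, the following minimal Python sketch (not from the patent; the angle conventions are assumptions) derives the shooting angle range from an azimuth and a relative field of view, reproducing the (60°, 120°) example above:

```python
# A minimal sketch of step S12, assuming azimuths are measured clockwise
# from true north in degrees and the field of view is given as offsets
# relative to the device orientation, e.g. (-30, 30).

def shooting_angle_range(azimuth_deg, fov_deg):
    """Return the (start, end) shooting angle range of one acquisition device.

    azimuth_deg: orientation of the device, 0 = true north.
    fov_deg:     (min_offset, max_offset) relative to the orientation.
    """
    start = (azimuth_deg + fov_deg[0]) % 360.0
    end = (azimuth_deg + fov_deg[1]) % 360.0
    return start, end

# Example from the text: azimuth 90 deg, field of view (-30, 30)
# -> shooting angle range (60, 120).
print(shooting_angle_range(90.0, (-30.0, 30.0)))  # (60.0, 120.0)
```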
  • Step S13: drawing the shooting area of each of the collection devices on the electronic map of the target scene, and generating a deployment map of the target scene.
  • the shooting area of each acquisition device can be drawn on the electronic map of the target scene.
  • the shooting area of each acquisition device can be drawn on the electronic map by using different colors or indicators to generate a deployment map of the target scene.
  • the shooting areas of different collection devices can be distinguished by color or indicator.
  • overlapping areas between the shooting areas of different collection devices can also be marked on the electronic map.
  • the generated deployment map can be displayed, so that the deployment map can display the deployment situation of the target scene in real time, and the user can intuitively and quickly understand the security status of the target scene, which provides a basis for the security protection of the target scene.
  • the generated deployment map can also be sent to a web page or client, so that the user can view the deployment map of the target scene by logging in to the corresponding web page or client, quickly understand the deployment status of the target scene, and reduce deployment dead spots and loopholes.
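  • As a concrete illustration of this drawing step (the patent names no drawing library; matplotlib and the sample device values here are assumptions), sector-shaped shooting areas could be rendered on a 2D map as follows:

```python
# A sketch of drawing a deployment map with matplotlib. Devices are given
# in a local map frame in meters; bearings are clockwise from north.
import matplotlib.pyplot as plt
from matplotlib.patches import Wedge

devices = [  # (name, x, y, bearing start/end in degrees, optimal distance m)
    ("cam1", 10, 20, 60, 120, 30),
    ("cam2", 80, 40, 180, 250, 25),
]

fig, ax = plt.subplots()
for name, x, y, b_start, b_end, r in devices:
    # Wedge angles run counterclockwise from the +x axis, so convert
    # compass bearings with theta = 90 - bearing.
    ax.add_patch(Wedge((x, y), r, 90 - b_end, 90 - b_start, alpha=0.4))
    ax.annotate(name, (x, y))
ax.set_xlim(0, 120)
ax.set_ylim(0, 80)
ax.set_aspect("equal")
plt.show()
```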
  • the server device can repeatedly acquire the pose information and shooting field of view information of the multiple acquisition devices in the target scene periodically or aperiodically, so that the deployment map can be updated in real time according to the continuously acquired pose information and shooting field of view information, and the security status of the target scene can be displayed in real time.
  • the electronic map of the target scene can be a map established in the world coordinate system
  • the coordinate points in the electronic map can be expressed as latitude and longitude coordinates
  • the geographic location of the acquisition device can also be latitude and longitude coordinates, so that the shooting area of the acquisition device can be directly drawn on the electronic map.
  • the electronic map may also be a map established in a relative coordinate system, and the coordinate points in the electronic map may be expressed as relative coordinates in the relative coordinate system.
  • in the case where the geographic location of the collection device is given in latitude and longitude coordinates, the geographic location of the acquisition device can be transformed from the latitude and longitude coordinates into the relative coordinate system of the electronic map, and the shooting area of the acquisition device can then be drawn in the electronic map.
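  • One possible way to perform this coordinate transformation (a sketch, assuming a scene small enough for a local equirectangular approximation; the origin coordinates are hypothetical):

```python
# Convert a device's latitude/longitude into the relative (east/north,
# meters) frame of an electronic map, anchored at the map origin.
import math

EARTH_RADIUS_M = 6_371_000.0

def latlon_to_map(lat_deg, lon_deg, origin_lat_deg, origin_lon_deg):
    lat0 = math.radians(origin_lat_deg)
    dx = math.radians(lon_deg - origin_lon_deg) * math.cos(lat0) * EARTH_RADIUS_M
    dy = math.radians(lat_deg - origin_lat_deg) * EARTH_RADIUS_M
    return dx, dy  # east, north in meters relative to the map origin

print(latlon_to_map(30.0010, 120.0010, 30.0, 120.0))  # ~(96.3, 111.2)
```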
  • the embodiments of the present disclosure can draw the shooting areas of the multiple collecting devices on the electronic map of the target scene through the pose information and shooting field information of the multiple collecting devices, obtain the deployment map of the target scene, and realize the security protection of the target scene.
  • the acquisition device can serve as an edge node in an edge device scenario, so that the information of each edge device can be effectively associated; compared with some solutions in the related art, in which the information of edge devices is difficult to utilize effectively due to the edge devices' lack of their own perception ability, this can enhance the effective use of information from edge devices.
  • the server device may send the acquisition request in a broadcast manner, so that the multiple acquisition devices return the pose information and the shooting field of view information based on the acquisition request.
  • the server device may send an acquisition request to multiple acquisition devices, so that the multiple acquisition devices return the pose information and the shooting field of view information based on the acquisition request.
  • in the broadcast manner, the acquisition devices can listen for messages from the server device, and upon receiving the acquisition request, each acquisition device returns its own pose information and shooting field of view information to the server device.
  • the server device may not need to store the device list of the acquisition device in advance.
  • the server device can obtain the device lists of multiple collection devices in advance, and then send an acquisition request according to the collection device indicated in the device list.
  • when a collection device receives the acquisition request, it can return its own pose information and shooting field of view information to the server device.
  • the acquisition device may not need to monitor the server device in real time.
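  • A minimal sketch of the broadcast variant of this exchange (the UDP transport, port number, and message format are all assumptions; the patent specifies no concrete protocol):

```python
# Broadcast an acquisition request and collect pose/field-of-view replies.
import json
import socket

PORT = 50000  # hypothetical port for the acquisition protocol

def broadcast_acquisition_request(timeout_s=2.0):
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.setsockopt(socket.SOL_SOCKET, socket.SO_BROADCAST, 1)
    sock.settimeout(timeout_s)
    sock.sendto(b'{"type": "acquire_pose_and_fov"}', ("255.255.255.255", PORT))

    replies = []
    try:
        while True:  # collect replies until the timeout expires
            data, addr = sock.recvfrom(4096)
            replies.append((addr, json.loads(data)))  # pose + FOV info
    except socket.timeout:
        pass
    return replies
```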
  • a Global Navigation Satellite System (GNSS) module and an electronic compass sensor may be configured in the acquisition device, so that the acquisition device has the ability to perceive its own geographic location and orientation.
  • the acquisition device can obtain high-precision latitude and longitude coordinates through GPS differential or static positioning algorithms.
  • after the acquisition device is installed, its location generally no longer changes, so the geographic location of the acquisition device can be saved in the acquisition device or in the server device.
  • the coordinates of the electronic map can also be longitude and latitude coordinates, so that the acquired location information and the location information of the electronic map can be unified, which is convenient for other devices to use.
  • the acquisition device determines the field of view and the optimal shooting distance by using camera parameters such as the size, resolution, and focal length of the photosensitive device.
  • the imaging quality of the acquisition camera is relatively high when the photographed object is within the angle of view of the acquisition camera and within the optimal shooting distance.
  • the acquisition device can record its own viewing angle and optimal shooting distance.
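  • As a worked illustration under a pinhole camera model (the 120-pixel recognizability threshold and the sample sensor values are assumptions, not from the patent), the field of view and an optimal shooting distance could be derived from camera parameters as follows:

```python
# Derive horizontal field of view and an "optimal shooting distance"
# from camera parameters under a pinhole model.
import math

def horizontal_fov_deg(sensor_width_mm, focal_length_mm):
    return math.degrees(2 * math.atan(sensor_width_mm / (2 * focal_length_mm)))

def optimal_distance_m(focal_length_mm, sensor_width_mm, image_width_px,
                       target_size_m=1.7, min_target_px=120):
    # A target of size target_size_m projects to f*H/d on the sensor;
    # require it to span at least min_target_px pixels in the image.
    pixel_size_mm = sensor_width_mm / image_width_px
    return focal_length_mm * target_size_m / (min_target_px * pixel_size_mm)

# e.g. a ~5.6 mm wide sensor, 4 mm lens, 1920-pixel-wide image:
print(horizontal_fov_deg(5.6, 4.0))        # ~70 degrees
print(optimal_distance_m(4.0, 5.6, 1920))  # ~19 m
```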
  • the server device may determine the shooting area of each collecting device according to the pose information and the shooting field of view information of each collecting device.
  • the shooting field of view information may also include the optimal shooting distance.
  • a fan-shaped area formed with the geographic location of the acquisition device as the vertex can be determined according to the geographic location of the acquisition device, the shooting angle range, and the optimal shooting distance, and the fan-shaped area can be determined as the shooting area of the acquisition device.
  • the optimal shooting distance may represent the maximum distance between the shooting object in the target scene and the acquisition device under the condition that the imaging of the acquisition device is clear.
  • the acquisition device can clearly capture the subject within the optimal shooting distance, and the subject can be a person or object in the target scene. If the optimal shooting distance is exceeded, the shooting picture of the capture device may be blurred. Therefore, in order to improve the clarity of the image captured by the capture device, the optimal capture distance of the capture device may also be considered when the capture area of each capture device is determined. For example, a sector may be formed with the geographic location of the collection device as the center, the optimal shooting distance as the radius, and the shooting angle range as the vertex angle, and the fan-shaped area may be the shooting area of the collection device. The shooting area determined in this way takes into account the optimal shooting distance of each collection device, and the people or objects captured in the shooting area can be clearly imaged, thereby improving the clarity of the image captured by the collection device.
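  • A minimal sketch of constructing such a fan-shaped shooting area as a polygon (map-frame coordinates in meters and compass bearings clockwise from north are assumptions):

```python
# Build the sector shooting area: the device location is the vertex, the
# shooting angle range is the apex angle, the optimal distance the radius.
import math

def sector_polygon(cx, cy, bearing_start, bearing_end, radius, steps=16):
    """Vertices of a sector in map coordinates; bearings clockwise from north."""
    pts = [(cx, cy)]  # the device location is the vertex of the sector
    for i in range(steps + 1):
        b = math.radians(bearing_start + (bearing_end - bearing_start) * i / steps)
        # bearing -> map direction: east = sin(b), north = cos(b)
        pts.append((cx + radius * math.sin(b), cy + radius * math.cos(b)))
    return pts

# Device at (0, 0), shooting angle range (60, 120) deg, optimal distance 30 m.
area = sector_polygon(0.0, 0.0, 60.0, 120.0, 30.0)
```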
  • at least one of the shooting blind area and the non-optimal shooting area of the target scene may also be determined according to the shooting areas of the multiple acquisition devices, and the determined at least one of the shooting blind area and the non-optimal shooting area may further be prompted in the generated deployment map.
  • the shooting blind area may be an area in the target scene that cannot be photographed by multiple acquisition devices.
  • the server device can determine the shooting blind areas in the target scene that cannot be captured by the multiple acquisition devices according to the shooting angle range corresponding to the shooting area of each acquisition device and the occlusion of the acquisition devices' shooting fields of view by buildings and infrastructure in the target scene.
  • the non-optimal shooting area may be an area in the target scene that exceeds the optimal shooting distances of the multiple capture devices.
  • the server device can determine, according to the optimal shooting distance corresponding to the shooting area of each collection device and the occlusion of the collection devices' shooting fields of view by buildings and infrastructure in the target scene, the areas that are within the shooting fields of view of the multiple collection devices but exceed their optimal shooting distances.
  • at least one of the shooting blind area and the non-optimal shooting area can be prompted in the deployment map. For example, graphics such as arrows and circles can indicate at least one of the shooting blind area and the non-optimal shooting area, or at least one of them can be drawn directly in the deployment map, so that the deployment map can better present the deployment situation of the target scene to the user.
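  • One way to compute the shooting blind area (a sketch using the shapely library, which the patent does not mention; subtracting building footprints from each sector only roughly approximates occlusion, since true occlusion would also shadow the region behind each building):

```python
# Blind area = target scene minus the union of all sector shooting areas
# (and minus building footprints, which cannot be observed anyway).
from shapely.geometry import Polygon
from shapely.ops import unary_union

def blind_area(scene_pts, sector_pts_list, building_pts_list):
    scene = Polygon(scene_pts)
    buildings = unary_union([Polygon(p) for p in building_pts_list])
    # Rough occlusion handling: clip each sector by building footprints.
    covered = unary_union(
        [Polygon(p).difference(buildings) for p in sector_pts_list]
    )
    # Whatever no device covers (and is not a building) is a blind area.
    return scene.difference(covered).difference(buildings)
```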
  • the server device may generate a rotation instruction based on the position information of the shooting blind area, and send the rotation instruction to at least one of the multiple collection devices, so that the at least one acquisition device rotates toward the shooting blind area, thereby changing its orientation.
  • the server device can pre-store the maximum shooting field of view of each acquisition device, which is the largest area that the shooting area of the acquisition device can cover when the acquisition device is rotatable, that is, the maximum area the shooting area can reach as the acquisition device changes its orientation.
  • the maximum shooting field of view of each collection device may also be stored in the respective collection device, and the server device may obtain the maximum shooting field of view of each collection device from each collection device.
  • the server device can determine, according to the maximum shooting field of view of each acquisition device and the location of the shooting blind area, one or more acquisition devices whose maximum shooting field of view includes the shooting blind area, and can then send a rotation instruction to the determined one or more acquisition devices so that they rotate toward the shooting blind area. In this way, when there is a shooting blind area in the target scene, the orientation of the acquisition devices can be adjusted, reducing the existence of the shooting blind area.
  • the acquisition device can receive the rotation instruction sent by the server device, obtain the position information of the shooting blind area in the target scene according to the rotation instruction, determine the rotation direction and rotation angle according to the position information of the shooting blind area, and then rotate toward the shooting blind area according to the determined rotation direction and rotation angle, so that the shooting blind area enters the shooting field of view, thereby reducing the existence of the shooting blind area.
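  • A sketch of how the rotation direction and rotation angle could be derived from the blind area's position (map-frame coordinates and bearings clockwise from north are assumptions):

```python
# Derive the rotation needed to face a blind spot from the device pose.
import math

def rotation_toward(device_xy, blind_spot_xy, current_azimuth_deg):
    dx = blind_spot_xy[0] - device_xy[0]
    dy = blind_spot_xy[1] - device_xy[1]
    # Bearing of the blind spot as seen from the device (0 = north).
    target_bearing = math.degrees(math.atan2(dx, dy)) % 360.0
    # Signed smallest rotation from the current azimuth, in (-180, 180].
    delta = (target_bearing - current_azimuth_deg + 180.0) % 360.0 - 180.0
    direction = "clockwise" if delta >= 0 else "counterclockwise"
    return direction, abs(delta)

# Device at (0, 0) facing 90 deg (east); blind spot at (0, 50) (due north).
print(rotation_toward((0, 0), (0, 50), 90.0))  # ('counterclockwise', 90.0)
```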
  • the optimal shooting distance of the acquisition device can change with the adjustment of the camera parameters, and correspondingly, the maximum shooting field of view of the acquisition device can change with the optimal shooting distance.
  • the maximum shooting field of view of the acquisition device may also be marked, for example, a dotted line may be used to mark the maximum shooting field of view of the acquisition device.
  • the server device may generate a parameter adjustment instruction and send it to at least one acquisition device among the plurality of acquisition devices, so as to expand the shooting area of the at least one acquisition device, which can reduce the existence of non-optimal shooting areas in the target scene and improve the deployment effect.
  • the parameter adjustment instruction may instruct the acquisition device to adjust the camera parameters
  • the camera parameters may include parameters such as focal length, aperture, and exposure value.
  • after the acquisition device receives the parameter adjustment instruction sent by the server device, it can adjust the camera parameters according to the parameter adjustment instruction to expand the current shooting area, so that the image of the target scene captured by the acquisition device is as clear as possible, reducing the existence of non-optimal shooting areas.
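  • To illustrate the trade-off such a parameter adjustment involves, reusing the hypothetical pinhole helpers from the earlier sketch: increasing the focal length narrows the field of view (the sector's apex angle) but extends the optimal shooting distance (its radius):

```python
# Zooming from 4 mm to 8 mm roughly halves the apex angle of the sector
# while roughly doubling its radius, so a parameter adjustment can trade
# angular coverage for clear-imaging range.
for f_mm in (4.0, 8.0):
    print(f_mm, horizontal_fov_deg(5.6, f_mm), optimal_distance_m(f_mm, 5.6, 1920))
# 4.0 -> ~70 deg, ~19 m;  8.0 -> ~39 deg, ~39 m
```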
  • the server device may further determine the position of the shooting object in the captured image in the target scene according to the images captured by multiple collection devices.
  • a tracking instruction can also be sent to one or more acquisition devices according to the position of the shooting object in the target scene, so that one or more acquisition devices in the target scene can track the shooting object and estimate its motion.
  • the running track of the shooting object can be marked in the deployment map, so that more information can be provided through the deployment map, which is convenient for users to view or evaluate and analyze the current security solution.
  • the embodiments of the present disclosure can draw the shooting areas of the multiple collecting devices on the electronic map of the target scene by using the pose information and the shooting field information of the multiple collecting devices to generate a real-time visualized deployment map of the target scene.
  • the deployment map allows users to intuitively and quickly understand the deployment of the current target scene, which can help users quickly identify shooting loopholes in the target scene, such as at least one of shooting blind areas and non-optimal shooting areas, and use the deployment map to conduct more targeted inspections to improve the security protection of the target scene.
  • FIG. 3 shows a flowchart of a map generation method according to an embodiment of the present disclosure, which is applied to a collection device.
  • the map generation method includes:
  • Step S21: acquiring current pose information and shooting field of view information.
  • Step S22: sending the pose information and the shooting field of view information to the server device.
  • the acquisition device may acquire an acquisition request sent by the server device in a broadcast manner, or the acquisition device may receive an acquisition request sent by the server device, where the acquisition request is used to request acquisition of current pose information and shooting field of view information.
  • the acquisition device acquires the current pose information and the shooting field of view information, and then sends the current pose information and the shooting field of view information to the server device.
  • the server device can integrate the pose information and shooting field of view information of multiple acquisition devices to determine the shooting area of each acquisition device, and draw the shooting area of each acquisition device on the electronic map of the target scene to generate a deployment map of the target scene.
  • the collection device may also receive a rotation instruction sent by the server device, obtain location information of the shooting blind spot in the target scene according to the received rotation instruction, and further rotate toward the shooting blind area according to the location information of the shooting blind area.
  • the collection device may also receive a parameter adjustment instruction sent by the server device, and further adjust the camera parameters according to the parameter adjustment instruction to expand the shooting area.
  • FIG. 4 shows a flowchart of an example of a map generation method according to an embodiment of the present disclosure, including the following steps:
  • S31: The server device sends an acquisition request.
  • Sending the acquisition request can be implemented in the following two ways: 1. sending the acquisition request by broadcasting; 2. using a pre-acquired device list, sending the acquisition request to each acquisition device in the device list.
  • S32: The acquisition device receives the acquisition request and returns the pose information and the shooting field of view information to the server device.
  • S33: The server device receives the pose information and the shooting field of view information sent by each collection device.
  • S34: The server device determines the shooting area of each collecting device according to the pose information and shooting field of view information of each collecting device.
  • S35: The server device draws the shooting area of each collection device in the electronic map and generates a deployment map of the target scene.
  • S36: The server device displays the generated deployment map in the interface.
  • the server device can repeatedly send the acquisition request periodically or aperiodically, so that the pose information and shooting field of view information sent by each acquisition device can be acquired in real time, and the above S31 to S36 can be repeatedly executed to update the deployment map in real time.
  • the server device can send an acquisition request carrying an instruction for acquiring the pose information and shooting field of view information in real time; each acquisition device then returns its pose information and shooting field of view information to the server device periodically or aperiodically, and the server device can update the deployment map in real time according to the pose information and shooting field of view information of each collection device.
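  • Putting the pieces together, a hypothetical refresh loop on the server device might look as follows (compute_sector and redraw_deployment_map are placeholders for steps S34-S36, and the interval is an assumption):

```python
# Periodic refresh of the deployment map, reusing the hypothetical
# broadcast_acquisition_request helper sketched earlier.
import time

REFRESH_INTERVAL_S = 5.0  # assumed refresh period

def run_deployment_map_loop():
    while True:
        replies = broadcast_acquisition_request()                 # S31-S33
        sectors = [compute_sector(info) for _, info in replies]   # S34
        redraw_deployment_map(sectors)                            # S35-S36
        time.sleep(REFRESH_INTERVAL_S)
```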
  • FIG. 5 shows a schematic diagram of an example of a deployment map according to an embodiment of the present disclosure.
  • in this example, the shooting area takes the optimal shooting distance into account, and the target scene is a park.
  • the shooting area of each acquisition device is marked in the deployment map. Different acquisition devices can be distinguished by device numbers: there are 6 acquisition devices in the figure, marked by numbers 1-6, and 2 buildings are marked (Building 1 and Building 2). Each acquisition device has a corresponding shooting area (the shaded sector area corresponding to each acquisition device in the figure).
  • a shooting blind area is also marked in the deployment map, indicating where a shooting blind spot exists, for example near the entrance of the park.
  • the deployment map can also indicate the non-optimal shooting area beyond the optimal shooting distance, for example, indicating that the image is not clear beyond the optimal shooting distance.
  • for acquisition devices that support zoom and rotation, the maximum shooting field of view of the acquisition device can also be marked in the deployment map by a dotted line.
  • the dotted line area corresponding to acquisition device 4 is the maximum shooting field of view of the acquisition device.
  • the present disclosure also provides a map generation apparatus, an electronic device, a computer-readable storage medium, and a program, all of which can be used to implement any map generation method provided by the present disclosure.
  • FIG. 6 shows a block diagram of a map generating apparatus according to an embodiment of the present disclosure.
  • the map generating apparatus may be configured as a server device. As shown in FIG. 6, the apparatus includes:
  • the acquisition part 41 is configured to acquire the pose information and the shooting field of view information corresponding to the plurality of acquisition devices in the target scene respectively;
  • the determination part 42 is configured to determine the shooting area of each of the collection devices according to the pose information and the shooting field of view information of each of the collection devices;
  • the generating part 43 is configured to draw the shooting area of each of the collection devices on the electronic map of the target scene, and generate a deployment map of the target scene.
  • the apparatus further includes: a first sending part 44, configured to send an acquisition request in a broadcast manner, so that the multiple acquisition devices return the pose information and the shooting field of view information based on the acquisition request; or, to send an acquisition request to the multiple collection devices, so that the multiple collection devices return the pose information and the shooting field of view information based on the acquisition request.
  • the pose information includes geographic location and orientation
  • the photographing field of view information includes an angle of view
  • the determining part 42 is configured to determine the shooting angle range of the acquisition device based on the orientation and the field of view of the acquisition device, and to determine the shooting area of each collecting device according to the geographic location of the collecting device and the shooting angle range.
  • the photographing field of view information further includes an optimal photographing distance
  • the determining part 42 is configured to determine, based on the geographic location of the collecting device, the shooting angle range, and the optimal shooting distance, a fan-shaped area formed by taking the geographic location of the collecting device as a vertex, and to determine the fan-shaped area as the shooting area of the collecting device.
  • the determining part 42 is further configured to determine at least one of the shooting blind area and the non-optimal shooting area of the target scene according to the shooting area of each of the collecting devices, where the non-optimal shooting area is an area in the target scene that exceeds the optimal shooting distances of the multiple collection devices, and to prompt at least one of the shooting blind area and the non-optimal shooting area in the deployment map.
  • the apparatus further includes: a second sending part 45, configured to generate a rotation instruction based on the position information of the shooting blind area when it is determined that the shooting blind area exists, and to send the rotation instruction to at least one acquisition device among the plurality of acquisition devices, so that the at least one acquisition device rotates toward the shooting blind area.
  • the apparatus further includes: a third sending part 46, configured to generate a parameter adjustment instruction when it is determined that the non-optimal shooting area exists, and to send the parameter adjustment instruction to at least one acquisition device among the plurality of acquisition devices to expand the shooting area of the at least one acquisition device.
  • FIG. 7 shows a block diagram of an apparatus for generating a map according to an embodiment of the present disclosure.
  • the apparatus for generating a map can be applied to a collection device. As shown in FIG. 7 , the apparatus includes:
  • an acquisition part 51 configured to acquire current pose information and shooting field of view information
  • the sending part 52 is configured to send the pose information and the shooting field of view information to the server device, where the server device is configured to determine the shooting area of each collection device according to the pose information and shooting field of view information of each collection device, draw the shooting area of each collection device on the electronic map of the target scene, and generate a deployment map of the target scene.
  • the apparatus further includes: a rotating part 53, configured to receive a rotation instruction sent by the server device, obtain the position information of the shooting blind area in the target scene according to the rotation instruction, and rotate toward the shooting blind area according to the position information.
  • the apparatus further includes: an adjustment part 54 configured to receive a parameter adjustment instruction sent by the server device; adjust camera parameters according to the parameter adjustment instruction to expand the shooting area.
  • the functions of, or parts included in, the apparatus provided in the embodiments of the present disclosure may be configured to execute the methods described in the above method embodiments; for specific implementations, reference may be made to the descriptions of the above method embodiments, which are not repeated here.
  • Embodiments of the present disclosure further provide a computer-readable storage medium, on which computer program instructions are stored, and when the computer program instructions are executed by a processor, the foregoing method is implemented.
  • the computer-readable storage medium may be a non-volatile computer-readable storage medium.
  • An embodiment of the present disclosure further provides an electronic device, including: a processor; a memory for storing instructions executable by the processor; wherein the processor is configured to invoke the instructions stored in the memory to execute the above method.
  • Embodiments of the present disclosure also provide a computer program product, including computer-readable codes; when the computer-readable codes run on a device, a processor in the device executes instructions for implementing the map generation method provided by any of the above embodiments.
  • Embodiments of the present disclosure further provide another computer program product for storing computer-readable instructions, which, when executed, cause the computer to perform the operations of the map generation method provided by any of the foregoing embodiments.
  • the electronic device may be provided as a terminal, server or other form of device.
  • FIG. 8 shows a block diagram of an electronic device according to an embodiment of the present disclosure.
  • electronic device 800 may be a mobile phone, computer, digital broadcast terminal, messaging device, game console, tablet device, medical device, fitness device, personal digital assistant, or another terminal.
  • an electronic device 800 may include one or more of the following components: a processing component 802, a memory 804, a power supply component 806, a multimedia component 808, an audio component 810, an input/output (I/O) interface 812, a sensor component 814, and a communication component 816.
  • the processing component 802 generally controls the overall operation of the electronic device 800, such as operations associated with display, phone calls, data communications, camera operations, and recording operations.
  • the processing component 802 can include one or more processors 820 to execute instructions to perform all or some of the steps of the methods described above.
  • processing component 802 may include one or more modules that facilitate interaction between processing component 802 and other components.
  • processing component 802 may include a multimedia module to facilitate interaction between multimedia component 808 and processing component 802.
  • Memory 804 is configured to store various types of data to support operation at electronic device 800. Examples of such data include instructions for any application or method operating on electronic device 800, contact data, phonebook data, messages, pictures, videos, and the like. Memory 804 may be implemented by any type of volatile or non-volatile storage device or a combination thereof, such as Static Random Access Memory (SRAM), Electrically Erasable Programmable Read-Only Memory (EEPROM), Erasable Programmable Read-Only Memory (EPROM), Programmable Read-Only Memory (PROM), Read-Only Memory (ROM), magnetic memory, flash memory, magnetic disk, or optical disk.
  • Power supply assembly 806 provides power to various components of electronic device 800 .
  • Power supply components 806 may include a power management system, one or more power supplies, and other components associated with generating, managing, and distributing power to electronic device 800 .
  • Multimedia component 808 includes a screen that provides an output interface between the electronic device 800 and the user.
  • the screen may include a liquid crystal display (LCD) and a touch panel (TP). If the screen includes a touch panel, the screen may be implemented as a touch screen to receive input signals from a user.
  • the touch panel includes one or more touch sensors to sense touch, swipe, and gestures on the touch panel. The touch sensor may not only sense the boundaries of a touch or swipe action, but also detect the duration and pressure associated with the touch or swipe action.
  • the multimedia component 808 includes at least one of a front-facing camera and a rear-facing camera.
  • At least one of the front camera and the rear camera may receive external multimedia data.
  • Each of the front and rear cameras can be a fixed optical lens system or have focal length and optical zoom capability.
  • Audio component 810 is configured to perform at least one of outputting and inputting audio signals.
  • audio component 810 includes a microphone (MIC) that is configured to receive external audio signals when electronic device 800 is in an operating mode, such as a calling mode, a recording mode, or a voice recognition mode.
  • the received audio signal may be further stored in memory 804 or transmitted via communication component 816.
  • audio component 810 also includes a speaker for outputting audio signals.
  • the I/O interface 812 provides an interface between the processing component 802 and a peripheral interface module, which may be a keyboard, a click wheel, a button, or the like. These buttons may include, but are not limited to: home button, volume buttons, start button, and lock button.
  • Sensor assembly 814 includes one or more sensors for providing status assessments of various aspects of electronic device 800.
  • the sensor assembly 814 can detect the on/off state of electronic device 800 and the relative positioning of components, such as the display and keypad of electronic device 800; the sensor assembly 814 can also detect a change in position of electronic device 800 or one of its components, the presence or absence of user contact with electronic device 800, the orientation or acceleration/deceleration of electronic device 800, and a change in the temperature of electronic device 800.
  • Sensor assembly 814 may include a proximity sensor configured to detect the presence of nearby objects in the absence of any physical contact.
  • Sensor assembly 814 may also include a light sensor, such as a complementary metal oxide semiconductor (CMOS) or charge coupled device (CCD) image sensor, for use in imaging applications.
  • the sensor assembly 814 may also include an acceleration sensor, a gyroscope sensor, a magnetic sensor, a pressure sensor, or a temperature sensor.
  • Communication component 816 is configured to facilitate wired or wireless communication between electronic device 800 and other devices.
  • the electronic device 800 may access a wireless network based on a communication standard, such as WiFi, second-generation mobile communication technology (2G), or third-generation mobile communication technology (3G), or a combination thereof.
  • the communication component 816 receives broadcast signals or broadcast related information from an external broadcast management system via a broadcast channel.
  • the communication component 816 also includes a near field communication (NFC) module to facilitate short-range communication.
  • the NFC module may be implemented based on radio frequency identification (RFID) technology, infrared data association (IrDA) technology, ultra-wideband (UWB) technology, Bluetooth (BT) technology and other technologies.
  • electronic device 800 may be implemented by one or more application specific integrated circuits (ASICs), digital signal processors (DSPs), digital signal processing devices (DSPDs), programmable logic devices (PLDs), field programmable gate arrays (FPGAs), controllers, microcontrollers, microprocessors, or other electronic components, for performing the above method.
  • a non-volatile computer-readable storage medium such as a memory 804 comprising computer program instructions executable by the processor 820 of the electronic device 800 to perform the above method is also provided.
  • FIG. 9 shows a block diagram of an electronic device according to an embodiment of the present disclosure.
  • the electronic device 1900 may be provided as a server.
  • electronic device 1900 includes processing component 1922, which further includes one or more processors, and a memory resource represented by memory 1932 for storing instructions executable by processing component 1922, such as applications.
  • An application program stored in memory 1932 may include one or more modules, each corresponding to a set of instructions.
  • the processing component 1922 is configured to execute instructions to perform the above-described methods.
  • the electronic device 1900 may also include a power supply component 1926 configured to perform power management of the electronic device 1900, a wired or wireless network interface 1950 configured to connect the electronic device 1900 to a network, and an input/output (I/O) interface 1958.
  • the electronic device 1900 can operate based on an operating system stored in the memory 1932, such as the Microsoft server operating system (Windows Server™), the graphical-user-interface-based operating system introduced by Apple (Mac OS X™), the multi-user multi-process computer operating system (Unix™), the free and open-source Unix-like operating system (Linux™), the open-source Unix-like operating system (FreeBSD™), or the like.
  • a non-volatile computer-readable storage medium such as memory 1932 comprising computer program instructions executable by processing component 1922 of electronic device 1900 to perform the above-described method.
  • the present disclosure may be at least one of a system, a method, and a computer program product.
  • the computer program product may include a computer-readable storage medium having computer-readable program instructions loaded thereon for causing a processor to implement various aspects of the present disclosure.
  • a computer-readable storage medium may be a tangible device that can hold and store instructions for use by the instruction execution device.
  • the computer-readable storage medium may be, for example, but not limited to, an electrical storage device, a magnetic storage device, an optical storage device, an electromagnetic storage device, a semiconductor storage device, or any suitable combination of the foregoing.
  • A non-exhaustive list of computer-readable storage media includes: a portable computer disk, a hard disk, random access memory (RAM), read-only memory (ROM), erasable programmable read-only memory (EPROM or flash memory), static random access memory (SRAM), portable compact disk read-only memory (CD-ROM), digital versatile disk (DVD), a memory stick, a floppy disk, a mechanically encoded device such as a punch card or a raised structure in a groove having instructions stored thereon, and any suitable combination of the above.
  • Computer-readable storage media are not to be construed as transient signals per se, such as radio waves or other freely propagating electromagnetic waves, electromagnetic waves propagating through waveguides or other transmission media (e.g., light pulses through fiber optic cables), or electrical signals transmitted through electrical wires.
  • the computer-readable program instructions described herein may be downloaded from a computer-readable storage medium to various computing/processing devices, or to an external computer or external storage device over a network, such as at least one of the Internet, a local area network, a wide area network, and a wireless network.
  • the network may include at least one of copper transmission cables, fiber optic transmission, wireless transmission, routers, firewalls, switches, gateway computers, and edge servers.
  • a network adapter card or network interface in each computing/processing device receives computer-readable program instructions from the network and forwards the computer-readable program instructions for storage in a computer-readable storage medium in each computing/processing device.
  • Computer program instructions for carrying out operations of the present disclosure may be assembly instructions, instruction set architecture (ISA) instructions, machine instructions, machine-dependent instructions, microcode, firmware instructions, state-setting data, or source or object code written in any combination of one or more programming languages, including object-oriented programming languages, such as Smalltalk and C++, and conventional procedural programming languages, such as the "C" language or similar programming languages.
  • the computer-readable program instructions may execute entirely on the user's computer, partly on the user's computer, as a stand-alone software package, partly on the user's computer and partly on a remote computer, or entirely on the remote computer or server.
  • the remote computer may be connected to the user's computer through any kind of network, including a local area network (LAN) or a wide area network (WAN), or may be connected to an external computer (for example, through the Internet using an Internet service provider).
  • custom electronic circuits, such as programmable logic circuits, field programmable gate arrays (FPGAs), or programmable logic arrays (PLAs), can be personalized by utilizing the state information of the computer-readable program instructions, and these electronic circuits can execute the computer-readable program instructions to implement various aspects of the present disclosure.
  • These computer-readable program instructions may be provided to a processor of a general-purpose computer, a special-purpose computer, or other programmable data processing apparatus to produce a machine, such that the instructions, when executed by the processor of the computer or other programmable data processing apparatus, produce means for implementing the functions/acts specified in one or more blocks of at least one of the flowcharts and block diagrams.
  • These computer-readable program instructions may also be stored in a computer-readable storage medium, the instructions causing at least one of a computer, a programmable data processing apparatus, and other devices to operate in a particular manner, so that the computer-readable storage medium having the instructions stored thereon includes an article of manufacture comprising instructions for implementing various aspects of the functions/acts specified in one or more blocks of at least one of the flowcharts and block diagrams.
  • Computer-readable program instructions can also be loaded onto a computer, other programmable data processing apparatus, or other equipment, causing a series of operational steps to be performed on the computer, other programmable data processing apparatus, or other equipment to produce a computer-implemented process, so that the instructions executing on the computer, other programmable data processing apparatus, or other equipment implement the functions/acts specified in one or more blocks of at least one of the flowcharts and block diagrams.
  • each block in the flowcharts or block diagrams may represent a module, segment, or portion of instructions, which comprises one or more executable instructions for implementing the specified logical function(s).
  • the functions noted in the blocks may occur out of the order noted in the figures. For example, two blocks in succession may, in fact, be executed substantially concurrently, or the blocks may sometimes be executed in the reverse order, depending upon the functionality involved.
  • each block of at least one of the block diagrams and flowchart illustrations, and combinations of blocks in at least one of the block diagrams and flowchart illustrations, can be implemented by a special-purpose hardware-based system that performs the specified functions or acts, or by a combination of special-purpose hardware and computer instructions.
  • the computer program product can be specifically implemented by hardware, software or a combination thereof.
  • in an optional embodiment, the computer program product is embodied as a computer storage medium; in another optional embodiment, the computer program product is embodied as a software product, such as a software development kit (Software Development Kit, SDK) or the like.
  • the pose information and the shooting field-of-view information corresponding to the plurality of collection devices in the target scene are obtained respectively; the shooting area of each collection device is determined according to the pose information and the shooting field-of-view information of that collection device; and the shooting area of each collection device is drawn on the electronic map of the target scene to generate a deployment map of the target scene.
  • the above solution can provide the user with the situation of the target scene through the deployment map, as illustrated by the sketch below.
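  • To make the above solution concrete, the following is a minimal illustrative sketch in Python of how a shooting area might be derived from a device's pose (position and yaw) and shooting field-of-view information and drawn onto an electronic map to form a deployment map. It is not the implementation claimed in this application: the class and function names, the sector-shaped approximation of the shooting area, the parameter values, and the matplotlib-based drawing are all assumptions introduced for illustration.

        # Illustrative sketch only: all names, the circular-sector
        # approximation, and the parameter values are assumptions,
        # not taken from this application.
        import math
        from dataclasses import dataclass

        import matplotlib.pyplot as plt
        from matplotlib.patches import Polygon

        @dataclass
        class CollectionDevice:
            name: str
            x: float        # position on the electronic map (map units)
            y: float
            yaw_deg: float  # pose: horizontal orientation of the optical axis
            fov_deg: float  # shooting field-of-view angle
            range_m: float  # assumed effective shooting distance

        def shooting_area(dev, n_arc=16):
            # Approximate the shooting area as a circular sector (fan)
            # opening from the device position along its yaw direction.
            half = math.radians(dev.fov_deg) / 2.0
            yaw = math.radians(dev.yaw_deg)
            pts = [(dev.x, dev.y)]
            for i in range(n_arc + 1):
                a = yaw - half + 2.0 * half * i / n_arc
                pts.append((dev.x + dev.range_m * math.cos(a),
                            dev.y + dev.range_m * math.sin(a)))
            return pts

        def draw_deployment_map(devices, ax):
            # Draw each device's shooting area on the electronic map.
            for dev in devices:
                ax.add_patch(Polygon(shooting_area(dev), closed=True,
                                     alpha=0.3, label=dev.name))
                ax.plot(dev.x, dev.y, "k^")  # device position marker
            ax.set_aspect("equal")
            ax.legend()

        devices = [
            CollectionDevice("cam-1", 10, 10, yaw_deg=45, fov_deg=60, range_m=25),
            CollectionDevice("cam-2", 60, 20, yaw_deg=160, fov_deg=90, range_m=20),
        ]
        fig, ax = plt.subplots()
        ax.set_xlim(0, 100)
        ax.set_ylim(0, 60)
        draw_deployment_map(devices, ax)
        fig.savefig("deployment_map.png")

  • In such a sketch, overlaying the sectors on a floor-plan image of the target scene (for example, via ax.imshow before drawing the patches) would yield a deployment map showing which parts of the scene each collection device covers.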

Landscapes

  • Engineering & Computer Science (AREA)
  • Signal Processing (AREA)
  • Multimedia (AREA)
  • Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Computing Systems (AREA)
  • Computer Networks & Wireless Communication (AREA)
  • Medical Informatics (AREA)
  • Mathematical Physics (AREA)
  • General Health & Medical Sciences (AREA)
  • Business, Economics & Management (AREA)
  • Educational Administration (AREA)
  • Educational Technology (AREA)
  • Health & Medical Sciences (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Studio Devices (AREA)

Abstract

The present disclosure relates to a map generation method and apparatus, an electronic device, and a storage medium. The method comprises: acquiring pose information and shooting field-of-view information, corresponding respectively to a plurality of collection devices, in a target scene; determining a shooting area of each collection device according to the pose information and the shooting field-of-view information of each collection device; and drawing the shooting area of each collection device on an electronic map of the target scene, so as to generate a deployment map of the target scene. By means of the embodiments of the present disclosure, the situation of a target scene can be provided to a user by means of a deployment map.
PCT/CN2021/125027 2021-04-12 2021-10-20 Procédé et appareil de génération de carte, et dispositif électronique et support de stockage WO2022217877A1 (fr)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN202110390593.7A CN113115000B (zh) 2021-04-12 2021-04-12 地图生成方法及装置、电子设备和存储介质
CN202110390593.7 2021-04-12

Publications (2)

Publication Number Publication Date
WO2022217877A1 true WO2022217877A1 (fr) 2022-10-20
WO2022217877A9 WO2022217877A9 (fr) 2022-11-24

Family

ID=76716039

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2021/125027 WO2022217877A1 (fr) 2021-04-12 2021-10-20 Procédé et appareil de génération de carte, et dispositif électronique et support de stockage

Country Status (2)

Country Link
CN (1) CN113115000B (fr)
WO (1) WO2022217877A1 (fr)


Families Citing this family (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113115000B (zh) * 2021-04-12 2022-06-17 浙江商汤科技开发有限公司 地图生成方法及装置、电子设备和存储介质
CN115297315A (zh) * 2022-07-18 2022-11-04 北京城市网邻信息技术有限公司 用于环拍时拍摄中心点的矫正方法、装置及电子设备

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20060039693A1 (en) * 2004-08-20 2006-02-23 Samsung Electronics Co., Ltd. Photographing device and method for panoramic imaging
CN103391422A (zh) * 2012-05-10 2013-11-13 中国移动通信集团公司 一种视频监控方法及设备
US20140300637A1 (en) * 2013-04-05 2014-10-09 Nokia Corporation Method and apparatus for determining camera location information and/or camera pose information according to a global coordinate system
US20190149730A1 (en) * 2017-11-16 2019-05-16 Canon Kabushiki Kaisha Image capturing control apparatus and control method therefor
CN209913914U (zh) * 2019-06-10 2020-01-07 深圳市华意达智能电子技术有限公司 多摄像头同时采样嵌入式设备
CN113115000A (zh) * 2021-04-12 2021-07-13 浙江商汤科技开发有限公司 地图生成方法及装置、电子设备和存储介质

Family Cites Families (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP4181372B2 (ja) * 2002-09-27 2008-11-12 富士フイルム株式会社 表示装置、画像情報管理端末、画像情報管理システム、および画像表示方法
JP2010206475A (ja) * 2009-03-03 2010-09-16 Fujitsu Ltd 監視支援装置、その方法、及びプログラム
CN103443839B (zh) * 2011-03-28 2016-04-13 松下知识产权经营株式会社 图像显示装置
CN104639824B (zh) * 2013-11-13 2018-02-02 杭州海康威视系统技术有限公司 基于电子地图的摄像机控制方法及装置
KR102282456B1 (ko) * 2014-12-05 2021-07-28 한화테크윈 주식회사 평면도에 히트맵을 표시하는 장치 및 방법
JP6611531B2 (ja) * 2015-09-16 2019-11-27 キヤノン株式会社 画像処理装置、画像処理装置の制御方法、およびプログラム
CN106331618B (zh) * 2016-08-22 2019-07-16 浙江宇视科技有限公司 一种自动确认摄像机可视域的方法及装置
CN109348119B (zh) * 2018-09-18 2021-03-09 成都易瞳科技有限公司 一种全景监控系统
CN109886995B (zh) * 2019-01-15 2023-05-23 深圳职业技术学院 一种复杂环境下多目标跟踪方法
CN112422886B (zh) * 2019-08-22 2022-08-30 杭州海康威视数字技术股份有限公司 可视域立体布控显示系统


Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113362392A (zh) * 2020-03-05 2021-09-07 杭州海康威视数字技术股份有限公司 可视域生成方法、装置、计算设备及存储介质
CN113362392B (zh) * 2020-03-05 2024-04-23 杭州海康威视数字技术股份有限公司 可视域生成方法、装置、计算设备及存储介质

Also Published As

Publication number Publication date
CN113115000A (zh) 2021-07-13
WO2022217877A9 (fr) 2022-11-24
CN113115000B (zh) 2022-06-17

Similar Documents

Publication Publication Date Title
WO2022217877A1 (fr) Procédé et appareil de génération de carte, et dispositif électronique et support de stockage
US11108953B2 (en) Panoramic photo shooting method and apparatus
US11516377B2 (en) Terminal, focusing method and apparatus, and computer readable storage medium
JP5607759B2 (ja) 軌道ベースのロケーション判断を使用した画像識別
EP3352453A1 (fr) Procédé de prise de vues pour dispositif de vol intelligent et dispositif de vol intelligent
CN105959587B (zh) 快门速度获取方法和装置
KR101788496B1 (ko) 단말 및 비디오 이미지를 제어하는 장치 및 방법
CN112149659B (zh) 定位方法及装置、电子设备和存储介质
WO2017071562A1 (fr) Procédé et appareil de traitement d'informations de positionnement
KR20120012259A (ko) 로드뷰 제공 장치 및 방법
WO2022110776A1 (fr) Procédé et appareil de positionnement, dispositif électronique, support de stockage, produit-programme informatique et programme informatique
WO2023142755A1 (fr) Procédé de commande de dispositif, appareil, dispositif utilisateur et support de stockage lisible par ordinateur
JP6145563B2 (ja) 情報表示装置
CN112432636B (zh) 定位方法及装置、电子设备和存储介质
CN114466308B (zh) 一种定位方法和电子设备
CN111176338B (zh) 导航方法、电子设备及存储介质
WO2022237071A1 (fr) Procédé et appareil de localisation, ainsi que dispositif électronique, support de stockage et programme informatique
CN111754564A (zh) 视频展示方法、装置、设备及存储介质
WO2022110801A1 (fr) Procédé et appareil de traitement de données, dispositif électronique et support de stockage
EP4161054A1 (fr) Procédé, appareil et dispositif de traitement d'informations de point d'ancrage, et support de stockage
CN113724382B (zh) 地图生成方法、装置及电子设备
CN112804481B (zh) 监控点位置的确定方法、装置及计算机存储介质
CN110060355B (zh) 界面显示方法、装置、设备及存储介质
CN110633335B (zh) 获取poi数据的方法、终端和可读存储介质
CN113724382A (zh) 地图生成方法、装置及电子设备

Legal Events

Date Code Title Description
ENP Entry into the national phase

Ref document number: 2022530738

Country of ref document: JP

Kind code of ref document: A

121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 21936744

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 21936744

Country of ref document: EP

Kind code of ref document: A1