WO2022127576A1 - Site model update method and system (站点模型更新方法及系统) - Google Patents

Site model update method and system

Info

Publication number
WO2022127576A1
Authority
WO
WIPO (PCT)
Prior art keywords
change
monitoring image
pose
changed
image
Prior art date
Application number
PCT/CN2021/134154
Other languages
English (en)
French (fr)
Inventor
乔健
黄山
谭凯
Original Assignee
华为技术有限公司 (Huawei Technologies Co., Ltd.)
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 华为技术有限公司 (Huawei Technologies Co., Ltd.)
Priority to EP21905505.0A (published as EP4199498A4)
Publication of WO2022127576A1
Priority to US18/336,101 (published as US20230334774A1)

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T17/00Three dimensional [3D] modelling, e.g. data description of 3D objects
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N23/00Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N23/60Control of cameras or camera modules
    • H04N23/698Control of cameras or camera modules for achieving an enlarged field of view, e.g. panoramic image capture
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N7/00Television systems
    • H04N7/18Closed-circuit television [CCTV] systems, i.e. systems in which the video signal is not broadcast
    • H04N7/183Closed-circuit television [CCTV] systems, i.e. systems in which the video signal is not broadcast for receiving images from a single remote source
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/08Learning methods
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T19/00Manipulating 3D models or images for computer graphics
    • G06T19/20Editing of 3D images, e.g. changing shapes or colours, aligning objects or positioning parts
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/20Analysis of motion
    • G06T7/246Analysis of motion using feature-based methods, e.g. the tracking of corners or segments
    • G06T7/248Analysis of motion using feature-based methods, e.g. the tracking of corners or segments involving reference images or patches
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/70Determining position or orientation of objects or cameras
    • G06T7/73Determining position or orientation of objects or cameras using feature-based methods
    • G06T7/74Determining position or orientation of objects or cameras using feature-based methods involving reference images or patches
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/40Extraction of image or video features
    • G06V10/44Local feature extraction by analysis of parts of the pattern, e.g. by detecting edges, contours, loops, corners, strokes or intersections; Connectivity analysis, e.g. of connected components
    • G06V10/443Local feature extraction by analysis of parts of the pattern, e.g. by detecting edges, contours, loops, corners, strokes or intersections; Connectivity analysis, e.g. of connected components by matching or filtering
    • G06V10/449Biologically inspired filters, e.g. difference of Gaussians [DoG] or Gabor filters
    • G06V10/451Biologically inspired filters, e.g. difference of Gaussians [DoG] or Gabor filters with interaction between the filter responses, e.g. cortical complex cells
    • G06V10/454Integrating the filters into a hierarchical structure, e.g. convolutional neural networks [CNN]
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/40Extraction of image or video features
    • G06V10/62Extraction of image or video features relating to a temporal dimension, e.g. time-based feature extraction; Pattern tracking
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/70Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V10/82Arrangements for image or video recognition or understanding using pattern recognition or machine learning using neural networks
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00Scenes; Scene-specific elements
    • G06V20/50Context or environment of the image
    • G06V20/52Surveillance or monitoring of activities, e.g. for recognising suspicious objects
    • GPHYSICS
    • G08SIGNALLING
    • G08BSIGNALLING OR CALLING SYSTEMS; ORDER TELEGRAPHS; ALARM SYSTEMS
    • G08B13/00Burglar, theft or intruder alarms
    • G08B13/18Actuation by interference with heat, light, or radiation of shorter wavelength; Actuation by intruding sources of heat, light, or radiation of shorter wavelength
    • G08B13/189Actuation by interference with heat, light, or radiation of shorter wavelength; Actuation by intruding sources of heat, light, or radiation of shorter wavelength using passive radiation detection systems
    • G08B13/194Actuation by interference with heat, light, or radiation of shorter wavelength; Actuation by intruding sources of heat, light, or radiation of shorter wavelength using passive radiation detection systems using image scanning and comparing systems
    • G08B13/196Actuation by interference with heat, light, or radiation of shorter wavelength; Actuation by intruding sources of heat, light, or radiation of shorter wavelength using passive radiation detection systems using image scanning and comparing systems using television cameras
    • G08B13/19602Image analysis to detect motion of the intruder, e.g. by frame subtraction
    • G08B13/19604Image analysis to detect motion of the intruder, e.g. by frame subtraction involving reference image or background adaptation with time to compensate for changing conditions, e.g. reference image update on detection of light level change
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N23/00Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N23/60Control of cameras or camera modules
    • H04N23/64Computer-aided capture of images, e.g. transfer from script file into camera, check of taken image quality, advice or proposal for image composition or decision on when to take image
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N23/00Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N23/60Control of cameras or camera modules
    • H04N23/69Control of means for changing angle of the field of view, e.g. optical zoom objectives or electronic zooming
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N23/00Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N23/60Control of cameras or camera modules
    • H04N23/695Control of camera direction for changing a field of view, e.g. pan, tilt or based on tracking of objects
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N23/00Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N23/80Camera processing pipelines; Components thereof
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N5/00Details of television systems
    • H04N5/222Studio circuitry; Studio devices; Studio equipment
    • H04N5/262Studio circuits, e.g. for mixing, switching-over, change of character of image, other special effects ; Cameras specially adapted for the electronic generation of special effects
    • H04N5/2628Alteration of picture size, shape, position or orientation, e.g. zooming, rotation, rolling, perspective, translation
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N7/00Television systems
    • H04N7/18Closed-circuit television [CCTV] systems, i.e. systems in which the video signal is not broadcast
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/20Special algorithmic details
    • G06T2207/20081Training; Learning
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/20Special algorithmic details
    • G06T2207/20084Artificial neural networks [ANN]
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/30Subject of image; Context of image processing
    • G06T2207/30232Surveillance
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/30Subject of image; Context of image processing
    • G06T2207/30244Camera pose
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2210/00Indexing scheme for image generation or computer graphics
    • G06T2210/04Architectural design, interior design
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2219/00Indexing scheme for manipulating 3D models or images for computer graphics
    • G06T2219/20Indexing scheme for editing of 3D models
    • G06T2219/2021Shape modification

Definitions

  • the present application relates to the field of artificial intelligence, and specifically relates to a method and system for updating a site model.
  • In a variety of application scenarios, it is necessary to build a site model and update the site model for a site at a specific location, so as to provide data support for the design and hardware installation of the site, improve design efficiency and asset management, and provide decision guidance or safety warnings when the site actually changes.
  • the actual changes of the site may include position or orientation changes of key equipment, or even equipment damage, which cause security or system performance problems and require timely measures.
  • the meaning of the site depends on the specific application scenario.
  • a site can be understood as a network base station, a relay station, or a communication center that involves network deployment and integrated services.
  • the site can be understood as a traffic indication system.
  • in the application scenario of power transmission, a site can be understood as a photovoltaic power generation system, a relay station, or a power transmission hub.
  • in the application scenario of the petroleum industry, a site can be understood as a gas station or a refinery station.
  • it is necessary to monitor the actual changes of the site, and it is also necessary to collect the monitoring data of the site in real time, determine whether the key equipment has changed, and update the site model in time.
  • data collection is generally performed by manually accessing the site to discover the actual changes of the site and update the site model accordingly.
  • manual site visits are not only labor-intensive and costly, but personnel often cannot be arranged to visit the site in time, so the monitoring data of the site cannot be collected and the site model cannot be updated promptly.
  • the embodiments of the present application automatically identify the equipment that has changed and the type of change by combining monocular camera technology with a deep learning algorithm, so that site changes can be detected automatically, site data can be collected, and the site model can be updated in time.
  • an embodiment of the present application provides a method for updating a site model.
  • the method includes: acquiring a monitoring image, and determining, by using the acquired monitoring image, a change type of a device that has changed and a change amount corresponding to the change type; calculating the pose and camera parameters of the monitoring image according to the monitoring image and a site model; determining the pose of the device that has changed according to the pose and camera parameters of the monitoring image; and updating the site model according to the pose of the device that has changed, the change type, and the change amount corresponding to the change type.
  • the technical solution described in the first aspect automatically judges whether there is a changed device in the monitoring image and further determines the change type and corresponding change amount of the changed device according to a plurality of preset change types, thereby automatically detecting site changes, collecting site data, and updating the site model in a timely manner.
  • the monitoring image is input into a neural network model to determine the change type of the device that has changed and the change amount corresponding to the change type, where the change type is one of a plurality of preset change types.
  • the neural network model is obtained by training using a loss function.
  • the loss function includes the weighted sum of multiple sub-loss functions
  • the multiple sub-loss functions correspond to multiple preset change types one-to-one
  • each of the multiple sub-loss functions is determined according to the change amount corresponding to the preset change type to which that sub-loss function corresponds.
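  • As an illustrative sketch, the weighted sum of sub-loss functions described above could take the following form; the change-type names, the weights, and the squared-error form of each sub-loss are assumptions made for illustration and are not specified by the application.

```python
# Hypothetical weights, one per preset change type (illustrative values only).
LOSS_WEIGHTS = {
    "device_added": 1.0,
    "device_deleted": 1.0,
    "device_moved": 2.0,
    "device_rotated": 2.0,
}

def sub_loss(predicted_amount, true_amount):
    # One sub-loss per change type, determined by the change amount for that type;
    # a squared error is used here purely as an example.
    return float((predicted_amount - true_amount) ** 2)

def total_loss(predicted, target):
    # predicted / target: dicts mapping each preset change type to its change amount.
    # The total loss is the weighted sum of the per-type sub-losses.
    return sum(w * sub_loss(predicted[t], target[t]) for t, w in LOSS_WEIGHTS.items())

# Example: the network predicts a centre-point movement of 12 px, ground truth is 10 px.
pred = {"device_added": 0.0, "device_deleted": 0.0, "device_moved": 12.0, "device_rotated": 0.0}
true = {"device_added": 0.0, "device_deleted": 0.0, "device_moved": 10.0, "device_rotated": 0.0}
print(total_loss(pred, true))  # 2.0 * (12 - 10) ** 2 = 8.0
```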
  • the multiple preset change types include device addition, and the amount of change corresponding to device addition includes the maximum pixel size of the monitoring image. In this way, it can be quickly determined whether the change type is device addition, together with the corresponding change amount.
  • the plurality of preset change types include device deletion, and the amount of change corresponding to the device deletion includes a negative value of the maximum pixel size of the monitoring image. In this way, it is possible to quickly determine whether the change type is device deletion and the corresponding change amount.
  • the plurality of preset change types include device movement, and the amount of change corresponding to the device movement includes the movement distance of the center point of the changed device. In this way, it is possible to quickly determine whether the change type is device movement and the corresponding change amount.
  • the multiple preset change types include device rotation, and the amount of change corresponding to device rotation includes the turning distance of the line connecting the edge and the center point of the device that has changed. In this way, it can be quickly determined whether the change type is device rotation, together with the corresponding change amount.
  • the multiple preset change types include simultaneous movement and rotation of the device, and the amount of change corresponding to simultaneous movement and rotation includes the moving distance of the center point of the changed device and the turning distance of the line connecting the edge and the center point of the changed device. In this way, it can be quickly determined whether the change type is simultaneous movement and rotation of the device, together with the corresponding change amount.
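  • The change types and change-amount conventions listed above can be collected in a small data structure such as the sketch below; the type names, field names, and the tuple encoding are assumptions for illustration only.

```python
from dataclasses import dataclass
from enum import Enum, auto

class ChangeType(Enum):
    DEVICE_ADDED = auto()
    DEVICE_DELETED = auto()
    DEVICE_MOVED = auto()
    DEVICE_ROTATED = auto()
    DEVICE_MOVED_AND_ROTATED = auto()

@dataclass
class ChangeResult:
    change_type: ChangeType
    move_distance: float = 0.0   # movement of the device's centre point, in pixels
    turn_distance: float = 0.0   # turning distance of the edge-to-centre line, in pixels

def change_amount(result: ChangeResult, max_pixel_size: float):
    """Map a detection result to the change-amount convention described above."""
    if result.change_type is ChangeType.DEVICE_ADDED:
        return (max_pixel_size,)            # addition: +maximum pixel size of the image
    if result.change_type is ChangeType.DEVICE_DELETED:
        return (-max_pixel_size,)           # deletion: -maximum pixel size of the image
    if result.change_type is ChangeType.DEVICE_MOVED:
        return (result.move_distance,)
    if result.change_type is ChangeType.DEVICE_ROTATED:
        return (result.turn_distance,)
    return (result.move_distance, result.turn_distance)

print(change_amount(ChangeResult(ChangeType.DEVICE_MOVED, move_distance=12.0), 1920))  # (12.0,)
```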
  • the method further includes: determining the proportion of the area where the changed device is located in the monitoring image; comparing the proportion with a preset proportion; when the proportion is less than the preset proportion, obtaining an enlarged monitoring image; calculating the pose and camera parameters of the enlarged monitoring image according to the enlarged monitoring image and the site model; and updating the site model according to the pose and camera parameters of the enlarged monitoring image.
  • the enlarged monitoring image is obtained; the pose and camera parameters of the enlarged monitoring image are calculated according to the enlarged monitoring image and the site model; the site model is updated according to the pose and camera parameters of the enlarged monitoring image.
  • the enlarged monitoring image is obtained according to a magnification, and the magnification is determined according to a proportion and a preset proportion. In this way, the calculation of the magnification is realized.
  • the pose and camera parameters of the enlarged monitoring image are determined according to the magnification, the pose and camera parameters of the monitoring image. In this way, the pose and camera parameters of the enlarged surveillance image are calculated.
  • an embodiment of the present application provides a chip system, wherein the chip system is applied to an electronic device; the chip system includes one or more interface circuits and one or more processors; the interface circuit and the processor are interconnected through a line; the interface circuit is configured to receive a signal from a memory of the electronic device and send the signal to the processor, where the signal includes computer instructions stored in the memory; and when the processor executes the computer instructions, the electronic device performs any one of the methods of the first aspect.
  • the technical solution described in the second aspect automatically judges whether there is a changed device in the monitoring image and further determines the change type and corresponding change amount of the changed device according to a plurality of preset change types, thereby automatically detecting site changes, collecting site data, and updating the site model in a timely manner.
  • an embodiment of the present application provides a computer-readable storage medium, wherein the computer-readable storage medium stores computer program instructions which, when executed by a processor, cause the processor to perform any one of the methods of the first aspect.
  • the technical solution described in the third aspect automatically judges whether there is a changed device in the monitoring image and further determines the change type and corresponding change amount of the changed device according to a plurality of preset change types, thereby automatically detecting site changes, collecting site data, and updating the site model in a timely manner.
  • an embodiment of the present application provides a computer program product, wherein the computer program product includes computer instructions which, when run on an electronic device, cause the electronic device to perform any one of the methods of the first aspect.
  • the technical solution described in the fourth aspect automatically judges whether there is a changed device in the monitoring image and further determines the change type and corresponding change amount of the changed device according to a plurality of preset change types, thereby automatically detecting site changes, collecting site data, and updating the site model in a timely manner.
  • an embodiment of the present application provides a system for updating a site model.
  • the system includes: a device change detection apparatus, wherein the device change detection apparatus determines, by using the monitoring image, the change type of the changed device and the change amount corresponding to the change type; and a processor.
  • the processor is configured to: acquire the monitoring image; calculate the pose and camera parameters of the monitoring image according to the monitoring image and the site model; determine the pose of the device that has changed according to the pose and camera parameters of the monitoring image; and update the site model according to the pose of the changed device, the change type, and the change amount corresponding to the change type.
  • the technical solution described in the fifth aspect automatically judges whether there is a changed device in the monitoring image and further determines the change type and corresponding change amount of the changed device according to a plurality of preset change types, thereby automatically detecting site changes, collecting site data, and updating the site model in a timely manner.
  • an embodiment of the present application provides a photovoltaic power generation system.
  • the photovoltaic power generation system includes a site model update system for performing any of the methods of the first aspect above.
  • the photovoltaic power generation system monitors changes of the photovoltaic power generation system through the site model update system, and the site corresponds to the photovoltaic power generation system.
  • an embodiment of the present application provides a communication relay system.
  • the communication relay system includes a site model updating system, which is used for executing any method of the first aspect above.
  • the communication relay system monitors changes of the communication relay system through the site model updating system, and the site corresponds to the communication relay system.
  • FIG. 1 is a schematic structural diagram of a site model building and updating system provided by an embodiment of the present application.
  • FIG. 2 is a schematic flowchart of a method for constructing a site model provided by an embodiment of the present application.
  • FIG. 3 is a schematic flowchart of a method for updating a site model provided by an embodiment of the present application.
  • FIG. 4 is a schematic flowchart of the secondary acquisition and processing steps of the monitoring image shown in FIG. 3 according to an embodiment of the present application.
  • FIG. 5 is a schematic flowchart of a step of detecting a device change in the method shown in FIG. 3 according to an embodiment of the present application.
  • FIG. 6 is a schematic flowchart of the training method of the neural network model shown in FIG. 5 according to an embodiment of the present application.
  • FIG. 7 is a structural block diagram of the trained neural network model shown in FIG. 6 provided by an embodiment of the present application.
  • FIG. 8 is a structural block diagram of a site model updating system provided by an embodiment of the present application.
  • FIG. 9 is a structural block diagram of the neural network processor shown in FIG. 8 according to an embodiment of the present application.
  • the embodiments of the present application automatically identify the changed equipment and the type of change by combining photographing technology with a deep learning algorithm, so as to automatically detect site changes, collect site data, and update the site's 3D model in time.
  • Artificial Intelligence is a theory, method, technology and application system that uses digital computers or machines controlled by digital computers to simulate, extend and expand human intelligence, perceive the environment, acquire knowledge and use knowledge to obtain the best results.
  • artificial intelligence is a branch of computer science that attempts to understand the essence of intelligence and produce a new kind of intelligent machine that responds in a similar way to human intelligence.
  • Artificial intelligence studies the design principles and implementation methods of various intelligent machines, so that the machines have the functions of perception, reasoning and decision-making.
  • Research in the field of artificial intelligence includes robotics, natural language processing, computer vision, decision-making and reasoning, human-computer interaction, recommendation and search, and basic AI theory.
  • Neural Network is a network structure that imitates the behavioral characteristics of animal neural networks for information processing.
  • the structure of the neural network is composed of a large number of nodes (or neurons) connected to each other, and the purpose of processing information is achieved by learning and training the input information based on a specific operation model.
  • a neural network includes an input layer, a hidden layer and an output layer.
  • the input layer is responsible for receiving input signals
  • the output layer is responsible for outputting the calculation results of the neural network
  • the hidden layer is responsible for the calculation process of learning and training; it is the memory unit of the network, and the memory function of the hidden layer is represented by a weight matrix, usually with one weight coefficient for each neuron.
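  • The layer structure described above (input layer, hidden layer with a weight matrix, output layer) can be illustrated with a minimal forward pass; the layer sizes, the ReLU activation, and the random weights are assumptions for illustration.

```python
import numpy as np

def forward(x, w_hidden, b_hidden, w_out, b_out):
    # Input layer: x receives the input signal.
    # Hidden layer: its "memory" is the weight matrix w_hidden (one weight per connection),
    # followed here by a ReLU nonlinearity.
    h = np.maximum(0.0, x @ w_hidden + b_hidden)
    # Output layer: produces the calculation result of the network.
    return h @ w_out + b_out

rng = np.random.default_rng(0)
x = rng.normal(size=(1, 4))                      # one 4-dimensional input sample
w_h, b_h = rng.normal(size=(4, 8)), np.zeros(8)  # input -> hidden weights
w_o, b_o = rng.normal(size=(8, 2)), np.zeros(2)  # hidden -> output weights
print(forward(x, w_h, b_h, w_o, b_o).shape)      # (1, 2)
```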
  • a device based on monocular camera technology should be understood as a single camera device, which may contain one camera module or multiple camera modules.
  • in other words, a device based on monocular camera technology refers to a device that performs photography with a single camera device containing one or more camera modules.
  • the specific embodiments of the present application are described by taking a single camera device with a single camera module as an exemplary embodiment, but the present application can also be applied to a single camera device containing multiple camera modules.
  • for example, the single camera device may include a camera array composed of two or more camera modules, where the camera modules in the array have fixed linear displacement relationships with each other, and the images or videos captured by each camera module can be synthesized according to these linear displacement relationships, thereby obtaining data based on monocular camera technology.
  • Embodiments of the present application provide a method and system for updating a site model.
  • the method includes: acquiring a monitoring image, and determining, by using the acquired monitoring image, a change type of a device that has changed and a change amount corresponding to the change type; calculating the pose and camera parameters of the monitoring image according to the monitoring image and a site model; determining the pose of the device that has changed according to the pose and camera parameters of the monitoring image; and updating the site model according to the pose of the device that has changed, the change type, and the change amount corresponding to the change type.
  • the site model updating system includes: a device change detection apparatus, wherein the device change detection apparatus determines, by using the monitoring image, the change type of the changed device and the change amount corresponding to the change type; and a processor.
  • the processor is configured to: acquire the monitoring image; calculate the pose and camera parameters of the monitoring image according to the monitoring image and the site model; determine the pose of the device that has changed according to the pose and camera parameters of the monitoring image; and update the site model according to the pose of the changed device, the change type, and the change amount corresponding to the change type.
  • the embodiments of the present application can be used in the following application scenarios: update of scene models such as base stations and relay stations in the telecommunications industry, update of scene models of traffic indication systems under the security monitoring of smart cities, update of scene models of photovoltaic power generation systems, or other scenarios that require building and updating a site model for a specific location.
  • in the following, the site model and the application scenarios for updating the site model are described by taking scene models such as base stations and relay stations in the telecommunications industry as an example.
  • FIG. 1 is a schematic structural diagram of a site model building and updating system provided by an embodiment of the present application.
  • the site model building and updating system can be divided into two parts, corresponding to the site model building and site model updating respectively.
  • the building part of the site model includes a modeling data collection device 102 , a modeling data processing platform 106 and a site model building platform 108 .
  • the modeling data collection device 102 sends the collected modeling data 104 to the modeling data processing platform 106 for processing, the modeling data processing platform 106 sends the processed modeling data to the site model building platform 108, and finally the site model building platform 108 builds a site model 120 from the processed modeling data.
  • the update part of the site model includes the update data collection device 112 , the update data processing platform 116 and the site model update platform 118 .
  • the update data collection device 112 sends the collected update data 114 to the update data processing platform 116 for processing, the update data processing platform 116 sends the processed update data to the site model update platform 118, and finally the site model update platform 118 updates the site model 120 according to the processed update data.
  • the modeling data collection device 102 and the update data collection device 112 belong to the front-end data collection apparatus 100 .
  • the modeling data processing platform 106, the site model building platform 108, the update data processing platform 116, and the site model update platform 118 belong to the back-end data processing apparatus 110.
  • the front-end data collection device 100 may be deployed at or near the site, and may be understood as an edge device or a local device, such as a camera, a mobile phone, and the like set at the site.
  • the back-end data processing apparatus 110 may be deployed at a location far from the site, and may be understood as a cloud device or a data center device, such as a data center connected through a network to a camera set at the site.
  • a site refers to a scene within a certain spatial range or at a designated location, and the meaning of a site may be specifically defined in combination with a specific industry.
  • a site can be understood as a network base station or a relay station in the telecommunications industry, a traffic command system in the urban security industry, a power generation system or a relay station in the power transmission industry, or a refinery station or gas station in the petroleum industry; these can be defined according to the specific application scenario and are not limited here.
  • the modeling data acquisition device 102 refers to a corresponding device that acquires data for building a site model through panoramic measurement technology, laser point cloud measurement technology, mobile phone photography and imaging synthesis technology, or other suitable technical means .
  • taking panoramic measurement technology as an example, the modeling data acquisition device 102 refers to a panoramic camera or another acquisition device based on panoramic measurement technology, and the modeling data 104 collected by the modeling data acquisition device 102 is a panoramic image representing the entire area of the scene where the site is located, or multiple panoramic images representing different areas of the scene where the site is located.
  • the modeling data processing platform 106 may process a plurality of panoramic images representing different areas of the scene where the site is located, thereby synthesizing panoramic images representing all areas of the scene where the site is located.
  • the site model building platform 108 processes the processed modeling data 104 through a conventional algorithm, such as a panoramic binocular measurement algorithm, and generates a site model 120 .
  • the modeling data acquisition device 102 refers to a laser scanner or other acquisition device based on the laser point cloud measurement technology.
  • the modeling data 104 collected by the modeling data collection device 102 is laser point cloud data representing the entire area of the scene where the site is located or laser point cloud data representing different areas of the scene where the site is located.
  • the modeling data processing platform 106 may splice laser point cloud data representing different areas of the scene where the site is located, thereby synthesizing laser point cloud data representing all areas of the scene where the site is located.
  • the site model building platform 108 processes the processed modeling data 104 through a conventional algorithm such as a point cloud vector modeling algorithm to generate a site model 120 .
  • the modeling data acquisition device 102 refers to a portable device with a photo shooting function, such as a mobile phone or a tablet computer.
  • the modeling data 104 collected by the modeling data collection device 102 is picture and video data representing all areas of the scene where the site is located or picture and video data representing different areas of the scene where the site is located.
  • the modeling data processing platform 106 may process the picture and video data representing different regions of the scene where the site is located, thereby synthesizing the picture and video data representing all regions of the scene where the site is located.
  • the site model construction platform 108 processes the processed modeling data 104 through conventional algorithms, such as binocular measurement algorithm or multi-source image synthesis algorithm, and generates a site model 120 .
  • the update data collection device 112 refers to a mobile phone, a surveillance camera, a security camera or another device based on monocular camera technology. It should be understood that although the building part of the site model acquires data for building the site model and generates the site model 120 through panoramic measurement technology, laser point cloud measurement technology, mobile phone photography and imaging synthesis technology, or other suitable technical means, the update part of the site model is suitable for devices based on monocular camera technology.
  • a device based on monocular camera technology does not need to use other acquisition devices when collecting the update data 114, and therefore does not need to consider coordination or synchronization; in most practical applications, a device based on monocular camera technology alone already provides sufficient accuracy and information to update the site model 120, and thus offers better generality and convenience.
  • the update data collection device 112, that is, the device based on monocular camera technology, is used to obtain monitoring images or monitoring videos. All or part of the frame images in a monitoring video can be extracted as monitoring images; for example, a video can be converted into frame images through a video frame extraction algorithm.
  • the monitoring image collected by the update data acquisition device 112 or the monitoring image extracted from the monitoring video is the update data 114 .
  • the update data 114 is sent to the update data processing platform 116 .
  • the update data processing platform 116 processes the received monitoring images, mainly to identify whether there is a device that has changed in the monitoring image, and when there is a device that has changed, further determine the area of the device that has changed, the type of change and the corresponding change quantity.
  • the site model update platform 118 updates the site model 120 according to the information provided by the update data processing platform 116 .
  • the site model 120 includes an environment model of the site and a device model of the site.
  • the environmental model of the site can be understood as the background elements in the scene where the site is located, such as permanent buildings, roads, etc., and can also be understood as elements with weak correlation with the preset functions of the site, such as trees, pedestrians, etc. .
  • the equipment model of the site is a key element in the scene where the site is located, such as the equipment necessary to realize the preset functions of the site.
  • the equipment model of the communication base station may be the antenna, power supply equipment, relay equipment and/or other elements that are strongly related to the preset functions of the communication base station deployed in the communication base station.
  • when updating the site model 120, the site model updating platform 118 may update the entire area of the scene where the site is located or only a part of the area.
  • the site model updating platform 118 may also mark individual devices in the scene where the site is located as objects of special interest, and perform highly sensitive detection on changes of these objects of special interest.
  • the site model updating platform 118 may also mark certain devices as general objects of interest, and perform low-sensitivity detection on changes of these general objects of interest. Taking the site as an example of a communication base station, the antenna can be marked as an object of special concern, and a power supply device for supplying power to the antenna can be marked as an object of general concern. In this way, resources can be concentrated to preferentially reflect changes of devices marked as objects of special concern, which is beneficial to improve resource utilization efficiency.
  • the site model 120 may support a variety of applications. For example, the distance between a specific device and a ranging reference point may be measured using the site model 120. Specifically, three ground reference points are selected on an image including the specific device to determine the datum plane of the ground, and the datum plane of the specific device is determined according to the datum plane of the ground; the specific device is then selected on the image, and an algorithm simulates the intersection of a generated ray with the specific device to determine the pose of the specific device, thereby determining the height and angle of the specific device; a ranging reference point is selected on the image and its pose is determined; and the distance between the specific device and the ranging reference point is thereby calculated.
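  • The ranging procedure above relies on intersecting viewing rays with a datum plane; a minimal numeric sketch of that geometric step is given below, with the camera position, ray directions, and reference points chosen purely as example values.

```python
import numpy as np

def plane_from_points(p1, p2, p3):
    # Datum plane through three (non-collinear) ground reference points.
    n = np.cross(p2 - p1, p3 - p1)
    return p1, n / np.linalg.norm(n)

def ray_plane_intersection(origin, direction, plane_point, plane_normal):
    # Intersect a viewing ray cast from the camera with the datum plane.
    t = np.dot(plane_point - origin, plane_normal) / np.dot(direction, plane_normal)
    return origin + t * direction

# Illustrative values: camera 3 m above a ground plane z = 0.
p0, n0 = plane_from_points(np.array([0., 0., 0.]),
                           np.array([1., 0., 0.]),
                           np.array([0., 1., 0.]))
camera = np.array([0., 0., 3.])
device_point = ray_plane_intersection(camera, np.array([0.5, 0.0, -1.0]), p0, n0)
reference_point = ray_plane_intersection(camera, np.array([-0.2, 0.4, -1.0]), p0, n0)
print(np.linalg.norm(device_point - reference_point))  # distance device <-> ranging reference point
```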
  • the site model 120 may be used to implement site asset management, space assessment design, EMF visualization, and the like.
  • FIG. 2 is a schematic flowchart of a method for constructing a site model provided by an embodiment of the present application. It should be understood that the method for building a site model shown in FIG. 2 corresponds to the building part of the site model shown in FIG. 1 .
  • the specific embodiment shown in FIG. 2 takes the panoramic measurement technology as an example, but the method shown in FIG. 2 can also be applied to other technical means such as laser point cloud measurement technology and mobile phone photography and imaging synthesis technology through adaptive modification.
  • building a site model 200 includes the following steps.
  • Step S202 Collect panoramic images.
  • the collection of panoramic images refers to obtaining panoramic images representing the entire area of the scene where the site is located or multiple panoramic images representing different areas of the scene where the site is located by using a panoramic camera or other collection equipment based on panoramic measurement technology. Multiple panoramic images representing different areas of the scene where the station is located can be processed to synthesize panoramic images representing all areas of the scene where the station is located. Collecting a panoramic image can also be understood as obtaining a panoramic video through a panoramic camera, then using an image tracking algorithm to extract images of key frames in the panoramic video, and finally using the extracted image of the key frames as a panoramic image representing the entire area of the scene where the site is located.
  • technologies such as an image interference area identification algorithm can also be used to identify pedestrians, sky, or moving areas that interfere in the image, so as to reduce the interference of these irrelevant factors or noise.
  • Step S204 Calculate the pose of the panoramic image.
  • calculating the pose of the panoramic image refers to calculating the pose of the camera when shooting the panoramic image based on the panoramic image collected in step S202.
  • pose is short for position and orientation; the pose can be represented by six variables, three of which indicate the position and the other three the orientation.
  • Calculating the pose of the camera when taking a panoramic image can be achieved by conventional algorithms such as an image feature matching algorithm, an analytical aerial triangulation algorithm, a multiple-image pose calculation method (Structure From Motion, SFM) or other suitable technical means, which is not specifically limited here.
  • Step S206 Identify a specific device and a corresponding device type in the panoramic image.
  • a specific device and a corresponding device type can be identified from the panoramic image through conventional algorithms such as feature recognition and the like. For example, assuming that the specific device to be identified is the antenna of the site, the feature recognition algorithm can identify the devices that match the antenna characteristics from the panoramic image, and label these devices as the device type of the antenna. For another example, a specific device may be identified in the panoramic image as a powered device or other type of device.
  • Step S208 Select a device model corresponding to the device type of the specific device in the prefabricated model library.
  • a device model corresponding to the device type of the specific device can be selected in the prefabricated model library.
  • the device model in the prefabricated model library may be a simplified geometric model, and several key points are used to simplify and represent the corresponding specific device, thereby facilitating the simplification of subsequent operations and data operation requirements.
  • the prefabricated model library may include a device model whose device type is an antenna, which is used to simplify an actual antenna with a complex shape into a geometric model including several key points, thereby facilitating subsequent operations.
  • Step S210 Build a site model according to the pose of the panoramic image and the device model.
  • the device model can be used to replace the specific device, and the pose of the device model in the panoramic image is calculated.
  • the position and size of the area where the specific device is located in the panoramic image can be determined by conventional algorithms such as target detection technology, and the pose of the device model in the panoramic image after the device model replaces the specific device is then calculated from the several key points on the device model corresponding to the specific device.
  • taking the specific device being an antenna as an example, the pose of the device model in the panoramic image refers to the position and orientation, in the panoramic image, of the device model corresponding to the antenna after the antenna is replaced by that device model; the geometric model of the device model can then be used to determine whether the position and orientation of the antenna have changed, for example whether the position of the antenna has shifted or the orientation of the antenna has turned.
  • thus, the pose of the panoramic image is calculated according to the collected panoramic image, the device type is identified from the panoramic image, and the site model is then constructed in combination with the device model in the prefabricated model library.
  • FIG. 3 is a schematic flowchart of a method for updating a site model provided by an embodiment of the present application. It should be understood that the method for updating the site model shown in FIG. 3 corresponds to the updating part of the site model shown in FIG. 1 . As shown in FIG. 3, updating the site model 300 includes the following steps.
  • Step S302 Collect monitoring images.
  • the monitoring image or monitoring video may be obtained through a mobile phone, a monitoring camera, a security camera or other equipment based on monocular camera technology. All or part of the frame images in the monitoring video can be extracted as monitoring images. In some exemplary embodiments, a video can be converted into frame images through a video frame decimation algorithm.
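  • As a sketch of the frame-decimation step, the snippet below samples every step-th frame of a surveillance video with OpenCV; the sampling interval is an assumed parameter, since the application does not prescribe a particular extraction algorithm.

```python
import cv2

def extract_frames(video_path, step=25):
    """Extract every `step`-th frame of a surveillance video as monitoring images."""
    frames = []
    cap = cv2.VideoCapture(video_path)
    index = 0
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        if index % step == 0:
            frames.append(frame)
        index += 1
    cap.release()
    return frames
```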
  • Step S304 Preprocessing the collected monitoring images.
  • the preprocessing of the collected surveillance images refers to performing operations such as exposure restoration, blur restoration, and rain and fog removal on the surveillance images, thereby optimizing the quality of the surveillance images, improving the clarity of image data, and facilitating subsequent processing.
  • the monitoring image preprocessing may also include operations such as excluding overexposed images and weakly exposed images through exposure detection, excluding blurred images through blur detection, and excluding images containing raindrops through raindrop detection algorithms. It should be understood that the preprocessing of the monitoring images can be performed on the local device that collects the monitoring images, such as surveillance cameras, security cameras or other edge devices at the site, thereby reducing the complexity of subsequent operations, which is conducive to saving resources and improving efficiency.
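  • The exposure-detection and blur-detection screening mentioned above might look like the following sketch; the Laplacian-variance blur measure, the mean-brightness exposure check, and all thresholds are assumptions for illustration rather than the application's method.

```python
import cv2
import numpy as np

def is_usable(image, blur_threshold=100.0, dark=40, bright=215):
    """Screen a monitoring image before device-change detection (illustrative heuristics)."""
    gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)
    sharpness = cv2.Laplacian(gray, cv2.CV_64F).var()   # low variance -> blurred image
    brightness = float(np.mean(gray))                    # mean grey level as an exposure proxy
    if sharpness < blur_threshold:
        return False                                     # exclude blurred images
    if brightness < dark or brightness > bright:
        return False                                     # exclude weakly or over-exposed images
    return True
```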
  • the method for updating the site model 300 may not include step S304, that is, step S302 is directly skipped to step S306.
  • Step S306 Detecting a device change, if a device change is detected, step S308 is performed, and if no device change is detected, step S302 is performed.
  • in step S306, the collected monitoring image, or the collected monitoring image that has undergone preprocessing, is input into the neural network model, and the neural network model is used to automatically determine whether there is a device that has changed in the monitoring image and to further determine the change type of the changed device and the corresponding change amount.
  • the result output by the neural network model for detecting device change includes the change type of the changed device and the change amount corresponding to the change type.
  • the variation type is a preset variation type among multiple preset variation types.
  • the change types of the changed device include: device addition, device deletion, device movement, device rotation, and the like.
  • device addition means that the device does not exist in the monitoring image that was most recently determined to have no change, but exists in the current monitoring image.
  • device deletion means that the device exists in the monitoring image that was most recently determined to have no change, but does not exist in the current monitoring image.
  • the movement of the device means that the position of the device in the current monitoring image has changed compared to the position of the device in the monitoring image that is determined to have not changed in the last period of time.
  • the rotation of the device means that the orientation of the device in the current monitoring image has changed compared to the orientation of the device in the monitoring image that is determined to have not changed in the last period of time.
  • in step S306 of detecting a device change, the final output result includes the region where the changed device is located, the change type, and the corresponding change amount.
  • Step S308 Calculate the pose and camera parameters of the monitoring image.
  • calculating the pose of the monitoring image refers to calculating the pose of the camera in the three-dimensional space coordinate system when the monitoring image is captured.
  • pose is short for position and orientation; the pose can be represented by six variables, three of which indicate the position and the other three the orientation.
  • the calculation of the pose of the camera when capturing the surveillance image may be implemented by a conventional algorithm such as a PNP (Perspective-N-Point) algorithm, a pose estimation algorithm or other suitable technical means, which is not specifically limited herein.
  • the calculation of the camera parameters of the surveillance image refers to the calculation of the camera parameters when capturing the surveillance image, such as the focal length, the coordinates of the principal point of the image, and the distortion parameters. It should be understood that calculating the pose of the monitoring image is for calculating the external parameters of the camera when capturing the monitoring image, and calculating the camera parameters of the monitoring image is for calculating the internal imaging information of the camera when capturing the monitoring image.
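  • As one concrete possibility for the pose calculation named above, OpenCV's PnP solver can recover the camera's external parameters from correspondences between 3D points taken from the site model and their pixel positions in the monitoring image; the function below is a sketch under that assumption, not the application's prescribed implementation.

```python
import cv2
import numpy as np

def estimate_pose(object_points, image_points, camera_matrix, dist_coeffs=None):
    """Recover the camera pose of a monitoring image with a PnP solver.

    object_points: Nx3 points from the site model; image_points: Nx2 matching pixels.
    camera_matrix / dist_coeffs stand in for the "camera parameters" of the text.
    """
    ok, rvec, tvec = cv2.solvePnP(
        np.asarray(object_points, dtype=np.float64),
        np.asarray(image_points, dtype=np.float64),
        camera_matrix, dist_coeffs)
    if not ok:
        raise RuntimeError("PnP did not converge")
    rotation, _ = cv2.Rodrigues(rvec)   # orientation (three variables) as a 3x3 matrix
    return rotation, tvec               # orientation + position = the six-variable pose
```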
  • Step S310 Determine whether to enlarge the area where the device is located; if the area where the device is located is to be enlarged, execute step S320, and if the area where the device is located is not to be enlarged, execute step S330.
  • after it is determined in step S306 that there is a changed device in the monitoring image, the proportion of the area where the changed device is located in the monitoring image can be determined, for example by calculating the proportion of the area occupied by that region in the entire monitoring image; the proportion is compared with a preset proportion, and when the proportion is less than the preset proportion, it is determined that the area where the device is located is to be enlarged and step S320 is executed; when the proportion is not less than the preset proportion, it is determined that the area where the device is located is not to be enlarged and step S330 is executed.
  • the preset proportion can be a preset value; for example, if the preset proportion is set to 30% and the area where the changed device is located occupies 1% of the monitoring image, the proportion is considered to be less than the preset proportion.
  • monitoring images often cover a large area of the scene, and the area where the changed device is located may occupy only a small part of the monitoring image, that is, its proportion of the monitoring image may be rather small. In this way, by comparing the proportion with the preset proportion, the area where the changed device is located can be selectively enlarged, so as to obtain a better effect.
  • the proportion of the region where the changed device is located in the monitoring image refers to the stereoscopic projection, on the monitoring image, of the region of interest (Region Of Interest, ROI) of the changed device, which can be understood as the projection of a cube defined by 8 points; the proportion of the area occupied by the stereoscopic projection of the ROI in the entire monitoring image is that proportion.
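  • A sketch of that proportion calculation is given below: the 8 corners of the device's ROI cube are projected into the image and the area of their convex hull is compared with the image area. The use of projectPoints and a convex hull is one possible realisation, not a detail given by the application.

```python
import cv2
import numpy as np

def roi_proportion(cube_corners_3d, rvec, tvec, camera_matrix, dist_coeffs, image_shape):
    """Proportion of the monitoring image covered by the projected ROI of a device."""
    pts, _ = cv2.projectPoints(np.asarray(cube_corners_3d, dtype=np.float64),
                               rvec, tvec, camera_matrix, dist_coeffs)
    hull = cv2.convexHull(pts.reshape(-1, 1, 2).astype(np.float32))
    roi_area = cv2.contourArea(hull)          # area of the stereoscopic projection of the ROI
    h, w = image_shape[:2]
    return roi_area / float(h * w)            # compared against the preset proportion
```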
  • Step S320 secondary collection and processing of the monitoring image. Wherein, step S320 is further subdivided into step S322 and step S324.
  • Step S322 Collect the enlarged monitoring image.
  • the magnification is calculated according to the proportion calculated in step S310 and the preset proportion. For example, assuming that the area where the changed device is located accounts for 1% of the monitoring image and the preset proportion is 30%, the magnification is sqrt(30) ≈ 5.5, where sqrt represents taking the square root. Correspondingly, a magnification of 5.5 means that the focal length of the device that collects the monitoring image needs to be enlarged by 5.5 times, thereby increasing the proportion of the area where the changed device is located in the enlarged monitoring image.
  • the focal length of the device for collecting monitoring images can be adjusted by conventional technical means, which is not specifically limited here.
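  • The magnification arithmetic above (area scales with the square of the focal length, so the zoom factor is the square root of the ratio of the two proportions) can be written directly:

```python
import math

def magnification(roi_ratio, preset_ratio=0.30):
    """Zoom factor that brings the ROI proportion up to roughly the preset proportion."""
    return math.sqrt(preset_ratio / roi_ratio)

print(round(magnification(0.01), 1))  # 5.5, matching the 1% / 30% example above
```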
  • Step S324 Calculate the pose and camera parameters of the enlarged monitoring image.
  • the pose and camera parameters of the enlarged monitoring image can be calculated, and the specific details will be described in the following specific embodiments related to FIG. 4.
  • Step S330 Update the site model according to the monitoring image or the enlarged monitoring image.
  • if it is determined in step S310 that the area where the device is located is not to be enlarged, the site model is updated using the monitoring image; if it is determined in step S310 that the area where the device is located is to be enlarged, the site model is updated using the enlarged monitoring image obtained in step S320.
  • step S310 if it is determined in step S310 that the area where the device is not to be enlarged, then according to the monitoring image obtained in step S302, as well as the monitoring image pose and camera parameters obtained in step S308, combined with step S306 to know the area where the device is located that has changed , change type and change amount, the equipment model corresponding to the changed equipment can be identified from the prefabricated model library used to build the site model, and then the pose of the equipment model after the change can be determined according to the change type and change amount, Finally adjust the site model to reflect the change in that equipment. For example, suppose a specific device has changed and the change type is Device Added, which means that the device model corresponding to the device needs to be added to the area where the device has changed and the site model updated.
  • As another example, suppose a specific device has changed and the change type is device deletion, which means that the device model corresponding to the device needs to be deleted from the site model. As a further example, suppose a specific device has changed and the change type is device movement, which means that the pose of the device model corresponding to the device needs to be adjusted to reflect the movement of the device.
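  • To make the update logic of step S330 concrete, the following Python sketch dispatches on the change type; the site-model and model-library interfaces (find_by_region, add, remove, set_pose) and the pose-solving helper are hypothetical placeholders, since the application does not prescribe a particular data structure for the site model.

      def update_site_model(site_model, model_library, region, change_type, change_amount,
                            image_pose, camera_params):
          # Identify the prefabricated device model corresponding to the changed device
          # and adjust the site model according to the detected change type.
          device_model = model_library.find_by_region(region)               # hypothetical lookup
          new_pose = solve_device_pose(region, image_pose, camera_params)   # hypothetical solver

          if change_type == "device_added":
              site_model.add(device_model, pose=new_pose)
          elif change_type == "device_deleted":
              site_model.remove(device_model)
          elif change_type in ("device_moved", "device_rotated", "device_moved_and_rotated"):
              # The change amount (moving distance and/or turning distance) refines the new pose.
              site_model.set_pose(device_model, new_pose, change_amount)
          return site_model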
  • FIG. 4 is a schematic flowchart of the steps of secondary collection and processing of the monitoring image shown in FIG. 3 according to an embodiment of the present application.
  • the secondary collection and processing 420 of monitoring images shown in FIG. 4 corresponds to the secondary collection and processing of monitoring images S320 shown in FIG. 3 , and steps S322 and S324 shown in FIG. 3 are expanded and described in more detail.
  • the secondary acquisition and processing 420 of the monitoring image includes the following steps.
  • Step S430 Calculate the magnification according to the proportion of the area where the changed device is located in the monitoring image and the preset proportion.
  • Step S432 After adjusting the focal length according to the magnification, the enlarged monitoring image is obtained.
  • The details of calculating the proportion and the magnification, and of obtaining the enlarged monitoring image after adjusting the focal length, are similar to step S322 shown in FIG. 3 and will not be repeated here.
  • Step S434 Perform image matching between the monitoring image and the enlarged monitoring image to determine the matching point.
  • Performing image matching between the monitoring image and the enlarged monitoring image and determining the matching points refers to extracting, by means of feature extraction, the feature points corresponding to the changed device from both the monitoring image and the enlarged monitoring image, and then matching the two images to determine the matching points.
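  • One feasible way to carry out this matching step is sketched below using OpenCV ORB features and a brute-force matcher; this is only an illustrative choice of feature extractor, not the specific algorithm prescribed by the application.

      import cv2

      def match_images(monitor_img, enlarged_img, max_matches=100):
          # Extract feature points from both images and return matched point pairs.
          orb = cv2.ORB_create(nfeatures=2000)
          kp1, des1 = orb.detectAndCompute(monitor_img, None)
          kp2, des2 = orb.detectAndCompute(enlarged_img, None)

          matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
          matches = sorted(matcher.match(des1, des2), key=lambda m: m.distance)[:max_matches]

          pts_monitor = [kp1[m.queryIdx].pt for m in matches]    # points in the monitoring image
          pts_enlarged = [kp2[m.trainIdx].pt for m in matches]   # corresponding points in the enlarged image
          return pts_monitor, pts_enlarged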
  • Step S436 According to the correlation formula between the pose and the camera parameters of the enlarged monitoring image, first derive the camera parameters from the pose, and then derive the pose from the camera parameters. Specifically, the pose of the monitoring image can be taken as the initial pose of the enlarged monitoring image; the camera parameters of the enlarged monitoring image are derived from the correlation formula with the pose held constant, and the pose is then re-derived with the derived camera parameters held constant, which completes one iterative calculation process.
  • step S436 corresponds to the calculation of the pose and camera parameters of the enlarged monitoring image in step S324 shown in FIG. 3 .
  • calculating the pose of the enlarged monitoring image refers to calculating the pose of the camera in the three-dimensional space coordinate system when capturing the enlarged monitoring image.
  • pose is short for position and orientation; the pose can be represented by six variables, three of which indicate the position and the other three the orientation.
  • Calculating the camera parameters of the enlarged surveillance image refers to calculating the camera parameters when shooting the enlarged surveillance image, such as focal length, image principal point coordinates, distortion parameters, and the like.
  • Calculating the pose of the enlarged monitoring image concerns the external parameters of the camera when shooting the enlarged monitoring image, whereas calculating the camera parameters of the enlarged monitoring image concerns the internal imaging information of the camera when shooting the enlarged monitoring image.
  • Step S436 and step S324 shown in FIG. 3 involve calculating the pose and camera parameters of the enlarged monitoring image, which is different from calculating the pose and camera parameters of the monitoring image in step S308 shown in FIG. 3 in that:
  • The enlarged monitoring image is obtained by adjusting the focal length of the acquisition device that captures the monitoring image according to the magnification and then acquiring the image again. Ideally, therefore, the camera that captures the enlarged monitoring image and the camera that captures the monitoring image should have the same external parameters, that is, the same pose, and adjusting the focal length should only affect the internal imaging information of the camera, that is, the camera parameters.
  • In practice, however, between the moment the monitoring image is captured and the moment the enlarged monitoring image is captured, the acquisition device may be affected by external factors such as shaking caused by wind or vibration, and by internal factors such as aging of the equipment and loosening of the lens, so that the pose and camera parameters of the enlarged monitoring image differ from the pose and camera parameters of the monitoring image, respectively.
  • Step S438 Judge whether the changes of the pose and of the camera parameters of the enlarged monitoring image are each smaller than their respective preset thresholds; if both are smaller than their respective preset thresholds, perform step S440, and if at least one is not smaller than its preset threshold, perform step S436.
  • After step S436 is executed, the pose and camera parameters of the enlarged monitoring image are obtained through one iterative calculation process; in step S438 it is judged whether to end the iteration, and if the iteration end condition is not satisfied, the method returns to step S436 for the next iterative calculation process, until the iteration end condition specified in step S438 is satisfied.
  • Here, the iteration end condition is set such that, after one iterative calculation process in step S436 ends, the changes of the pose and of the camera parameters of the obtained enlarged monitoring image are each smaller than their respective preset thresholds.
  • The change amount of the pose of the enlarged monitoring image refers to the difference between the poses of the enlarged monitoring image before and after one iterative calculation process of step S436, that is, the pose of the enlarged monitoring image before one iterative calculation process of step S436 is performed is compared with the pose of the enlarged monitoring image after that iterative calculation process is performed.
  • Similarly, the change amount of the camera parameters of the enlarged monitoring image refers to the difference between the camera parameters of the enlarged monitoring image before and after one iterative calculation process of step S436, that is, the camera parameters before one iterative calculation process of step S436 is performed are compared with the camera parameters after that iterative calculation process is performed.
  • the respective changes of the pose of the enlarged monitoring image and the camera parameters may correspond to different preset thresholds.
  • For example, the preset threshold corresponding to the change of the pose of the enlarged monitoring image may be set to 0.0001, while the preset threshold corresponding to the change of the camera parameters may be set to 0.001.
  • The iteration end condition is satisfied only when the changes of the pose and of the camera parameters of the enlarged monitoring image are each smaller than their respective preset thresholds.
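  • The alternating iteration of steps S436 and S438 can be summarized by the following sketch; the two solver functions stand in for the correlation formula mentioned in the text and are assumed to exist elsewhere, and the threshold values are the example values given above.

      import numpy as np

      def refine_pose_and_params(init_pose, init_params, matches,
                                 pose_tol=1e-4, param_tol=1e-3, max_iter=50):
          # Alternately re-derive camera parameters and pose until the change of each
          # falls below its preset threshold (the iteration end condition of step S438).
          pose = np.asarray(init_pose, dtype=float)       # initialized with the monitoring image pose
          params = np.asarray(init_params, dtype=float)
          for _ in range(max_iter):
              new_params = derive_params_from_pose(pose, matches)      # hypothetical solver
              new_pose = derive_pose_from_params(new_params, matches)  # hypothetical solver
              d_pose = float(np.max(np.abs(new_pose - pose)))
              d_params = float(np.max(np.abs(new_params - params)))
              pose, params = new_pose, new_params
              if d_pose < pose_tol and d_params < param_tol:
                  break
          return pose, params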
  • Step S440 Output the pose and camera parameters of the enlarged monitoring image.
  • After it is determined in step S438 that the iteration end condition is satisfied, the pose and camera parameters of the enlarged monitoring image that satisfy the iteration end condition are output.
  • The output of step S440 corresponds to the output of step S324 shown in FIG. 3, that is, the calculated pose and camera parameters of the enlarged monitoring image; it is also the output of the secondary collection and processing of the monitoring image in step S320 shown in FIG. 3.
  • FIG. 5 is a schematic flowchart of a step of detecting a device change in the method shown in FIG. 3 according to an embodiment of the present application.
  • the detection device change 506 in FIG. 5 corresponds to step S306 shown in FIG. 3 : “whether a device change is detected”.
  • detecting a device change 506 includes the following steps.
  • Step S510 Acquire a reference image.
  • the reference image refers to a reference image used for judging whether there is a change in the device, which may be a monitoring image determined to have not changed in a previous period, or may be a manually input reference image.
  • Step S512 Obtain a monitoring image.
  • the monitoring image or the monitoring video may be obtained through a mobile phone, a monitoring camera, a security lens or other equipment based on the monocular camera technology. All or part of the frame images in the surveillance video can be extracted as surveillance images. In some exemplary embodiments, a video image can be converted into a framed image through a video frame decimation algorithm.
  • It should be noted that there is no time sequence between steps S510 and S512; they may be performed simultaneously, or separately in any order.
  • Step S514 Input the reference image and the monitoring image into the neural network model.
  • the reference image and the monitoring image are input into the neural network model, and the neural network model is used to determine whether any equipment has changed in the monitoring image, and the change type and corresponding change amount of the equipment that has changed.
  • Step S516 Determine whether there is a change in the device through the neural network model, if there is a change, execute step S518, and if there is no change, execute step S520.
  • According to the output result of the neural network model, it can be known whether any device in the monitoring image has changed. When a device in the monitoring image has changed, step S518 is performed, and the monitoring image containing the changed device, the region where the device is located and the change type are output; when no device in the monitoring image has changed, step S520 can be performed and the reference image is replaced with the monitoring image, that is, the monitoring image is used as the reference image the next time the neural network model is used to determine whether there is a device change.
  • the result output by the neural network model includes the change type of the device that has changed and the change amount corresponding to the change type.
  • the variation type is one preset variation type among a plurality of preset variation types.
  • the multiple preset change types cover the vast majority of possible changes to a device, including: adding a device, deleting a device, moving a device, and/or rotating a device.
  • the plurality of preset change types may also include a combination of the above basic change types, for example, including a change in which a device moves and rotates at the same time. Therefore, the plurality of preset change types may further include: device addition, device deletion, device movement, device rotation, device movement and rotation, and the like.
  • the training method of the neural network model used in step S516 will be described in detail in the following specific embodiments related to FIG. 6 .
  • The reference image can be understood as the monitoring image determined in the previous period to contain no change. Device addition means that the device does not exist in the reference image but exists in the current monitoring image. Device deletion means that the device exists in the reference image but does not exist in the current monitoring image.
  • Device movement means that the position of the device in the current monitoring image has changed compared to the position of the device in the reference image.
  • Device rotation means that the orientation of the device in the current surveillance image has changed compared to the orientation of the device in the reference image.
  • the types of changes such as device addition, device deletion, device movement, and device rotation can be preset, and by comparing the reference image and the monitoring image, the neural network model can determine whether there is a change and identify the type of change.
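  • For illustration, the preset change types could be represented by a simple enumeration such as the following; the names are illustrative only.

      from enum import Enum, auto

      class ChangeType(Enum):
          DEVICE_ADDED = auto()              # absent in the reference image, present in the monitoring image
          DEVICE_DELETED = auto()            # present in the reference image, absent in the monitoring image
          DEVICE_MOVED = auto()              # position changed relative to the reference image
          DEVICE_ROTATED = auto()            # orientation changed relative to the reference image
          DEVICE_MOVED_AND_ROTATED = auto()  # combination of movement and rotation
          NO_CHANGE = auto()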
  • In some exemplary embodiments, the trained neural network model may be made more sensitive to changes of individual device models; for example, for the region in the monitoring image where a device identified as a specific device model is located, the coefficients of the stochastic gradient descent algorithm can be set so that the output of the classification layer is more sensitive to the input variables that characterize the degree of change of that region.
  • In this way, individual devices in the scene where the site is located can be marked as objects of special attention, whose changes are detected with higher sensitivity, while other devices can be marked as objects of general attention, whose changes are detected with lower sensitivity.
  • Step S518 output the area where the changed device is located, the change type and the corresponding change amount.
  • When it is determined by the neural network model in step S516 that a device in the monitoring image has changed, the region where the changed device is located, the change type and the corresponding change amount are output.
  • Step S520 Update the reference image with the monitoring image.
  • When no device in the monitoring image has changed, the reference image can be replaced with the current monitoring image. That is to say, if it is determined according to the output result of the neural network model that no device has changed in the monitoring image of the current period, the monitoring image of the current period can be used as the reference image for the monitoring image acquired in the next period. For example, device changes can be detected on a fixed schedule every day, with monitoring images collected and device changes detected at 9:00 am and 10:00 am respectively. Assuming that no changed device is found in the monitoring image collected at 9:00 am, the monitoring image collected at 9:00 am can be used to replace the reference image and compared with the monitoring image collected at 10:00 am, in order to determine whether there is a changed device in the monitoring image collected at 10:00 am.
  • In this way, in conjunction with the steps shown in FIG. 5, by inputting the reference image and the monitoring image into the trained neural network model, it is determined whether there is a device change in the monitoring image, the region where the changed device is located, the change type and the corresponding change amount are output, and when no device in the monitoring image has changed, the reference image is updated with the current monitoring image.
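  • The periodic detection flow of FIG. 5, including the replacement of the reference image when no change is found, can be sketched as follows; the model interface returning (changed, region, change_type, change_amount) is a hypothetical placeholder.

      def detect_changes_over_time(model, image_stream, reference_image):
          # Run the trained change-detection model on successive monitoring images,
          # replacing the reference image whenever no change is detected.
          for monitor_image in image_stream:                        # e.g. images collected at 9:00 am, 10:00 am, ...
              changed, region, change_type, amount = model(reference_image, monitor_image)
              if changed:
                  yield monitor_image, region, change_type, amount  # cf. step S518
              else:
                  reference_image = monitor_image                   # cf. step S520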
  • FIG. 6 is a schematic flowchart of the training method of the neural network model shown in FIG. 5 according to an embodiment of the present application.
  • The training method 600 of the neural network model shown in FIG. 6 is used to train the neural network model used in step S516 of FIG. 5 for judging whether a device has changed; the trained neural network model also outputs the region where the changed device is located, the change type and the corresponding change amount.
  • the training method 600 of the neural network model includes the following steps.
  • Step S610 Acquire a reference image and a training image.
  • The neural network model compares the reference image and the training image and gives a prediction result; the parameters of the neural network model are then adjusted according to the feedback of the prediction result, so as to achieve the purpose of training.
  • the reference image refers to a reference image with no equipment changes during the training of the neural network model.
  • The training image is an image used, in the process of training the neural network model, for the neural network model to compare against the reference image and determine whether any device in the training image has changed relative to the reference image.
  • The training method of the neural network model adopts supervised learning, that is, the training image carries a label, and the label includes the following information: whether the training image contains a device that has changed relative to the reference image, the change type of the changed device, and the corresponding change amount.
  • the prediction result of the neural network model can be evaluated, which is beneficial to adjust the parameters of the neural network model.
  • It should be noted that obtaining a reference image is also mentioned in step S510 of FIG. 5. The reference image mentioned in FIG. 6 is used in the process of training the neural network model, whereas the reference image mentioned in FIG. 5 is used in the execution process of the trained neural network model.
  • The method for training a neural network model in the specific embodiment shown in FIG. 6 is a method for training a multi-task neural network model, so the trained neural network model can not only predict whether there is a changed device, but also output the region where the changed device is located, the change type, and the corresponding change amount.
  • Step S620 Compare the reference image and the training image, and determine the region where the changed device is located in the training image, the change type and the corresponding change amount.
  • As mentioned above, the training image carries a label, and the label includes the following information: whether the training image contains a device that has changed relative to the reference image, the change type of the changed device and the corresponding change amount.
  • The change type is one preset change type among multiple preset change types; the multiple preset change types include device addition, device deletion, device movement, device rotation, etc., and may also include combinations such as a device that both moves and rotates. It should be understood that the details of the multiple preset change types involved in step S620 are consistent with the details of the multiple preset change types involved in "judging whether a device has changed through the neural network model" in step S516. This is because, in the specific embodiment shown in FIG. 5, step S516 is performed using the neural network model trained by the method shown in FIG. 6.
  • Step S630 Select a sub-loss function corresponding to the change type from the plurality of sub-loss functions, and calculate the sub-loss function according to the change type and the corresponding change amount.
  • In step S620, both the reference image and the training image obtained in step S610 are input into the neural network model to be trained, and the output result of the neural network model to be trained is obtained, that is, the area where the changed device is located in the training image, the change type and the corresponding change amount. These outputs are used to calculate the loss function so as to adjust the parameters of the neural network model to be trained. It should be understood that the method for training a neural network model in the specific embodiment shown in FIG. 6 is a method for training a multi-task neural network model, so the output result includes not only the output required to perform the prediction task, that is, whether there is a changed device and its change type, but also the output required to perform the quantization task, that is, the change amount corresponding to the change type.
  • To this end, multiple sub-loss functions are designed; the multiple sub-loss functions correspond one-to-one to the multiple preset change types, and each of the multiple sub-loss functions is determined based on the change amount corresponding to the preset change type corresponding to that sub-loss function. In this way, the neural network model can be trained to perform multiple tasks.
  • In a possible implementation, the multiple preset change types include device addition, and the change amount corresponding to device addition includes the maximum value of the pixel size of the monitoring image. The sub-loss function corresponding to the preset change type of device addition refers to formula (1).
  • L_ADD = Loss(p_max, P_ADD, Y)  (1)
  • Here, L_ADD represents the sub-loss function corresponding to the preset change type of device addition; p_max represents the maximum pixel size of the monitoring image; P_ADD represents the probability, predicted by the neural network model to be trained, that the change type is device addition; and Y represents the label that comes with the training image in step S610.
  • In this way, the probability that the change type is device addition, predicted by the neural network model to be trained when performing the prediction task, and the corresponding change amount predicted when performing the quantization task, can be compared with the information carried in the label, and this comparison serves as the basis for adjusting the parameters of the neural network model to be trained.
  • the plurality of preset change types include device deletion, and the change amount corresponding to the device deletion includes a negative value of the maximum value of the pixel size of the monitoring image.
  • The sub-loss function corresponding to the preset change type of device deletion refers to formula (2).
  • L_DEL = Loss(-p_max, P_DEL, Y)  (2)
  • Here, L_DEL represents the sub-loss function corresponding to the preset change type of device deletion; -p_max represents the negative value of the maximum pixel size of the monitoring image; P_DEL represents the probability, predicted by the neural network model to be trained, that the change type is device deletion; and Y represents the label that comes with the training image in step S610.
  • the plurality of preset change types include equipment movement, and the amount of change corresponding to the equipment movement includes the moving distance of the center point of the changed equipment.
  • The sub-loss function corresponding to the preset change type of device movement refers to formula (3).
  • L_MOV = Loss(Δd, P_MOV, Y)  (3)
  • Here, L_MOV represents the sub-loss function corresponding to the preset change type of device movement; Δd represents the moving distance of the center point of the changed device; P_MOV represents the probability, predicted by the neural network model to be trained, that the change type is device movement; and Y represents the label that comes with the training image in step S610.
  • the plurality of preset change types include device rotation, and the amount of change corresponding to the device rotation includes the turning distance of the line connecting the edge and the center point of the changed device.
  • The sub-loss function corresponding to the preset change type of device rotation refers to formula (4).
  • L_ROTATE = Loss(ΔA, P_ROTATE, Y)  (4)
  • Here, L_ROTATE represents the sub-loss function corresponding to the preset change type of device rotation; ΔA represents the turning distance of the line connecting the edge and the center point of the changed device; P_ROTATE represents the probability, predicted by the neural network model to be trained, that the change type is device rotation; and Y represents the label that comes with the training image in step S610.
  • The multiple preset change types include simultaneous movement and rotation of the device, and the change amount corresponding to simultaneous movement and rotation includes the moving distance of the center point of the changed device and the turning distance of the line connecting the edge and the center point of the changed device.
  • the sub-loss function corresponding to the preset change type of simultaneous movement and rotation of the device refers to formula (5).
  • L_MOV_ROTATE = Loss(Δd + ΔA, f(P_MOV, P_ROTATE), Y)  (5)
  • In this way, the probability of simultaneous movement and rotation of the device, predicted by the neural network model to be trained when performing the prediction task, and the corresponding change amount of simultaneous movement and rotation predicted when performing the quantization task, are compared with the information carried in the label, and this comparison serves as the basis for adjusting the parameters of the neural network model to be trained.
  • Step S640 Weighted addition of multiple sub-loss functions to obtain a total loss function.
  • each sub-loss function calculated in step S630 is weighted and added by using the hyperparameter as a weight to obtain a total loss function, with reference to formula (6).
  • L_ALL = α1·L_ADD + α2·L_DEL + α3·L_MOV + α4·L_ROTATE + α5·L_MOV_ROTATE  (6)
  • Here, L_ADD, L_DEL, L_MOV, L_ROTATE and L_MOV_ROTATE represent the sub-loss functions corresponding to the preset change types of device addition, device deletion, device movement, device rotation, and simultaneous device movement and rotation, respectively; α1 to α5 represent the hyperparameters used as weight coefficients of the respective sub-loss functions; and L_ALL represents the total loss function.
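  • A hedged Python sketch of how formulas (1) to (6) could be realized is given below; the generic Loss(·) term is written here as a classification term plus a regression term purely for illustration, since the application does not fix its exact form, and the weights alphas are placeholder hyperparameters.

      import torch
      import torch.nn.functional as F

      def total_loss(pred_probs, pred_amounts, label_types, label_amounts, p_max, alphas):
          # Weighted sum of the per-change-type sub-losses, cf. formula (6).
          # pred_probs / pred_amounts: predicted probabilities and change amounts per change type;
          # label_types / label_amounts: ground truth carried by the training-image label.
          targets = {"add": p_max, "delete": -p_max,
                     "move": label_amounts["move"], "rotate": label_amounts["rotate"],
                     "move_rotate": label_amounts["move"] + label_amounts["rotate"]}
          loss = torch.zeros(())
          for i, key in enumerate(["add", "delete", "move", "rotate", "move_rotate"]):
              cls_term = F.binary_cross_entropy(pred_probs[key], label_types[key])        # prediction task
              reg_term = F.l1_loss(pred_amounts[key],
                                   torch.as_tensor(targets[key], dtype=torch.float32))    # quantization task
              loss = loss + alphas[i] * (cls_term + reg_term)
          return loss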
  • Step S650 Adjust the parameters of the neural network model through the total loss function to obtain a trained neural network model.
  • The parameters of the neural network model can be adjusted according to the output of the total loss function through conventional algorithms for adjusting neural network models, such as the back-propagation algorithm and the gradient descent algorithm, and the trained neural network model is obtained after multiple iterative adjustments.
  • the total loss function may further include other loss functions calculated according to the region where the changed device is located in the training image, so as to optimize the training effect.
  • the total loss function is obtained by the weighted sum of multiple sub-loss functions corresponding to multiple preset change types one-to-one, and then the parameters of the neural network model are adjusted through the total loss function.
  • a trained neural network model is obtained, and the output result of the trained neural network model includes the change type of the device that has changed and the change amount corresponding to the change type, which is conducive to quickly identifying the change type and outputting the change amount.
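  • Combining the total loss with a conventional gradient-descent update, one training iteration could look like the sketch below; PyTorch is used for illustration, and the multi-task network, the optimizer and the total_loss function from the previous sketch are assumed to be defined elsewhere.

      def train_step(model, optimizer, reference_image, training_image, label, p_max, alphas):
          # One iterative adjustment of the network parameters via the total loss function.
          optimizer.zero_grad()
          pred_probs, pred_amounts = model(reference_image, training_image)   # multi-task outputs
          loss = total_loss(pred_probs, pred_amounts,
                            label["types"], label["amounts"], p_max, alphas)
          loss.backward()      # back-propagation of the total loss
          optimizer.step()     # gradient-descent update of the model parameters
          return float(loss)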
  • FIG. 7 is a structural block diagram of the trained neural network model shown in FIG. 6 provided by an embodiment of the present application. It should be understood that FIG. 7 only schematically shows a possible structure and should not be understood as the only structure.
  • the convolutional neural network model 700 may include an input layer 710 , a convolutional/pooling layer 720 , where the pooling layer is optional, and a neural network layer 730 .
  • the structure of the convolutional layer/pooling layer 720 is described in detail below.
  • The convolutional layer/pooling layer 720 may include layers 721 to 726 as examples. In one example, layer 721 is a convolutional layer, layer 722 is a pooling layer, layer 723 is a convolutional layer, layer 724 is a pooling layer, layer 725 is a convolutional layer, and layer 726 is a pooling layer. In another example, layers 721 and 722 are convolutional layers, layer 723 is a pooling layer, layers 724 and 725 are convolutional layers, and layer 726 is a pooling layer. That is, the output of a convolutional layer can be used as the input of a subsequent pooling layer, or as the input of another convolutional layer to continue the convolution operation.
  • the convolution layer 721 may include many convolution operators, which are also called kernels, and their role in image processing is equivalent to a filter that extracts specific information from the input image matrix.
  • The convolution operator can essentially be a weight matrix, which is usually pre-defined. In the process of convolving an image, the weight matrix usually slides over the input image one pixel at a time along the horizontal direction (or two pixels at a time, depending on the value of the stride), so as to complete the extraction of specific features from the image. The size of this weight matrix should be related to the size of the image. It should be noted that the depth dimension of the weight matrix is the same as the depth dimension of the input image.
  • the weight matrix will extend to the entire depth of the input image. Therefore, convolution with a single weight matrix will produce a single depth dimension of the convolutional output, but in most cases a single weight matrix is not used, but multiple weight matrices of the same dimension are applied.
  • the output of each weight matrix is stacked to form the depth dimension of the convolutional image.
  • Different weight matrices can be used to extract different features in the image. For example, one weight matrix is used to extract image edge information, another weight matrix is used to extract specific colors of the image, and another weight matrix is used to extract unwanted noise in the image.
  • the dimensions of the multiple weight matrices are the same, and the dimension of the feature maps extracted from the multiple weight matrices with the same dimensions are also the same, and then the multiple extracted feature maps with the same dimensions are combined to form the output of the convolution operation.
  • the weight values in these weight matrices need to be obtained through a lot of training in practical applications, and each weight matrix formed by the weight values obtained by training can extract information from the input image, thereby helping the convolutional neural network 700 to make correct predictions.
  • The initial convolutional layer (for example, 721) often extracts more general features, which can also be called low-level features. As the depth of the convolutional neural network 700 increases, the features extracted by the later convolutional layers (for example, 726) become more and more complex, for example features with high-level semantics; features with higher semantics are more suitable for the problem to be solved.
  • A pooling layer often follows a convolutional layer: this can be one convolutional layer followed by one pooling layer, or multiple convolutional layers followed by one or more pooling layers.
  • the pooling layer may include an average pooling operator and/or a max pooling operator for sampling the input image to obtain a smaller size image.
  • the average pooling operator can calculate the average value of the pixel values in the image within a certain range.
  • the max pooling operator can take the pixel with the largest value within a specific range as the result of max pooling. Also, just as the size of the weight matrix used in the convolutional layer should be related to the size of the image, the operators in the pooling layer should also be related to the size of the image.
  • the size of the output image after processing by the pooling layer can be smaller than the size of the image input to the pooling layer, and each pixel in the image output by the pooling layer represents the average or maximum value of the corresponding sub-region of the image input to the pooling layer.
  • the structure of the neural network layer 730 is described in detail below.
  • After being processed by the convolutional layer/pooling layer 720, the convolutional neural network 700 is not yet sufficient to output the required output information, because, as described above, the convolutional layer/pooling layer 720 only extracts features and reduces the parameters brought by the input image. However, in order to generate the final output information (the required class information or other relevant information), the convolutional neural network 700 needs to use the neural network layer 730 to generate one output, or a set of outputs whose number equals the required number of classes. Therefore, the neural network layer 730 may include multiple hidden layers (731, 732 to 733 as shown in FIG. 7) and an output layer 740; the parameters contained in the multiple hidden layers may be pre-trained based on relevant training data of a specific task type, and the task type may include, for example, image recognition, image classification, image super-resolution reconstruction, and so on. It should be understood that the three hidden layers 1 to 3 shown in FIG. 7 are only exemplary, and a different number of hidden layers may be included in other embodiments.
  • the output layer 740 has a loss function similar to the classification cross entropy, and is specifically used to calculate the prediction error.
  • Once the forward propagation of the entire convolutional neural network 700 (the propagation from 710 to 740 in FIG. 7) is completed, back propagation (the propagation from 740 to 710 in FIG. 7) starts to update the weight values and biases of the aforementioned layers, so as to reduce the loss of the convolutional neural network 700 and the error between the result output by the convolutional neural network 700 through the output layer and the ideal result.
  • the convolutional neural network 700 shown in FIG. 7 is only used as an example of a convolutional neural network. In a specific application, the convolutional neural network may also exist in the form of other network models.
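  • A minimal PyTorch sketch of a structure in the spirit of FIG. 7 (alternating convolutional and pooling layers followed by hidden fully connected layers and an output layer) is shown below; the layer sizes and the number of output classes are arbitrary example values, not values specified by the application.

      import torch.nn as nn

      class SmallConvNet(nn.Module):
          def __init__(self, num_classes=5):
              super().__init__()
              # Convolutional/pooling part (cf. layers 721 to 726).
              self.features = nn.Sequential(
                  nn.Conv2d(3, 16, kernel_size=3, padding=1), nn.ReLU(),
                  nn.MaxPool2d(2),
                  nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(),
                  nn.MaxPool2d(2),
              )
              # Hidden layers and output layer (cf. layers 731 to 733 and 740).
              self.classifier = nn.Sequential(
                  nn.Flatten(),
                  nn.LazyLinear(128), nn.ReLU(),
                  nn.Linear(128, num_classes),
              )

          def forward(self, x):
              return self.classifier(self.features(x))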
  • FIG. 8 is a structural block diagram of a site model updating system provided by an embodiment of the present application.
  • the site model updating system 800 includes: an image acquisition device 802 , an interface circuit 804 , a device change detection device 810 , a processor 806 , and a monitor image pose and camera parameter memory 808 .
  • The device change detection apparatus 810 further includes a neural network processor 820, a monitoring image memory 812 and a reference image memory 814. It should be understood that the device change detection apparatus 810 is configured to perform, for example, the operation of detecting a device change in step S306 shown in FIG. 3.
  • the device change detection device 810 includes a reference image memory 814 for storing reference images, a monitoring image memory 812 for storing monitoring images, and a neural network processor 820 .
  • the neural network processor 820 stores a neural network model or an equivalent machine learning algorithm, and is used to perform step S516 to determine whether there is a device change and output the region where the changed device is located, the type of change, and the corresponding amount of change.
  • The neural network model stored in the neural network processor 820 is obtained by training with the neural network model training method shown in FIG. 6, and in a possible implementation manner may have the structure of the convolutional neural network model 700 shown in FIG. 7.
  • the image acquisition device 802 captures the monitoring image of the site in real time, and stores the monitoring image in the monitoring image memory 812 through the interface circuit 804 .
  • The processor 806 performs the operations of steps S308 to S330 shown in FIG. 3, for example calculating the monitoring image pose and camera parameters, which can be stored in the monitoring image pose and camera parameter memory 808.
  • the processor 806 also performs the operation of step S310 to determine whether the area where the device is located needs to be enlarged.
  • If the area needs to be enlarged, the processor 806 instructs the image acquisition device 802 to collect the enlarged monitoring image, and the processor 806 then calculates the pose and camera parameters of the enlarged monitoring image and finally performs step S330 to update the site model.
  • FIG. 9 is a structural block diagram of the neural network processor shown in FIG. 8 according to an embodiment of the present application.
  • the neural network processor 920, the external memory 960 and the main processor 950 constitute an overall system architecture.
  • the external memory 960 shown in FIG. 9 may include the monitoring image pose and camera parameter memory 808 shown in FIG. 8 , which refers to a memory that exists externally independently of the neural network processor 920 .
  • The main processor 950 shown in FIG. 9 may include the processor 806 shown in FIG. 8, which can be understood as a main processor for processing tasks other than the neural network algorithm. As shown in FIG. 9, the core part of the neural network processor 920 is the operation circuit 903, and the controller 904 controls the operation circuit 903 to extract data from the memory (the weight memory or the input memory) and perform operations.
  • the operation circuit 903 includes multiple processing units (Process Engine, PE).
  • the arithmetic circuit 903 is a two-dimensional systolic array.
  • the arithmetic circuit 903 may also be a one-dimensional systolic array or other electronic circuitry capable of performing mathematical operations such as multiplication and addition.
  • the arithmetic circuit 903 is a general-purpose matrix processor.
  • the operation circuit 903 fetches the data corresponding to the matrix B from the weight memory 902 and buffers it on each PE in the operation circuit 903 .
  • the operation circuit 903 fetches the data of the matrix A from the input memory 901 and performs the matrix operation on the matrix B, and stores the partial result or the final result of the matrix in the accumulator 908 .
  • the vector calculation unit 907 can further process the output of the operation circuit 903, such as vector multiplication, vector addition, exponential operation, logarithmic operation, size comparison and so on.
  • the vector calculation unit 907 can be used for network calculation of non-convolutional/non-FC layers in the neural network, such as pooling (Pooling), batch normalization (Batch Normalization), local response normalization (Local Response Normalization), etc. .
  • vector computation unit 907 stores the processed output vectors to unified buffer 906 .
  • the vector calculation unit 907 may apply a non-linear function to the output of the arithmetic circuit 903, such as a vector of accumulated values, to generate activation values.
  • the vector computation unit 907 generates normalized values, merged values, or both.
  • The vector of processed outputs can be used as an activation input to the operation circuit 903, for example for use in subsequent layers of the neural network. Therefore, according to specific requirements, the neural network processor shown in FIG. 8 may run the neural network algorithm in the operation circuit 903 or the vector calculation unit 907 shown in FIG. 9.
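  • The division of labor between the operation circuit and the vector calculation unit can be illustrated with plain NumPy; this mirrors only the arithmetic, not the actual behavior of the NPU hardware.

      import numpy as np

      A = np.random.rand(4, 8)    # input data (matrix A taken from the input memory)
      B = np.random.rand(8, 16)   # weight data (matrix B taken from the weight memory)

      acc = A @ B                              # matrix operation, accumulated in the accumulator
      activated = np.maximum(acc, 0.0)         # non-linear function applied by the vector unit (e.g. ReLU)
      normalized = (activated - activated.mean()) / (activated.std() + 1e-6)  # e.g. a normalization step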
  • the unified memory 906 is used for storing input data and output data.
  • The storage unit access controller 905 (Direct Memory Access Controller, DMAC) transfers the input data in the external memory to the input memory 901 and/or the unified memory 906, stores the weight data in the external memory into the weight memory 902, and stores the data in the unified memory 906 into the external memory.
  • the bus interface unit (Bus Interface Unit, BIU) 910 is used to realize the interaction between the main CPU, the DMAC and the instruction fetch memory 909 through the bus.
  • the instruction fetch buffer (instruction fetch buffer) 909 connected with the controller 904 is used to store the instructions used by the controller 904; the controller 904 is used to call the instructions cached in the instruction memory 909, so as to realize the working process of controlling the operation accelerator.
  • The unified memory 906, the input memory 901, the weight memory 902 and the instruction fetch memory 909 are all on-chip memories, while the external memory is the memory outside the NPU; the external memory can be a double data rate synchronous dynamic random access memory (Double Data Rate Synchronous Dynamic Random Access Memory, DDR SDRAM), a high bandwidth memory (High Bandwidth Memory, HBM) or another readable and writable memory.
  • the specific embodiments provided herein may be implemented in any one or combination of hardware, software, firmware or solid state logic circuits, and may be implemented in conjunction with signal processing, control and/or special purpose circuits.
  • the apparatus or apparatus provided by the specific embodiments of the present application may include one or more processors (eg, microprocessor, controller, digital signal processor (DSP), application specific integrated circuit (ASIC), field programmable gate array (FPGA) ), etc.), these processors process various computer-executable instructions to control the operation of a device or apparatus.
  • the device or apparatus provided by the specific embodiments of the present application may include a system bus or a data transmission system that couples various components together.
  • A system bus may include any one or a combination of different bus structures, such as a memory bus or memory controller, a peripheral bus, a universal serial bus, and/or a processor or local bus utilizing any of a variety of bus architectures.
  • the equipment or apparatus provided by the specific embodiments of the present application may be provided independently, may be a part of a system, or may be a part of other equipment or apparatus.
  • Embodiments provided herein may include or be combined with computer-readable storage media, such as one or more storage devices capable of providing non-transitory data storage.
  • The computer-readable storage medium/storage device may be configured to hold data, programs and/or instructions that, when executed by the processors of the devices or apparatuses provided by the specific embodiments of the present application, cause these devices or apparatuses to perform the relevant operations.
  • Computer-readable storage media/storage devices may include one or more of the following characteristics: volatile, non-volatile, dynamic, static, read/write, read-only, random access, sequential access, location addressability, File addressability and content addressability.
  • the computer-readable storage medium/storage device may be integrated into the device or apparatus provided by the specific embodiments of the present application or belong to a public system.
  • Computer readable storage media/storage devices may include optical storage devices, semiconductor storage devices and/or magnetic storage devices, etc., and may also include random access memory (RAM), flash memory, read only memory (ROM), erasable and programmable Read Only Memory (EPROM), Electrically Erasable Programmable Read Only Memory (EEPROM), Registers, Hard Disk, Removable Disk, Recordable and/or Rewritable Compact Disc (CD), Digital Versatile Disc (DVD), Mass storage media device or any other form of suitable storage media.

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Signal Processing (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Software Systems (AREA)
  • Evolutionary Computation (AREA)
  • General Health & Medical Sciences (AREA)
  • Health & Medical Sciences (AREA)
  • Artificial Intelligence (AREA)
  • Biomedical Technology (AREA)
  • General Engineering & Computer Science (AREA)
  • Molecular Biology (AREA)
  • Computing Systems (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Computer Graphics (AREA)
  • Computational Linguistics (AREA)
  • Data Mining & Analysis (AREA)
  • Biophysics (AREA)
  • Mathematical Physics (AREA)
  • Geometry (AREA)
  • Databases & Information Systems (AREA)
  • Medical Informatics (AREA)
  • Biodiversity & Conservation Biology (AREA)
  • Architecture (AREA)
  • Computer Hardware Design (AREA)
  • Closed-Circuit Television Systems (AREA)
  • Studio Devices (AREA)

Abstract

The present application discloses a site model updating method and system in the field of artificial intelligence. The method includes: acquiring a monitoring image, and determining, from the acquired monitoring image, the change type of a device that has changed and the change amount corresponding to the change type; calculating the pose and camera parameters of the monitoring image according to the monitoring image and a site model; determining the pose of the changed device according to the pose and camera parameters of the monitoring image; and updating the site model according to the pose of the changed device, the change type, and the change amount corresponding to the change type.

Description

站点模型更新方法及系统
本申请要求于2020年12月16日提交中国国家知识产权局、申请号为202011487305.1、发明名称为“站点模型更新方法及系统”的中国专利申请的优先权,其全部内容通过引用结合在本申请中。
技术领域
本申请涉及人工智能领域,具体涉及站点模型更新方法及系统。
背景技术
在多种应用场景下需要针对特定地点的站点构建站点模型并更新站点模型,从而为该站点的设计和硬件安装等环节提供数据支持,提高设计效率和资产管理,另外也可以根据该站点的实际变化而提供决策指引或者安全警告。例如,站点的实际变化可能包括关键设备的位置变化或者朝向改变甚至设备损毁,从而带来安全性或者系统性能的问题,并需要及时采取措施。其中,站点的含义根据具体应用场景而定。例如在电信通讯的应用场景,站点可以理解成涉及网络部署与集成业务的网络基站、中继站或者通讯中枢。再例如在智能城市安防监控的应用场景,站点可以理解成在交通指示系统。再例如在电力运输的应用场景,站点可以理解成光伏发电系统或者继电站或者电力输送枢纽。再例如在石油行业的应用场景,站点可以理解成加油站或者炼油站。在这些应用场景中,需要监测站点的实际变化,也需要实时采集站点的监测数据并确定关键设备是否发生变化,并及时更新站点模型。现有技术中,一般通过人工上站的方式进行数据采集从而发现站点的实际变化并据此更新站点模型。然而,通过人工上站的方式不仅具有耗费人力和成本高的缺点,而且往往不能及时安排人员上站从而无法做到及时采集站点的监测数据并更新站点模型。
为此,需要一种技术方案来实现对站点实际变化的实时监测、自动判断关键设备是否发生变化以及更新站点模型。
发明内容
本申请实施例为了解决对站点实际变化的实时监测、自动判断关键设备是否发生变化以及更新站点模型的技术难题,通过结合单目摄像技术和深度学习算法来自动识别发生变化的设备和变化类型,从而实现了自动检测站点变化、采集站点数据并及时更新站点模型。
第一方面,本申请实施例提供了一种站点模型更新方法。该方法包括:获取监控图像,通过获取到的监控图像,确定发生变化的设备的变化类型以及与变化类型对应的变化量;根据监控图像和站点模型,计算监控图像的位姿和相机参数;根据监控图像的位姿和相机参数,确定发生变化的设备的位姿;以及根据发生变化的设备的位姿、变化类型以及与变化类型对应的变化量,更新站点模型。
第一方面所描述的技术方案,通过自动判断监控图像中是否存在发生变化的设备,以及进一步根据多个预设变化类型确定发生变化的设备的变化类型和对应变换量,从而实现了自动检测站点变化、采集站点数据并及时更新站点模型。
根据第一方面,在一种可能的实现方式中,通过将监控图像输入神经网络模型从而确定 发生变化的设备的变化类型以及与变化类型对应的变化量,变化类型是多个预设变化类型中的一个预设变化类型。
根据第一方面,在一种可能的实现方式中,神经网络模型通过使用损失函数训练得到。其中,损失函数包括多个子损失函数的加权之和,多个子损失函数与多个预设变化类型一一对应,多个子损失函数的每一个子损失函数根据与该子损失函数对应的预设变化类型所对应的变化量确定。如此,通过将监控图像输入神经网络模型,以及通过分别设计不同的子损失函数,实现训练该神经网络模型用于执行多种任务包括快速判断变化类型和对应变换量。
根据第一方面,在一种可能的实现方式中,多个预设变化类型包括设备新增,设备新增所对应的变化量包括监控图像的像素大小的最大值。如此,实现了快速判断变化类型是否为设备新增以及对应变化量。
根据第一方面,在一种可能的实现方式中,多个预设变化类型包括设备删除,设备删除所对应的变化量包括监控图像的像素大小的最大值的负值。如此,实现了快速判断变化类型是否为设备删除以及对应变化量。
根据第一方面,在一种可能的实现方式中,多个预设变化类型包括设备移动,设备移动所对应的变化量包括发生变化的设备的中心点的移动距离。如此,实现了快速判断变化类型是否为设备移动以及对应变化量。
根据第一方面,在一种可能的实现方式中,多个预设变化类型包括设备旋转,设备旋转所对应的变化量包括发生变化的设备的边缘与中心点的连线的转向距离。如此,实现了快速判断变化类型是否为设备旋转以及对应变化量。
根据第一方面,在一种可能的实现方式中,多个预设变化类型包括设备同时移动和旋转,设备同时移动和旋转所对应的变化量包括发生变化的设备的中心点的移动距离以及发生变化的设备的边缘与中心点的连线的转向距离。如此,实现了快速判断变化类型是否为设备同时移动和旋转以及对应变化量。
根据第一方面,在一种可能的实现方式中,方法还包括:确定发生变化的设备所在区域在监控图像中的占比;比较占比和预设占比;当占比小于预设占比时,获得放大后监控图像;根据放大后监控图像和站点模型,计算放大后监控图像的位姿和相机参数;根据放大后监控图像的位姿和相机参数,更新站点模型。如此,实现了获得放大后监控图像;根据放大后监控图像和站点模型,计算放大后监控图像的位姿和相机参数;根据放大后监控图像的位姿和相机参数,更新站点模型。
根据第一方面,在一种可能的实现方式中,放大后监控图像根据放大倍数获得,放大倍数根据占比和预设占比确定。如此,实现了放大倍数的计算。
根据第一方面,在一种可能的实现方式中,放大后监控图像的位姿和相机参数根据放大倍数、监控图像的位姿和相机参数确定。如此,实现了计算放大后监控图像的位姿和相机参数。
第二方面,本申请实施例提供了一种芯片系统,其特征在于,芯片系统应用于电子设备;芯片系统包括一个或多个接口电路,以及一个或多个处理器;接口电路和处理器通过线路互联;接口电路用于从电子设备的存储器接收信号,并向处理器发送信号,信号包括存储器中存储的计算机指令;当处理器执行计算机指令时,电子设备执行如第一方面中任意一项方法。
第二方面所描述的技术方案,通过自动判断监控图像中是否存在发生变化的设备,以及进一步根据多个预设变化类型确定发生变化的设备的变化类型和对应变换量,从而实现了自 动检测站点变化、采集站点数据并及时更新站点模型。
第三方面,本申请实施例提供了一种计算机可读存储介质,其特征在于,计算机可读存储介质存储有计算机程序指令,计算机程序指令当被处理器执行时使处理器执行如第一方面中任一项的方法。
第三方面所描述的技术方案,通过自动判断监控图像中是否存在发生变化的设备,以及进一步根据多个预设变化类型确定发生变化的设备的变化类型和对应变换量,从而实现了自动检测站点变化、采集站点数据并及时更新站点模型。
第四方面,本申请实施例提供了一种计算机程序产品,其特征在于,计算机程序产品包括计算机指令,当计算机指令在电子设备上运行时,使得电子设备执行如第一方面中任一项的方法。
第四方面所描述的技术方案,通过自动判断监控图像中是否存在发生变化的设备,以及进一步根据多个预设变化类型确定发生变化的设备的变化类型和对应变换量,从而实现了自动检测站点变化、采集站点数据并及时更新站点模型。
第五方面,本申请实施例提供了一种站点模型更新系统。系统包括:设备变化检测装置,其中,设备变化检测装置通过监控图像,确定发生变化的设备的变化类型以及与变化类型对应的变化量;和处理器。其中,处理器用于:获取监控图像;根据监控图像和站点模型,计算监控图像的位姿和相机参数;根据监控图像的位姿和相机参数,确定发生变化的设备的位姿;以及根据发生变化的设备的位姿、变化类型以及与变化类型对应的变化量,更新站点模型。
第五方面所描述的技术方案,通过自动判断监控图像中是否存在发生变化的设备,以及进一步根据多个预设变化类型确定发生变化的设备的变化类型和对应变换量,从而实现了自动检测站点变化、采集站点数据并及时更新站点模型。
第六方面,本申请实施例提供了一种光伏发电系统。光伏发电系统包括站点模型更新系统,用于执行上述第一方面的任一方法。所述光伏发电系统通过所述站点模型更新系统来监控所述光伏发电系统的变化,所述站点对应所述光伏发电系统。
第七方面,本申请实施例提供了一种通讯中转系统。通讯中转系统包括站点模型更新系统,用于执行上述第一方面的任一方法。所述通讯中转系统通过所述站点模型更新系统来监控所述通讯中转系统的变化,所述站点对应所述通讯中转系统。
附图说明
为了更清楚地说明本发明实施例或现有技术中的技术方案,下面将对实施例或现有技术描述中所需要使用的附图作简单地介绍,显而易见地,下面描述中的附图仅仅是本发明的一些实施例,对于本领域普通技术人员来讲,在不付出创造性劳动性的前提下,还可以根据这些附图获得其他的附图。
图1是本申请实施例提供的站点模型构建和更新系统的结构示意图。
图2是本申请实施例提供的构建站点模型的方法的流程示意图。
图3是本申请实施例提供的更新站点模型的方法的流程示意图。
图4是本申请实施例提供的图3所示的监控图像二次采集及处理步骤的流程示意图。
图5是本申请实施例提供的图3所示的方法中检测设备变化步骤的流程示意图。
图6是本申请实施例提供的图5所示的神经网络模型的训练方法的流程示意图。
图7是本申请实施例提供的图6所示的已训练神经网络模型的结构框图。
图8是本申请实施例提供的站点模型更新系统的结构框图。
图9是本申请实施例提供的图8所示的神经网络处理器的结构框图。
具体实施方式
本申请实施例为了解决需要人工上站采集数据的难题,通过结合拍摄技术和深度学习算法来自动识别发生变化的设备和变化类型,从而实现了自动检测站点变化、采集站点数据并及时更新站点的三维模型。
以下,说明本申请实施例中所涉及的一些术语和技术:
人工智能(Artificial Intelligence,AI)是利用数字计算机或者数字计算机控制的机器模拟、延伸和扩展人的智能,感知环境、获取知识并使用知识获得最佳结果的理论、方法、技术及应用系统。换句话说,人工智能是计算机科学的一个分支,它企图了解智能的实质,并生产出一种新的能以人类智能相似的方式作出反应的智能机器。人工智能也就是研究各种智能机器的设计原理与实现方法,使机器具有感知、推理与决策的功能。人工智能领域的研究包括机器人,自然语言处理,计算机视觉,决策与推理,人机交互,推荐与搜索,AI基础理论等。
神经网络(Neural Network,NN)作为人工智能的重要分支,是一种模仿动物神经网络行为特征进行信息处理的网络结构。神经网络的结构由大量的节点(或称神经元)相互联接构成,基于特定运算模型通过对输入信息进行学习和训练达到处理信息的目的。一个神经网络包括输入层、隐藏层及输出层,输入层负责接收输入信号,输出层负责输出神经网络的计算结果,隐藏层负责学习、训练等计算过程,是网络的记忆单元,隐藏层的记忆功能由权重矩阵来表征,通常每个神经元对应一个权重系数。
基于单目摄像技术的设备。“单目”应当理解成单个相机,该单个相机可以包括单个摄像头或者多个摄像头。基于单目摄像技术的设备指的是利用包括单个摄像头或者多个摄像头的单个相机进行摄像的设备。本申请具体实施例以单个摄像头的单个相机作为示例性实施例进行说明,但是本申请也可以适用于包括多个摄像头的单个相机。例如,该单个相机可以包括两个或者更多个摄像头组成的摄像头阵列,而该摄像头阵列的各个摄像头之间存在固定的线性位移关系,可以根据这些线性位移关系综合各个摄像头所拍摄的图像或者视频从而得到基于单目摄像技术的数据。
本申请实施例提供了一种站点模型更新方法及系统。方法包括:获取监控图像,通过获取到的监控图像,确定发生变化的设备的变化类型以及与变化类型对应的变化量;根据监控图像和站点模型,计算监控图像的位姿和相机参数;根据监控图像的位姿和相机参数,确定发生变化的设备的位姿;以及根据发生变化的设备的位姿、变化类型以及与变化类型对应的变化量,更新站点模型。站点模型更新系统包括:设备变化检测装置,其中,设备变化检测装置通过监控图像,确定发生变化的设备的变化类型以及与变化类型对应的变化量;和处理器。其中,处理器用于:获取监控图像;根据监控图像和站点模型,计算监控图像的位姿和相机参数;根据监控图像的位姿和相机参数,确定发生变化的设备的位姿;以及根据发生变化的设备的位姿、变化类型以及与变化类型对应的变化量,更新站点模型。
本申请实施例可用于以下应用场景:电信行业的基站,中继站等场景模型更新,智能城市安防监控下的交通指示系统的场景模型更新,光伏发电系统的场景模型更新,或者其它需 要构建特定地点的站点模型并更新该站点模型的应用场景。
本申请实施例可以依据具体应用环境进行调整和改进,此处不做具体限定。
为了使本技术领域的人员更好地理解本申请方案,下面将结合本申请实施例中的附图,对本申请的实施例进行描述。
请参阅图1,图1是本申请实施例提供的站点模型构建和更新系统的结构示意图。如图1所示,站点模型构建和更新系统可以分成两大部分,分别对应站点模型的构建和站点模型的更新。其中,站点模型的构建部分包括建模数据采集设备102,建模数据处理平台106和站点模型构建平台108。其中,建模数据采集设备102将采集到的建模数据104发送给建模数据处理平台106进行处理,建模数据处理平台106将处理后的建模数据发送给站点模型构建平台108,最后由站点模型构建平台108根据处理后的建模数据构建站点模型120。站点模型的更新部分包括更新数据采集设备112、更新数据处理平台116和站点模型更新平台118。其中,更新数据采集设备112将采集到的更新数据114发送给更新数据处理平台116进行处理,更新数据处理平台116将处理后的更新数据发送给站点模型更新平台118,最后由站点模型更新平台118根据处理后的更新数据更新站点模型120。
请继续参阅图1,建模数据采集设备102和更新数据采集设备112属于前端数据采集装置100。建模数据处理平台106、站点模型构建平台108、更新数据处理平台116以及站点模型更新平台118属于后端数据处理转置110。应当理解的是,前端数据采集装置100可以部署在站点所在位置或者附近,可以理解成边缘装置或者本地装置,例如设置在站点的摄像机、手机等。后端数据处理转置110可以部署在远离站点的位置,可以理解成云端装置或者数据中心装置,例如通过网路与设置在站点的摄像机连接的数据中心。在本申请中,站点指的是在一定空间范围内的或者在指定地点的场景,可以结合具体行业而具体限定站点的含义。例如,站点可以理解成电信通讯行业中的网络基站、中继站,也可以理解成城市安防行业中的交通指挥系统,或者可以理解成电力输送行业中的发电系统、继电站,又或者理解成石油行业中的炼油站、加油站。这些可以根据具体应用场景进行定义,在此不做限定。
请继续参阅图1,建模数据采集设备102指的是通过全景测量技术、激光点云测量技术、手机拍摄成像合成技术,或者其它合适的技术手段来获取用于构建站点模型的数据的相应设备。以全景测量技术为例,建模数据采集设备102指的是全景相机或者其它基于全景测量技术的采集设备,建模数据采集设备102所采集的建模数据104是代表站点所在场景全部区域的全景图像或者是分别代表站点所在场景不同区域的多个全景图像。建模数据处理平台106可以对分别代表站点所在场景不同区域的多个全景图像进行处理,从而合成代表站点所在场景全部区域的全景图像。最后由站点模型构建平台108通过常规算法例如全景双目测量算法对经过处理后的建模数据104进行处理并生成站点模型120。
请继续参阅图1,再以激光点云测量技术为例,建模数据采集设备102指的是激光扫描仪或者其它基于激光点云测量技术的采集设备。建模数据采集设备102所采集的建模数据104是代表站点所在场景全部区域的激光点云数据或者是分别代表站点所在场景不同区域的激光点云数据。建模数据处理平台106可以对分别代表站点所在场景不同区域的激光点云数据进行拼接,从而合成代表站点所在场景全部区域的激光点云数据。最后由站点模型构建平台108通过常规算法例如点云矢量建模算法对经过处理后的建模数据104进行处理并生成站点模型120。
请继续参阅图1,再以手机拍摄成像合成技术为例,建模数据采集设备102指的是手机或者平板电脑等带有拍照拍摄功能的便携式设备。建模数据采集设备102所采集的建模数据 104是代表站点所在场景全部区域的图片视频数据或者是分别代表站点所在场景不同区域的图片视频数据。建模数据处理平台106可以对分别代表站点所在场景不同区域的图片视频数据进行处理,从而合成代表站点所在场景全部区域的图片视频数据。最后由站点模型构建平台108通过常规算法例如双目测量算法或者多源图像合成算法对经过处理后的建模数据104进行处理并生成站点模型120。
请继续参阅图1,关于站点模型的更新部分,更新数据采集设备112指的是手机、监控摄像头、安防镜头或者其它基于单目摄像技术的设备。应当理解的是,虽然站点模型的构建部分通过全景测量技术、激光点云测量技术、手机拍摄成像合成技术,或者其它合适的技术手段来获取用于构建站点模型的数据并生成站点模型120,但是,站点模型的更新部分适合采用基于单目摄像技术的设备。这是因为基于单目摄像技术的设备在采集更新数据114时不需要使用其它采集设备也因此无需考虑协同或者同步问题,并且在实际应用中大多数情况下仅靠基于单目摄像技术的设备即可获得足够的精度和信息来实现对站点模型120的更新,因此有更好的泛用性和便利性。
请继续参阅图1,更新数据采集设备112也即基于单目摄像技术的设备获得监控图像或者监控视频。监控视频中的全部或者部分帧的图像可以被抽取出来作为监控图像。例如,可以通过视频抽帧算法,将视频影像转化成框幅式影像。更新数据采集设备112所采集的监控图像或者从监控视频中抽取出来的监控图像就是更新数据114。更新数据114被发送给更新数据处理平台116。更新数据处理平台116对所接收的监控图像进行处理,主要是识别监控图像中是否有发生变化的设备,并当存在发生变化的设备时进一步确定发生变化的设备所在区域、变化类型和对应的变化量。关于更新数据处理平台116的更多细节请看下面具体实施例的描述。站点模型更新平台118根据更新数据处理平台116所提供的信息更新站点模型120。
请继续参阅图1,在一种可能的实施方式中,站点模型120包括站点的环境模型和站点的设备模型。其中,站点的环境模型可以理解成站点所在场景中的背景要素,例如永久性的建筑物、道路等,还可以理解成与站点的预设功能的关联性较弱的要素,例如树木、行人等。通过将这些背景要素或者关联性较弱的要素作为站点的环境模型,可以减少站点模型因为这些环境模型的变化而更新的频率,从而提高系统效率和节省资源。相对的,站点的设备模型是站点所在场景中的关键要素,例如为了实现站点的预设功能而必要的设备。以站点是通讯基站为例子,通讯基站的设备模型可以为部署在该通讯基站的天线、供电设备、中继设备和/或者其它与该通讯基站的预设功能的关联性较强的要素。通过将这些关键要素划分为站点的设备模型,并增加站点模型因为这些设备模型变化而更新的频率,有利于提高系统效率。
请继续参阅图1,在一种可能的实施方式中,站点模型更新平台118更新站点模型120可以是针对站点所在场景的全部区域进行更新或者仅针对部分区域进行更新。在一种可能的实施方式中,站点模型更新平台118还可以标注站点所在场景的个别设备为特别关注对象,并对于这些特别关注对象的变化进行灵敏度较高的检测。在一种可能的实施方式中,站点模型更新平台118还可以标注某些设备为一般关注对象,并对于这些一般关注对象的变化进行灵敏度较低的检测。以站点是通讯基站为例子,天线可以被标注为特别关注对象,而用于提供电能给天线的供电设备可以标注为一般关注对象。如此,可以集中资源优先反映被标注为特别关注对象的设备的变化,有利于提高资源利用效率。
请继续参阅图1,站点模型120可以提供多种应用。例如,可以利用站点模型120实现特定设备与测距参考点之间的距离的测距。具体地,在包括特定设备的图像上选择三个地面参考点从而确定地平面的基准面,并根据地平面的基准面来确定特定设备的基准平面;然后 在该图像上选中特定设备,通过根据现有算法模拟生成的光线与特定设备的相交结果来确定特定设备的位姿,从而确定特定设备的高度和角度等信息;在该图像上选中测距参考点,确定测距参考点的位姿,从而计算特定设备跟测距参考点之间的距离。再例如,可以通过站点模型120来实现站点的资产管理、空间评估设计、EMF可视化等。
请参阅图2,图2是本申请实施例提供的构建站点模型的方法的流程示意图。应当理解的是,图2所示的构建站点模型的方法对应图1所示的站点模型的构建部分。图2所示的具体实施例是以全景测量技术为例,但是图2所示的方法经过适应性的改动也可以适用于其它技术手段例如激光点云测量技术和手机拍摄成像合成技术。如图2所示,构建站点模型200包括以下步骤。
步骤S202:采集全景图像。
其中,采集全景图像指的是通过全景相机或者其它基于全景测量技术的采集设备,从而获得代表站点所在场景全部区域的全景图像或者是分别代表站点所在场景不同区域的多个全景图像。可以对分别代表站点所在场景不同区域的多个全景图像进行处理,从而合成代表站点所在场景全部区域的全景图像。采集全景图像还可以理解成通过全景摄像机获得全景视频,然后利用图像跟踪算法抽取全景视频中关键帧的图像,最后以所抽取的关键帧的图像作为代表站点所在场景全部区域的全景图像。另外,采集全景图像之后还可以利用图像干扰区域识别算法等技术来识别图像中起干扰作用的行人、天空或者运动区域等,从而降低这些不相关因素或者噪声的干扰。
步骤S204:计算全景图像的位姿。
其中,计算全景图像的位姿指的是在步骤S202所采集的全景图像基础上,计算拍摄全景图像时的相机的位姿。这里,位姿(pose)是位置和朝向的简称;位姿可以用六个变量表示,其中三个变量表明位置,另外三个变量表明朝向。计算拍摄全景图像时的相机的位姿可以通过常规算法例如图像特征匹配算法、解析空中三角测量算法、多张图像位姿计算方法(Structure From Motion,SFM)或者其它合适技术手段实现,在此不做具体限定。
步骤S206:在全景图像中识别特定设备以及对应设备类型。
其中,为了覆盖尽可能大的场景和尽可能多的要素,全景图像往往会覆盖较大的范围甚至覆盖站点所在场景的全部区域,而为了简化后续的处理过程,可以通过识别特定设备和对应设备类型,从而进行一定程度的简化处理。具体地,可以通过常规算法例如特征识别等,从全景图像中识别出特定设备和对应的设备类型。例如,假设要识别的特定设备是站点的天线,可以通过特征识别算法从全景图像中识别出符合天线特征的设备,并标注这些设备为天线这一设备类型。再例如,可以在全景图像中识别特定设备为供电设备或者其它类型的设备。
步骤S208:在预制模型库中选择与特定设备的设备类型对应的设备模型。
其中,根据在步骤S206中所识别出的特定设备及其对应设备类型,可以在预制模型库中选择与特定设备的设备类型对应的设备模型。应当理解的是,预制模型库中的设备模型可以是简化后的几何模型,通过若干个关键点来简化表示对应的特定设备,从而有利于简化后续的操作和数据运算需求。例如,假设在步骤S206识别的特定设备是站点的天线,预制模型库可以包括设备类型为天线的设备模型,用来将实际中形状较复杂的天线简化地表现为包括若干个关键点的几何模型,从而有利于后续操作的简便。
步骤S210:根据全景图像的位姿和设备模型构建站点模型。
其中,根据步骤S204所得到的全景图像的位姿和步骤S208所得到的特定设备的设备模型,可以用该设备模型来替代特定设备,并计算该设备模型在全景图像中的位姿。具体地, 可以通过常规算法如目标检测技术等确定在全景图像中该特定设备所在的区域的位置和大小,再通过与该特定设备对应的设备模型上的若干个关键点推算出用设备模型替代该特定设备后,设备模型在全景图像中的位姿。以特定设备是天线为例,设备模型在全景图像中的位姿指的是用与天线对应的设备模型替代该天线后,与天线对应的设备模型在全景图像中的位置和朝向,这些可以结合该设备模型的几何模型从而判断该天线是否发生了位置和朝向的变化,例如天线的位置发生了平移或者天线的朝向发生了转向。
如此,结合附图2所示的各个步骤,实现了根据采集的全景图像来计算全景图像的位姿,并从全景图像中识别出设备类型,然后结合预制模型库中的设备模型构建起站点模型。
请参阅图3,图3是本申请实施例提供的更新站点模型的方法的流程示意图。应当理解的是,图3所示的更新站点模型的方法对应图1所示的站点模型的更新部分。如图3所示,更新站点模型300包括以下步骤。
步骤S302:采集监控图像。
其中,采集监控图像可以通过手机、监控摄像头、安防镜头或者其它基于单目摄像技术的设备获得监控图像或者监控视频。监控视频中的全部或者部分帧的图像可以被抽取出来作为监控图像。在一些示例性实施例中,可以通过视频抽帧算法,将视频影像转化成框幅式影像。
步骤S304:对采集的监控图像进行预处理。
其中,对采集的监控图像进行预处理指的是对监控图像进行曝光度修复、模糊恢复、雨雾去除等操作,从而优化监控图像质量,提高图像数据清晰度,有利于后续处理。监控图像预处理还可以包括通过曝光度检测来排除过曝光图像和弱曝光图像,通过模糊度检测来排除模糊图像,以及通过雨滴检测算法来排除含雨滴图像等操作。应当理解的是,监控图像预处理可以在采集监控图像的本地设备上进行,例如在站点的监控摄像机、安防摄像机或者其它边缘设备,如此可以对所采集的监控图像在采集端侧进行预处理从而降低后续操作的复杂程度,有利于节约资源和提高效率。在一些示例性实施例中,更新站点模型300的方法可以不包括S304步骤,也就是从步骤S302直接跳到步骤S306。
步骤S306:检测设备变化,如果检测到设备变化则执行步骤S308,如果没有检测到设备变化则执行步骤S302。
其中,在步骤S306,将采集到的监控图像或者经过预处理的采集到的监控图像输入到神经网络模型,通过神经网络模型来自动判断监控图像中是否存在发生变化的设备,以及进一步确定发生变化的设备所在的区域、变化类型和对应变换量。在步骤S306,用于检测设备变化的神经网络模型输出的结果包括发生变化的设备的变化类型以及与变化类型对应的变化量。其中,变化类型是多个预设变化类型中的一个预设变化类型。这里,发生变化的设备的变化类型包括:设备新增,设备删除,设备移动,设备旋转等。其中,设备新增意味着在上一时段确定未发生变化的监控图像中该设备不存在,而在当前监控图像中该设备存在。设备删除意味着在上一时段确定未发生变化的监控图像中该设备存在,而在当前监控图像中该设备不存在。设备移动意味着该设备在当前监控图像中的位置相比于在上一时段确定未发生变化的监控图像中该设备的位置,发生了变化。设备旋转意味着该设备在当前监控图像中的朝向相比于在上一时段确定未发生变化的监控图像中该设备的朝向,发生了变化。如此,通过设定设备新增,设备删除,设备移动,设备旋转等变化类型,可以覆盖该设备大部分的设备的变化。应当理解的是,该设备的实际变化也可以是以上基础变化类型的组合,例如该设备可以同时发生设备移动和设备旋转两种变化。因此,发生变化的设备的变化类型还可以包括:设 备新增,设备删除,设备移动,设备旋转、设备既移动又旋转等。其中,在步骤S306检测设备变化步骤,最后输出的结果包括发生变化的设备所在区域、变化类型以及对应变化量,具体细节将在下面与附图5有关的具体实施例进行详细描述。
步骤S308:计算监控图像位姿和相机参数。
其中,计算监控图像位姿指的是计算拍摄监控图像时的相机在三维空间坐标系中的位姿。这里,位姿(pose)是位置和朝向的简称;位姿可以用六个变量表示,其中三个变量表明位置,另外三个变量表明朝向。计算拍摄监控图像时的相机的位姿可以通过常规算法例如PNP(Perspective-N-Point)算法、位姿估计算法或者其它合适技术手段实现,在此不做具体限定。而计算监控图像的相机参数指的是计算拍摄监控图像时的相机参数,例如焦距、像主点坐标、畸变参数等。应当理解的是,计算监控图像位姿是针对计算拍摄监控图像时的相机的外部参数而言,而计算监控图像的相机参数是针对计算拍摄监控图像时的相机的内部成像信息而言。
步骤S310:确定是否放大设备所在区域,如果放大设备所在区域则执行步骤S320,如果不放大设备所在区域则执行步骤S330。
其中,在步骤S306判断监控图像中存在发生变化的设备,可以确定发生变化的设备所在区域在监控图像中的占比,例如计算发生变化的设备所在区域在整幅监控图像中所占面积的比例;将发生变化的设备所在区域在监控图像中的占比与预设占比比较,当该占比小于预设占比时则判断放大设备所在区域并执行步骤S320;当占比不小于预设占比时,则判断不放大设备所在区域并执行步骤S312。其中,预设占比可以是预先设定的数值,例如将预设占比设为30%,而假设发生变化的设备所在区域在监控图像中的占比为1%,则认为占比小于预设占比并判断放大设备所在区域。实际应用中,监控图像往往覆盖较大区域的场景,而发生变化的设备所在区域可能仅占监控图像中较小的一部分,也就是说发生变化的设备所在区域在监控图像中的占比可能较小。如此,通过比较占比和预设占比,可以选择性地放大发生变化的设备所在的区域,从而获得更好的效果。
在一种可能的实施方式中,发生变化的设备所在区域在监控图像中的占比的含义是包括发生变化的设备的兴趣区域(Region Of Interest,ROI)在监控图像上的立体投影,可以理解成包括8个点的立方体的投影,而该ROI的立体投影在整个监控图像上所占面积的比例就是占比。
步骤S320:监控图像二次采集及处理。其中,步骤S320进一步细分成步骤S322和步骤S324。
步骤S322:采集放大后监控图像。
其中,根据在步骤S310计算的占比和预设占比来计算放大倍数。例如,假设发生变化的设备所在区域在监控图像中的占比是1%,而预设占比为30%,则放大倍数是sqrt(30)约为5.5,其中sqrt表示取平方根的计算。相应地,当放大倍数为5.5时,这意味着需要将采集监控图像的设备的焦距放大5.5倍,从而提高发生变化的设备所在区域在放大后监控图像中的占比。可以通过常规技术手段调整采集监控图像的设备的焦距,在此不做具体限定。
步骤S324:计算放大后监控图像位姿和相机参数。
其中,根据在步骤S322采集的放大后监控图像和站点模型,可以计算放大后监控图像的位姿和相机参数,具体细节将在下面与附图4有关的具体实施例详细描述。
步骤S330:根据监控图像或者放大后监控图像更新站点模型。
其中,如果在步骤S310判断不放大设备所在区域,则利用监控图像来更新站点模型,而如果在步骤S310判断放大设备所在区域,则利用在步骤S320获得的放大后监控图像来更新 站点模型。具体地,假设在步骤S310判断不放大设备所在区域,则根据步骤S302获得的监控图像,还有根据步骤S308获得的监控图像位姿和相机参数,结合步骤S306得知的发生变化的设备所在区域、变化类型以及变化量,可以从建立站点模型用到的预制模型库中识别出与发生变化的设备对应的设备模型,然后根据变化类型和变化量来确定该设备模型在变化之后的位姿,最后调整站点模型以反映该设备的变化。例如,假设特定设备发生了变化且变化类型为设备新增,这意味着需要将与该设备对应的设备模型添加到发生变化的设备所在区域并更新站点模型。再例如,假设特定设备发生了变化且变化类型为设备删除,这意味着需要从站点模型中删除与该设备对应的设备模型。再例如,假设特定设备发生了变化且变化类型为设备移动,这意味着需要调整与该设备对应的设备模型的位姿以反映设备移动的变化。
如此,结合附图3所示的各个步骤,通过神经网络模型来自动判断监控图像中是否存在发生变化的设备,以及进一步确定发生变化的设备所在的区域、变化类型和对应变换量,并且判断是否进行监控图像二次采集及处理,最后利用监控图像或者放大后监控图像更新站点模型。
请参阅图4,图4是本申请实施例提供的图3所示的监控图像二次采集及处理步骤的流程示意图。其中,图4所示的监控图像二次采集及处理420对应图3所示的监控图像二次采集及处理S320,并对图3所示的步骤S322和S324进行展开后做更具体描述。如图4所示,监控图像二次采集及处理420包括以下步骤。
步骤S430：计算发生变化的设备所在区域在监控图像中的占比，并根据占比和预设占比计算放大倍数。
其中,关于计算占比和放大倍数的有关细节,与图3所示的步骤S322相似,在此不再赘述。
步骤S432:根据放大倍数调整焦距后获得放大后监控图像。
其中,关于调整焦距后获得放大后监控图像的有关细节,与图3所示的步骤S322相似,在此不再赘述。
步骤S434：将监控图像和放大后监控图像进行影像匹配，确定匹配点。
其中，将监控图像和放大后监控图像进行影像匹配并确定匹配点指的是通过特征提取的方式，从监控图像和放大后监控图像中提取出对应发生变化的设备的特征点，并进行影像匹配从而确定匹配点。
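影像匹配可以采用常规特征点方法，例如下面这个基于ORB特征与暴力匹配的示意性片段；特征类型与匹配方式均为举例，并非对本申请实施例的限定：

```python
import cv2

def match_images(img_a, img_b):
    """示意性影像匹配：在监控图像img_a与放大后监控图像img_b之间提取并匹配特征点。"""
    orb = cv2.ORB_create()
    kp_a, des_a = orb.detectAndCompute(img_a, None)
    kp_b, des_b = orb.detectAndCompute(img_b, None)
    matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
    matches = matcher.match(des_a, des_b)
    # 返回匹配点对的像素坐标，供后续推导放大后监控图像的位姿和相机参数使用
    return [(kp_a[m.queryIdx].pt, kp_b[m.trainIdx].pt) for m in matches]
```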
步骤S436:根据放大后监控图像的位姿和相机参数之间的关联公式,先根据位姿推导相机参数,再根据相机参数推导位姿。
其中,步骤S436的有关细节对应图3所示的步骤S324中的计算放大后监控图像的位姿和相机参数。应当理解的是,计算放大后监控图像的位姿指的是计算拍摄放大后监控图像时的相机在三维空间坐标系中的位姿。这里,位姿(pose)是位置和朝向的简称;位姿可以用六个变量表示,其中三个变量表明位置,另外三个变量表明朝向。而计算放大后监控图像的相机参数指的是计算拍摄放大后监控图像时的相机参数,例如焦距、像主点坐标、畸变参数等。应当理解的是,计算放大后监控图像的位姿是针对计算拍摄放大后监控图像时的相机的外部参数而言,而计算放大后监控图像的相机参数是针对计算拍摄放大后监控图像时的相机的内部成像信息而言。
步骤S436和图3所示的步骤S324所涉及的是计算放大后监控图像的位姿和相机参数，这与图3所示的步骤S308中的计算监控图像位姿和相机参数不同之处在于，放大后监控图像的采集是通过将采集监控图像的采集设备的焦距根据放大倍数调整后再次采集获得的，因此理想情况下拍摄放大后监控图像的相机与拍摄监控图像的相机应该具有相同的外部参数也就是相同的位姿，而调整焦距仅仅影响相机的内部成像信息也就是相机参数。然而，实际应用中，采集设备在拍摄监控图像和拍摄放大后监控图像的两个时刻之间，可能受到各种外在因素的影响，例如风力或者震动引起的抖动，还可能受到内在因素的影响，例如设备老化、镜头松动等，从而导致放大后监控图像的位姿和相机参数分别不同于监控图像位姿和相机参数。
为此,需要在已经计算出的监控图像的位姿和相机参数的基础上,通过现有技术中的图像位姿和相机参数之间的关联公式,来推导出放大后监控图像的位姿和相机参数。具体地,先以监控图像的位姿作为放大后监控图像的初始位姿,然后根据关联公式将放大后监控图像的初始位姿作为常数导入后推导出放大后监控图像的相机参数,再然后根据关联公式将推导出的放大后监控图像的相机参数作为常数导入后推导出放大后监控图像的位姿,这样完成一次迭代计算过程。每次执行步骤S436都根据现有技术中的关联公式,进行一次上述迭代计算过程,从而得到放大后监控图像的位姿和相机参数。
步骤S438：判断放大后监控图像的位姿和相机参数各自的变化量是否分别小于各自的预设阈值，如果均小于各自的预设阈值则执行步骤S440，如果至少有一个不小于对应的预设阈值则执行步骤S436。
其中,在步骤S436执行后通过一次迭代计算过程得到放大后监控图像的位姿和相机参数,在步骤S438判断是否结束迭代,如果不满足迭代结束条件则回到步骤S436再进行下一次迭代计算过程,直到满足步骤S438规定的迭代结束条件。这里,迭代结束条件设定为在步骤S436的一次迭代计算过程结束后,得到的放大后监控图像的位姿和相机参数各自的变化量小于各自的预设阈值。其中,放大后监控图像的位姿的变化量指的是在步骤S436的一次迭代计算过程前后的放大后监控图像的位姿之间的差异,也就是将执行步骤S436的一次迭代计算过程之前的放大后监控图像的位姿跟执行完步骤S436的一次迭代计算过程之后的放大后监控图像的位姿进行比较。类似地,放大后监控图像的相机参数的变化量指的是在步骤S436的一次迭代计算过程前后的放大后监控图像的相机参数之间的差异,也就是将执行步骤S436的一次迭代计算过程之前的放大后监控图像的相机参数跟执行完步骤S436的一次迭代计算过程之后的放大后监控图像的相机参数进行比较。其中,放大后监控图像的位姿和相机参数各自的变化量可以对应不同的预设阈值,例如设定放大后监控图像的位姿的变化量对应的预设阈值是0.0001,而放大后监控图像的相机参数的变化量对应的预设阈值是0.001。只有当放大后监控图像的位姿和相机参数的变化量分别小于各自对应的预设阈值时,才满足迭代结束条件。
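上述交替迭代与结束条件可以抽象成如下示意性框架；其中solve_params与solve_pose代表基于关联公式的两步推导，均为假设的占位函数，阈值取文中示例值，具体关联公式不在此展开：

```python
import numpy as np

def refine_pose_and_params(init_pose, init_params, solve_params, solve_pose,
                           pose_tol=1e-4, param_tol=1e-3, max_iter=100):
    """示意性交替迭代：先固定位姿推导相机参数，再固定相机参数推导位姿，
    直到两者的变化量分别小于各自的预设阈值（solve_params/solve_pose为假设的占位函数）。"""
    pose = np.asarray(init_pose, dtype=float)
    params = np.asarray(init_params, dtype=float)
    for _ in range(max_iter):
        new_params = np.asarray(solve_params(pose), dtype=float)   # 位姿作为常数，推导相机参数
        new_pose = np.asarray(solve_pose(new_params), dtype=float) # 相机参数作为常数，推导位姿
        pose_delta = np.abs(new_pose - pose).max()
        param_delta = np.abs(new_params - params).max()
        pose, params = new_pose, new_params
        if pose_delta < pose_tol and param_delta < param_tol:      # 迭代结束条件
            break
    return pose, params
```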
步骤S440:输出放大后监控图像的位姿和相机参数。
其中,在步骤S438判断满足迭代结束条件后,则输出满足迭代结束条件的放大后监控图像的位姿和相机参数。步骤S440输出的结果对应图3所示的步骤S324的输出结果,也就是输出计算得到的放大后监控图像的位姿和相机参数,同时也是图3所示的步骤S320监控图像二次采集及处理的输出结果。
如此,结合附图4所示的各个步骤,确定发生变化的设备所在区域在监控图像中的占比;比较占比和预设占比;当占比小于预设占比时,获得放大后监控图像;根据放大后监控图像和站点模型,计算放大后监控图像的位姿和相机参数;根据放大后监控图像的位姿和相机参数,更新站点模型。
请参阅图5,图5是本申请实施例提供的图3所示的方法中检测设备变化步骤的流程示意图。其中,图5的检测设备变化506对应图3所示的步骤S306:“是否检测到设备变化”。如图5所示,检测设备变化506包括以下步骤。
步骤S510:获取基准图像。
其中,基准图像指的是用于判断是否有设备发生变化的参考图像,可以是上一时段确定未发生变化的监控图像,或者可以是人工输入的参考图像。
步骤S512:获取监控图像。
其中,获取监控图像可以通过手机、监控摄像头、安防镜头或者其它基于单目摄像技术的设备获得监控图像或者监控视频。监控视频中的全部或者部分帧的图像可以被抽取出来作为监控图像。在一些示例性实施例中,可以通过视频抽帧算法,将视频影像转化成框幅式影像。
值得说明的是，步骤S510和步骤S512之间没有固定的先后顺序，可以同时执行，也可以按任意次序分别执行。
步骤S514:将基准图像和监控图像输入到神经网络模型。
其中,将基准图像和监控图像输入到神经网络模型,该神经网络模型用于确定监控图像中是否有设备发生变化,以及发生变化的设备的变化类型和对应的变化量。
步骤S516:通过神经网络模型判断是否有设备发生变化,如果发生变化则执行步骤S518,如果没有发生变化则执行步骤S520。
其中，根据神经网络模型的输出结果，可以得知监控图像中是否有设备发生变化。当监控图像中有设备发生变化时，执行步骤S518并输出有设备发生变化的监控图像以及设备所在区域和变化类型；当监控图像中没有设备发生变化时，可以执行步骤S520并用监控图像替换基准图像，即将监控图像作为下一次使用神经网络模型确定是否有设备变化时的基准图像。
应当理解的是,该神经网络模型输出的结果包括发生变化的设备的变化类型以及与变化类型对应的变化量。变化类型是多个预设变化类型中的一个预设变化类型。多个预设变化类型涵盖了设备可能发生的变化的绝大多数情况,具体包括:设备新增,设备删除,设备移动,和/或设备旋转等。在一些示例性实施例中,多个预设变化类型还可以包括以上基础变化类型的组合,例如包括设备同时发生设备移动和设备旋转的变化。因此,多个预设变化类型还可以包括:设备新增,设备删除,设备移动,设备旋转、设备既移动又旋转等。其中,在步骤S516所用到的神经网络模型的训练方法将在下面与附图6有关的具体实施例进行详细描述。
基准图像可以理解成被设定为上一时段确定未发生变化的监控图像。设备新增意味着在基准图像中该设备不存在,而在当前监控图像中该设备存在。设备删除意味着在基准图像中该设备存在,而在当前监控图像中该设备不存在。设备移动意味着该设备在当前监控图像中的位置相比于在基准图像中该设备的位置发生了变化。设备旋转意味着该设备在当前监控图像中的朝向相比于在基准图像中该设备的朝向发生了变化。本申请实施例可以通过预先设定设备新增,设备删除,设备移动,设备旋转等变化类型,以及通过比较基准图像和监控图像,从而实现神经网络模型确定是否有变化以及识别变化类型。
在一些示例性实施例中,已训练的神经网络模型可以针对个别设备模型的变化有更高的敏感度,例如可以对监控图像中被识别为特定设备模型的设备所在区域,通过设定随机梯度下降算法的系数来实现分类层输出结果对表征该区域变化程度的输入变量具有更高的敏感度。如此,可以标注站点所在场景的个别设备为特别关注对象,并对于这些特别关注对象的变化进行灵敏度较高的检测,同时标注某些设备为一般关注对象,并对于这些一般关注对象的变化进行灵敏度较低的检测。
步骤S518:输出发生变化的设备所在区域、变化类型以及对应变化量。
其中,当在步骤S516通过神经网络模型判断监控图像中有设备发生变化时,输出发生变化的设备所在区域、变化类型以及对应变化量。
步骤S520:用监控图像更新基准图像。
其中,当在步骤S516通过神经网络模型判断监控图像中没有设备发生变化时,可以用当前监控图像替换基准图像。也就说,当前时段的监控图像如果根据神经网络模型的输出结果被确定为没有设备发生变化,则可以用当前时段的监控图像作为相对于在下一个时段获取的监控图像而言的基准图像。例如,可以设定为每天按时进行检测设备变化,在早上9点和早上10点分别采集监控图像并检测设备变化。假设在早上9点采集的监控图像中没有发现发生变化的设备,则早上9点采集的监控图像可以用来替换基准图像,并与早上10点采集的监控图像进行比较,从而判断早上10点采集的监控图像中是否有发生变化的设备。
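基准图像的滚动更新可以用如下示意性片段表示；其中采集函数与检测函数均为假设的占位接口，仅用于说明"无变化则更新基准图像、有变化则输出检测结果"的流程：

```python
def monitor_loop(capture_image, detect_change, base_image):
    """示意性检测循环：capture_image与detect_change为假设的占位接口。
    detect_change返回None表示未检测到设备变化，否则返回检测结果。"""
    while True:
        current = capture_image()                 # 例如每天早上9点、10点按时采集监控图像
        result = detect_change(base_image, current)
        if result is None:                        # 未检测到设备变化
            base_image = current                  # 用当前监控图像更新基准图像
        else:
            yield result                          # 输出发生变化的设备所在区域、变化类型及对应变化量
```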
如此,结合附图5所示的各个步骤,通过将基准图像和监控图像输入到已训练的神经网络模型,从而确定监控图像中是否有设备发生变化以及输出发生变化的设备所在区域、变化类型以及对应变化量,并且当监控图像中没有设备发生变化时可以用当前监控图像更新基准图像。
请参阅图6,图6是本申请实施例提供的图5所示的神经网络模型的训练方法的流程示意图。其中,图6所示的神经网络模型的训练方法600用于训练图5的步骤S516所用到的用于判断是否有设备发生变化的神经网络模型,并且该神经网络模型还会输出发生变化的设备所在区域、变化类型以及对应变化量。如图6所示,神经网络模型的训练方法600包括以下步骤。
步骤S610:获取基准图像和训练图像。
其中，为了训练神经网络模型以达到能识别出监控图像中是否有设备发生变化的预测能力，在训练过程中让神经网络模型比较基准图像和训练图像并给出预测结果，然后根据预测结果的反馈来调整神经网络模型的参数，从而达到训练目的。为此，在附图6所示的具体实施例中，基准图像指的是训练神经网络模型的过程中作为没有设备发生变化的参考图像。而训练图像则是训练神经网络模型的过程中用于让神经网络模型跟基准图像进行比较，并判断其相对于基准图像是否存在发生变化的设备的图像。在附图6所示的具体实施例中，对神经网络模型的训练方法采用监督学习的方式，也就是在训练图像中带有标签，该标签包括以下信息：带有该标签的训练图像相对于基准图像是否存在发生变化的设备、发生变化的设备的变化类型以及对应变化量。通过标签中携带的信息，可以评价神经网络模型的预测结果，从而有利于调整神经网络模型参数。
应当理解的是,在附图6所示的具体实施例中,基准图像是针对训练神经网络模型的过程,而在附图5所示的具体实施例中也提到了获取基准图像S510。附图6所提及的基准图像是针对训练神经网络模型的过程,而附图5所提及的基准图像是针对已经训练的神经网络模型的执行过程。通过在附图6所示的具体实施例中训练神经网络模型来学会识别训练图像是否相对于基准图像存在发生变化的设备,由此得到的已训练的神经网络模型可用于在附图5所示的具体实施例中进行预测任务,也即判断步骤S512所获得的监控图像是否相对于步骤S510所获得的基准图像存在发生变化的设备。同时,在附图6所示的具体实施例中训练神经网络模型的方法是训练多任务神经网络模型的方法,所以已经训练的神经网络模型不仅可以预测是否存在发生变化的设备,还会输出发生变化的设备所在区域、变化类型以及对应变化量。
步骤S620:比较基准图像和训练图像,确定发生变化的设备在训练图像中所在区域、变化类型和对应变化量。
其中，在步骤S610提到了训练图像带有标签，该标签包括以下信息：带有该标签的训练图像相对于基准图像是否存在发生变化的设备、发生变化的设备的变化类型以及对应变化量。为此，将步骤S610获得的基准图像和训练图像，都输入待训练的神经网络模型，通过待训练的神经网络模型比较基准图像和训练图像，确定发生变化的设备在训练图像中所在区域、变化类型和对应变化量。这里，变化类型是多个预设变化类型中的一个预设变化类型，而多个预设变化类型包括设备新增，设备删除，设备移动，设备旋转等，还可以包括设备新增，设备删除，设备移动，设备旋转、设备既移动又旋转等。应当理解的是，步骤S620涉及的多个预设变化类型的细节与步骤S516的"通过神经网络模型判断是否有设备发生变化"所涉及的多个预设变化类型的细节保持一致。这是因为在附图5所示的具体实施例中步骤S516是将通过附图6所示的方法训练得到的神经网络模型用于执行。
步骤S630:从多个子损失函数中选择与变化类型对应的子损失函数,根据变化类型和对应变化量计算该子损失函数。
其中，在步骤S620将步骤S610所获得的基准图像和训练图像，都输入待训练的神经网络模型，得到待训练的神经网络模型的输出结果，也即发生变化的设备在训练图像中所在区域、变化类型和对应变化量。这些输出结果用于计算损失函数从而调整待训练的神经网络模型的参数。应当理解的是，在附图6所示的具体实施例中训练神经网络模型的方法是训练多任务神经网络模型的方法，所以待训练的神经网络模型的输出结果既包括用于执行分类任务所需的输出结果也即是否存在发生变化的设备和变化类型，还包括用于执行量化任务所需的输出结果也即与变化类型对应的变化量。为此设计了多个子损失函数，多个子损失函数与多个预设变化类型一一对应，多个子损失函数的每一个子损失函数根据与该子损失函数对应的预设变化类型所对应的变化量确定。如此，可以实现训练该神经网络模型用于执行多种任务的目的。
请继续参阅图6,在步骤S630中,多个预设变化类型包括设备新增,设备新增所对应的变化量包括监控图像的像素大小的最大值。与设备新增这一预设变化类型对应的子损失函数参考公式(1)。
L_ADD = Loss(p_max, P_ADD, Y)  (1)
在公式(1)中，L_ADD表示与设备新增这一预设变化类型对应的子损失函数；p_max表示监控图像的像素大小的最大值；P_ADD表示待训练的神经网络模型所预测的变化类型为设备新增的概率；Y表示在步骤S610中的训练图像自带的标签。通过公式(1)所示的子损失函数，可以将待训练的神经网络模型执行预测任务后预测变化类型为设备新增的概率以及执行量化任务后预测的对应设备新增的变化量，与标签中所携带的信息比较，从而作为调整该待训练的神经网络模型的参数的基础。
请继续参阅图6,在步骤S630中,多个预设变化类型包括设备删除,设备删除所对应的变化量包括监控图像的像素大小的最大值的负值。与设备删除这一预设变化类型对应的子损失函数参考公式(2)。
L_DEL = Loss(-p_max, P_DEL, Y)  (2)
在公式(2)中，L_DEL表示与设备删除这一预设变化类型对应的子损失函数；-p_max表示监控图像的像素大小的最大值的负值；P_DEL表示待训练的神经网络模型所预测的变化类型为设备删除的概率；Y表示在步骤S610中的训练图像自带的标签。通过公式(2)所示的子损失函数，可以将待训练的神经网络模型执行预测任务后预测变化类型为设备删除的概率以及执行量化任务后预测的对应设备删除的变化量，与标签中所携带的信息比较，从而作为调整该待训练的神经网络模型的参数的基础。
请继续参阅图6,在步骤S630中,多个预设变化类型包括设备移动,设备移动所对应的变化量包括发生变化的设备的中心点的移动距离。与设备移动这一预设变化类型对应的子损失函数参考公式(3)。
L_MOV = Loss(Δd, P_MOV, Y)  (3)
在公式(3)中，L_MOV表示与设备移动这一预设变化类型对应的子损失函数；Δd表示发生变化的设备的中心点的移动距离；P_MOV表示待训练的神经网络模型所预测的变化类型为设备移动的概率；Y表示在步骤S610中的训练图像自带的标签。通过公式(3)所示的子损失函数，可以将待训练的神经网络模型执行预测任务后预测变化类型为设备移动的概率以及执行量化任务后预测的对应设备移动的变化量，与标签中所携带的信息比较，从而作为调整该待训练的神经网络模型的参数的基础。
请继续参阅图6,在步骤S630中,多个预设变化类型包括设备旋转,设备旋转所对应的变化量包括发生变化的设备的边缘与中心点的连线的转向距离。与设备旋转这一预设变化类型对应的子损失函数参考公式(4)。
L_ROTATE = Loss(ΔA, P_ROTATE, Y)  (4)
在公式(4)中，L_ROTATE表示与设备旋转这一预设变化类型对应的子损失函数；ΔA表示发生变化的设备的边缘与中心点的连线的转向距离；P_ROTATE表示待训练的神经网络模型所预测的变化类型为设备旋转的概率；Y表示在步骤S610中的训练图像自带的标签。通过公式(4)所示的子损失函数，可以将待训练的神经网络模型执行预测任务后预测变化类型为设备旋转的概率以及执行量化任务后预测的对应设备旋转的变化量，与标签中所携带的信息比较，从而作为调整该待训练的神经网络模型的参数的基础。
请继续参阅图6,在步骤S630中,多个预设变化类型包括设备同时移动和旋转,设备同时移动和旋转所对应的变化量包括发生变化的设备的中心点的移动距离以及发生变化的设备的边缘与中心点的连线的转向距离。与设备同时移动和旋转这一预设变化类型对应的子损失函数参考公式(5)。
L_MOV_ROTATE = Loss(Δd+ΔA, f(P_MOV, P_ROTATE), Y)  (5)
在公式(5)中，L_MOV_ROTATE表示与设备同时移动和旋转这一预设变化类型对应的子损失函数；Δd表示发生变化的设备的中心点的移动距离；ΔA表示发生变化的设备的边缘与中心点的连线的转向距离；P_MOV表示待训练的神经网络模型所预测的变化类型为设备移动的概率；P_ROTATE表示待训练的神经网络模型所预测的变化类型为设备旋转的概率；f(P_MOV, P_ROTATE)表示设备同时移动和旋转的联合概率，其可以理解成是P_MOV和P_ROTATE相乘或者其他常规技术中计算联合概率的表达式；Y表示在步骤S610中的训练图像自带的标签。通过公式(5)所示的子损失函数，可以将待训练的神经网络模型执行预测任务后预测变化类型为设备同时移动和旋转的概率以及执行量化任务后预测的对应设备同时移动和旋转的变化量，与标签中所携带的信息比较，从而作为调整该待训练的神经网络模型的参数的基础。
步骤S640:将多个子损失函数进行加权相加得到总损失函数。
其中,将在步骤S630所计算得到的各个子损失函数,通过超参数作为权重进行加权相加,得到总损失函数,参考公式(6)。
L_ALL = α_1·L_ADD + α_2·L_DEL + α_3·L_MOV + α_4·L_ROTATE + α_5·L_MOV_ROTATE  (6)
在公式(6)中，L_ADD表示与设备新增这一预设变化类型对应的子损失函数；L_DEL表示与设备删除这一预设变化类型对应的子损失函数；L_MOV表示与设备移动这一预设变化类型对应的子损失函数；L_ROTATE表示与设备旋转这一预设变化类型对应的子损失函数；L_MOV_ROTATE表示与设备同时移动和旋转这一预设变化类型对应的子损失函数；α_1至α_5表示与各个子损失函数对应的作为权重系数的超参数；L_ALL表示总损失函数。
步骤S650:通过总损失函数来调整神经网络模型的参数,得到已训练的神经网络模型。
其中，在步骤S640得到的总损失函数，通过常规的调整神经网络模型的算法例如反向传播算法和梯度下降算法，可以实现根据总损失函数的输出来调整神经网络模型的参数，进而在多次迭代调整后得到已训练的神经网络模型。
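总损失函数的加权求和与参数调整可以参考下面的示意性PyTorch片段；其中各子损失函数的具体形式、权重超参数以及模型和优化器均为假设，仅用于说明加权相加与反向传播的一般流程：

```python
import torch

def total_loss(sub_losses, weights):
    """示意性总损失：sub_losses为[L_ADD, L_DEL, L_MOV, L_ROTATE, L_MOV_ROTATE]对应的张量，
    weights为对应的超参数α_1至α_5（均为假设输入）。"""
    return sum(w * l for w, l in zip(weights, sub_losses))

# 假设model为待训练的神经网络模型，optimizer为随机梯度下降优化器：
# optimizer = torch.optim.SGD(model.parameters(), lr=1e-3)
# loss = total_loss(sub_losses, [1.0, 1.0, 1.0, 1.0, 1.0])
# optimizer.zero_grad()
# loss.backward()     # 反向传播
# optimizer.step()    # 梯度下降更新神经网络模型参数
```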
在一种可能的实施方式中,总损失函数还可以包括根据发生变化的设备在训练图像中所在区域而计算的其它损失函数,从而优化训练效果。
如此,结合附图6所示的各个步骤,与多个预设变化类型一一对应的多个子损失函数的加权之和得到总损失函数,再通过总损失函数来调整神经网络模型的参数,可以得到已训练的神经网络模型,该已训练的神经网络模型输出的结果包括发生变化的设备的变化类型以及与变化类型对应的变化量,有利于快速识别变化类型并输出变化量。
请参阅图7,图7是本申请实施例提供的图6所示的已训练神经网络模型的结构框图。应当理解的是,图7只是示意性示出了一种可能的结构,不应理解为唯一结构。如图7所示,卷积神经网络模型700可以包括输入层710,卷积层/池化层720,其中池化层为可选的,以及神经网络层730。
下面详细描述卷积层/池化层720的结构。
如图7所示卷积层/池化层720可以包括如示例721-726层,在一种实现方式中,721层为卷积层,722层为池化层,723层为卷积层,724层为池化层,725为卷积层,726为池化层;在另一种实现方式中,721、722为卷积层,723为池化层,724、725为卷积层,726为池化层。即卷积层的输出可以作为随后的池化层的输入,也可以作为另一个卷积层的输入以继续进行卷积操作。
以卷积层721为例，卷积层721可以包括很多个卷积算子，卷积算子也称为核，其在图像处理中的作用相当于一个从输入图像矩阵中提取特定信息的过滤器，卷积算子本质上可以是一个权重矩阵，这个权重矩阵通常被预先定义，在对图像进行卷积操作的过程中，权重矩阵通常在输入图像上沿着水平方向一个像素接着一个像素（或两个像素接着两个像素，取决于步长的取值）的进行处理，从而完成从图像中提取特定特征的工作。该权重矩阵的大小应该与图像的大小相关。需要注意的是，权重矩阵的纵深维度和输入图像的纵深维度是相同的，在进行卷积运算的过程中，权重矩阵会延伸到输入图像的整个深度。因此，和一个单一的权重矩阵进行卷积会产生一个单一纵深维度的卷积化输出，但是大多数情况下不使用单一权重矩阵，而是应用维度相同的多个权重矩阵。每个权重矩阵的输出被堆叠起来形成卷积图像的纵深维度。不同的权重矩阵可以用来提取图像中不同的特征，例如一个权重矩阵用来提取图像边缘信息，另一个权重矩阵用来提取图像的特定颜色，又一个权重矩阵用来对图像中不需要的噪点进行模糊化，该多个权重矩阵维度相同，经过该多个维度相同的权重矩阵提取后的特征图维度也相同，再将提取到的多个维度相同的特征图合并形成卷积运算的输出。这些权重矩阵中的权重值在实际应用中需要经过大量的训练得到，通过训练得到的权重值形成的各个权重矩阵可以从输入图像中提取信息，从而帮助卷积神经网络700进行正确的预测。
当卷积神经网络700有多个卷积层的时候,初始的卷积层(例如721)往往提取较多的一般特征,该一般特征也可以称之为低级别的特征;随着卷积神经网络700深度的加深,越往后的卷积层(例如726)提取到的特征越来越复杂,比如高级别的语义之类的特征,语义越高的特征越适用于待解决的问题。
由于常常需要减少训练参数的数量,因此卷积层之后常常需要周期性的引入池化层,即如图7中720所示例的721-726各层,可以是一层卷积层后面跟一层池化层,也可以是多层卷积层后面接一层或多层池化层。在图像处理过程中,池化层的唯一目的就是减少图像的空间大小。池化层可以包括平均池化算子和/或最大池化算子,以用于对输入图像进行采样得到较小尺寸的图像。平均池化算子可以在特定范围内对图像中的像素值进行计算产生平均值。最大池化算子可以在特定范围内取该范围内值最大的像素作为最大池化的结果。另外,就像卷积层中用权重矩阵的大小应该与图像大小相关一样,池化层中的运算符也应该与图像的大小相关。通过池化层处理后输出的图像尺寸可以小于输入池化层的图像的尺寸,池化层输出的图像中每个像素点表示输入池化层的图像的对应子区域的平均值或最大值。
下面详细描述神经网络层730的结构。
在经过卷积层/池化层720的处理后,卷积神经网络700还不足以输出所需要的输出信息。因为如前,卷积层/池化层720只会提取特征,并减少输入图像带来的参数。然而为了生成最终的输出信息(所需要的类信息或别的相关信息),卷积神经网络700需要利用神经网络层730来生成一个或者一组所需要的类的数量的输出。因此,在神经网络层730中可以包括多层隐含层(如图7所示的731、732至733)以及输出层740,该多层隐含层中所包含的参数可以根据具体的任务类型的相关训练数据进行预先训练得到,例如该任务类型可以包括图像识别,图像分类,图像超分辨率重建等等。应当理解的是,图7所示的三个隐含层1至3仅为示例性,在其他实施方式中可能包括不同数量的隐含层。
在神经网络层730中的多层隐含层之后，也就是整个卷积神经网络700的最后层为输出层740，该输出层740具有类似分类交叉熵的损失函数，具体用于计算预测误差，一旦整个卷积神经网络700的前向传播（如图7由710至740的传播为前向传播）完成，反向传播（如图7由740至710的传播为反向传播）就会开始更新前面提到的各层的权重值以及偏差，以减少卷积神经网络700的损失及卷积神经网络700通过输出层输出的结果和理想结果之间的误差。需要说明的是，如图7所示的卷积神经网络700仅作为一种卷积神经网络的示例，在具体的应用中，卷积神经网络还可以以其他网络模型的形式存在。
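作为参考，与图7所示结构类似的一个极简示意（卷积层/池化层加若干隐含层与输出层）可以用PyTorch表示如下；其中层数、通道数与类别数均为假设值，并非对图7结构的限定性实现：

```python
import torch
import torch.nn as nn

class SimpleCNN(nn.Module):
    """示意性卷积神经网络：卷积层/池化层提取特征，全连接隐含层与输出层完成分类（结构参数为假设值）。"""
    def __init__(self, num_classes=5):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
        )
        self.classifier = nn.Sequential(
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(32, 64), nn.ReLU(),   # 隐含层
            nn.Linear(64, num_classes),     # 输出层
        )

    def forward(self, x):
        return self.classifier(self.features(x))

# 输出层可配合nn.CrossEntropyLoss()（分类交叉熵）计算预测误差，再通过反向传播更新各层权重
```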
请参阅图8,图8是本申请实施例提供的站点模型更新系统的结构框图。如图8所示,站点模型更新系统800包括:图像采集设备802、接口电路804、设备变化检测装置810、处理器806以及监控图像位姿和相机参数存储器808。其中,设备变化检测装置810进一步包括神经网络处理器820、监控图像存储器812和基准图像存储器814。应当理解的是,设备变化检测装置810用于执行例如附图3所示的步骤S306检测设备变化的操作,也即用于执行对应的附图5所示的检测设备变化S506的各个步骤。设备变化检测装置810包括基准图像存储器814用于存储基准图像,也包括监控图像存储器812用于存储监控图像,还包括神经网络处理器820。神经网络处理器820存储有神经网络模型或者等同的机器学习算法,用于执行步骤S516的判断是否有设备发生变化并输出发生变化的设备所在区域、变化类型以及对应变化量。神经网络处理器820存储的神经网络模型通过附图6所示的神经网络模型的训练方法训练得到,在一种可能的实施方式中可能具有附图7所示的卷积神经网络模型700的结构。
请继续参阅图8,图像采集设备802实时拍摄站点的监控图像,通过接口电路804将监控图像存入监控图像存储器812。当设备变化检测装置810判断出有设备发生变化并输出发生变化的设备所在区域、变化类型以及对应变化量,处理器806执行附图3所示的步骤S308至S330的操作,包括计算监控图像的位姿和相机参数。处理器806还执行步骤S310的操作,判断是否需要放大设备所在区域,如果需要放大设备所在区域,则处理器806指示图像采集设备802采集放大后监控图像,然后通过处理器806计算放大后监控图像的位姿和相机参数,并最终执行步骤S330来更新站点模型。
请参阅图9,图9是本申请实施例提供的图8所示的神经网络处理器的结构框图。如图9所示,神经网络处理器920与外部存储器960和主处理器950构成整体系统架构。这里,图9所示的外部存储器960可以包括图8所示的监控图像位姿和相机参数存储器808,指的是独立于神经网络处理器920的外部存在的存储器。而图9所示的主处理器950可以包括图8所示的处理器806,可以理解成用于处理神经网络算法以外的其它任务的主处理器。如图9所示,神经网络处理器920的核心部分为运算电路903,控制器904控制运算电路903提取存储器(权重存储器或输入存储器)中的数据并进行运算。在一些实现方式中,运算电路903内部包括多个处理单元(Process Engine,PE)。在一些实现方式中,运算电路903是二维脉动阵列。运算电路903还可以是一维脉动阵列或者能够执行例如乘法和加法这样的数学运算的其它电子线路。在一些实现方式中,运算电路903是通用的矩阵处理器。
举例来说,假设有输入矩阵A,权重矩阵B,输出矩阵C。运算电路903从权重存储器902中取矩阵B相应的数据,并缓存在运算电路903中每一个PE上。运算电路903从输入存储器901中取矩阵A数据与矩阵B进行矩阵运算,得到的矩阵的部分结果或最终结果,保存在累加器908中。向量计算单元907可以对运算电路903的输出做进一步处理,如向量乘,向量加,指数运算,对数运算,大小比较等等。例如,向量计算单元907可以用于神经网络中非卷积/非FC层的网络计算,如池化(Pooling),批归一化(Batch Normalization),局部响应归一化(Local Response Normalization)等。在一些实现方式中,向量计算单元907将经处理的输出的向量存储到统一缓存器906。例如,向量计算单元907可以将非线性函数应用到运算电路903的输出,例如累加值的向量,用以生成激活值。在一些实现方式中,向量计算单元907生成归一化的值、合并值,或二者均有。在一些实现方式中,处理过的输出的向量能够用作到运算电路903的激活输入,例如用于在神经网络中的后续层中的使用。因此,根据具体的需求,图8所示的神经网络处理器,其中所运行的神经网络算法可以由图9所示的运算电路903或者向量计算单元907执行,或者由两者协同执行。
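上述"取矩阵B缓存、与矩阵A分块相乘并在累加器中累加部分结果"的过程，可以用如下纯Python/NumPy的分块矩阵乘法示意性地理解；这只是概念类比，并非神经网络处理器的实际实现：

```python
import numpy as np

def blocked_matmul(A, B, tile=4):
    """示意性分块矩阵乘法：部分结果在累加器C中逐块累加，类比运算电路与累加器的协作。"""
    M, K = A.shape
    K2, N = B.shape
    assert K == K2
    C = np.zeros((M, N))                      # 累加器，保存部分结果或最终结果
    for k0 in range(0, K, tile):
        A_tile = A[:, k0:k0 + tile]           # 类比从输入存储器取矩阵A的分块
        B_tile = B[k0:k0 + tile, :]           # 类比从权重存储器取矩阵B的分块
        C += A_tile @ B_tile                  # 部分结果累加
    return C
```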
请参阅图9，统一存储器906用于存放输入数据以及输出数据。存储单元访问控制器905（Direct Memory Access Controller，DMAC）将外部存储器中的输入数据搬运到输入存储器901和/或统一存储器906、将外部存储器中的权重数据存入权重存储器902，以及将统一存储器906中的数据存入外部存储器。总线接口单元（Bus Interface Unit，BIU）910用于通过总线实现主CPU、DMAC和取指存储器909之间的交互。与控制器904连接的取指存储器（instruction fetch buffer）909用于存储控制器904使用的指令；控制器904用于调用取指存储器909中缓存的指令，实现控制该运算加速器的工作过程。
一般地，统一存储器906，输入存储器901，权重存储器902以及取指存储器909均为片上（On-Chip）存储器，外部存储器为该NPU外部的存储器，该外部存储器可以为双倍数据率同步动态随机存储器（Double Data Rate Synchronous Dynamic Random Access Memory，简称DDR SDRAM）、高带宽存储器（High Bandwidth Memory，HBM）或其他可读可写的存储器。
本申请提供的具体实施例可以用硬件,软件,固件或固态逻辑电路中的任何一种或组合来实现,并且可以结合信号处理,控制和/或专用电路来实现。本申请具体实施例提供的设备或装置可以包括一个或多个处理器(例如,微处理器,控制器,数字信号处理器(DSP),专用集成电路(ASIC),现场可编程门阵列(FPGA)等),这些处理器处理各种计算机可执行指令从而控制设备或装置的操作。本申请具体实施例提供的设备或装置可以包括将各个组件耦合在一起的系统总线或数据传输系统。系统总线可以包括不同总线结构中的任何一种或不同总线结构的组合,例如存储器总线或存储器控制器,外围总线,通用串行总线和/或利用多种总线体系结构中的任何一种的处理器或本地总线。本申请具体实施例提供的设备或装置可以是单独提供,也可以是系统的一部分,也可以是其它设备或装置的一部分。
本申请提供的具体实施例可以包括计算机可读存储介质或与计算机可读存储介质相结合，例如能够提供非暂时性数据存储的一个或多个存储设备。计算机可读存储介质/存储设备可以被配置为保存数据、程序和/或指令，这些数据、程序和/或指令在由本申请具体实施例提供的设备或装置的处理器执行时使这些设备或装置实现有关操作。计算机可读存储介质/存储设备可以包括以下一个或多个特征：易失性，非易失性，动态，静态，可读/写，只读，随机访问，顺序访问，位置可寻址性，文件可寻址性和内容可寻址性。在一个或多个示例性实施例中，计算机可读存储介质/存储设备可以被集成到本申请具体实施例提供的设备或装置中或属于公共系统。计算机可读存储介质/存储设备可以包括光存储设备，半导体存储设备和/或磁存储设备等等，也可以包括随机存取存储器（RAM），闪存，只读存储器（ROM），可擦可编程只读存储器（EPROM），电可擦可编程只读存储器（EEPROM），寄存器，硬盘，可移动磁盘，可记录和/或可重写光盘（CD），数字多功能光盘（DVD），大容量存储介质设备或任何其他形式的合适存储介质。
以上是本申请实施例的实施方式,应当指出,本申请具体实施例描述的方法中的步骤可以根据实际需要进行顺序调整、合并和删减。在上述实施例中,对各个实施例的描述都各有侧重,某个实施例中没有详细描述的部分,可以参见其他实施例的相关描述。可以理解的是,本申请实施例以及附图所示的结构并不构成对有关装置或系统的具体限定。在本申请另一些实施例中,有关装置或系统可以包括比具体实施例和附图更多或更少的部件,或者组合某些部件,或者拆分某些部件,或者具有不同的部件布置。本领域技术人员将理解,在不脱离本申请具体实施例的精神和范围的情况下,可以对具体实施例记载的方法和设备的布置,操作和细节进行各种修改或变化;在不脱离本申请实施例原理的前提下,还可以做出若干改进和润饰,这些改进和润饰也视为本申请的保护范围。

Claims (28)

  1. 一种站点模型更新方法,其特征在于,所述方法包括:
    获取监控图像,通过获取到的所述监控图像,确定发生变化的设备的变化类型以及与所述变化类型对应的变化量;
    根据所述监控图像和站点模型,计算所述监控图像的位姿和相机参数;
    根据所述监控图像的位姿和相机参数,确定所述发生变化的设备的位姿;以及
    根据所述发生变化的设备的位姿、所述变化类型以及与所述变化类型对应的变化量,更新所述站点模型。
  2. 根据权利要求1所述的方法,其特征在于,所述通过获取到的所述监控图像,确定发生变化的设备的变化类型以及与所述变化类型对应的变化量包括:
    通过将所述监控图像输入神经网络模型从而确定所述发生变化的设备的变化类型以及与所述变化类型对应的所述变化量,所述变化类型是多个预设变化类型中的一个预设变化类型。
  3. 根据权利要求2所述的方法,其特征在于,所述神经网络模型通过使用损失函数训练得到,
    其中,所述损失函数包括多个子损失函数的加权之和,
    所述多个子损失函数与所述多个预设变化类型一一对应,
    所述多个子损失函数的每一个子损失函数根据与该子损失函数对应的预设变化类型所对应的变化量确定。
  4. 根据权利要求2或3所述的方法,其特征在于,所述多个预设变化类型包括设备新增,所述设备新增所对应的变化量包括所述监控图像的像素大小的最大值。
  5. 根据权利要求2或3所述的方法,其特征在于,所述多个预设变化类型包括设备删除,所述设备删除所对应的变化量包括所述监控图像的像素大小的最大值的负值。
  6. 根据权利要求2或3所述的方法,其特征在于,所述多个预设变化类型包括设备移动,所述设备移动所对应的变化量包括所述发生变化的设备的中心点的移动距离。
  7. 根据权利要求2或3所述的方法,其特征在于,所述多个预设变化类型包括设备旋转,所述设备旋转所对应的变化量包括所述发生变化的设备的边缘与中心点的连线的转向距离。
  8. 根据权利要求2或3所述的方法,其特征在于,所述多个预设变化类型包括设备同时移动和旋转,所述设备同时移动和旋转所对应的变化量包括所述发生变化的设备的中心点的移动距离以及所述发生变化的设备的边缘与中心点的连线的转向距离。
  9. 根据权利要求1-8任一所述的方法,其特征在于,所述方法还包括:
    确定所述发生变化的设备所在区域在所述监控图像中的占比;
    比较所述占比和预设占比;
    当所述占比小于所述预设占比时,获得放大后监控图像;
    根据所述放大后监控图像和站点模型,计算所述放大后监控图像的位姿和相机参数;
    根据所述放大后监控图像的位姿和相机参数,更新所述站点模型。
  10. 根据权利要求9所述的方法,其特征在于,
    所述放大后监控图像根据放大倍数获得,所述放大倍数根据所述占比和所述预设占比确定。
  11. 根据权利要求10所述的方法，其特征在于，
    所述放大后监控图像的位姿和相机参数根据所述放大倍数、所述监控图像的位姿和相机参数确定。
  12. 一种芯片系统,其特征在于,所述芯片系统应用于电子设备;所述芯片系统包括一个或多个接口电路,以及一个或多个处理器;所述接口电路和所述处理器通过线路互联;所述接口电路用于从所述电子设备的存储器接收信号,并向所述处理器发送所述信号,所述信号包括所述存储器中存储的计算机指令;当所述处理器执行所述计算机指令时,所述电子设备执行如权利要求1-11中任意一项所述方法。
  13. 一种计算机可读存储介质,其特征在于,所述计算机可读存储介质存储有计算机程序指令,所述计算机程序指令当被处理器执行时使所述处理器执行如权利要求1-11中任一项所述的方法。
  14. 一种计算机程序产品,其特征在于,所述计算机程序产品包括计算机指令,当所述计算机指令在电子设备上运行时,使得所述电子设备执行如权利要求1-11中任一项所述的方法。
  15. 一种站点模型更新系统,其特征在于,所述系统包括:
    设备变化检测装置,其中,所述设备变化检测装置通过监控图像,确定发生变化的设备的变化类型以及与所述变化类型对应的变化量;和
    处理器,其中,所述处理器用于:
    获取所述监控图像;
    根据所述监控图像和站点模型,计算所述监控图像的位姿和相机参数;
    根据所述监控图像的位姿和相机参数,确定所述发生变化的设备的位姿;以及
    根据所述发生变化的设备的位姿、所述变化类型以及与所述变化类型对应的变化量,更新所述站点模型。
  16. 根据权利要求15所述的系统,其特征在于,所述设备变化检测装置存储有神经网络模型,通过将所述监控图像输入所述神经网络模型从而确定所述发生变化的设备的变化类型以及与所述变化类型对应的所述变化量,所述变化类型是多个预设变化类型中的一个预设变化类型。
  17. 根据权利要求16所述的系统，其特征在于，所述神经网络模型通过使用损失函数训练得到，
    其中,所述损失函数包括多个子损失函数的加权之和,
    所述多个子损失函数与所述多个预设变化类型一一对应,
    所述多个子损失函数的每一个子损失函数根据与该子损失函数对应的预设变化类型所对应的变化量确定。
  18. 根据权利要求16或17所述的系统,其特征在于,所述多个预设变化类型包括设备新增,所述设备新增所对应的变化量包括所述监控图像的像素大小的最大值。
  19. 根据权利要求16或17所述的系统,其特征在于,所述多个预设变化类型包括设备删除,所述设备删除所对应的变化量包括所述监控图像的像素大小的最大值的负值。
  20. 根据权利要求16或17所述的系统,其特征在于,所述多个预设变化类型包括设备移动,所述设备移动所对应的变化量包括所述发生变化的设备的中心点的移动距离。
  21. 根据权利要求16或17所述的系统,其特征在于,所述多个预设变化类型包括设备旋转,所述设备旋转所对应的变化量包括所述发生变化的设备的边缘与中心点的连线的转向距离。
  22. 根据权利要求16或17所述的系统,其特征在于,所述多个预设变化类型包括设备同时移动和旋转,所述设备同时移动和旋转所对应的变化量包括所述发生变化的设备的中心点的移动距离以及所述发生变化的设备的边缘与中心点的连线的转向距离。
  23. 根据权利要求15-22任一所述的系统，其特征在于，所述处理器还用于：
    确定所述发生变化的设备所在区域在所述监控图像中的占比;
    比较所述占比和预设占比;
    当所述占比小于所述预设占比时,获得放大后监控图像;
    根据所述放大后监控图像和站点模型,计算所述放大后监控图像的位姿和相机参数;
    根据所述放大后监控图像的位姿和相机参数,更新所述站点模型。
  24. 根据权利要求23所述的系统,其特征在于,
    所述放大后监控图像根据放大倍数获得,所述放大倍数根据所述占比和所述预设占比确定。
  25. 根据权利要求24所述的系统，其特征在于，
    所述放大后监控图像的位姿和相机参数根据所述放大倍数、所述监控图像的位姿和相机参数确定。
  26. 一种光伏发电系统,其特征在于,所述光伏发电系统包括根据权利要求15-25任一项所述的站点模型更新系统,所述光伏发电系统通过所述站点模型更新系统来监控所述光伏发电系统的变化,所述站点对应所述光伏发电系统。
  27. 一种通讯中转系统,其特征在于,所述通讯中转系统包括根据权利要求15-25任一项所述的站点模型更新系统,所述通讯中转系统通过所述站点模型更新系统来监控所述通讯中转系统的变化,所述站点对应所述通讯中转系统。
  28. 一种交通指挥系统,其特征在于,所述交通指挥系统包括根据权利要求15-25任一项所述的站点模型更新系统,所述交通指挥系统通过所述站点模型更新系统来监控所述交通指挥系统的变化,所述站点对应所述交通指挥系统。
PCT/CN2021/134154 2020-12-16 2021-11-29 站点模型更新方法及系统 WO2022127576A1 (zh)

Priority Applications (2)

Application Number Priority Date Filing Date Title
EP21905505.0A EP4199498A4 (en) 2020-12-16 2021-11-29 SITE MODEL UPDATE METHOD AND SYSTEM
US18/336,101 US20230334774A1 (en) 2020-12-16 2023-06-16 Site model updating method and system

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN202011487305.1A CN114640785A (zh) 2020-12-16 2020-12-16 站点模型更新方法及系统
CN202011487305.1 2020-12-16

Related Child Applications (1)

Application Number Title Priority Date Filing Date
US18/336,101 Continuation US20230334774A1 (en) 2020-12-16 2023-06-16 Site model updating method and system

Publications (1)

Publication Number Publication Date
WO2022127576A1 true WO2022127576A1 (zh) 2022-06-23

Family

ID=81945419

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2021/134154 WO2022127576A1 (zh) 2020-12-16 2021-11-29 站点模型更新方法及系统

Country Status (4)

Country Link
US (1) US20230334774A1 (zh)
EP (1) EP4199498A4 (zh)
CN (1) CN114640785A (zh)
WO (1) WO2022127576A1 (zh)

Family Cites Families (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
EP1297691A2 (en) * 2000-03-07 2003-04-02 Sarnoff Corporation Camera pose estimation
US20070065002A1 (en) * 2005-02-18 2007-03-22 Laurence Marzell Adaptive 3D image modelling system and apparatus and method therefor
CN111462316B (zh) * 2020-04-20 2023-06-20 国网河北省电力有限公司培训中心 一种光伏电站三维全景监视方法和装置

Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2002051156A (ja) * 2000-07-12 2002-02-15 Sadao Takaoka 移動体通信装置による監視システム
CN103702071A (zh) * 2013-12-11 2014-04-02 国家电网公司 基于rfid技术的变电站设备视频监控方法
CN105141912A (zh) * 2015-08-18 2015-12-09 浙江宇视科技有限公司 一种信号灯重定位的方法及设备
CN110473259A (zh) * 2019-07-31 2019-11-19 深圳市商汤科技有限公司 位姿确定方法及装置、电子设备和存储介质

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
See also references of EP4199498A4

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114979301A (zh) * 2022-07-28 2022-08-30 成都锐菲网络科技有限公司 公安视图库与交警集指协议替身数据实时共享方法及系统

Also Published As

Publication number Publication date
EP4199498A1 (en) 2023-06-21
CN114640785A (zh) 2022-06-17
EP4199498A4 (en) 2024-03-20
US20230334774A1 (en) 2023-10-19

Similar Documents

Publication Publication Date Title
JP6898534B2 (ja) 機械学習におけるデータ・ストレージを低減するためのシステムおよび方法
WO2021164234A1 (zh) 图像处理方法以及图像处理装置
CN113076871B (zh) 一种基于目标遮挡补偿的鱼群自动检测方法
JP7284352B2 (ja) リアルタイムオブジェクト検出及び語意分割の同時行いシステム及び方法及び非一時的なコンピュータ可読媒体
CN113936256A (zh) 一种图像目标检测方法、装置、设备以及存储介质
CN113065645B (zh) 孪生注意力网络、图像处理方法和装置
CN113066017B (zh) 一种图像增强方法、模型训练方法及设备
CN112990211A (zh) 一种神经网络的训练方法、图像处理方法以及装置
CN109543691A (zh) 积水识别方法、装置以及存储介质
WO2021249114A1 (zh) 目标跟踪方法和目标跟踪装置
CN112037142B (zh) 一种图像去噪方法、装置、计算机及可读存储介质
CN113486887B (zh) 三维场景下的目标检测方法和装置
WO2022052782A1 (zh) 图像的处理方法及相关设备
WO2023125628A1 (zh) 神经网络模型优化方法、装置及计算设备
CN113850136A (zh) 基于yolov5与BCNN的车辆朝向识别方法及系统
CN115953643A (zh) 基于知识蒸馏的模型训练方法、装置及电子设备
WO2022127576A1 (zh) 站点模型更新方法及系统
CN104463962B (zh) 基于gps信息视频的三维场景重建方法
Wang et al. Object counting in video surveillance using multi-scale density map regression
CN117593702B (zh) 远程监控方法、装置、设备及存储介质
Liu et al. Two-stream refinement network for RGB-D saliency detection
CN116258756B (zh) 一种自监督单目深度估计方法及系统
CN114820755B (zh) 一种深度图估计方法及系统
CN116664694A (zh) 图像亮度获取模型的训练方法、图像获取方法及移动终端
CN115249269A (zh) 目标检测方法、计算机程序产品、存储介质及电子设备

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 21905505

Country of ref document: EP

Kind code of ref document: A1

ENP Entry into the national phase

Ref document number: 2021905505

Country of ref document: EP

Effective date: 20230316

NENP Non-entry into the national phase

Ref country code: DE