CN117437602B - Dual-layer data calibration method, device, equipment and readable storage medium - Google Patents


Info

Publication number
CN117437602B
CN117437602B (application CN202311765774.9A)
Authority
CN
China
Prior art keywords: layer data, marked, data, analyzed, top layer
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202311765774.9A
Other languages
Chinese (zh)
Other versions
CN117437602A (en)
Inventor
何宝东
龚贺
张百喆
黄诗扬
魏菁杨
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Guangzhou Tianyi Technology Co ltd
Original Assignee
Guangzhou Tianyi Technology Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Guangzhou Tianyi Technology Co ltd filed Critical Guangzhou Tianyi Technology Co ltd
Priority to CN202311765774.9A priority Critical patent/CN117437602B/en
Publication of CN117437602A publication Critical patent/CN117437602A/en
Application granted granted Critical
Publication of CN117437602B publication Critical patent/CN117437602B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical


Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06V: IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00: Scenes; Scene-specific elements
    • G06V20/50: Context or environment of the image
    • G06V20/52: Surveillance or monitoring of activities, e.g. for recognising suspicious objects
    • G06V10/00: Arrangements for image or video recognition or understanding
    • G06V10/70: Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V10/77: Processing image or video features in feature spaces; using data integration or data reduction, e.g. principal component analysis [PCA] or independent component analysis [ICA] or self-organising maps [SOM]; blind source separation
    • G06V10/80: Fusion, i.e. combining data from various sources at the sensor level, preprocessing level, feature extraction level or classification level
    • G06V10/94: Hardware or software architectures specially adapted for image or video understanding

Abstract

The application provides a dual-layer data calibration method, device, equipment and readable storage medium. The data calibration method provided by the embodiments of the application is suitable for calibrating 3D perception point cloud data formed by laser and millimeter-wave radars when out-of-range behavior may exist in the layer data. The method can combine two-dimensional data provided by the client with the 3D point cloud data to be marked, acquired by different sensing kits, to perform area calibration, and can issue alarms of different severity for out-of-range behavior, so that out-of-range behavior in the acquired data is identified more accurately. Compared with traditional calibration of out-of-range behavior in data, the data calibration method provided by the embodiments of the application can effectively improve the accuracy and precision of calibrating the data to be calibrated.

Description

Dual-layer data calibration method, device, equipment and readable storage medium
Technical Field
The present disclosure relates to the field of data calibration technologies, and in particular, to a method, an apparatus, a device, and a readable storage medium for calibrating dual layer data.
Background
In industrial application scenarios, machine-vision-based recognition of out-of-range behavior mostly adopts two-dimensional machine vision calibration: a recognized object is judged to be out of range once it enters the corresponding calibrated range. This approach suffers from error and precision problems when calibrating the out-of-range behavior of a monitored object. For example, depending on the deployment position of the image capturing apparatus, a false alarm may be triggered when merely the projection of a person or object appears in the area to be calibrated. In the monitoring scenario of an unmanned, lights-out factory, when recognizing specific behaviors of vehicles and robots, out-of-range monitoring must be performed on a specific 3D spatial region; the accuracy requirement on the monitoring range is high, and current two-dimensional machine vision calibration cannot meet it. It can be seen that 3D perception, or fused video and 3D perception, is the future development trend of out-of-range behavior monitoring.
Disclosure of Invention
The present application aims to solve at least one of the above technical drawbacks. Accordingly, the present application provides a dual-layer data calibration method, apparatus, device, and readable storage medium for addressing the technical defect of inaccurate calibration of picture data in the prior art.
A dual layer data calibration method comprises the following steps:
acquiring bottom layer image data and top layer image data to be marked;
performing alignment analysis on the acquired bottom layer data and top layer data to be marked, and determining areas to be analyzed in the bottom layer data and the top layer data to be marked;
setting detection heights for the areas to be analyzed in the bottom layer data and the top layer data to be marked, so as to form the identification height of the 3D area to be marked, and judging, through the identification height of the 3D area to be marked, whether the coordinate information of the area where the behavior of a person or object in the areas to be analyzed is located exceeds the area alarm threshold corresponding to the areas to be analyzed in the bottom layer data and the top layer data to be marked;
setting a region alarm threshold corresponding to a region to be analyzed in the bottom layer data to be marked and the top layer data;
determining coordinate information of the areas to be analyzed in the bottom layer data and the top layer data to be marked;
judging whether the coordinate information of the region to be analyzed in the bottom layer data and the top layer data to be marked exceeds a set region alarm threshold value;
and if the coordinate information of the areas to be analyzed in the bottom layer data and the top layer data to be marked exceeds the set area alarm threshold, sending out an alarm corresponding to the exceeded area alarm threshold.
Preferably, the aligning analysis of the acquired bottom layer data and top layer data to be marked, and the determining the manner of the area to be analyzed in the bottom layer data and the top layer data to be marked include:
aligning the acquired bottom layer image layer data to be marked with the top layer image layer data to obtain target bottom layer image layer data to be marked with the top layer image layer data;
delineating the area to be analyzed in the target bottom layer data and top layer data to be marked, and determining the delineated area as the area to be analyzed in the target bottom layer data and top layer data to be marked;
and/or,
selecting the abscissas and ordinates of the area to be analyzed in the target bottom layer data and top layer data to be marked, and taking the region formed by connecting the selected abscissas and ordinates as the area to be analyzed in the target bottom layer data and top layer data to be marked.
Preferably, the area alarm threshold corresponding to the area to be analyzed in the bottom layer data and the top layer data to be marked comprises a first-level alarm threshold, a second-level alarm threshold and a third-level alarm threshold, where the first-level alarm threshold is an alarm threshold for general out-of-range behavior in the layer data, the second-level alarm threshold is an alarm threshold for serious out-of-range behavior in the layer data, and the third-level alarm threshold is an alarm threshold for fatal out-of-range behavior in the layer data.
Preferably, if the coordinate information of the area to be analyzed in the bottom layer data and the top layer data to be marked exceeds the set area alarm threshold, sending an alarm corresponding to the exceeded area alarm threshold, including:
if the coordinate information of the areas to be analyzed in the bottom layer data and the top layer data to be marked exceeds the set first-level alarm threshold, sending out a first-level alarm;
if the coordinate information of the areas to be analyzed in the bottom layer data and the top layer data to be marked exceeds the set second-level alarm threshold, sending out a second-level alarm;
and if the coordinate information of the areas to be analyzed in the bottom layer data and the top layer data to be marked exceeds the set third-level alarm threshold, sending out a third-level alarm.
Preferably, the method further comprises:
and storing the layer data corresponding to the alarm.
Preferably, the method further comprises:
setting display attributes of the bottom layer image layer data and the top layer image layer data to be marked;
and if the display attribute of the bottom layer data and the top layer data to be marked is display, displaying the bottom layer data and/or the top layer data to be marked.
A dual layer data calibration system for use in any of the dual layer data calibration methods described above, the system comprising: a multi-source heterogeneous sensing platform and at least one multi-source heterogeneous sensing suite;
wherein,
the multi-source heterogeneous sensing suite is responsible for acquiring top layer image layer data to be marked;
the multi-source heterogeneous sensing platform is responsible for acquiring the bottom layer data to be marked; performing alignment analysis on the acquired bottom layer data and top layer data to be marked, and determining the areas to be analyzed in both; setting detection heights for the areas to be analyzed so as to form the identification height of the 3D area to be marked, and judging, through that height, whether the coordinate information of the area where the behavior of a person or object in the areas to be analyzed is located exceeds the corresponding area alarm threshold; setting the area alarm threshold corresponding to the areas to be analyzed in the bottom layer data and the top layer data to be marked; determining the coordinate information of the areas to be analyzed; judging whether that coordinate information exceeds the set area alarm threshold; and, if it does, sending out an alarm corresponding to the exceeded area alarm threshold.
Preferably, each multi-source heterogeneous sensing kit comprises at least one sensing component, wherein the sensing components comprise a laser radar sensing component, a millimeter wave radar sensing component and a thermal camera sensing component;
based on the method, the mode of acquiring the top layer image layer data to be marked by the multi-source heterogeneous sensing suite comprises the mode of acquiring by one or more sensing modes of a laser radar sensing mode, a millimeter wave radar sensing mode and a thermal camera sensing mode.
A dual layer data calibration device comprising: one or more processors, and memory;
the memory has stored therein computer readable instructions which, when executed by the one or more processors, implement the steps of the dual layer data calibration method as described in any of the preceding introduction.
A readable storage medium having stored therein computer readable instructions which, when executed by one or more processors, cause the one or more processors to implement the steps of the dual layer data calibration method of any of the preceding introduction.
According to the technical scheme above, when out-of-range behavior in picture data needs to be marked, the method provided by the embodiments of the application can acquire the bottom layer data and the top layer data to be marked, and perform alignment analysis on them so that the top-view projection of the top layer to be marked onto the bottom layer overlaps it; the areas to be analyzed in the bottom layer data and the top layer data to be marked can thus be determined. Meanwhile, in order to judge out-of-range behavior in the picture data, detection heights can be set for the areas to be analyzed, forming the identification height of the 3D area to be marked; through this height, it is judged whether the coordinate information of the area where the behavior of a person or object in the areas to be analyzed is located exceeds the area alarm threshold corresponding to the areas to be analyzed. The area alarm threshold corresponding to the areas to be analyzed can also be set, so that whether an object in the acquired layer data exceeds the area alarm threshold can be better analyzed.
After setting the area alarm threshold corresponding to the areas to be analyzed in the bottom layer data and the top layer data to be marked, the coordinate information of those areas can be further determined, and it can then be judged whether that coordinate information exceeds the set area alarm threshold. If it does, out-of-range behavior exists in the layer data, and an alarm corresponding to the exceeded area alarm threshold can be sent out.
Therefore, when it is necessary to determine whether out-of-range behavior exists in the layer data, the data calibration method provided by the embodiments of the application is suitable for calibrating 3D perception point cloud data formed by laser and millimeter-wave radars. The method can combine two-dimensional data provided by the client with the 3D point cloud data to be marked, acquired through different sensing kits, to perform area calibration, and can issue alarms of different severity for out-of-range behavior, so that out-of-range behavior in the acquired data is identified more accurately. Compared with traditional calibration of out-of-range behavior in data, the method can effectively improve the accuracy and precision of calibrating the data to be calibrated. Furthermore, the method can use an existing CAD drawing as the bottom layer for marking X-axis and Y-axis coordinates, and can thereby support monitoring of 3D data acquired by sensing equipment; it can mark a 3D area without complex, precise 3D modeling, is simple and convenient, and matches existing business processes and technical development trends.
Drawings
In order to more clearly illustrate the embodiments of the present application or the technical solutions in the prior art, the drawings that are required in the embodiments or the description of the prior art will be briefly described below, it being obvious that the drawings in the following description are only some embodiments of the present application, and that other drawings may be obtained according to these drawings without inventive faculty for a person skilled in the art.
FIG. 1 is a system frame structure diagram for calibrating out-of-range behavior in dual-layer data according to an embodiment of the present application;
FIG. 2 is a flowchart of a method for implementing dual layer data calibration according to an embodiment of the present application;
fig. 3 is a hardware structure block diagram of a dual layer data calibration device according to an embodiment of the present application.
Detailed Description
The following description of the embodiments of the present application will be made clearly and fully with reference to the accompanying drawings, in which it is evident that the embodiments described are only some, but not all, of the embodiments of the present application. All other embodiments, which can be made by one of ordinary skill in the art without undue burden from the present disclosure, are within the scope of the present disclosure.
In view of the fact that most existing dual-layer data calibration schemes are difficult to adapt to complex and changeable business requirements, the applicant has researched a dual-layer data calibration scheme. When it is necessary to determine whether out-of-range behavior exists in the layer data, the data calibration method provided by the embodiments of the application is suitable for calibrating 3D perception point cloud data formed by laser and millimeter-wave radars. The method can combine two-dimensional data provided by the client with the 3D point cloud data to be marked, acquired through different sensing kits, to perform area calibration, and can issue alarms of different severity for out-of-range behavior, so that out-of-range behavior in the acquired data is identified more accurately. Compared with traditional calibration of out-of-range behavior in data, the method can effectively improve the accuracy and precision of calibrating the data to be calibrated. Furthermore, the method can use an existing CAD drawing as the bottom layer for marking X-axis and Y-axis coordinates, and can thereby support monitoring of 3D data acquired by sensing equipment; it can mark a 3D area without complex, precise 3D modeling, is simple and convenient, and matches existing business processes and technical development trends.
The methods provided by the embodiments of the present application may be used in a wide variety of general-purpose or special-purpose computing environments or configurations, for example: personal computers, server computers, hand-held or portable devices, tablet devices, multiprocessor devices, distributed computing environments that include any of the above systems or devices, and the like.
The embodiment of the application provides a double-layer data calibration scheme, which can be applied to various data management systems or out-of-range behavior monitoring systems, and also can be applied to various computer terminals or intelligent terminals, wherein an execution subject can be a processor or a server of the computer terminal or the intelligent terminal.
An optional system architecture for implementing out-of-range data calibration according to an embodiment of the present application is described below with reference to fig. 1, where, as shown in fig. 1, the system architecture may include: a multi-source heterogeneous sensing platform and at least one multi-source heterogeneous sensing suite;
wherein,
the multi-source heterogeneous sensing suite can be responsible for acquiring top-layer image layer data to be marked;
wherein,
the top layer data may be 3D point cloud data.
Each multi-source heterogeneous sensing kit can include at least one sensing component;
wherein,
The sensing component may include a lidar sensing component, a millimeter wave radar sensing component, a thermal camera sensing component.
The multi-source heterogeneous sensing suite can be a device capable of supporting multiple sensing modes, and the device can support multiple sensing devices and also comprises an edge computing server which can be integrated with the multiple sensing devices.
Wherein,
the sensing component may be a sensing sensor that can support a variety of sensing modes.
Wherein,
each sensing sensor can acquire the data of its corresponding point cloud, and the edge computing server corresponding to the sensing sensors can align and fuse the data from multiple point sources to form point cloud data that can be output to the multi-source heterogeneous sensing platform.
Based on the above, the method for acquiring the top layer image layer data to be marked by the multi-source heterogeneous sensing suite can comprise one or more sensing modes of a laser radar sensing mode, a millimeter wave radar sensing mode and a thermal camera sensing mode.
The multi-source heterogeneous sensing platform can acquire the bottom layer image data to be marked, and can conduct alignment analysis on the acquired bottom layer image data to be marked and top layer image data, so that the areas needing to be analyzed in the bottom layer image data to be marked and the top layer image data can be determined.
For example,
the acquired bottom layer data and top layer data to be marked can be aligned so that the top-view projection of the top layer to be marked onto the bottom layer overlaps it; the aligned layers can then be analyzed to determine the areas to be analyzed in the bottom layer data and the top layer data to be marked.
Wherein,
the underlying layer data may be an underlying layer site construction map or two-dimensional map data provided by a third party.
As can be seen from the above description, each of the multi-source heterogeneous sensing assemblies may be a sensing sensor.
Each perception sensor can acquire the data of its corresponding point cloud, and the edge computing server corresponding to each perception sensor can align and fuse the data from multiple point sources into point cloud data output to the multi-source heterogeneous perception platform. The platform can then fuse the point cloud data formed by multiple multi-source heterogeneous perception suites into the point cloud layer data of one large scene.
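As a rough illustration of this fusion step, the sketch below merges per-sensor point clouds into a single cloud in a shared platform frame. The function names, the (x, y, z, r, g, b) tuple layout, and the use of per-sensor 4x4 extrinsic transforms are assumptions for illustration, not the patented implementation.

```python
def transform_point(T, p):
    """Apply a 4x4 homogeneous transform T (row-major nested lists) to (x, y, z)."""
    x, y, z = p
    return tuple(T[i][0] * x + T[i][1] * y + T[i][2] * z + T[i][3] for i in range(3))

def fuse_point_clouds(clouds, extrinsics):
    """Merge per-sensor point clouds into one list in a shared platform frame.

    clouds     -- list of point lists; each point is (x, y, z, r, g, b)
    extrinsics -- one 4x4 transform per sensor (sensor frame -> platform frame)
    """
    fused = []
    for cloud, T in zip(clouds, extrinsics):
        for (x, y, z, r, g, b) in cloud:
            # Coordinates are moved into the platform frame; colors pass through.
            fused.append(transform_point(T, (x, y, z)) + (r, g, b))
    return fused
```

In practice each edge computing server would obtain its extrinsic transform from calibration between the sensor and the platform frame.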
For example, in the practical application process, the multi-source heterogeneous sensing platform provided by the embodiment of the application can analyze and calibrate independent point cloud data formed when the laser radar sensor and the millimeter wave radar sensor are combined or deployed independently.
The calibration system provided by the embodiments of the application can analyze and calibrate 3D layer data. Therefore, after determining the areas to be analyzed in the bottom layer data and the top layer data to be marked, detection heights can be set for those areas in order to analyze them better, forming the identification height of the 3D area to be marked. Through this height, it is judged whether the coordinate information of the area where the behavior of a person or object in the areas to be analyzed is located exceeds the area alarm threshold corresponding to the areas to be analyzed in the bottom layer data and the top layer data to be marked.
Furthermore, an area alarm threshold corresponding to the areas to be analyzed in the bottom layer data and the top layer data to be marked can be set, so that an alarm can be given when the coordinate information being analyzed exceeds the area to be analyzed in the bottom layer data and the top layer data.
For example, after setting the area alarm threshold corresponding to the areas to be analyzed in the bottom layer data and the top layer data to be marked, the coordinate information of those areas can be further determined in order to judge whether it exceeds the set area alarm threshold; if it does, an alarm corresponding to the exceeded area alarm threshold can be sent out.
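A minimal sketch of the tiered alarm decision described above might look as follows. The names and data structures (ALARM_LEVELS, check_region_alarm, per-level containment predicates) are hypothetical; the patent only specifies that first-, second- and third-level thresholds trigger alarms of matching severity.

```python
# Hypothetical severity table, most severe first; labels mirror the
# general / serious / fatal tiers described in the text.
ALARM_LEVELS = [
    (3, "fatal out-of-range"),
    (2, "serious out-of-range"),
    (1, "general out-of-range"),
]

def check_region_alarm(point_xy, zones):
    """Return (level, label) for the most severe zone containing the point.

    zones -- dict mapping level (1..3) to a containment predicate for that
             zone, e.g. a point-in-polygon test over the calibrated region.
    Returns None if the point is inside no alarm zone.
    """
    for level, label in ALARM_LEVELS:   # check most severe zone first
        inside = zones.get(level)
        if inside is not None and inside(point_xy):
            return level, label
    return None
```

A higher-severity zone is typically nested inside a lower-severity one, so checking the most severe zone first yields the strongest applicable alarm.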
In the actual application process, the multi-source heterogeneous sensing platform also has a global function, wherein the global function of the multi-source heterogeneous sensing platform can comprise adding, deleting, modifying and checking equipment parameters, upgrading software and hardware versions, managing logs, managing the running state of equipment and managing the topology of the equipment.
The alarm function of the multi-source heterogeneous sensing platform provided by the embodiment of the application can not only alarm out of range, but also alarm abnormal behaviors and alarm illegal operations, wherein the abnormal behaviors can include abnormal behaviors defined by characteristic subdivision scenes such as frame drawing, smoking, falling and the like.
As can be seen from the above description, when it is necessary to determine whether out-of-range behavior exists in the layer data, the data calibration system provided by the embodiments of the application is suitable for calibrating 3D perception point cloud data formed by laser and millimeter-wave radars. The system can combine two-dimensional data provided by the client with the 3D point cloud data to be marked, acquired through different sensing kits, to perform area calibration, and can issue alarms of different severity for out-of-range behavior, so that out-of-range behavior in the acquired data is identified more accurately. Compared with traditional calibration of out-of-range behavior in data, the data calibration method provided by the embodiments of the application can effectively improve the accuracy and precision of calibrating the data to be calibrated. Furthermore, the method can use an existing CAD drawing as the bottom layer for marking X-axis and Y-axis coordinates, and can thereby support monitoring of 3D data acquired by sensing equipment; it can mark a 3D area without complex, precise 3D modeling, is simple and convenient, and matches existing business processes and technical development trends.
The following describes, with reference to fig. 2, a flow of a dual layer data calibration method according to an embodiment of the present application, as shown in fig. 2, where the flow may include the following steps:
step S101, bottom layer image data and top layer image data to be marked are obtained.
Specifically, in practical application, when behavior in the layer data needs to be marked, the bottom layer data and the top layer data to be marked can both be acquired in order to improve the marking precision; acquiring dual-layer data for analysis and marking can effectively improve the marking precision of the layer data.
Wherein,
the underlying layer data may be site construction maps of the target monitoring site or two-dimensional map data provided by a third party.
The top layer data can be 3D point cloud data obtained through sensing of the multi-source heterogeneous sensing terminal.
For example, the number of the cells to be processed,
the bottom layer data can be site construction map data provided by a client or two-dimensional map data provided by a third party;
the top layer data are 3D point cloud data obtained through perception of each multi-source heterogeneous perception terminal.
Wherein,
the 3D point cloud data format may be an array of points, where each point contains six floating-point numbers: three for the x, y, z coordinates and three for the r, g, b color values.
For example,
the JSON representation of a 3D point cloud may be as follows:
[{"x": 32.0245, "y": 42.8323, "z": 382.0398, "r": 14, "g": 54, "b": 234}, {...}, ...]
in practical application, considering data transmission performance, the sensed 3D point cloud data may instead be stored directly at a predetermined location and expressed as a flat floating-point array.
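To make the two representations above concrete, the following sketch parses the JSON point list into the flat floating-point array form suggested for transmission. The key names follow the document's example; the function name is an illustrative assumption.

```python
import json

def parse_point_cloud(text):
    """Parse a JSON point list like the example above into a flat array
    [x0, y0, z0, r0, g0, b0, x1, ...] for compact transmission."""
    flat = []
    for p in json.loads(text):
        flat.extend([p["x"], p["y"], p["z"], p["r"], p["g"], p["b"]])
    return flat
```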
Step S102, performing alignment analysis on the acquired bottom layer data and top layer data to be marked, and determining the areas to be analyzed in the bottom layer data and the top layer data to be marked.
Specifically, as can be seen from the above description, the method provided by the embodiment of the present application may obtain the bottom layer image data and the top layer image data to be marked.
After the bottom layer data and the top layer data to be marked are obtained, alignment analysis can be performed on them so that the top-view projection of the top layer to be marked onto the bottom layer overlaps it, which allows the areas to be analyzed in the bottom layer data and the top layer data to be determined more reliably.
Wherein,
the manner of determining the bottom layer data to be marked and the area to be analyzed in the top layer data can be as follows:
delineating the area to be analyzed in the acquired bottom layer data and top layer data to be marked, and defining the delineated area as the area to be analyzed in the bottom layer data and the top layer data to be marked;
and/or,
selecting abscissa and ordinate values in the acquired bottom layer data and top layer data to be marked, and taking the region formed by connecting the selected coordinate points as the area to be analyzed in the bottom layer data and the top layer data to be marked.
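The second delineation mode, where selected coordinate points are connected into a region, can be sketched with a standard ray-casting point-in-polygon test. This is a hedged illustration under assumed names; the patent does not prescribe a particular containment algorithm.

```python
# Hedged sketch: the user picks a sequence of (abscissa, ordinate) vertices,
# the connected polygon becomes the area to be analyzed, and a ray-casting
# test decides whether a given coordinate lies inside it.

def point_in_polygon(px, py, vertices):
    """Ray-casting point-in-polygon test over a list of (x, y) vertices."""
    inside = False
    n = len(vertices)
    for i in range(n):
        x1, y1 = vertices[i]
        x2, y2 = vertices[(i + 1) % n]
        if (y1 > py) != (y2 > py):
            # x coordinate where the edge crosses the horizontal ray at py
            x_cross = x1 + (py - y1) * (x2 - x1) / (y2 - y1)
            if px < x_cross:
                inside = not inside
    return inside

# Four selected coordinate points forming a square area to be analyzed
area = [(0, 0), (10, 0), (10, 10), (0, 10)]
```

For example, `point_in_polygon(5, 5, area)` is inside the delineated region, while `point_in_polygon(15, 5, area)` falls outside it.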
In particular, in practical applications, the display attributes of the bottom layer data and the top layer data to be marked can be set according to the requirements of the application scenario;
if the display attribute of the bottom layer data and the top layer data to be marked is set to display, the bottom layer data and/or the top layer data to be marked can be displayed for viewing.
Step S103, setting detection heights of areas to be analyzed in the bottom layer image layer data and the top layer image layer data to be marked so as to form 3D area identification heights to be identified.
Specifically, as can be seen from the above description, the method provided in the embodiments of the present application may perform alignment analysis on dual-layer data, where the data of one layer may be 3D point cloud data.
Therefore, after the areas to be analyzed in the bottom layer data and the top layer data to be marked are determined, detection heights can be set for those areas to form the 3D area identification heights. These identification heights are then used to judge whether the coordinate information of the region where the behavior of a person or object is located exceeds the region alarm threshold corresponding to the area to be analyzed in the bottom layer data and the top layer data to be marked.
Wherein,
the monitoring area height can be set either by specifying the Z value of the area's coordinate points or by selecting a height range for the delineated area.
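The height-setting modes above amount to extruding the delineated 2D area over a Z range. A minimal sketch, with names that are illustrative rather than taken from the patent:

```python
# Minimal sketch: a monitored 3D identification region is the delineated 2D
# area extruded over a [z_min, z_max] range set by the operator. A point is
# inside the region if its top-view position falls in the area and its Z
# value lies within the configured height range.

def in_monitored_volume(point, z_min, z_max, in_area_2d):
    """point: (x, y, z); in_area_2d: predicate for the delineated 2D area."""
    x, y, z = point
    return in_area_2d(x, y) and z_min <= z <= z_max

# Hypothetical square area with a 0-3 m detection height
square = lambda x, y: 0 <= x <= 10 and 0 <= y <= 10
inside = in_monitored_volume((2.0, 3.0, 1.5), 0.0, 3.0, square)
```

A point at the same (x, y) but with z above the configured range would fall outside the 3D identification region even though its top-view projection is inside the area.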
Step S104, setting the region alarm threshold corresponding to the region to be analyzed in the bottom layer data to be marked and the top layer data.
Specifically, in practical applications, if the behaviors in all areas of the bottom layer data and the top layer data to be marked were directly treated as out-of-range behaviors, the marked area range would be excessively large.
Therefore, after the areas to be analyzed in the bottom layer data and the top layer data to be marked are determined, an area alarm threshold corresponding to those areas can be set; when the coordinate information of the region where the behavior of a person or object in the layer data is located exceeds the area alarm threshold, the behavior can be regarded as out-of-range.
Wherein,
the region alarm threshold corresponding to the area to be analyzed in the bottom layer data and the top layer data to be marked can comprise a first-level alarm threshold, a second-level alarm threshold, and a third-level alarm threshold.
Wherein,
the first-level alarm threshold may be set as an alarm threshold for general out-of-range behavior;
the second-level alarm threshold may be set as an alarm threshold for serious out-of-range behavior;
the third-level alarm threshold may be set as an alarm threshold for fatal out-of-range behavior.
For example,
the regional alarm threshold can be designed with reference to collision early warning: the distance from the point cloud of the identified object to the boundary of the out-of-range area is divided into three levels, corresponding to the first-level, second-level, and third-level alarm thresholds.
Wherein,
the alarm level corresponding to the first-level alarm threshold is a critical alarm: the marked object has not crossed the boundary but is approaching the boundary of the out-of-range area, so there is a risk of collision or boundary crossing, and corresponding braking processing is needed;
the alarm level corresponding to the second-level alarm threshold is an out-of-range alarm: the marked object has crossed the boundary and is in an out-of-range state, but has not completely entered the out-of-range area;
the alarm level corresponding to the third-level alarm threshold is a serious out-of-range alarm, meaning that the marked object has completely entered the out-of-range area.
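The three alarm levels described above can be sketched as a mapping from the object's distance to the out-of-range boundary. The threshold values and the sign convention (negative means still inside the safe area, positive means past the boundary) are assumptions of this illustration, not values from the patent.

```python
# Illustrative mapping of the three alarm levels to the distance between the
# identified object's point cloud and the out-of-range boundary.
# Assumption: distance > 0 means the object is past the boundary,
# distance < 0 means it is still on the safe side.

def alarm_level(distance_to_boundary, approach_margin=1.0):
    """Return 0 (safe) or the alarm level 1-3 for a given boundary distance."""
    if distance_to_boundary < -approach_margin:
        return 0  # safe: well clear of the boundary
    if distance_to_boundary < 0:
        return 1  # critical alarm: approaching the boundary, not yet crossed
    if distance_to_boundary < approach_margin:
        return 2  # out-of-range alarm: crossed, not fully inside the area
    return 3      # serious out-of-range alarm: fully inside the out-of-range area
```

With the default margin, an object 0.5 units before the boundary triggers level 1, 0.5 units past it triggers level 2, and 2.0 units past it triggers level 3.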
Step S105, determining the coordinate information of the region to be analyzed in the bottom layer data and the top layer data to be labeled.
Specifically, as can be seen from the above description, the method provided by the embodiment of the present application may determine the areas to be analyzed and the corresponding area alarm thresholds in the bottom layer data and the top layer data to be labeled.
After the areas to be analyzed in the bottom layer data and the top layer data to be marked are determined, their coordinate information can be further determined, so that it can be judged whether that coordinate information exceeds the set area alarm threshold and the layer data can be marked accordingly.
Step S106, judging whether the coordinate information of the region to be analyzed in the bottom layer data and the top layer data to be marked exceeds a set region alarm threshold.
Specifically, as can be seen from the above description, the method provided by the embodiment of the present application may determine the coordinate information of the region to be analyzed in the bottom layer data and the top layer data to be labeled, and the set region alarm threshold.
Further, it can be determined whether the coordinate information of the region to be analyzed in the bottom layer image data and the top layer image data to be marked exceeds the set region alarm threshold.
If the coordinate information of the area to be analyzed in the bottom layer image data and the top layer image data to be marked exceeds the set area alarm threshold, it is indicated that the area to be analyzed in the bottom layer image data and the top layer image data to be marked may have out-of-range behavior, and step S107 may be executed.
Step S107, an alarm corresponding to the exceeded regional alarm threshold is sent out.
Specifically, as can be seen from the foregoing description, after determining the coordinate information of the area to be analyzed in the bottom layer data and the top layer data to be marked, the method provided by the embodiment of the application can further judge whether that coordinate information exceeds the set area alarm threshold.
When the coordinate information of the area to be analyzed in the bottom layer data and the top layer data to be marked exceeds the set area alarm threshold, out-of-range behavior may exist in that area, and an alarm corresponding to the exceeded area alarm threshold can be issued.
For example,
if the coordinate information of the area to be analyzed in the bottom layer data and the top layer data to be marked exceeds the set first-level alarm threshold, the marked object has not crossed the boundary but is approaching the boundary of the out-of-range area, there is a risk of collision or boundary crossing, and corresponding braking processing is needed, so a first-level alarm can be issued;
if the coordinate information of the area to be analyzed in the bottom layer data and the top layer data to be marked exceeds the set second-level alarm threshold, the marked object has crossed the boundary and is in an out-of-range state but has not completely entered the out-of-range area, and a second-level alarm can be issued;
if the coordinate information of the area to be analyzed in the bottom layer data and the top layer data to be marked exceeds the set third-level alarm threshold, the marked object has completely entered the out-of-range area, and a third-level alarm can be issued.
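Steps S106-S107 can be sketched as checking the region coordinate information against the configured thresholds from most to least severe and issuing the first alarm that matches. The data shapes (a dict of coordinate info, predicates per level) are assumptions of this sketch.

```python
# Sketch of steps S106-S107 under assumed data shapes: each alarm threshold is
# a predicate over the region coordinate info; levels are checked from most to
# least severe, and the first match determines the alarm that is issued.

ALARMS = {3: "serious out-of-range alarm", 2: "out-of-range alarm", 1: "critical alarm"}

def issue_alarm(coord_info, thresholds):
    """thresholds: {level: predicate(coord_info)}; returns the alarm text for
    the highest exceeded level, or None if no threshold is exceeded."""
    for level in sorted(thresholds, reverse=True):
        if thresholds[level](coord_info):
            return ALARMS[level]
    return None

# Hypothetical example: "depth" is how far the object has crossed the boundary
alarm = issue_alarm({"depth": 1.2}, {
    1: lambda c: c["depth"] > -1.0,
    2: lambda c: c["depth"] > 0.0,
    3: lambda c: c["depth"] > 1.0,
})
```

Here a crossing depth of 1.2 exceeds all three thresholds, so the highest level wins and a serious out-of-range alarm is issued.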
In practical applications, to enable tracing of the layer data, the method provided by the embodiment of the application can also store the layer data corresponding to each issued alarm, so that the out-of-range behavior in the layer data can be traced.
As can be seen from the above description, when it is necessary to determine whether out-of-range behaviors exist in the layer data, the data calibration system provided by the embodiment of the application can be applied to calibrating 3D sensing point cloud data formed by laser and millimeter-wave radars. The method provided by the embodiment of the application can combine the two-dimensional data provided by the client with the 3D point cloud data to be marked acquired by different sensing kits to perform area calibration, and can issue alarms of different degrees for out-of-range behaviors, so that out-of-range behaviors in the acquired data can be identified more accurately. Compared with the calibration of traditional data out-of-range behaviors, the data calibration method provided by the embodiment of the application can effectively improve the accuracy and precision of calibrating the data to be calibrated. Furthermore, the method can use an existing CAD drawing as the bottom layer for marking the X-axis and Y-axis coordinates, and can further support monitoring of the 3D data acquired by sensing equipment. The method can realize 3D area marking without complex and accurate 3D modeling, is simple and convenient, and can effectively match existing business processes and technical development trends.
The dual-layer data calibration device provided by the embodiment of the application can be applied to dual-layer data calibration equipment, such as a terminal: cell phones, computers, etc. Optionally, fig. 3 shows a hardware structure block diagram of a dual layer data calibration device, and referring to fig. 3, the hardware structure of the dual layer data calibration device may include: at least one processor 1, at least one communication interface 2, at least one memory 3 and at least one communication bus 4.
In the embodiment of the present application, the number of the processor 1, the communication interface 2, the memory 3, and the communication bus 4 is at least one, and the processor 1, the communication interface 2, and the memory 3 complete communication with each other through the communication bus 4.
The processor 1 may be a central processing unit (CPU), an application-specific integrated circuit (ASIC), or one or more integrated circuits configured to implement the embodiments of the present application;
the memory 3 may comprise high-speed RAM memory, and may further comprise non-volatile memory, such as at least one magnetic disk memory;
wherein the memory stores a program, and the processor can be configured to invoke the program stored in the memory, the program being configured to implement each processing flow in the terminal's dual-layer data calibration scheme.
The embodiment of the application also provides a readable storage medium that can store a program suitable for execution by a processor, the program being configured to implement each processing flow of the terminal in the dual-layer data calibration scheme.
Finally, it is further noted that relational terms such as first and second, and the like are used solely to distinguish one entity or action from another entity or action without necessarily requiring or implying any actual such relationship or order between such entities or actions. Moreover, the terms "comprises," "comprising," or any other variation thereof, are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements does not include only those elements but may include other elements not expressly listed or inherent to such process, method, article, or apparatus. Without further limitation, an element defined by the phrase "comprising one … …" does not exclude the presence of other like elements in a process, method, article, or apparatus that comprises the element.
In the present specification, each embodiment is described in a progressive manner, and each embodiment is mainly described in a different point from other embodiments, and identical and similar parts between the embodiments are all enough to refer to each other.
The previous description of the disclosed embodiments is provided to enable any person skilled in the art to make or use the present application. Various modifications to these embodiments will be readily apparent to those skilled in the art, and the generic principles defined herein may be applied to other embodiments without departing from the spirit or scope of the application. The various embodiments may be combined with one another. Thus, the present application is not intended to be limited to the embodiments shown herein but is to be accorded the widest scope consistent with the principles and novel features disclosed herein.

Claims (10)

1. A dual-layer data calibration method, characterized by comprising the following steps:
acquiring bottom layer image data and top layer image data to be marked;
performing alignment analysis on the acquired bottom layer data and top layer data to be marked, and determining areas to be analyzed in the bottom layer data and the top layer data to be marked;
Setting detection heights of areas to be analyzed in the bottom layer data and the top layer data to be marked to form 3D area identification heights to be marked, and judging whether coordinate information of an area where the behaviors of people or objects in the areas to be analyzed in the bottom layer data and the top layer data to be marked are located exceeds an area alarm threshold corresponding to the areas to be analyzed in the bottom layer data and the top layer data to be marked or not through the 3D area identification heights to be marked;
setting a region alarm threshold corresponding to a region to be analyzed in the bottom layer data to be marked and the top layer data;
determining coordinate information of areas to be analyzed in the bottom layer image layer data to be marked and the top layer image layer data;
judging whether the coordinate information of the region to be analyzed in the bottom layer data and the top layer data to be marked exceeds a set region alarm threshold value;
and if the coordinate information of the region to be analyzed in the bottom layer data and the top layer data to be marked exceeds the set region alarm threshold, sending out an alarm corresponding to the exceeded region alarm threshold.
2. The method according to claim 1, wherein performing alignment analysis on the acquired bottom layer data and top layer data to be marked and determining the areas to be analyzed in the bottom layer data and the top layer data to be marked comprises:
aligning the acquired bottom layer image layer data to be marked with the top layer image layer data to obtain target bottom layer image layer data to be marked with the top layer image layer data;
determining the delineated area as the area to be analyzed in the bottom layer data and the top layer data to be marked of the target by circumscribing the area to be analyzed in the bottom layer data and the top layer data to be marked of the target;
and/or,
and selecting the abscissa and the ordinate of the region to be analyzed in the bottom layer data and the top layer data to be marked of the target, and taking the determined region formed by connecting a plurality of abscissas and ordinates of the region to be analyzed in the bottom layer data and the top layer data to be marked of the target as the region to be analyzed in the bottom layer data and the top layer data to be marked of the target.
3. The method according to claim 1, wherein the region alarm threshold corresponding to the region to be analyzed in the bottom layer data and the top layer data to be marked comprises a first level alarm threshold, a second level alarm threshold and a third level alarm threshold, wherein the first level alarm threshold is an alarm threshold for a general out-of-range behavior in the layer data, the second level alarm threshold is an alarm threshold for a serious out-of-range behavior in the layer data, and the third level alarm threshold is an alarm threshold for a fatal out-of-range behavior in the layer data.
4. The method according to claim 3, wherein if the coordinate information of the area to be analyzed in the bottom layer data and the top layer data to be marked exceeds the set area alarm threshold, sending an alarm corresponding to the exceeded area alarm threshold, including:
if the coordinate information of the region to be analyzed in the bottom layer image layer data and the top layer image layer data to be marked exceeds the region alarm threshold corresponding to the set first-level alarm threshold, a first-level alarm is sent out;
if the coordinate information of the region to be analyzed in the bottom layer image data and the top layer image data to be marked exceeds the region alarm threshold corresponding to the set second-level alarm threshold, a second-level alarm is sent out;
and if the coordinate information of the region to be analyzed in the bottom layer data and the top layer data to be marked exceeds the region alarm threshold corresponding to the set third-level alarm threshold, sending out a third-level alarm.
5. The method according to claim 1, characterized in that the method further comprises:
and storing the layer data corresponding to the alarm.
6. The method according to claim 1, characterized in that the method further comprises:
setting display attributes of the bottom layer image layer data and the top layer image layer data to be marked;
and if the display attribute of the bottom layer data and the top layer data to be marked is display, displaying the bottom layer data and/or the top layer data to be marked.
7. A dual layer data calibration system, characterized in that it is applied to the dual layer data calibration method of any one of claims 1 to 6, the system comprising: a multi-source heterogeneous sensing platform and at least one multi-source heterogeneous sensing suite;
wherein,
the multi-source heterogeneous sensing suite is responsible for acquiring top layer image layer data to be marked;
the multi-source heterogeneous sensing platform is responsible for acquiring bottom layer image data to be marked, carrying out alignment analysis on the acquired bottom layer image data to be marked and top layer image data, and determining areas to be analyzed in the bottom layer image data to be marked and the top layer image data; setting detection heights of areas to be analyzed in the bottom layer data and the top layer data to be marked to form 3D area identification heights to be marked, and judging whether coordinate information of an area where the behaviors of people or objects in the areas to be analyzed in the bottom layer data and the top layer data to be marked are located exceeds an area alarm threshold corresponding to the areas to be analyzed in the bottom layer data and the top layer data to be marked or not through the 3D area identification heights to be marked; setting a region alarm threshold corresponding to a region to be analyzed in the bottom layer data to be marked and the top layer data; determining coordinate information of areas to be analyzed in the bottom layer image layer data to be marked and the top layer image layer data; judging whether the coordinate information of the region to be analyzed in the bottom layer data and the top layer data to be marked exceeds a set region alarm threshold value; and if the coordinate information of the region to be analyzed in the bottom layer data and the top layer data to be marked exceeds the set region alarm threshold, sending out an alarm corresponding to the exceeded region alarm threshold.
8. The system of claim 7, wherein each of the multi-source heterogeneous sensing assemblies comprises at least one sensing component, wherein the sensing component comprises a lidar sensing component, a millimeter wave radar sensing component, a thermal camera sensing component;
based on the method, the mode of acquiring the top layer image layer data to be marked by the multi-source heterogeneous sensing suite comprises the mode of acquiring by one or more sensing modes of a laser radar sensing mode, a millimeter wave radar sensing mode and a thermal camera sensing mode.
9. A dual layer data calibration device, comprising: one or more processors, and memory;
stored in the memory are computer readable instructions which, when executed by the one or more processors, implement the steps of the dual layer data calibration method of any one of claims 1 to 6.
10. A readable storage medium, characterized by: the readable storage medium has stored therein computer readable instructions which, when executed by one or more processors, cause the one or more processors to implement the steps of the dual layer data calibration method of any of claims 1 to 6.
CN202311765774.9A 2023-12-21 2023-12-21 Dual-layer data calibration method, device, equipment and readable storage medium Active CN117437602B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202311765774.9A CN117437602B (en) 2023-12-21 2023-12-21 Dual-layer data calibration method, device, equipment and readable storage medium

Publications (2)

Publication Number Publication Date
CN117437602A CN117437602A (en) 2024-01-23
CN117437602B true CN117437602B (en) 2024-03-22

Family

ID=89558686

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202311765774.9A Active CN117437602B (en) 2023-12-21 2023-12-21 Dual-layer data calibration method, device, equipment and readable storage medium

Country Status (1)

Country Link
CN (1) CN117437602B (en)

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112417965A (en) * 2020-10-21 2021-02-26 湖北亿咖通科技有限公司 Laser point cloud processing method, electronic device and storage medium
JP2021042626A (en) * 2019-09-13 2021-03-18 株式会社日立プラントサービス Robot control device and program
CN114332830A (en) * 2021-12-15 2022-04-12 中汽创智科技有限公司 Image processing method, apparatus and medium
CN115578475A (en) * 2022-10-18 2023-01-06 深圳思谋信息科技有限公司 Image storage method, device, readable medium and equipment
CN115797406A (en) * 2022-11-29 2023-03-14 广东电网有限责任公司 Out-of-range warning method, device, equipment and storage medium
CN117079238A (en) * 2023-08-18 2023-11-17 上海云骥跃动智能科技发展有限公司 Road edge detection method, device, equipment and storage medium

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
Cao Yibing et al., "User annotation sharing and interoperation for visual analysis", Geology and Mineral Surveying and Mapping, Vol. 30, No. 2, pp. 1-3, 21. *

Also Published As

Publication number Publication date
CN117437602A (en) 2024-01-23


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant