CN111652940A - Target abnormality identification method and device, electronic equipment and storage medium

Target abnormality identification method and device, electronic equipment and storage medium

Info

Publication number
CN111652940A
CN111652940A (application CN202010359984.8A)
Authority
CN
China
Prior art keywords
data
target
image
attribute information
detected
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202010359984.8A
Other languages
Chinese (zh)
Other versions
CN111652940B (en)
Inventor
曹素云
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Ping An International Smart City Technology Co Ltd
Original Assignee
Ping An International Smart City Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Ping An International Smart City Technology Co Ltd filed Critical Ping An International Smart City Technology Co Ltd
Priority to CN202010359984.8A priority Critical patent/CN111652940B/en
Priority to PCT/CN2020/099068 priority patent/WO2021217859A1/en
Publication of CN111652940A publication Critical patent/CN111652940A/en
Application granted granted Critical
Publication of CN111652940B publication Critical patent/CN111652940B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical


Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00: Image analysis
    • G06T7/80: Analysis of captured images to determine intrinsic or extrinsic camera parameters, i.e. camera calibration
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06F: ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00: Pattern recognition
    • G06F18/20: Analysing
    • G06F18/22: Matching criteria, e.g. proximity measures
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00: Image analysis
    • G06T7/50: Depth or shape recovery
    • G06T7/55: Depth or shape recovery from multiple images
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00: Image analysis
    • G06T7/70: Determining position or orientation of objects or cameras
    • G06T7/73: Determining position or orientation of objects or cameras using feature-based methods
    • G: PHYSICS
    • G08: SIGNALLING
    • G08G: TRAFFIC CONTROL SYSTEMS
    • G08G1/00: Traffic control systems for road vehicles
    • G08G1/07: Controlling traffic signals
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00: Indexing scheme for image analysis or image enhancement
    • G06T2207/20: Special algorithmic details
    • G06T2207/20081: Training; Learning
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00: Indexing scheme for image analysis or image enhancement
    • G06T2207/20: Special algorithmic details
    • G06T2207/20084: Artificial neural networks [ANN]
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00: Indexing scheme for image analysis or image enhancement
    • G06T2207/20: Special algorithmic details
    • G06T2207/20228: Disparity calculation for image-based rendering

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Physics & Mathematics (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Data Mining & Analysis (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Evolutionary Biology (AREA)
  • Evolutionary Computation (AREA)
  • Artificial Intelligence (AREA)
  • General Engineering & Computer Science (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Image Analysis (AREA)
  • Traffic Control Systems (AREA)

Abstract

The invention relates to artificial intelligence and provides a target abnormality identification method applied to an electronic device. The method calculates second coordinate data of a target to be detected from collected data, inputs first image data into an image recognition model to output first image attribute information, locates the area corresponding to the second coordinate data on a map, and extracts the second image attribute information corresponding to the second image data in that area. It is then judged whether the second image attribute information is consistent with the first image attribute information; if so, a similarity value between the first image data and the second image data is calculated, and when the similarity value is smaller than or equal to a first preset threshold value, the target to be detected corresponding to the first image data is judged to be abnormal, and feedback information is generated and sent to a client. In addition, the invention relates to blockchain technology: the collected data uploaded by the data acquisition terminal can be stored in blockchain nodes. The invention enables timely and comprehensive management and control of road traffic equipment.

Description

Target abnormality identification method and device, electronic equipment and storage medium
Technical Field
The present invention relates to artificial intelligence, and in particular to a target abnormality identification method and apparatus, an electronic device, and a storage medium.
Background
With the rapid development of China's traffic industry, thousands of traffic devices on roads (road markings, signboards, traffic isolation and collision-avoidance facilities, signal lamps, gantry cranes, cameras, and the like) need to be managed accurately. At present this relies mainly on manual evidence collection and maintenance: data on traffic devices is gathered entirely by hand, with patrol officers sweeping roads section by section, photographing damaged road traffic equipment with handheld terminals, and uploading the pictures to the system. This approach easily misses damaged equipment, which may then go unmaintained for a long time, and it makes timely management and control of road traffic equipment difficult. How to manage road traffic equipment in a timely and comprehensive manner has therefore become an urgent technical problem.
Disclosure of Invention
The main object of the present invention is to provide a target abnormality identification method and apparatus, an electronic device, and a storage medium, so as to solve the problem of how to manage and control road traffic equipment in a timely and comprehensive manner.
In order to achieve the above object, the present invention provides a target abnormality identification method applied to an electronic device, including:
an acquisition step: acquiring acquisition data uploaded by a data acquisition terminal, wherein the acquisition data comprises depth distance data between the data acquisition terminal and a target to be detected, first image data containing the target to be detected, first coordinate data of the data acquisition terminal and an azimuth angle of the data acquisition terminal, and second coordinate data of the target to be detected is obtained through calculation according to the depth distance data, the first coordinate data and the azimuth angle of the data acquisition terminal;
an identification step: inputting the first image data into a pre-trained image recognition model, and outputting first image attribute information corresponding to the first image data;
a first processing step: positioning an area corresponding to the second coordinate data from a pre-established map according to the second coordinate data, extracting second image attribute information corresponding to the second image data in the area, and judging whether the first image attribute information is consistent with the second image attribute information; and
a second processing step: and when the first image attribute information is judged to be consistent with the second image attribute information, calculating a similarity value between the first image data and the second image data by using a similarity algorithm, if the similarity value is smaller than or equal to a first preset threshold value, judging that the target to be detected is abnormal, generating feedback information comprising the first image attribute information and abnormal state information of the target to be detected, and sending the feedback information to a client.
Preferably, the data acquisition terminal comprises a binocular camera, and the acquisition process of the depth distance data comprises:
shooting calibration objects from different angles by using the binocular camera, calibrating two sub-cameras of the binocular camera according to the shot calibration object images, and calculating to obtain calibration parameters;
shooting the target to be detected by using the binocular camera, and matching the images of the target to be detected shot by the two sub-cameras by using Sobel edge characteristics as feature points, so as to calculate a disparity value between the two sub-cameras; and
calculating the depth distance data from the calibration parameters and the disparity value according to a predetermined calculation rule.
Preferably, the calculation rule is:
Z = f · B / (X_R - X_T)

wherein Z represents the depth distance between the binocular camera and the target to be detected; f and B are the calibration parameters, f representing the focal length of the binocular camera and B representing the center distance (baseline) between the two sub-cameras; X_R and X_T represent the horizontal image coordinates of the target point in the two sub-cameras (measured from their respective optical centers), so that X_R - X_T is the disparity (visual difference).
Preferably, the data acquisition terminal further includes a GPS processing unit, a mileage coding unit, and an inertial navigation unit, and the acquisition process of the first coordinate data and the azimuth angle includes:
the GPS processing unit is used for receiving a differential signal sent by a differential reference station, outputting position information with first preset precision and sending the position information to the inertial navigation unit;
acquiring mileage information of the vehicle by using the mileage coding unit and sending the mileage information to the inertial navigation unit; and
and receiving the position information and the mileage information by using the inertial navigation unit, fusing the position information and the mileage information, and outputting a first position coordinate and an azimuth angle of the acquisition terminal with a second preset precision, wherein the second preset precision is higher than the first preset precision.
Preferably, the acquired data is stored in a blockchain, and the second processing step further includes:
acquiring a preset number of first image data which are uploaded by different data acquisition terminals and have the same second coordinate data;
respectively calculating a similarity value between each first image data and the corresponding second image data, and counting the number of the similarity values smaller than or equal to a second preset threshold value; and
and if the number of first image data whose similarity value is smaller than or equal to the second preset threshold value is greater than or equal to a third preset threshold value, judging that the target to be detected corresponding to the first image data is abnormal.
In order to achieve the above object, the present invention further provides a target abnormality recognition apparatus including:
an acquisition module: acquiring acquisition data uploaded by a data acquisition terminal, wherein the acquisition data comprises depth distance data between the data acquisition terminal and a target to be detected, first image data containing the target to be detected, first coordinate data of the data acquisition terminal and an azimuth angle of the data acquisition terminal, and second coordinate data of the target to be detected is obtained through calculation according to the depth distance data, the first coordinate data and the azimuth angle of the data acquisition terminal;
an identification module: inputting the first image data into a pre-trained image recognition model, and outputting first image attribute information corresponding to the first image data;
a first processing module: positioning an area corresponding to the second coordinate data from a pre-established map according to the second coordinate data, extracting second image attribute information corresponding to the second image data in the area, and judging whether the first image attribute information is consistent with the second image attribute information; and
a second processing module: and when the first image attribute information is judged to be consistent with the second image attribute information, calculating a similarity value between the first image data and the second image data by using a similarity algorithm, if the similarity value is smaller than or equal to a first preset threshold value, judging that the target to be detected is abnormal, generating feedback information comprising the first image attribute information and abnormal state information of the target to be detected, and sending the feedback information to a client.
Preferably, the data acquisition terminal comprises a binocular camera, and the acquisition process of the depth distance data comprises:
shooting calibration objects from different angles by using the binocular camera, calibrating two sub-cameras of the binocular camera according to the shot calibration object images, and calculating to obtain calibration parameters;
shooting the target to be detected by using the binocular camera, and matching the images of the target to be detected shot by the two sub-cameras by using Sobel edge characteristics as feature points, so as to calculate a disparity value between the two sub-cameras; and
calculating the depth distance data from the calibration parameters and the disparity value according to a predetermined calculation rule.
Preferably, the calculation rule is:
Z = f · B / (X_R - X_T)

wherein Z represents the depth distance between the binocular camera and the target to be detected; f and B are the calibration parameters, f representing the focal length of the binocular camera and B representing the center distance (baseline) between the two sub-cameras; X_R and X_T represent the horizontal image coordinates of the target point in the two sub-cameras (measured from their respective optical centers), so that X_R - X_T is the disparity (visual difference).
To achieve the above object, the present invention further provides an electronic device, including:
a memory storing at least one instruction; and
a processor, which executes the instructions stored in the memory to implement the target abnormality identification method described above.
To achieve the above object, the present invention further provides a computer-readable storage medium comprising a storage data area and a storage program area, the storage data area storing data created according to the use of blockchain nodes and the storage program area storing a computer program; when executed by a processor, the computer program implements the steps of the target abnormality identification method described above.
According to the target abnormality identification method and apparatus, electronic device, and storage medium provided by the invention, collected data uploaded by a data acquisition terminal installed on a vehicle is obtained, and second coordinate data of a target to be detected is calculated from the collected data. First image data is input into an image recognition model to output first image attribute information; the area corresponding to the second coordinate data is found on a map, and the second image attribute information corresponding to the second image data in that area is extracted. Whether the second image attribute information is consistent with the first image attribute information is then judged; if so, a similarity value between the first image data and the second image data is calculated, and when the similarity value is smaller than or equal to a first preset threshold value, the target to be detected corresponding to the first image data is judged to be abnormal, and feedback information is generated and sent to a client. The invention thus enables timely and comprehensive management and control of road traffic equipment.
Drawings
Fig. 1 is a schematic diagram of an internal structure of an electronic device implementing a target anomaly identification method according to an embodiment of the present invention;
fig. 2 is a block diagram of a target abnormality recognition apparatus according to an embodiment of the present invention;
fig. 3 is a flowchart illustrating a target anomaly identification method according to an embodiment of the present invention.
The implementation, functional features and advantages of the objects of the present invention will be further explained with reference to the accompanying drawings.
Detailed Description
In order to make the objects, technical embodiments and advantages of the present invention more apparent, the present invention will be described in detail with reference to the accompanying drawings and embodiments. It should be understood that the specific embodiments described herein are merely illustrative of the invention and are not intended to limit the invention. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
It should be noted that descriptions such as "first" and "second" in the present invention are for descriptive purposes only and shall not be construed as indicating or implying relative importance or implicitly indicating the number of technical features concerned; a feature qualified as "first" or "second" may thus explicitly or implicitly comprise at least one such feature. In addition, the technical embodiments of the present invention may be combined with one another, provided the combination can be realized by a person skilled in the art; where a combination of technical embodiments is contradictory or cannot be realized, the combination shall be deemed absent and outside the protection scope of the present invention.
The invention provides a target abnormality identification method. Fig. 3 is a schematic flowchart of a target abnormality identification method according to an embodiment of the present invention. The method may be performed by an apparatus, which may be implemented by software and/or hardware.
In this embodiment, the target abnormality identification method includes:
s110, acquiring acquisition data uploaded by a data acquisition terminal, wherein the acquisition data comprises depth distance data between the data acquisition terminal and a target to be detected, first image data containing the target to be detected, first coordinate data of the data acquisition terminal and an azimuth angle of the data acquisition terminal, and calculating according to the depth distance data, the first coordinate data and the azimuth angle of the data acquisition terminal to obtain second coordinate data of the target to be detected.
For example, the invention can be applied in a crowdsourcing scenario based on the Internet of Vehicles. Crowdsourcing refers to the practice of a company or organization outsourcing, on a voluntary basis, tasks formerly performed by employees to an unspecified (and usually large) public. By combining the crowdsourcing concept with Internet of Vehicles technology, a data acquisition terminal is installed on each vehicle, and the data collected by all the terminals is uploaded to the electronic device 1 for sharing; this yields high timeliness, wide coverage, and a large collection volume.
In this embodiment, the electronic device 1 acquires the collected data uploaded by the data acquisition terminal and processes and analyses it to obtain the second coordinate data of the target to be detected (e.g., a road marking, signboard, traffic isolation and collision-avoidance facility, signal lamp, gantry crane, or camera). It should be emphasized that, to further ensure the privacy and security of the collected data, the collected data may also be stored in a node of a blockchain.
The collected data includes the depth distance data between the data acquisition terminal and the target to be detected, the first image data containing the target to be detected, and the first coordinate data and azimuth angle of the data acquisition terminal; the second coordinate data of the target to be detected is then calculated from the depth distance data, the first coordinate data, and the azimuth angle.
The first coordinate data is the geodetic longitude and latitude of the data acquisition terminal, and the second coordinate data is the geodetic longitude and latitude of the target to be detected. The second coordinate data of the target is calculated from the first coordinate data and the azimuth angle by solving the direct (forward) geodetic problem.
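The direct geodetic calculation can be sketched as follows. The patent does not specify the exact algorithm (production systems typically solve the problem on the reference ellipsoid), so this is a minimal spherical-earth approximation; `target_coordinates` and the mean-radius constant are illustrative names for this sketch only.

```python
import math

EARTH_RADIUS_M = 6371000.0  # mean Earth radius; a spherical simplification

def target_coordinates(lat_deg, lon_deg, azimuth_deg, distance_m):
    """Simplified direct geodetic problem: given the terminal's
    latitude/longitude, the azimuth toward the target, and the measured
    depth distance, return the target's latitude/longitude (degrees)."""
    lat1 = math.radians(lat_deg)
    lon1 = math.radians(lon_deg)
    theta = math.radians(azimuth_deg)
    delta = distance_m / EARTH_RADIUS_M  # angular distance on the sphere

    lat2 = math.asin(math.sin(lat1) * math.cos(delta)
                     + math.cos(lat1) * math.sin(delta) * math.cos(theta))
    lon2 = lon1 + math.atan2(math.sin(theta) * math.sin(delta) * math.cos(lat1),
                             math.cos(delta) - math.sin(lat1) * math.sin(lat2))
    return math.degrees(lat2), math.degrees(lon2)
```

For example, moving roughly 111.2 km due east from the equator shifts the longitude by about one degree while leaving the latitude unchanged.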
Specifically, the data acquisition terminal includes a binocular camera, and the acquisition process of the depth distance data includes:
shooting calibration objects from different angles by using the binocular camera, calibrating two sub-cameras of the binocular camera according to the shot calibration object images, and calculating to obtain calibration parameters;
shooting the target to be detected by using the binocular camera, and matching the images of the target to be detected shot by the two sub-cameras by using Sobel edge characteristics as feature points, so as to calculate a disparity value between the two sub-cameras;
calculating the depth distance data from the calibration parameters and the disparity value according to a predetermined calculation rule.
In this embodiment, the binocular camera uses the principle of binocular positioning: two sub-cameras fixed at different positions photograph the target to be detected, giving, for a feature point on the target, the coordinates of that point on the image planes of the two sub-cameras. As long as the precise relative position of the two sub-cameras is known, the coordinates of the feature point in the camera-fixed coordinate system can be computed by the predetermined calculation rule, i.e., the depth distance data of the feature point is determined; in this embodiment, the feature point lies on the target to be detected.
The calculation rule is as follows:
Z = f · B / (X_R - X_T)

wherein Z represents the depth distance between the binocular camera and the target to be detected; f and B are the calibration parameters, f representing the focal length of the binocular camera and B representing the center distance (baseline) between the two sub-cameras; X_R and X_T represent the horizontal image coordinates of the target point in the two sub-cameras (measured from their respective optical centers), so that X_R - X_T is the disparity (visual difference).
Further, the data acquisition terminal further comprises a GPS processing unit, a mileage coding unit, and an inertial navigation unit, and the acquisition process of the first coordinate data and the azimuth angle includes:
the GPS processing unit is used for receiving a differential signal sent by a differential reference station, outputting position information with first preset precision and sending the position information to the inertial navigation unit;
acquiring mileage information of the vehicle by using the mileage coding unit and sending the mileage information to the inertial navigation unit;
and receiving the position information and the mileage information by using the inertial navigation unit, fusing the position information and the mileage information, and outputting a first position coordinate and an azimuth angle of the acquisition terminal with a second preset precision, wherein the second preset precision is higher than the first preset precision.
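The patent leaves the fusion method unspecified (a production inertial navigation unit would typically run a Kalman filter). The sketch below, with illustrative names, only shows the idea of combining a GPS fix with dead reckoning from the mileage encoder and azimuth:

```python
import math

def dead_reckon(prev_xy, mileage_m, azimuth_deg):
    """Advance the previous position by the travelled mileage along the
    current azimuth (local planar frame: x = east, y = north)."""
    theta = math.radians(azimuth_deg)
    return (prev_xy[0] + mileage_m * math.sin(theta),
            prev_xy[1] + mileage_m * math.cos(theta))

def fuse_position(gps_xy, dr_xy, gps_weight=0.8):
    """Blend a GPS fix with a dead-reckoned position. This fixed-weight
    complementary blend is a stand-in for the unit's actual filter and
    only illustrates the fusion step."""
    gx, gy = gps_xy
    dx, dy = dr_xy
    w = gps_weight
    return (w * gx + (1 - w) * dx, w * gy + (1 - w) * dy)
```

In practice the weight (or filter gain) would reflect the first preset precision of the GPS fix versus the drift of the odometer, which is how the fused output can reach the higher second preset precision.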
And S120, inputting the first image data into a pre-trained image recognition model, and outputting first image attribute information corresponding to the first image data.
In this embodiment, the first image attribute information corresponding to the first image data can be recognized by using the image recognition model trained in advance.
Wherein the first image attribute information represents the name of the object to be measured in the first image data, such as road marking, signboard, traffic isolation and anti-collision facility, signal lamp, gantry crane, camera, etc.
The image recognition model is obtained by training a Convolutional Neural Network (CNN) model, and the training process of the image recognition model is as follows:
acquiring a preset number (for example, 100,000) of first image data samples, wherein each first image data sample is labeled with its corresponding first image attribute information;
dividing the first image data samples into a training set and a verification set according to a preset proportion (for example, 5:1), wherein the number of the first image data samples in the training set is greater than that of the first image data samples in the verification set;
inputting the first image data samples of the training set into the convolutional neural network model for training, verifying the model with the verification set every preset period (for example, every 1000 iterations), and checking the accuracy of the image recognition model against each piece of first image data and its corresponding first image attribute information in the verification set; and
and when the verification accuracy is greater than a first preset threshold (for example, 85%), ending the training to obtain the image recognition model.
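The training procedure above can be sketched as follows; the `fit_step`/`predict` model interface is hypothetical, standing in for the actual convolutional neural network:

```python
import random

def split_samples(samples, ratio=5):
    """Split labelled samples into training and verification sets at the
    preset ratio (e.g. 5:1), as in the procedure above."""
    random.shuffle(samples)
    cut = len(samples) * ratio // (ratio + 1)
    return samples[:cut], samples[cut:]

def train_until_accurate(model, train_set, val_set,
                         check_every=1000, target_acc=0.85, max_iters=100000):
    """Train one sample at a time, verify every `check_every` iterations,
    and stop once verification accuracy exceeds the preset threshold.
    `model` is any object with fit_step(sample) and predict(image)
    methods (an assumed interface for this sketch)."""
    for it in range(1, max_iters + 1):
        model.fit_step(random.choice(train_set))
        if it % check_every == 0:
            correct = sum(model.predict(img) == label for img, label in val_set)
            if correct / len(val_set) > target_acc:
                return it  # iterations needed to reach the target accuracy
    return max_iters
```

The 5:1 split guarantees the training set is larger than the verification set, and the periodic check implements the "every preset period" verification named in the text.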
S130, positioning an area corresponding to the second coordinate data from a pre-created map according to the second coordinate data, extracting second image attribute information corresponding to the second image data in the area, and judging whether the first image attribute information is consistent with the second image attribute information.
In this embodiment, a corresponding area is found from a pre-created map according to second coordinate data of the target to be detected, second image attribute information corresponding to second image data in the area is extracted, and whether the data uploaded by the data acquisition terminal is accurate or not can be obtained by judging whether the first image attribute information is consistent with the second image attribute information or not.
The map is an electronic map with higher precision and more data dimensions: besides road information, it also contains static information about the surroundings that is related to traffic.
Maps store a large amount of driving-assistance information as structured data, which can be divided into two categories. The first is road data, such as lane information: the position, type, width, gradient, and curvature of lane lines. The second is fixed-object information around the lane, such as traffic signs, traffic lights, lane restrictions, junctions, obstacles, and other road details, as well as infrastructure information such as overhead objects, guard rails, the number of lanes, road-edge types, and roadside landmarks.
This information is geocoded, so a navigation system can precisely locate terrain, objects, and road profiles to guide the vehicle. Most important is an accurate three-dimensional representation of the road network (centimetre-level accuracy), such as the geometry of the road surface, the positions of road marking lines, and a point-cloud model of the surrounding road environment. With these high-precision three-dimensional representations, an autonomous driving system can determine its current position exactly by comparing them against data from onboard GPS, IMU, LiDAR, or cameras. The map also contains rich semantic information, such as the positions and types of traffic lights, the types of road marking lines, and which roads are drivable.
S140, when the first image attribute information is judged to be consistent with the second image attribute information, calculating a similarity value between the first image data and the second image data by using a similarity algorithm, if the similarity value is smaller than or equal to a first preset threshold value, judging that the target to be detected is abnormal, generating feedback information including the first image attribute information and abnormal state information of the target to be detected, and sending the feedback information to a client.
In this embodiment, when the first image attribute information is judged to be consistent with the second image attribute information (for example, both are telegraph poles), it indicates that the data acquisition terminal uploaded accurate data. A similarity value between the first image data and the second image data is then calculated with the similarity algorithm; if the similarity value is smaller than or equal to the first preset threshold value, the target to be detected corresponding to the first image data is judged to be abnormal (for example, deformed or broken), and feedback information is generated and sent to the client, so that technicians are notified to inspect and maintain the possibly abnormal target.
The feedback information includes first image attribute information (e.g., a telegraph pole) and abnormal state information (e.g., possible damage) of the target to be measured.
The similarity algorithm is the SURF algorithm. SURF (Speeded-Up Robust Features) is an interest-point detection and descriptor algorithm similar to SIFT.
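A similarity value in this spirit can be produced as below. Note this is NOT SURF (which lives in opencv-contrib and is not sketched here); it is a deliberately simple grey-level histogram-intersection stand-in, only to show a score in [0, 1] being produced for comparison against the preset threshold:

```python
def histogram_similarity(img_a, img_b, bins=16):
    """Crude similarity in [0, 1] via grey-level histogram intersection.
    Inputs are flat sequences of 0-255 intensities. Identical images
    score 1.0; images with disjoint intensities score 0.0."""
    def hist(img):
        h = [0] * bins
        for v in img:
            h[min(v * bins // 256, bins - 1)] += 1
        total = len(img)
        return [c / total for c in h]
    ha, hb = hist(img_a), hist(img_b)
    return sum(min(a, b) for a, b in zip(ha, hb))
```

A feature-based method such as SURF would instead match keypoint descriptors between the two images and derive the similarity from the fraction of good matches, which is far more robust to viewpoint and lighting changes.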
Further, the second processing step further includes:
acquiring a preset number of first image data which are uploaded by different data acquisition terminals and have the same second coordinate data;
respectively calculating a similarity value between each first image data and the corresponding second image data, and counting the number of the similarity values smaller than or equal to a second preset threshold value; and
and if the number of first image data whose similarity value is smaller than or equal to the second preset threshold is greater than or equal to a third preset threshold, determining that the target to be detected corresponding to the first image data is abnormal.
In order to verify the accuracy of whether the target to be tested has abnormality, in this embodiment, the same target to be tested is verified through the data uploaded by the multiple data acquisition terminals.
Specifically, a preset number (for example, 10) of first image data having the same second coordinate data and uploaded by different data acquisition terminals is acquired, a similarity value between each first image data and the corresponding second image data is calculated, and the number of similarity values smaller than or equal to the second preset threshold (for example, 10) is counted; if the number of first image data whose similarity value is smaller than or equal to the second preset threshold is greater than or equal to a third preset threshold (for example, 7), the target to be detected corresponding to the first image data is determined to be abnormal.
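The cross-terminal verification step amounts to a vote over the per-terminal similarity values, and can be sketched as follows. The vote threshold of 7 out of 10 follows the patent's example; the similarity threshold of 0.6 is an illustrative placeholder.

```python
def verify_by_crowd(similarities, sim_threshold=0.6, vote_threshold=7):
    """Cross-terminal verification sketch: each element of `similarities`
    is the similarity between one terminal's first image data and the
    corresponding second image data. The target is declared abnormal only
    when at least `vote_threshold` terminals report a similarity at or
    below `sim_threshold`."""
    low_count = sum(1 for s in similarities if s <= sim_threshold)
    return low_count >= vote_threshold

# 8 of 10 terminals see a poor match, so the target is judged abnormal
is_abnormal = verify_by_crowd([0.3] * 8 + [0.9] * 2)
```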
For detailed description of the above steps, please refer to the following description of fig. 2 regarding a schematic diagram of program modules of an embodiment of the target anomaly identification program 10 and fig. 3 regarding a schematic diagram of a method flow of an embodiment of the target anomaly identification method.
As shown in fig. 2, a functional block diagram of the target abnormality recognition apparatus 100 according to the present invention is shown.
The target abnormality recognition apparatus 100 according to the present invention may be installed in an electronic device. According to the implemented functions, the target abnormality recognition apparatus 100 may include an obtaining module 110, a recognition module 120, a first processing module 130, and a second processing module 140. The module of the present invention, which may also be referred to as a unit, refers to a series of computer program segments that can be executed by a processor of an electronic device and that can perform a fixed function, and that are stored in a memory of the electronic device.
In the present embodiment, the functions regarding the respective modules/units are as follows:
the acquiring module 110 is configured to acquire acquired data uploaded by a data acquisition terminal, where the acquired data includes depth distance data between the data acquisition terminal and a target to be detected, first image data including the target to be detected, first coordinate data of the data acquisition terminal, and an azimuth angle of the data acquisition terminal, and calculate, according to the depth distance data, the first coordinate data, and the azimuth angle, second coordinate data of the target to be detected.
For example, the invention can be applied to an application scenario based on an Internet of Vehicles (IoV) crowdsourcing mode. Crowdsourcing refers to the practice of a company or organization outsourcing work tasks, formerly performed by employees, to an unspecified (and often large) public of volunteers in a free and voluntary manner. By applying the crowdsourcing concept in combination with IoV technology, a data acquisition terminal is installed on each vehicle, and the data acquired by all the data acquisition terminals is uploaded to the electronic device 1 to realize data sharing, which has the advantages of high timeliness, wide coverage, and a large acquisition volume.
In this embodiment, the electronic device 1 acquires the collected data uploaded by the data collecting terminal, and performs data processing and analysis on the collected data to obtain second coordinate data of the target to be measured (e.g., a road marking, a signboard, a traffic isolation and collision avoidance facility, a signal lamp, a gantry crane, a camera, etc.). It is emphasized that, in order to further ensure the privacy and security of the collected data, the collected data may also be stored in a node of a block chain.
The acquisition data comprises depth distance data between the data acquisition terminal and the target to be detected, first image data containing the target to be detected, first coordinate data of the data acquisition terminal and an azimuth angle of the data acquisition terminal, and second coordinate data of the target to be detected is obtained through calculation according to the acquired acquisition data, namely the depth distance data, the first coordinate data and the azimuth angle.
The first coordinate data is the geodetic longitude and geodetic latitude of the data acquisition terminal, and the second coordinate data is the geodetic longitude and geodetic latitude of the target to be detected. The second coordinate data of the target to be detected is calculated from the first coordinate data and the azimuth angle of the data acquisition terminal by using a direct (forward) geodetic problem algorithm.
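The direct geodetic problem takes the terminal's coordinates, the azimuth toward the target, and the measured depth distance, and returns the target's coordinates. A minimal spherical-earth sketch is shown below; a production system would use an ellipsoidal (WGS-84) solution such as Vincenty's formula, and the function name, coordinates, and earth radius here are illustrative assumptions.

```python
import math

def destination_point(lat_deg, lon_deg, azimuth_deg, distance_m,
                      earth_radius_m=6371000.0):
    """Direct geodetic problem on a sphere: from the terminal's latitude/
    longitude, the azimuth toward the target, and the measured depth
    distance, compute the target's latitude/longitude (the second
    coordinate data). Spherical approximation of the ellipsoidal solution."""
    lat1 = math.radians(lat_deg)
    lon1 = math.radians(lon_deg)
    az = math.radians(azimuth_deg)
    delta = distance_m / earth_radius_m  # angular distance travelled
    lat2 = math.asin(math.sin(lat1) * math.cos(delta)
                     + math.cos(lat1) * math.sin(delta) * math.cos(az))
    lon2 = lon1 + math.atan2(math.sin(az) * math.sin(delta) * math.cos(lat1),
                             math.cos(delta) - math.sin(lat1) * math.sin(lat2))
    return math.degrees(lat2), math.degrees(lon2)

# A terminal at (31.0 N, 121.0 E) measuring a pole 25 m away, due east:
target_lat, target_lon = destination_point(31.0, 121.0, 90.0, 25.0)
```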
Specifically, the data acquisition terminal includes a binocular camera, and the acquisition process of the depth distance data includes:
shooting calibration objects from different angles by using the binocular camera, calibrating two sub-cameras of the binocular camera according to the shot calibration object images, and calculating to obtain calibration parameters;
shooting the target to be detected by using the binocular camera, and matching the images of the target to be detected shot by the two sub-cameras by using Sobel edge features as feature points, so as to calculate a disparity (visual difference) value between the two sub-cameras;
and calculating the depth distance data according to the calibration parameters and the vision difference value by using a predetermined calculation rule.
In this embodiment, the binocular camera uses the principle of binocular positioning: the two sub-cameras, fixed at different positions, each shoot an image of the target to be detected, so that the coordinates of a given feature point on the target are obtained on the image planes of the two sub-cameras. As long as the precise relative position of the two sub-cameras is known, the coordinates of the feature point in the coordinate system fixed to the camera can be calculated by using a predetermined calculation rule; that is, the depth distance data of the feature point is determined. In this embodiment, the feature point refers to the target to be detected.
The calculation rule is as follows:
Z = f × B / (X_R − X_T)

wherein Z represents the depth distance between the binocular camera and the target to be measured; f and B are the calibration parameters, f representing the focal length of the binocular camera and B the center distance between the two sub-cameras; X_R and X_T represent the positions of the matched feature point relative to the optical centers of the two sub-cameras, and X_R − X_T is the disparity (visual difference value).
Further, the data acquisition terminal further comprises a GPS processing unit, a mileage coding unit, and an inertial navigation unit, and the acquisition process of the first coordinate data and the azimuth angle includes:
the GPS processing unit is used for receiving a differential signal sent by a differential reference station, outputting position information with first preset precision and sending the position information to the inertial navigation unit;
acquiring mileage information of the vehicle by using the mileage coding unit and sending the mileage information to the inertial navigation unit;
and receiving the position information and the mileage information by using the inertial navigation unit, fusing the position information and the mileage information, and outputting a first position coordinate and an azimuth angle of the acquisition terminal with a second preset precision, wherein the second preset precision is higher than the first preset precision.
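The patent does not spell out how the inertial navigation unit fuses the two inputs beyond "fusing the position information and the mileage information". A toy one-dimensional inverse-variance fusion, shown below, illustrates why the fused output can reach a higher precision than the GPS input alone; a real system would run a Kalman filter over the full pose state, and all numbers here are illustrative.

```python
def fuse_position(gps_pos, gps_std, dr_pos, dr_std):
    """Inverse-variance weighted fusion of a GPS position fix and a
    dead-reckoned (odometer + inertial) estimate along one axis.
    The fused standard deviation is smaller than either input's,
    matching the requirement that the second preset precision be
    higher than the first."""
    w_gps = 1.0 / gps_std ** 2
    w_dr = 1.0 / dr_std ** 2
    fused = (w_gps * gps_pos + w_dr * dr_pos) / (w_gps + w_dr)
    fused_std = (1.0 / (w_gps + w_dr)) ** 0.5
    return fused, fused_std

# GPS says 10.0 m +/- 2.0 m; dead reckoning says 10.4 m +/- 1.0 m
pos, std = fuse_position(10.0, 2.0, 10.4, 1.0)
```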
The recognition module 120 is configured to input the first image data into a pre-trained image recognition model, and output first image attribute information corresponding to the first image data.
In this embodiment, the first image attribute information corresponding to the first image data can be recognized by using the image recognition model trained in advance.
Wherein the first image attribute information represents the name of the object to be measured in the first image data, such as road marking, signboard, traffic isolation and anti-collision facility, signal lamp, gantry crane, camera, etc.
The image recognition model is obtained by training a Convolutional Neural Network (CNN) model, and the training process of the image recognition model is as follows:
acquiring a preset number (for example, 100,000) of first image data samples, wherein each first image data sample is labeled with its corresponding first image attribute information;
dividing the first image data samples into a training set and a verification set according to a preset proportion (for example, 5:1), wherein the number of the first image data samples in the training set is greater than that of the first image data samples in the verification set;
inputting a first image data sample in the training set into the convolutional neural network model for training, verifying the convolutional neural network model by using the verification set every preset period (for example, every 1000 iterations), and verifying the accuracy of the image identification model by using each piece of first image data and corresponding first image attribute information in the verification set; and
and when the verification accuracy is greater than a first preset threshold (for example, 85%), ending the training to obtain the image recognition model.
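The training procedure above (train, validate every preset period, stop once validation accuracy clears the threshold) can be sketched framework-agnostically. Here `update` and `predict` stand in for the CNN's weight update and forward pass; the memorizing stub model is only a placeholder so the loop is runnable, and the overlapping validation subset is a concession to that stub (a real 5:1 split would be disjoint).

```python
import random

def train_until_accurate(train_set, val_set, update, predict,
                         val_every=1000, target_acc=0.85, max_iters=100000):
    """Loop skeleton for the described training procedure: iterate over
    training samples, validate every `val_every` iterations against the
    validation set, and stop as soon as validation accuracy exceeds
    `target_acc`."""
    for it in range(1, max_iters + 1):
        sample, label = random.choice(train_set)
        update(sample, label)
        if it % val_every == 0:
            correct = sum(1 for x, y in val_set if predict(x) == y)
            accuracy = correct / len(val_set)
            if accuracy > target_acc:
                return accuracy
    return None  # target accuracy never reached

# Placeholder "model" that memorizes labels; a real run would plug in a
# CNN's weight update and forward pass here.
memory = {}
train_set = [(i, i % 2) for i in range(12)]
val_set = train_set[:2]  # overlaps training only so the stub can converge
acc = train_until_accurate(train_set, val_set,
                           update=lambda x, y: memory.__setitem__(x, y),
                           predict=lambda x: memory.get(x),
                           val_every=50)
```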
The first processing module 130 is configured to locate an area corresponding to the second coordinate data from a pre-created map according to the second coordinate data, extract second image attribute information corresponding to the second image data in the area, and determine whether the first image attribute information is consistent with the second image attribute information.
In this embodiment, a corresponding area is found from a pre-created map according to second coordinate data of the target to be detected, second image attribute information corresponding to second image data in the area is extracted, and whether the data uploaded by the data acquisition terminal is accurate or not can be obtained by judging whether the first image attribute information is consistent with the second image attribute information or not.
The map is an electronic map with higher precision and more data dimensions. The higher precision and richer data dimensions are reflected in the fact that, in addition to road information, the map also contains traffic-related static information about the road's surroundings.
The map stores a large amount of driving assistance information as structured data, which can be divided into two categories. The first category is road data, such as lane information: the position, type, width, gradient, and curvature of lane lines. The second category is information on fixed objects around the lane, such as traffic signs and traffic lights, lane restrictions, junctions, obstacles, and other road details, as well as infrastructure information such as overhead structures, guard rails, the number of lanes, road edge types, and roadside landmarks.
The information is geocoded, and a navigation system can accurately position terrain, objects and road profiles so as to guide the vehicle to run. The most important of them is the accurate three-dimensional representation of the road network (centimeter level accuracy), such as the geometric structure of the road surface, the position of the road sign line, the point cloud model of the surrounding road environment, etc. With these high precision three-dimensional representations, the autopilot system can accurately determine its current location by comparing data from onboard GPS, IMU, LiDAR or cameras. In addition, the map contains rich semantic information, such as the position and type of traffic lights, the type of road marking lines, and which roads can be driven.
The second processing module 140 is configured to, when it is determined that the first image attribute information is consistent with the second image attribute information, calculate a similarity value between the first image data and the second image data by using a similarity algorithm, determine that the target to be detected is abnormal if the similarity value is smaller than or equal to a first preset threshold, generate feedback information including the first image attribute information and abnormal state information of the target to be detected, and send the feedback information to the client.
In this embodiment, when it is determined that the first image attribute information is consistent with the second image attribute information (for example, both are telegraph poles), it indicates that the data acquisition terminal has uploaded accurate data. A similarity value between the first image data and the second image data is then calculated by using a similarity algorithm; if the similarity value is smaller than or equal to the first preset threshold, it is determined that the target to be detected corresponding to the first image data is abnormal (for example, deformed or broken), feedback information is generated, and the feedback information is sent to the client, so that technicians can be notified to inspect and maintain the possibly abnormal target to be detected.
The feedback information includes first image attribute information (e.g., a telegraph pole) and abnormal state information (e.g., possible damage) of the target to be measured.
The similarity algorithm is the SURF algorithm. SURF (Speeded-Up Robust Features) is an interest point detection and descriptor algorithm similar to SIFT.
Further, the second processing module is further configured to:
acquiring a preset number of first image data which are uploaded by different data acquisition terminals and have the same second coordinate data;
respectively calculating a similarity value between each first image data and the corresponding second image data, and counting the number of the similarity values smaller than or equal to a second preset threshold value; and
and if the number of first image data whose similarity value is smaller than or equal to the second preset threshold is greater than or equal to a third preset threshold, determining that the target to be detected corresponding to the first image data is abnormal.
In order to verify the accuracy of whether the target to be tested has abnormality, in this embodiment, the same target to be tested is verified through the data uploaded by the multiple data acquisition terminals.
Specifically, a preset number (for example, 10) of first image data having the same second coordinate data and uploaded by different data acquisition terminals is acquired, a similarity value between each first image data and the corresponding second image data is calculated, and the number of similarity values smaller than or equal to the second preset threshold (for example, 10) is counted; if the number of first image data whose similarity value is smaller than or equal to the second preset threshold is greater than or equal to a third preset threshold (for example, 7), the target to be detected corresponding to the first image data is determined to be abnormal.
Fig. 3 is a schematic structural diagram of an electronic device implementing the target anomaly identification method according to the present invention.
The electronic device 1 may comprise a processor 12, a memory 11 and a bus, and may further comprise a computer program, such as a target anomaly recognition program 10, stored in the memory 11 and executable on the processor 12.
Wherein the memory 11 includes at least one type of readable storage medium, and the computer usable storage medium may mainly include a storage program area and a storage data area, wherein the storage program area may store an operating system, an application program required for at least one function, and the like; the storage data area may store data created according to the use of the blockchain node, and the like. The readable storage medium includes flash memory, removable hard disks, multimedia cards, card type memory (e.g., SD or DX memory, etc.), magnetic memory, magnetic disks, optical disks, etc. The memory 11 may in some embodiments be an internal storage unit of the electronic device 1, such as a removable hard disk of the electronic device 1. The memory 11 may also be an external storage device of the electronic device 1 in other embodiments, such as a plug-in mobile hard disk, a Smart Media Card (SMC), a Secure Digital (SD) Card, a Flash memory Card (Flash Card), and the like, which are provided on the electronic device 1. Further, the memory 11 may also include both an internal storage unit and an external storage device of the electronic device 1. The memory 11 may be used not only to store application software installed in the electronic device 1 and various types of data, such as the code of the target abnormality recognition program 10, but also to temporarily store data that has been output or is to be output.
The processor 12 may be formed of an integrated circuit in some embodiments, for example, a single packaged integrated circuit, or may be formed of a plurality of integrated circuits packaged with the same or different functions, including one or more Central Processing Units (CPUs), microprocessors, digital Processing chips, graphics processors, and combinations of various control chips. The processor 12 is a Control Unit (Control Unit) of the electronic device, connects various components of the electronic device by using various interfaces and lines, and executes various functions and processes data of the electronic device 1 by running or executing programs or modules (e.g., target exception recognition programs, etc.) stored in the memory 11 and calling data stored in the memory 11.
The bus may be a Peripheral Component Interconnect (PCI) bus, an Extended Industry Standard Architecture (EISA) bus, or the like. The bus may be divided into an address bus, a data bus, a control bus, etc. The bus is arranged to enable connection communication between the memory 11 and at least one processor 12 or the like.
Fig. 3 shows only an electronic device with components, and it will be understood by those skilled in the art that the structure shown in fig. 3 does not constitute a limitation of the electronic device 1, and may comprise fewer or more components than those shown, or some components may be combined, or a different arrangement of components.
For example, although not shown, the electronic device 1 may further include a power supply (such as a battery) for supplying power to each component, and preferably, the power supply may be logically connected to the at least one processor 12 through a power management device, so as to implement functions of charge management, discharge management, power consumption management, and the like through the power management device. The power supply may also include any component of one or more dc or ac power sources, recharging devices, power failure detection circuitry, power converters or inverters, power status indicators, and the like. The electronic device 1 may further include various sensors, a bluetooth module, a Wi-Fi module, and the like, which are not described herein again.
Further, the electronic device 1 may further include a network interface 13, and optionally, the network interface 13 may include a wired interface and/or a wireless interface (such as a WI-FI interface, a bluetooth interface, etc.), which are generally used for establishing a communication connection between the electronic device 1 and other electronic devices.
Optionally, the electronic device 1 may further comprise a user interface, which may be a Display (Display), an input unit (such as a Keyboard), and optionally a standard wired interface, a wireless interface. Alternatively, in some embodiments, the display may be an LED display, a liquid crystal display, a touch-sensitive liquid crystal display, an OLED (Organic Light-Emitting Diode) touch device, or the like. The display, which may also be referred to as a display screen or display unit, is suitable for displaying information processed in the electronic device 1 and for displaying a visualized user interface, among other things.
It is to be understood that the described embodiments are for purposes of illustration only and that the scope of the appended claims is not limited to such structures.
The target anomaly recognition program 10 stored in the memory 11 of the electronic device 1 is a combination of instructions that, when executed in the processor 12, enable:
an acquisition step: acquiring acquisition data uploaded by a data acquisition terminal, wherein the acquisition data comprises depth distance data between the data acquisition terminal and a target to be detected, first image data containing the target to be detected, first coordinate data of the data acquisition terminal and an azimuth angle of the data acquisition terminal, and second coordinate data of the target to be detected is obtained through calculation according to the depth distance data, the first coordinate data and the azimuth angle of the data acquisition terminal;
an identification step: inputting the first image data into a pre-trained image recognition model, and outputting first image attribute information corresponding to the first image data;
a first processing step: positioning an area corresponding to the second coordinate data from a pre-established map according to the second coordinate data, extracting second image attribute information corresponding to the second image data in the area, and judging whether the first image attribute information is consistent with the second image attribute information; and
a second processing step: and when the first image attribute information is judged to be consistent with the second image attribute information, calculating a similarity value between the first image data and the second image data by using a similarity algorithm, if the similarity value is smaller than or equal to a first preset threshold value, judging that the target to be detected is abnormal, generating feedback information comprising the first image attribute information and abnormal state information of the target to be detected, and sending the feedback information to a client.
In another embodiment, the program further performs the steps of:
acquiring a preset number of first image data which are uploaded by different data acquisition terminals and have the same second coordinate data;
respectively calculating a similarity value between each first image data and the corresponding second image data, and counting the number of the similarity values smaller than or equal to a second preset threshold value; and
and if the number of the first image data corresponding to the similarity value is less than or equal to a second preset threshold value and is greater than or equal to a third preset threshold value, judging that the target to be detected corresponding to the first image data is abnormal.
Specifically, the processor 11 may refer to the description of the relevant steps in the embodiment corresponding to fig. 1 for a specific implementation method of the instruction, which is not described herein again. It is emphasized that, in order to further ensure the privacy and security of the collected data, the collected data may also be stored in a node of a block chain.
Further, the integrated modules/units of the electronic device 1, if implemented in the form of software functional units and sold or used as separate products, may be stored in a computer-readable storage medium. The computer-readable medium may include: any entity or device capable of carrying the computer program code, a recording medium, a USB flash drive, a removable hard disk, a magnetic disk, an optical disk, a computer memory, and a read-only memory (ROM).
In the embodiments provided in the present invention, it should be understood that the disclosed apparatus, device and method can be implemented in other ways. For example, the above-described apparatus embodiments are merely illustrative, and for example, the division of the modules is only one logical functional division, and other divisions may be realized in practice.
The modules described as separate parts may or may not be physically separate, and parts displayed as modules may or may not be physical units, may be located in one place, or may be distributed on a plurality of network units. Some or all of the modules may be selected according to actual needs to achieve the purpose of the solution of the present embodiment.
In addition, functional modules in the embodiments of the present invention may be integrated into one processing unit, or each unit may exist alone physically, or two or more units are integrated into one unit. The integrated unit can be realized in a form of hardware, or in a form of hardware plus a software functional module.
It will be evident to those skilled in the art that the invention is not limited to the details of the foregoing illustrative embodiments, and that the present invention may be embodied in other specific forms without departing from the spirit or essential attributes thereof.
The present embodiments are therefore to be considered in all respects as illustrative and not restrictive, the scope of the invention being indicated by the appended claims rather than by the foregoing description, and all changes which come within the meaning and range of equivalency of the claims are therefore intended to be embraced therein. Any reference signs in the claims shall not be construed as limiting the claim concerned.
The blockchain is a novel application mode of computer technologies such as distributed data storage, peer-to-peer transmission, consensus mechanisms, and encryption algorithms. A blockchain is essentially a decentralized database: a chain of data blocks linked by cryptographic methods, where each data block contains information about a batch of network transactions and is used to verify the validity (anti-counterfeiting) of the information and to generate the next block. The blockchain may include a blockchain underlying platform, a platform product service layer, an application service layer, and the like.
Furthermore, it is obvious that the word "comprising" does not exclude other elements or steps, and the singular does not exclude the plural. A plurality of units or means recited in the system claims may also be implemented by one unit or means through software or hardware. The terms first, second, etc. are used to denote names and do not indicate any particular order.
Finally, it should be noted that the above embodiments are only for illustrating the technical solutions of the present invention and not for limiting, and although the present invention is described in detail with reference to the preferred embodiments, it should be understood by those skilled in the art that modifications or equivalent substitutions may be made on the technical solutions of the present invention without departing from the spirit and scope of the technical solutions of the present invention.

Claims (10)

1. A target abnormity identification method is applied to electronic equipment and is characterized by comprising the following steps:
an acquisition step: acquiring acquisition data uploaded by a data acquisition terminal, wherein the acquisition data comprises depth distance data between the data acquisition terminal and a target to be detected, first image data containing the target to be detected, first coordinate data of the data acquisition terminal and an azimuth angle of the data acquisition terminal, and second coordinate data of the target to be detected is obtained through calculation according to the depth distance data, the first coordinate data and the azimuth angle;
an identification step: inputting the first image data into a pre-trained image recognition model, and outputting first image attribute information corresponding to the first image data;
a first processing step: positioning an area corresponding to the second coordinate data from a pre-established map according to the second coordinate data, extracting second image attribute information corresponding to the second image data in the area, and judging whether the first image attribute information is consistent with the second image attribute information; and
a second processing step: and when the first image attribute information is judged to be consistent with the second image attribute information, calculating a similarity value between the first image data and the second image data by using a similarity algorithm, if the similarity value is smaller than or equal to a first preset threshold value, judging that the target to be detected is abnormal, generating feedback information comprising the first image attribute information and abnormal state information of the target to be detected, and sending the feedback information to a client.
2. The target abnormality recognition method according to claim 1, wherein the data acquisition terminal includes a binocular camera, and the acquisition process of the depth distance data includes:
shooting calibration objects from different angles by using the binocular camera, calibrating two sub-cameras of the binocular camera according to the shot calibration object images, and calculating to obtain calibration parameters;
shooting the target to be detected by using the binocular camera, and matching the image of the target to be detected shot between the two sub-cameras by using Sobel edge characteristics as characteristic points so as to calculate a visual difference value between the two sub-cameras; and
and calculating the depth distance data according to the calibration parameters and the vision difference value by a predetermined calculation rule.
3. The target anomaly identification method according to claim 2, characterized in that said calculation rule is:
Z = f × B / (X_R − X_T)

wherein Z represents the depth distance between the binocular camera and the target to be measured; f and B are the calibration parameters, f representing the focal length of the binocular camera and B the center distance between the two sub-cameras; X_R and X_T represent the positions of the matched feature point relative to the optical centers of the two sub-cameras, and X_R − X_T is the disparity (visual difference value).
4. The method for identifying the target anomaly according to claim 1, wherein the data acquisition terminal further comprises a GPS processing unit, a mileage coding unit and an inertial navigation unit, and the acquisition process of the first coordinate data and the azimuth angle comprises:
the GPS processing unit is used for receiving a differential signal sent by a differential reference station, outputting position information with first preset precision and sending the position information to the inertial navigation unit;
acquiring mileage information of the vehicle by using the mileage coding unit and sending the mileage information to the inertial navigation unit; and
receiving the position information and the mileage information by using the inertial navigation unit, fusing the position information and the mileage information, and outputting a first position coordinate and an azimuth angle of the acquisition terminal with a second preset precision, wherein the second preset precision is higher than the first preset precision.
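Claim 4 specifies fusion of the GPS fix with the odometry-derived position but not the fusion method. A toy sketch is given below, combining dead reckoning along the current azimuth with a fixed-weight blend; in practice the inertial navigation unit would run a Kalman-filter-style estimator, and the function names and the 0.7 weight are illustrative assumptions only.

```python
import math

def dead_reckon(prev_xy, delta_mileage, azimuth_rad):
    """Propagate the position by the travelled distance along the current heading
    (azimuth measured clockwise from north on a local plane)."""
    x, y = prev_xy
    return (x + delta_mileage * math.sin(azimuth_rad),
            y + delta_mileage * math.cos(azimuth_rad))

def fuse(gps_xy, dr_xy, gps_weight=0.7):
    """Fixed-weight blend of the GPS fix and the dead-reckoned estimate
    (a crude stand-in for a Kalman filter update)."""
    return tuple(gps_weight * g + (1.0 - gps_weight) * d
                 for g, d in zip(gps_xy, dr_xy))

dr = dead_reckon((0.0, 0.0), 10.0, math.pi / 2)   # 10 m travelled heading due east
fused = fuse((9.5, 0.2), dr)                       # reconcile with a slightly offset GPS fix
print(dr, fused)
```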
5. The target anomaly identification method according to claim 1, wherein the acquisition data is stored in a blockchain, and the second processing step further comprises:
acquiring a preset number of first image data which are uploaded by different data acquisition terminals and have the same second coordinate data;
respectively calculating a similarity value between each first image data and the corresponding second image data, and counting the number of the similarity values smaller than or equal to a second preset threshold value; and
if the number of similarity values smaller than or equal to the second preset threshold value is greater than or equal to a third preset threshold value, judging that the target to be detected corresponding to the first image data is abnormal.
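The multi-terminal confirmation of claim 5 amounts to a counting vote over reports from different acquisition terminals. A minimal sketch follows; the two threshold values are assumptions for the example.

```python
def vote_abnormal(similarity_values, sim_threshold=0.8, count_threshold=3):
    """Judge the target abnormal when at least count_threshold terminals
    report a similarity at or below sim_threshold."""
    low_count = sum(1 for s in similarity_values if s <= sim_threshold)
    return low_count >= count_threshold

# Four terminals observed the same second coordinate; three reports fall at or below 0.8.
print(vote_abnormal([0.95, 0.60, 0.55, 0.40]))   # True
```

Requiring agreement across independently uploaded records reduces false alarms from a single occluded or miscalibrated camera, which is the evident motivation for storing the acquisition data in a shared (blockchain-backed) store.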
6. A target anomaly identification device, comprising:
an acquisition module: acquiring acquisition data uploaded by a data acquisition terminal, wherein the acquisition data comprises depth distance data between the data acquisition terminal and a target to be detected, first image data containing the target to be detected, first coordinate data of the data acquisition terminal and an azimuth angle of the data acquisition terminal, and second coordinate data of the target to be detected is obtained through calculation according to the depth distance data, the first coordinate data and the azimuth angle of the data acquisition terminal;
an identification module: inputting the first image data into a pre-trained image recognition model, and outputting first image attribute information corresponding to the first image data;
a first processing module: positioning an area corresponding to the second coordinate data from a pre-established map according to the second coordinate data, extracting second image attribute information corresponding to the second image data in the area, and judging whether the first image attribute information is consistent with the second image attribute information; and
a second processing module: when the first image attribute information is judged to be consistent with the second image attribute information, calculating a similarity value between the first image data and the second image data by using a similarity algorithm; if the similarity value is smaller than or equal to a first preset threshold value, judging that the target to be detected is abnormal, generating feedback information comprising the first image attribute information and the abnormal state information of the target to be detected, and sending the feedback information to a client.
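The acquisition module of claim 6 derives the target's second coordinate data from the depth distance, the terminal's first coordinates and the azimuth angle. On a local plane this is a single polar-to-Cartesian projection; the sketch assumes the azimuth is measured clockwise from north, which the claims do not state.

```python
import math

def target_coordinates(terminal_xy, depth_m, azimuth_rad):
    """Project the measured depth along the terminal's azimuth
    (clockwise from north) to obtain the target's local-plane coordinates."""
    x, y = terminal_xy
    return (x + depth_m * math.sin(azimuth_rad),
            y + depth_m * math.cos(azimuth_rad))

# Terminal at (100, 200) sees the target 6 m away at azimuth 0 (due north).
print(target_coordinates((100.0, 200.0), 6.0, 0.0))   # (100.0, 206.0)
```

For geodetic coordinates the same offset would instead be applied through a local ENU frame or a geodesic forward computation rather than plain trigonometry.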
7. The target anomaly identification device according to claim 6, wherein the calculation rule is:
Z = (f · B) / (X_R − X_T)

where Z represents the depth distance between the binocular camera and the target to be measured; f and B are the calibration parameters, f representing the focal length of the binocular camera and B representing the center distance between the two sub-cameras; X_R and X_T represent the image positions of the target relative to the optical centers of the two sub-cameras; and X_R − X_T represents the disparity value.
8. The target anomaly identification device according to claim 7, wherein the data acquisition terminal includes a binocular camera, and the acquisition process of the depth distance data includes:
shooting calibration objects from different angles by using the binocular camera, calibrating two sub-cameras of the binocular camera according to the shot calibration object images, and calculating to obtain calibration parameters;
shooting the target to be detected by using the binocular camera, and matching the images of the target to be detected captured by the two sub-cameras, with Sobel edge features serving as the feature points, so as to calculate a disparity value between the two sub-cameras; and
calculating the depth distance data from the calibration parameters and the disparity value according to a predetermined calculation rule.
9. An electronic device, comprising:
at least one processor; and
a memory communicatively coupled to the at least one processor; wherein,
the memory stores instructions executable by the at least one processor to enable the at least one processor to perform the target anomaly identification method of any one of claims 1 to 5.
10. A computer-readable storage medium comprising a storage data area and a storage program area, the storage data area storing data created according to the use of a blockchain node and the storage program area storing a computer program; wherein the computer program, when executed by a processor, implements the steps of the target anomaly identification method according to any one of claims 1-5.
CN202010359984.8A 2020-04-30 2020-04-30 Target abnormality recognition method, target abnormality recognition device, electronic equipment and storage medium Active CN111652940B (en)

Priority Applications (2)

Application Number Priority Date Filing Date Title
CN202010359984.8A CN111652940B (en) 2020-04-30 2020-04-30 Target abnormality recognition method, target abnormality recognition device, electronic equipment and storage medium
PCT/CN2020/099068 WO2021217859A1 (en) 2020-04-30 2020-06-30 Target anomaly identification method and apparatus, and electronic device and storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202010359984.8A CN111652940B (en) 2020-04-30 2020-04-30 Target abnormality recognition method, target abnormality recognition device, electronic equipment and storage medium

Publications (2)

Publication Number Publication Date
CN111652940A true CN111652940A (en) 2020-09-11
CN111652940B CN111652940B (en) 2024-06-04

Family

ID=72346559

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010359984.8A Active CN111652940B (en) 2020-04-30 2020-04-30 Target abnormality recognition method, target abnormality recognition device, electronic equipment and storage medium

Country Status (2)

Country Link
CN (1) CN111652940B (en)
WO (1) WO2021217859A1 (en)


Families Citing this family (45)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114120221A (en) * 2021-11-06 2022-03-01 北京奇天大胜网络科技有限公司 Environment checking method based on deep learning, electronic equipment and storage medium
CN114200877B (en) * 2021-11-12 2024-02-27 珠海大横琴科技发展有限公司 Monitoring method and device for electric equipment
CN114115162B (en) * 2021-11-29 2024-05-10 扬州三星塑胶有限公司 Material throwing control method and system in PET production process
CN114254038B (en) * 2021-12-03 2024-05-14 中安链科技(重庆)有限公司 Campus security data synchronization system based on blockchain network
CN113902047B (en) * 2021-12-10 2022-03-04 腾讯科技(深圳)有限公司 Image element matching method, device, equipment and storage medium
CN114252013B (en) * 2021-12-22 2024-03-22 深圳市天昕朗科技有限公司 AGV visual identification accurate positioning system based on wired communication mode
CN114157526B (en) * 2021-12-23 2022-08-12 广州新华学院 Digital image recognition-based home security remote monitoring method and device
CN114359221A (en) * 2022-01-04 2022-04-15 北京金山云网络技术有限公司 Method, device and system for detecting depth of water accumulated on road surface and electronic equipment
CN114519499B (en) * 2022-01-10 2024-05-24 湖北国际物流机场有限公司 BIM model-based inspection batch positioning method and system
CN114445805A (en) * 2022-01-29 2022-05-06 北京百度网讯科技有限公司 Attribute recognition model training, attribute recognition method, device and equipment
CN114719799B (en) * 2022-03-04 2024-04-26 武汉海微科技股份有限公司 Soft material boundary detection method, device and storage medium
CN114642125B (en) * 2022-03-25 2023-02-03 中国铁建重工集团股份有限公司 Silage silo throwing barrel control method, device, equipment and storage medium
CN114896363B (en) * 2022-04-19 2023-03-28 北京月新时代科技股份有限公司 Data management method, device, equipment and medium
CN114662617B (en) * 2022-05-18 2022-08-09 国网浙江省电力有限公司杭州供电公司 Multi-source data weaving system processing method and device based on multimodal learning strategy
CN115112024B (en) * 2022-05-31 2023-09-26 江苏濠汉信息技术有限公司 Algorithm for texture positioning in wire length measurement process
CN115047008B (en) * 2022-07-19 2024-04-30 苏州大学 Road crack detection system based on Faster R-CNN
CN115546710A (en) * 2022-08-09 2022-12-30 国网湖北省电力有限公司黄龙滩水力发电厂 Method, device and equipment for locating personnel in hydraulic power plant and readable storage medium
CN115437025A (en) * 2022-08-23 2022-12-06 杭州睿影科技有限公司 Open source detection method, device, system, electronic equipment and medium of security inspection equipment
CN115861912A (en) * 2022-09-27 2023-03-28 北京京天威科技发展有限公司 Bolt loosening state detection system and method
CN115345878B (en) * 2022-10-18 2023-01-31 广州市易鸿智能装备有限公司 High-precision method and device for detecting distance between nickel sheet and bus sheet of lithium battery
CN115527199B (en) * 2022-10-31 2023-05-12 通号万全信号设备有限公司 Rail transit train positioning method, device, medium and electronic equipment
CN116389676B (en) * 2022-12-21 2024-07-30 西部科学城智能网联汽车创新中心(重庆)有限公司 Safety monitoring method and device for parking lot
CN116026859B (en) * 2023-01-30 2023-12-12 讯芸电子科技(中山)有限公司 Method, device, equipment and storage medium for detecting installation of optoelectronic module
CN116343137B (en) * 2023-02-21 2024-04-19 北京海上升科技有限公司 Tail gas abnormal automobile big data detection method and system based on artificial intelligence
CN116048124B (en) * 2023-02-23 2023-07-28 北京思维实创科技有限公司 Unmanned plane subway tunnel inspection method and device, computer equipment and storage medium
CN116523852B (en) * 2023-04-13 2024-12-13 成都飞机工业(集团)有限责任公司 A foreign body detection method for carbon fiber composite materials based on feature matching
CN116778202B (en) * 2023-06-02 2024-04-12 广州粤建三和软件股份有限公司 Electronic seal-based test sample sealing method, system and device
CN116593151B (en) * 2023-07-17 2023-09-12 创新奇智(青岛)科技有限公司 Dental socket chest expander testing method and device, electronic equipment and readable storage medium
CN116665138B (en) * 2023-08-01 2023-11-07 临朐弘泰汽车配件有限公司 Visual detection method and system for stamping processing of automobile parts
CN116758400B (en) * 2023-08-15 2023-10-17 安徽容知日新科技股份有限公司 Method and device for detecting abnormality of conveyor belt and computer readable storage medium
CN117314890B (en) * 2023-11-07 2024-04-23 东莞市富明钮扣有限公司 Safety control method, device, equipment and storage medium for button making processing
CN117456482B (en) * 2023-12-25 2024-05-10 暗物智能科技(广州)有限公司 Abnormal event identification method and system for traffic monitoring scene
CN118015536A (en) * 2024-01-18 2024-05-10 国网山西省电力公司电力科学研究院 A method and system for quantitative recognition of transmission line monitoring images
CN117671507B (en) * 2024-01-29 2024-05-10 南昌大学 A river water quality prediction method combined with meteorological data
CN117830295B (en) * 2024-02-21 2024-07-16 广州搏辉特自动化设备有限公司 Control method, system, equipment and medium for automatically adjusting winding parameters of winding machine
CN117830961B (en) * 2024-03-06 2024-05-10 山东达斯特信息技术有限公司 Environment-friendly equipment operation and maintenance behavior analysis method and system based on image analysis
CN117850216B (en) * 2024-03-08 2024-05-24 深圳市锐赛科技有限公司 Intelligent control method and system for acrylic lens production equipment
CN118195111A (en) * 2024-03-20 2024-06-14 湖州暨芯半导体科技有限公司 Tea garden tea picking location planning method, device, equipment and medium
CN117934481B (en) * 2024-03-25 2024-06-11 国网浙江省电力有限公司宁波供电公司 Transmission cable status identification and processing method and system based on artificial intelligence
CN117974719B (en) * 2024-03-28 2024-07-19 深圳新联胜光电科技有限公司 Processing tracking and detecting method, system and medium for optical lens
CN118072113B (en) * 2024-04-19 2024-07-26 山东金蔡伦纸业有限公司 Multi-sense paper production intelligent quality control method and system
CN118706425A (en) * 2024-07-17 2024-09-27 内蒙古电力(集团)有限责任公司内蒙古电力科学研究院分公司 Transmission tower cable loosening detection method and device
CN118555149B (en) * 2024-07-30 2024-10-11 大数据安全工程研究中心(贵州)有限公司 Abnormal behavior safety analysis method based on artificial intelligence
CN118859259A (en) * 2024-09-27 2024-10-29 深圳市北斗云信息技术有限公司 Beidou-R-based anomaly confirmation method and medium
CN118898924B (en) * 2024-10-09 2025-01-14 赣州汉邦咨询服务有限公司 A teaching and training method for medical cosmetic surgery

Citations (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108446674A (en) * 2018-04-28 2018-08-24 平安科技(深圳)有限公司 Electronic device, personal identification method and storage medium based on facial image and voiceprint
CN108801274A (en) * 2018-04-16 2018-11-13 电子科技大学 A kind of terrestrial reference ground drawing generating method of fusion binocular vision and differential satellite positioning
CN109559347A (en) * 2018-11-28 2019-04-02 中南大学 Object identifying method, device, system and storage medium
CN109782364A (en) * 2018-12-26 2019-05-21 中设设计集团股份有限公司 Traffic mark board based on machine vision lacks detection method
WO2019114617A1 (en) * 2017-12-12 2019-06-20 华为技术有限公司 Method, device, and system for fast capturing of still frame
CN110175533A (en) * 2019-05-07 2019-08-27 平安科技(深圳)有限公司 Overpass traffic condition method of real-time, device, terminal and storage medium
CN110443110A (en) * 2019-06-11 2019-11-12 平安科技(深圳)有限公司 Face identification method, device, terminal and storage medium based on multichannel camera shooting
CN110969666A (en) * 2019-11-15 2020-04-07 北京中科慧眼科技有限公司 Binocular camera depth calibration method, device and system and storage medium

Family Cites Families (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
KR101574876B1 (en) * 2014-02-13 2015-12-04 영남대학교 산학협력단 Distance measuring method using vision sensor database
CN104766086B (en) * 2015-04-15 2017-08-25 湖南师范大学 The monitoring and managing method and system of a kind of way mark
CN105674993A (en) * 2016-01-15 2016-06-15 武汉光庭科技有限公司 Binocular camera-based high-precision visual sense positioning map generation system and method
CN107194383A (en) * 2017-07-10 2017-09-22 上海应用技术大学 Based on improving Hu not bending moment and ELM traffic mark board recognition methods and device


Cited By (17)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112180285A (en) * 2020-09-23 2021-01-05 北京百度网讯科技有限公司 Method and device for identifying faults of traffic signal lamp, navigation system and road side equipment
CN112180285B (en) * 2020-09-23 2024-05-31 阿波罗智联(北京)科技有限公司 Method and device for identifying traffic signal lamp faults, navigation system and road side equipment
CN112258842A (en) * 2020-10-26 2021-01-22 北京百度网讯科技有限公司 Traffic monitoring method, device, equipment and storage medium
CN112446312A (en) * 2020-11-19 2021-03-05 深圳市中视典数字科技有限公司 Three-dimensional model identification method and device, electronic equipment and storage medium
CN112507902A (en) * 2020-12-15 2021-03-16 深圳市城市交通规划设计研究中心股份有限公司 Traffic sign abnormality detection method, computer device, and storage medium
CN112633701B (en) * 2020-12-25 2021-10-26 蚌埠科睿达机械设计有限公司 Traffic engineering road crack inspection method and system based on block chain
CN112633701A (en) * 2020-12-25 2021-04-09 北京天仪百康科贸有限公司 Traffic engineering road crack inspection method and system based on block chain
CN112686322A (en) * 2020-12-31 2021-04-20 柳州柳新汽车冲压件有限公司 Part difference identification method, device, equipment and storage medium
CN113436255A (en) * 2021-05-18 2021-09-24 广东中发星通技术有限公司 Track abnormal object identification method and system based on train positioning and visual information
CN113436255B (en) * 2021-05-18 2024-06-04 安徽正弦空间科学技术有限公司 Rail abnormal object identification method and system based on train positioning and visual information
CN113435342A (en) * 2021-06-29 2021-09-24 平安科技(深圳)有限公司 Living body detection method, living body detection device, living body detection equipment and storage medium
CN113435342B (en) * 2021-06-29 2022-08-12 平安科技(深圳)有限公司 Living body detection method, living body detection device, living body detection equipment and storage medium
CN113579512A (en) * 2021-08-02 2021-11-02 北京深点视觉科技有限公司 Position adjusting method and device, electronic equipment and storage medium
CN114092392A (en) * 2021-09-29 2022-02-25 中铁第四勘察设计院集团有限公司 A real-time monitoring and maintenance system and method for a high-speed magnetic levitation air valve
CN114235652A (en) * 2021-11-30 2022-03-25 国网北京市电力公司 Smoke dust particle concentration abnormity identification method and device, storage medium and equipment
CN114663390A (en) * 2022-03-22 2022-06-24 平安普惠企业管理有限公司 Intelligent anti-pinch method, device, equipment and storage medium for automatic door
CN115062242A (en) * 2022-07-11 2022-09-16 广东加一信息技术有限公司 Intelligent information identification method based on block chain and artificial intelligence and big data system

Also Published As

Publication number Publication date
WO2021217859A1 (en) 2021-11-04
CN111652940B (en) 2024-06-04

Similar Documents

Publication Publication Date Title
CN111652940B (en) Target abnormality recognition method, target abnormality recognition device, electronic equipment and storage medium
CN108318043B (en) Method, apparatus, and computer-readable storage medium for updating electronic map
CN111386559B (en) Method and system for judging whether target road facilities exist at intersection or not
Song et al. Enhancing GPS with lane-level navigation to facilitate highway driving
CN109584706B (en) Electronic map lane line processing method, device and computer readable storage medium
JP5057183B2 (en) Reference data generation system and position positioning system for landscape matching
CN107430815A (en) Method and system for automatic identification parking area
CN111380543A (en) Map data generation method and device
CN113255578B (en) Traffic identification recognition method and device, electronic equipment and storage medium
JP2011215057A (en) Scene matching reference data generation system and position measurement system
JP2011227888A (en) Image processing system and location positioning system
CN112432650B (en) High-precision map data acquisition method, vehicle control method and device
CN114397685A (en) Vehicle navigation method, device, equipment and storage medium for weak GNSS signal area
JP2018084126A (en) Road state management program, road state management device, and road state management method
CN109785637A (en) The assay method and device of rule-breaking vehicle
CN114970705A (en) Driving state analysis method, device, equipment and medium based on multi-sensing data
CN114783188A (en) Inspection method and device
CN116434525A (en) Intelligent management early warning system for expressway
CN114201482A (en) Dynamic population distribution statistical method and device, electronic equipment and readable storage medium
CN111340015B (en) Positioning method and device
CN111899505B (en) Detection method and device for traffic restriction removal
CN114661055A (en) Emergency logistics vehicle optimal path planning method, device, equipment and storage medium
KR102288623B1 (en) Map Data Processing and Format Change Method for Land Vehicle Simulation
CN117197227A (en) Method, device, equipment and medium for calculating yaw angle of target vehicle
CN115757987B (en) Method, device, equipment and medium for determining companion object based on track analysis

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant