CN113808200A - Method and device for detecting moving speed of target object and electronic equipment - Google Patents

Method and device for detecting moving speed of target object and electronic equipment

Info

Publication number
CN113808200A
Authority
CN
China
Prior art keywords
image
target object
detected
images
target
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202110886652.XA
Other languages
Chinese (zh)
Other versions
CN113808200B (en)
Inventor
吴新涛
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Petromentor International Education Beijing Co ltd
Original Assignee
Petromentor International Education Beijing Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Petromentor International Education Beijing Co ltd
Priority to CN202110886652.XA
Publication of CN113808200A
Application granted
Publication of CN113808200B
Legal status: Active

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 7/00 Image analysis
    • G06T 7/97 Determining parameters from multiple pictures
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 3/00 Computing arrangements based on biological models
    • G06N 3/02 Neural networks
    • G06N 3/04 Architecture, e.g. interconnection topology
    • G06N 3/045 Combinations of networks
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 3/00 Computing arrangements based on biological models
    • G06N 3/02 Neural networks
    • G06N 3/08 Learning methods
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 7/00 Image analysis
    • G06T 7/70 Determining position or orientation of objects or cameras
    • G06T 7/73 Determining position or orientation of objects or cameras using feature-based methods
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 2207/00 Indexing scheme for image analysis or image enhancement
    • G06T 2207/10 Image acquisition modality
    • G06T 2207/10016 Video; Image sequence
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 2207/00 Indexing scheme for image analysis or image enhancement
    • G06T 2207/20 Special algorithmic details
    • G06T 2207/20081 Training; Learning
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 2207/00 Indexing scheme for image analysis or image enhancement
    • G06T 2207/20 Special algorithmic details
    • G06T 2207/20084 Artificial neural networks [ANN]

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • General Health & Medical Sciences (AREA)
  • Molecular Biology (AREA)
  • Biophysics (AREA)
  • Computational Linguistics (AREA)
  • Data Mining & Analysis (AREA)
  • Evolutionary Computation (AREA)
  • Artificial Intelligence (AREA)
  • Biomedical Technology (AREA)
  • Computing Systems (AREA)
  • General Engineering & Computer Science (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Mathematical Physics (AREA)
  • Software Systems (AREA)
  • Health & Medical Sciences (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Image Analysis (AREA)

Abstract

The application discloses a method for detecting the moving speed of a target object, which comprises the following steps: obtaining a set of images containing a target object; constructing a target object detection model according to the image set; obtaining, through the model, a target image containing the target object in two images to be detected separated by a preset time, and obtaining the corresponding image position difference of the target image on the two images to be detected; acquiring the real height information of the target object and its height information in the image to be detected; and determining the moving speed of the target object according to the preset time, the image position difference, the real height information of the target object and the height information in the image to be detected. According to the method and the device, the target image containing the target object in the image to be detected is obtained through the target object detection model, and the moving speed of the target object is determined from the image position difference, the real height information and the image height information of the target object, and the preset time. The detection accuracy is thereby improved, and because detection is automatic rather than manual, the input cost of detecting the moving speed of a moving object is reduced.

Description

Method and device for detecting moving speed of target object and electronic equipment
Technical Field
The present application relates to the field of computer vision technologies, and in particular, to a method and an apparatus for detecting a moving speed of a target object, and an electronic device.
Background
In recent years, deep learning has developed rapidly and attracted wide attention both in China and abroad. With the continuous progress of deep learning technology and the continuous improvement of data processing capability, more and more deep learning algorithms are used in the fields of image processing and computer vision. Among them, target detection, as an important branch of computer vision, is widely used in many fields such as security, industry and unmanned driving. For example, in the field of security protection, the moving speed of objects is clearly specified at some outdoor work sites, and if the moving speed of an object exceeds a preset threshold, the safety of work-site operation is affected.
In order to prevent accidents, the moving speed of an object is currently usually detected by manually checking on-site monitoring videos. This consumes a large amount of manpower and, because the number of videos is large, events are easily missed, which reduces the accuracy of detecting the moving speed of the object.
Therefore, how to reduce the investment cost for detecting the moving speed of the object and improve the detection accuracy at the same time becomes a problem to be solved urgently by those skilled in the art.
Disclosure of Invention
The embodiment of the application provides a method for detecting the moving speed of a target object, which aims to reduce the input cost of detecting the moving speed of a moving object and to improve the detection accuracy compared with the prior art. The embodiment of the application also provides a device for detecting the moving speed of a target object and an electronic device.
The embodiment of the application provides a method for detecting the moving speed of a target object, which comprises the following steps:
obtaining a set of images containing a target object;
constructing a target object detection model according to the image set;
obtaining a target image containing a target object in two images to be detected at preset time intervals through the target object detection model, and obtaining the corresponding image position difference of the target image on the two images to be detected;
obtaining real height information of the target object in a physical world and height information of the target object in an image to be detected;
and determining the moving speed of the target object according to the preset time, the corresponding image position difference of the target image on the two images to be detected, the real height information of the target object in the physical world and the height information of the target object in the images to be detected.
Optionally, the obtaining a target image including a target object in two images to be detected separated by a preset time includes:
setting preset time, and acquiring a first image to be detected corresponding to the starting time of the preset time and a second image to be detected corresponding to the ending time of the preset time through the target object detection model;
obtaining a first prediction result of a target object image pixel by pixel in a first image to be detected, and obtaining a second prediction result of the target object image pixel by pixel in a second image to be detected;
respectively comparing the first prediction result and the second prediction result with the actual result of the target object image marked by the marked frame in the image set, and respectively calculating a first loss value of the first prediction result and the actual result, and a second loss value of the second prediction result and the actual result;
determining the corresponding image when the first loss value is minimum as a target image containing a target object in a first image to be detected; and determining the corresponding image when the second loss value is minimum as a target image containing a target object in the second image to be detected.
Optionally, the obtaining of the image position difference of the target image corresponding to the two images to be detected includes:
acquiring first position information and second position information which respectively correspond to a target image on a first image to be detected and a second image to be detected;
and obtaining the corresponding image position difference of the target image on the two images to be detected according to the first position information and the second position information.
Optionally, the obtaining first position information and second position information of the target image respectively corresponding to the first image to be detected and the second image to be detected includes:
acquiring first characteristic information and second characteristic information which respectively correspond to the target image in the first image to be detected and the second image to be detected;
respectively determining a first pixel coordinate and a second pixel coordinate of the target object in a first image to be detected and a second image to be detected according to the first characteristic information and the second characteristic information;
and determining first position information and second position information which correspond to the target object in the first image to be detected and the second image to be detected respectively according to the first pixel coordinate and the second pixel coordinate.
Optionally, the determining the moving speed of the target object according to the preset time, the image position difference of the target image between the two images to be detected, the real height information of the target object in the physical world, and the height information of the target object in the images to be detected includes:
obtaining the ratio of the real height of the target object in the physical world to the height information of the target object in the image to be detected;
calculating the product of the ratio and the image position difference, and determining the real displacement of the target object in the physical world corresponding to the preset time;
and determining the moving speed of the target object according to the real displacement and the preset time.
Optionally, the method further includes: presetting a moving speed threshold;
and comparing the moving speed threshold with the moving speed, and if the moving speed is greater than the moving speed threshold, triggering an alarm mechanism.
Optionally, the obtaining an image set including a target object includes:
obtaining a plurality of images containing a target object;
preprocessing the plurality of images to obtain a plurality of candidate images;
the plurality of candidate images are labeled to obtain the set of images.
Optionally, the constructing a target object detection model according to the image set includes:
constructing an initial object detection model, initializing parameters of the initial object detection model, and inputting training images in the image set into the initial object detection model;
obtaining a prediction result of the initial object detection model on the training image and a loss value of a marked image;
updating parameters of the initial object detection model by using a back propagation algorithm;
inputting other training images in the image set into the initial object detection model for iterative training, and obtaining the updated initial object detection model as a candidate object detection model when the loss value is minimum;
inputting the test images in the image set into the candidate object detection model to obtain the test result of the candidate object detection model on the test images and the loss value of the marked images;
comparing the loss value with a preset loss value, and if the loss value meets the preset loss value, taking the candidate object detection model as a target object detection model; and otherwise, continuously inputting other training images in the image set into the initial object detection model for iterative training.
The embodiment of the present application further provides a device for detecting a moving speed of a target object, including:
an image set obtaining unit for obtaining an image set containing a target object;
the target object detection model construction unit is used for constructing a target object detection model according to the image set;
an image position difference obtaining unit, configured to obtain, through the target object detection model, a target image including a target object in two images to be detected at a preset time interval, and obtain an image position difference of the target image corresponding to the two images to be detected;
a height information obtaining unit for obtaining real height information of the target object in the physical world and height information in the image to be detected;
and the moving speed determining unit is used for determining the moving speed of the target object according to the preset time, the corresponding image position difference of the target image on the two images to be detected, the real height information of the target object in the physical world and the height information of the target object in the images to be detected.
An embodiment of the present application further provides an electronic device, where the electronic device includes: a processor; a memory for storing a computer program for execution by the processor to perform the method of any one of the above.
An embodiment of the present application further provides a computer storage medium, where a computer program is stored, and the computer program is executed by a processor to perform any one of the methods described above.
Compared with the prior art, the method has the following advantages:
the embodiment of the application provides a method for detecting the moving speed of a target object, which comprises the steps of obtaining an image set containing the target object; constructing a target object detection model according to the image set; obtaining a target image containing a target object in two images to be detected at preset time intervals through the target object detection model, and obtaining the corresponding image position difference of the target image on the two images to be detected; obtaining real height information of the target object in a physical world and height information of the target object in an image to be detected; and determining the moving speed of the target object according to the preset time, the corresponding image position difference of the target image on the two images to be detected, the real height information of the target object in the physical world and the height information of the target object in the images to be detected. According to the first embodiment of the application, the target object detection model is constructed through the obtained image set, the image to be detected is detected through the target object detection model, the target image containing the target object in the image to be detected can be determined firstly, the moving speed of the target object is determined through the corresponding image position difference of the obtained target image on the two images to be detected, the real height information of the target object in the physical world, the height information of the target object in the image to be detected and the time for obtaining the two images to be detected, the detection accuracy is improved, manual self-detection is not needed, and therefore the input cost for detecting the moving speed of the object is reduced.
Drawings
Fig. 1 is a flowchart of a method for detecting a moving speed of a target object according to a first embodiment of the present application.
Fig. 2 is a flowchart for constructing a target object detection model according to a first embodiment of the present application.
Fig. 3 is a schematic diagram of an apparatus for detecting a moving speed of a target object according to a second embodiment of the present application.
Fig. 4 is a schematic view of an electronic device according to a third embodiment of the present application.
Detailed Description
In the following description, numerous specific details are set forth to provide a thorough understanding of the embodiments of the present application. However, the present application can be implemented in many forms other than those described herein, and those skilled in the art can make similar generalizations without departing from the spirit of the application; therefore, the present application is not limited to the specific embodiments disclosed below.
In order to make those skilled in the art better understand the solution of the present application, a specific application scenario of an embodiment of the present application based on the method for detecting the moving speed of the target object is described in detail below. The scenario is specifically one of detecting the moving speed of a gas cylinder. In this scenario, the moving speed of the gas cylinder is detected automatically by means of image detection. Specifically, a plurality of images including the gas cylinder in operation can be obtained through a monitoring camera, and iterative training is performed on the target object detection model by taking the plurality of images as image samples, so that the target object detection model is constructed. When an image to be detected is obtained through the monitoring camera, the image to be detected can be input into the target object detection model, so that the moving speed, in the physical world, of the gas cylinder corresponding to the gas cylinder image in the image to be detected can be determined.
The technical solution of the present application will be illustrated by specific examples below.
A first embodiment of the present application provides a method for detecting a moving speed of a target object, and fig. 1 is a flowchart of the method for detecting a moving speed of a target object according to the first embodiment of the present application. As shown in fig. 1, the method includes the following steps.
Step S101, an image set including a target object is obtained.
In this step, the target object refers to an image of the target object. Corresponding to the scenario above, the target object in this step refers to an image of an object that is likely to be displaced (for example, a gas cylinder), and an image containing the target object accordingly refers to an image that contains the image of such an object. The image set containing the target object obtained in this step is a set of a plurality of images of objects that are likely to be displaced, for example a first image containing the target object, a second image containing the target object, a third image containing the target object, and so on, where the plurality of images containing the target object form the image set.
In this step, obtaining an image set including a target object specifically includes the following steps:
step 1, obtaining a plurality of images containing target objects, wherein the target objects are objects which are easy to move. In this step, images with the target object may be loaded and unloaded from the network, and images with the target object may also be obtained by the monitoring cameras disposed in different environmental positions. After the image with the target object is obtained, the image needs to be processed, as described in step 2.
Step 2, preprocessing the plurality of images to obtain a plurality of candidate images. Specifically, in this step, at least the following operations are performed on the plurality of images in a Mosaic data enhancement mode: the plurality of images are randomly flipped, randomly scaled and randomly cropped to obtain a plurality of initial images, and the plurality of initial images are randomly spliced to obtain a plurality of candidate images, where the plurality of candidate images are used as new images with the target object. After the plurality of candidate images are obtained, step 3 is performed.
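As an illustration of the preprocessing described in step 2, the following is a minimal Python sketch of Mosaic-style augmentation (random flipping, scaling, cropping and 2 x 2 splicing). It is only a sketch assuming OpenCV and NumPy are available; the tile size, padding colour and function names are choices made for this example and are not taken from the application, and in a real pipeline the marking boxes would have to be transformed together with the images.

```python
import random

import cv2
import numpy as np


def random_transform(img, out_size=320):
    """Randomly flip, scale and crop one image to out_size x out_size (illustrative only)."""
    if random.random() < 0.5:                      # random flip
        img = cv2.flip(img, 1)
    scale = random.uniform(0.6, 1.4)               # random scaling
    img = cv2.resize(img, None, fx=scale, fy=scale)
    h, w = img.shape[:2]
    # pad if the image became smaller than the crop window, then take a random crop
    pad_h, pad_w = max(0, out_size - h), max(0, out_size - w)
    img = cv2.copyMakeBorder(img, 0, pad_h, 0, pad_w,
                             cv2.BORDER_CONSTANT, value=(114, 114, 114))
    h, w = img.shape[:2]
    top = random.randint(0, h - out_size)
    left = random.randint(0, w - out_size)
    return img[top:top + out_size, left:left + out_size]


def mosaic(images, out_size=640):
    """Splice four randomly transformed initial images into one candidate image."""
    tiles = [random_transform(img, out_size // 2) for img in random.sample(images, 4)]
    top_row = np.hstack(tiles[:2])
    bottom_row = np.hstack(tiles[2:])
    return np.vstack([top_row, bottom_row])
```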
Step 3, marking the plurality of candidate images to obtain the image set. Specifically, the image of the target object in the candidate images is marked; a candidate image that completely contains the target object is taken as a first image, and a candidate image that partially contains the target object is taken as a second image. A portion of the first images and the second images is taken as training images, another portion of the first images and the second images is taken as test images, and the training images and the test images form the image set.
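The split described in step 3 can be summarised by the following short sketch; the 80/20 ratio and the function name are assumptions made for this example only.

```python
import random


def split_image_set(first_images, second_images, train_ratio=0.8):
    """Split the marked candidate images into training images and test images."""
    labelled = list(first_images) + list(second_images)
    random.shuffle(labelled)
    cut = int(len(labelled) * train_ratio)
    return labelled[:cut], labelled[cut:]   # (training images, test images)
```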
After the image set is obtained, a target object detection model may be constructed from the image set, as detailed in step S102.
Step S102, constructing a target object detection model according to the image set.
In this step, the target object detection model is used to detect the image to be detected, so as to obtain a target image with the target object, and obtain the corresponding image position difference of the target image on the two images to be detected. Specifically, the step of constructing the target object detection model according to the image set includes the following steps, which are detailed in fig. 2, and fig. 2 is a flowchart of constructing the target object detection model according to the first embodiment of the present application.
Step 1021, constructing an initial object detection model, initializing parameters of the initial object detection model, and inputting training images in the image set into the initial object detection model.
In this step, the initial object detection model is an initial model of the target object detection model, and parameters of the initial object detection model are continuously iteratively trained through training images in an image set to obtain the target object detection model.
Step 1022, obtaining the prediction result of the initial object detection model on the training image and the loss value of the labeled image.
The method comprises the following steps of firstly, obtaining a prediction result of a target object image pixel by pixel in a training image through an initial object detection model. Specifically, a training image of an image set is input into an initial object detection model to obtain feature information of the training image, an image category in the training image is obtained according to the feature information, and upsampling, downsampling and feature fusion processing are performed on the feature information by combining the image category to obtain a prediction result of a target object image pixel by pixel.
In this step, the feature information of the image is extracted through the Focus slice and the feature extraction network. The feature fusion processing is mainly completed through a feature fusion network, and the feature fusion network mainly adopts a network structure of FPN (Feature Pyramid Network) + PAN (Path Aggregation Network). The FPN + PAN network structure performs up-sampling, down-sampling and feature fusion processing on the feature information to obtain a prediction result of the target object image pixel by pixel.
Specifically, the FPN layer adopts a top-down sampling process, the resolution of the low-resolution features of the top layer is improved in an up-sampling mode, the low-resolution features are amplified to the same size as the features of the previous stage, and then the low-resolution features and the features of the previous stage are added and combined. Through the operation, the top-level features containing more semantic information and the lower-level features containing more detail features are integrated together, and the expression capability of the features is improved. The PAN layer is next to the FPN layer, and the PAN adopts a bottom-up sampling process to transmit the characteristic information contained in the bottom layer to the characteristics of the upper layer, and reduces the size of the characteristics to be the same as the size of the characteristics of the upper stage in a down-sampling mode in the characteristic transmission process, which is opposite to the FPN structure. Through the combination, the FPN transmits strong semantic features from top to bottom, the feature pyramid transmits strong positioning features from bottom to top, and the two features are combined with each other to carry out integration operation on different features so as to obtain a prediction result of the target object image pixel by pixel.
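To make the FPN + PAN fusion described above concrete, the following is a minimal PyTorch sketch of such a neck over three feature levels. It is an illustrative sketch rather than the network used in this application; the channel widths, the module names and the use of nearest-neighbour upsampling are assumptions made for the example.

```python
import torch.nn as nn
import torch.nn.functional as F


class FpnPanNeck(nn.Module):
    """Illustrative FPN (top-down) + PAN (bottom-up) fusion over three feature levels."""

    def __init__(self, channels=(128, 256, 512)):
        super().__init__()
        c3, c4, c5 = channels
        # 1x1 convolutions project every level to a common width before fusion
        self.lat3 = nn.Conv2d(c3, c3, 1)
        self.lat4 = nn.Conv2d(c4, c3, 1)
        self.lat5 = nn.Conv2d(c5, c3, 1)
        # stride-2 convolutions shrink features again on the bottom-up (PAN) path
        self.down3 = nn.Conv2d(c3, c3, 3, stride=2, padding=1)
        self.down4 = nn.Conv2d(c3, c3, 3, stride=2, padding=1)

    def forward(self, p3, p4, p5):
        # FPN: upsample the low-resolution top-level feature and add it to the level below
        t5 = self.lat5(p5)
        t4 = self.lat4(p4) + F.interpolate(t5, scale_factor=2, mode="nearest")
        t3 = self.lat3(p3) + F.interpolate(t4, scale_factor=2, mode="nearest")
        # PAN: downsample the detail-rich bottom feature and add it back upward
        n3 = t3
        n4 = t4 + self.down3(n3)
        n5 = t5 + self.down4(n4)
        return n3, n4, n5   # multi-scale features fed to the per-pixel prediction heads
```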
Then, the prediction result of the target object image of each pixel is compared with the actual result of the target object image marked by the marking frame in the training image, and the loss value of the prediction result and the actual result is calculated. Specifically, the target object image in the prediction result corresponds to the target object image in the actual result, and each pixel on the target object image in the prediction result corresponds to the grid area of the target object image in the actual result according to different sizes and lengths, so as to generate a multi-scale prior frame. And then, screening according to the size and the length and the width of the target object image in the actual result and the size and the length and the width of the prior frame in the same grid area to obtain a positive sample prediction frame. And finally, performing loss calculation according to the position offset of the positive sample prediction frame and the actual marking frame to obtain the prediction result of the initial object detection model on the training image and the loss value of the marked image.
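The screening of prior boxes by size can be illustrated with the following hedged sketch; the ratio threshold, the input format and the function name are assumptions made for the example and are not values specified by the application.

```python
def positive_sample_anchors(gt_size, anchors, ratio_threshold=4.0):
    """Screen the prior boxes of one grid area against the marked target object box.

    gt_size : (width, height) of the target object image in the actual result
    anchors : list of (width, height) prior boxes generated for the grid area
    A prior box is kept as a positive sample when its width and height are each
    within ratio_threshold of the marked box.
    """
    gw, gh = gt_size
    positives = []
    for aw, ah in anchors:
        rw = max(gw / aw, aw / gw)        # width ratio, whichever is >= 1
        rh = max(gh / ah, ah / gh)        # height ratio, whichever is >= 1
        if max(rw, rh) < ratio_threshold:
            positives.append((aw, ah))
    return positives
```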
In this step, the positive sample prediction box may be obtained by a GIoU (Generalized Intersection over Union) loss algorithm for bounding box regression. The GIoU is calculated as follows:
IoU = |A ∩ B| / |A ∪ B|
GIoU = IoU - |C \ (A ∪ B)| / |C|
the method comprises the following steps that IOU (Intersection over Union, border prediction algorithm) represents the ratio of an area of an A, B Intersection region to an area of a A, B total occupied region, A represents a positive sample prediction box, B represents a prior box, A ^ B represents an Intersection overlapping region of the positive sample prediction box and the prior box, and A ^ B represents an area occupied by the positive sample prediction box and the prior box; c represents the smallest rectangular frame region surrounding both A and B, and C \ Aomeu B represents the region remaining from the region of C except the region occupied by A, B in total.
Step 1023, updating parameters of the initial object detection model by using a back propagation algorithm.
Step 1024, inputting other training images in the image set into the initial object detection model for iterative training, and obtaining the updated initial object detection model with the smallest loss value as a candidate object detection model.
In this step, the initial object detection model may be trained through multiple iterations, and the corresponding initial object detection model with the smallest loss value is used as the candidate object detection model. The candidate object detection model is obtained so as to further verify, through the test images (described below), whether the model can obtain a target image with a target object, and to obtain the similarity of images of adjacent frames in the target image. See step 1025 for details.
Step 1025, inputting the test images in the image set into the candidate object detection model, and obtaining the test results of the candidate object detection model on the test images and the loss values of the marked images.
Firstly, a prediction result of a target object image pixel by pixel in a test image is obtained through a candidate object detection model. Specifically, a test image of an image set is input into a candidate object detection model to obtain feature information of the test image, an image category in the test image is obtained according to the feature information, and upsampling, downsampling and feature fusion processing are performed on the feature information in combination with the image category to obtain a prediction result of a target object image pixel by pixel.
Then, the prediction result of the pixel-by-pixel target object image is compared with the actual result of the target object image marked by the marking frame in the test image, and the loss value of the prediction result and the actual result is calculated. Specifically, the target object image in the prediction result corresponds to the target object image in the actual result, and each pixel on the target object image in the prediction result corresponds to the grid area of the target object image in the actual result according to different sizes and lengths, so as to generate a multi-scale prior frame. And then, screening according to the size and the length and the width of the target object image in the actual result and the size and the length and the width of the prior frame in the same grid area to obtain a positive sample prediction frame. And finally, performing loss calculation according to the position offset of the positive sample prediction frame and the actual marking frame to obtain the prediction result of the initial object detection model on the test image and the loss value of the marked image.
Step 1026, comparing the loss value with a preset loss value, and if the loss value meets the preset loss value, taking the candidate object detection model as a target object detection model; and otherwise, continuously inputting other training images in the image set into the initial object detection model for iterative training.
Specifically, after the prediction result of the initial object detection model on the test image and the loss value of the marked image are obtained, the loss value is compared with a preset loss value, and if the loss value meets the preset loss value, the candidate object detection model is used as the target object detection model. And otherwise, continuously inputting other training images in the image set into the initial object detection model for iterative training until the obtained loss value meets the preset loss value, and then taking the corresponding candidate object detection model as the target object detection model.
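The training and verification procedure of steps 1021 to 1026 can be condensed into the following hedged PyTorch-style sketch; the optimiser, the detection_loss handle, the preset_loss value and the loop bounds are placeholders chosen for this example and are not specified by the application.

```python
import copy

import torch


def build_detection_model(model, train_loader, test_loader, detection_loss,
                          preset_loss=0.05, max_rounds=10, epochs=50, lr=1e-3):
    """Iterative training (steps 1021-1024) followed by verification on test images (1025-1026)."""
    optimizer = torch.optim.SGD(model.parameters(), lr=lr, momentum=0.9)
    for _ in range(max_rounds):
        best_loss, best_state = float("inf"), None
        for _ in range(epochs):                          # step 1024: iterate over training images
            epoch_loss = 0.0
            for images, targets in train_loader:
                preds = model(images)                    # step 1022: prediction on the training image
                loss = detection_loss(preds, targets)    # loss against the marked image
                optimizer.zero_grad()
                loss.backward()                          # step 1023: back propagation
                optimizer.step()
                epoch_loss += loss.item()
            if epoch_loss < best_loss:                   # keep the parameters with the smallest loss
                best_loss, best_state = epoch_loss, copy.deepcopy(model.state_dict())
        model.load_state_dict(best_state)                # candidate object detection model
        with torch.no_grad():                            # step 1025: run the test images
            test_loss = sum(detection_loss(model(x), y).item() for x, y in test_loader)
            test_loss /= max(1, len(test_loader))
        if test_loss <= preset_loss:                     # step 1026: compare with the preset loss value
            return model                                 # accepted as the target object detection model
    return model                                         # otherwise keep the last trained model
```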
Step S103, obtaining a target image containing a target object in two images to be detected with a preset time interval through the target object detection model, and obtaining the corresponding image position difference of the target image on the two images to be detected.
After the target object detection model is obtained, an image to be detected is obtained, the image to be detected is detected through the target object detection model to obtain a target image containing a target object in two images to be detected separated by a preset time, and the corresponding image position difference of the target image on the two images to be detected is obtained. In this step, the two images to be detected separated by the preset time are two images shot at the preset time interval when the images to be detected are obtained. For example, if the preset time is 1min, the two images to be detected separated by the preset time are respectively the image to be detected at the starting moment of the preset time, called the first image to be detected, and the image to be detected at the ending moment of the preset time (1min), called the second image to be detected. Therefore, through the target object detection model, a first image to be detected and a second image to be detected corresponding to the preset time interval are obtained.
Of course, in the first embodiment of the present application, the preset interval time corresponding to the two images to be detected obtained by the target object detection model may not correspond to the preset interval time when the images are captured, for example, the preset interval time when the images are captured is 1min, and the preset interval time corresponding to the two images to be detected obtained by the target object detection model may be 2min or 50 s.
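As a simple illustration of how two images to be detected separated by a preset time could be obtained from a monitoring camera, the following sketch uses OpenCV; the camera source, the 60-second default and the error handling are assumptions made for the example.

```python
import time

import cv2


def grab_image_pair(source, preset_seconds=60):
    """Grab the first and second images to be detected, separated by the preset time."""
    cap = cv2.VideoCapture(source)          # e.g. an RTSP address of the monitoring camera
    ok1, first_image = cap.read()           # image at the starting moment of the preset time
    time.sleep(preset_seconds)              # wait for the preset interval (1 min in the example above)
    ok2, second_image = cap.read()          # image at the ending moment of the preset time
    cap.release()
    if not (ok1 and ok2):
        raise RuntimeError("failed to read frames from the camera")
    return first_image, second_image
```

In practice a live stream would normally be read continuously so that the second frame returned is current rather than buffered; the sketch only shows the pairing of the two images with the preset time.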
In the first embodiment of the present application, obtaining a target image including a target object in two images to be detected separated by a preset time includes: and setting preset time, and obtaining a first image to be detected corresponding to the starting moment of the preset time and a second image to be detected corresponding to the ending moment of the preset time through a target object detection model. Meanwhile, a first prediction result of the target object image pixel by pixel in the first image to be detected is obtained, a second prediction result of the target object image pixel by pixel in the second image to be detected is obtained, the first prediction result and the second prediction result are respectively compared with the actual result of the target object image marked by the marked frame in the image set, and a first loss value of the first prediction result and the actual result and a second loss value of the second prediction result and the actual result are respectively calculated. And finally, determining the image corresponding to the minimum first loss value as a target image containing the target object in the first image to be detected, and determining the image corresponding to the minimum second loss value as a target image containing the target object in the second image to be detected.
After the target image containing the target object in the first image to be detected and the target image containing the target object in the second image to be detected are obtained, the corresponding image position difference of the target image on the two images to be detected can be obtained.
Specifically, a first position information and a second position information, which respectively correspond to the target image on the first image to be detected and the second image to be detected, are obtained, and the method specifically includes the following steps: first, first feature information and second feature information, corresponding to the target image in the first image to be detected and the second image to be detected respectively, are obtained, that is, the first feature information corresponding to the target image in the first image to be detected and the second feature information corresponding to the target image in the second image to be detected are obtained. Then, a first pixel coordinate and a second pixel coordinate of the target object in the first image to be detected and the second image to be detected are respectively determined according to the first characteristic information and the second characteristic information, namely, the first pixel coordinate of the target object in the first image to be detected is determined according to the first characteristic information, and the second pixel coordinate of the target object in the second image to be detected is determined according to the second characteristic information. The feature information of each part of the target object corresponds to a corresponding pixel, and each pixel has corresponding coordinate information. After the characteristic information of the target object is determined to be matched with the pixels, the position information of the characteristic information of the target object can be determined. And finally, determining first position information and second position information which correspond to the target object in the first image to be detected and the second image to be detected respectively according to the first pixel coordinate and the second pixel coordinate, namely determining the first position information of the target object in the first image to be detected according to the first pixel coordinate and determining the second position information of the target object in the second image to be detected according to the second pixel coordinate, so that the corresponding image position difference of the target object on the two images to be detected can be obtained according to the first position information and the second position information.
Further, in the first embodiment of the present application, through the target object detection model, a first bounding box and a second bounding box of the target object in the target image can be obtained in the first image to be detected and the second image to be detected respectively, where the first bounding box includes an upper left corner (x1, y1) and a lower right corner (x2, y2), and the second bounding box includes an upper left corner (x3, y3) and a lower right corner (x4, y4). The vertical coordinates of the upper left corners of the first bounding box and the second bounding box are subtracted from each other, so as to obtain the image position difference corresponding to the target image in the two images to be detected.
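The bounding-box subtraction just described can be written as the following small sketch; the absolute value is an addition made for the example, since the application only describes subtracting the vertical coordinates of the upper left corners.

```python
def image_position_difference(first_box, second_box):
    """Image position difference (in pixels) of the target image on the two images to be detected.

    Boxes are given as (x1, y1, x2, y2): first_box from the first image to be
    detected, second_box from the second image to be detected.
    """
    _, y1, _, _ = first_box               # upper left corner of the first bounding box
    _, y3, _, _ = second_box              # upper left corner of the second bounding box
    return abs(y3 - y1)                   # vertical image position difference
```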
Step S104, obtaining the real height information of the target object in the physical world and the height information in the image to be detected.
In this step, the actual height of the target object in the physical world can be obtained by means of direct measurement. The height information of the target object in the image to be detected can be obtained through the pixel information corresponding to the target object in the image to be detected and the coordinate values corresponding to the pixel information. Specifically, the feature information of the target object in the image to be detected is obtained through the target object detection model, a plurality of pieces of pixel information corresponding to the feature information are obtained, a plurality of coordinate values respectively corresponding to the pixel information are determined, a maximum coordinate value and a minimum coordinate value are screened out from the coordinate values, and the height information of the target object in the image to be detected is determined according to the maximum coordinate value and the minimum coordinate value. The maximum coordinate value and the minimum coordinate value are the two extreme values that are farthest apart on the same dimension.
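A minimal sketch of determining the height information in the image from the extreme coordinate values described above; the function name and the input format are assumptions made for the example.

```python
def pixel_height(object_pixel_ys):
    """Height of the target object in the image to be detected, in pixels.

    object_pixel_ys: vertical coordinate values of the pixels matched to the
    target object's feature information; the two extreme values on the same
    axis give the height.
    """
    return max(object_pixel_ys) - min(object_pixel_ys)
```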
It should be noted that the height information of the target object in the to-be-detected image obtained in this step may be height information corresponding to the target object in any one of the two to-be-detected images.
Step S105, determining the moving speed of the target object according to the preset time, the corresponding image position difference of the target image on the two images to be detected, the real height information of the target object in the physical world and the height information of the target object in the images to be detected.
After the image position difference of the target image corresponding to the two images to be detected, the real height information of the target object in the physical world and the height information of the target object in the images to be detected are obtained, the moving speed of the target object can be determined according to the preset time, the image position difference of the target image corresponding to the two images to be detected, the real height information of the target object in the physical world and the height information of the target object in the images to be detected.
Specifically, firstly, the ratio of the real height of the target object in the physical world to the height information of the target object in the image to be detected is obtained, the product of the ratio and the image position difference is calculated, and the real displacement of the target object in the physical world corresponding to the preset time is determined. That is, the real displacement of the target object in the physical world corresponding to the preset time is obtained by scaling the image position difference of the target object in the images to be detected. For example, if the real height of the target object in the physical world is 1.5 m and the height information of the target object in the image to be detected is x, the ratio of the real height of the target object to the height information x of the target object in the image to be detected is 1.5/x; and if the image position difference of the target image on the two images to be detected is y, the real displacement of the target object in the physical world corresponding to the preset time is: L = (1.5/x) × y. Finally, the moving speed of the target object is determined according to the real displacement and the preset time. Since the two images to be detected are obtained at the preset time interval, the movement of the target object necessarily occurs within the preset time; therefore, after the real displacement of the target object in the physical world is obtained, the moving speed of the target object in the physical world can be determined in combination with the preset time.
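Putting the pieces of step S105 together, a short sketch of the speed calculation follows; the parameter names are illustrative. With the 1.5 m example above, a hypothetical object height of 200 pixels, a position difference of 400 pixels and a preset time of 60 s would give L = 3 m and a moving speed of 0.05 m/s (numbers chosen only to show the arithmetic).

```python
def moving_speed(real_height_m, image_height_px, position_diff_px, preset_seconds):
    """Moving speed of the target object in the physical world (illustrative sketch).

    real_height_m    : measured height of the object in the physical world (e.g. 1.5 m)
    image_height_px  : height of the object in the image to be detected (x in the example)
    position_diff_px : image position difference of the target image (y in the example)
    preset_seconds   : preset time between the two images to be detected
    """
    ratio = real_height_m / image_height_px          # metres per pixel for this object
    real_displacement_m = ratio * position_diff_px   # L = (1.5 / x) * y in the example above
    return real_displacement_m / preset_seconds      # moving speed in metres per second
```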
Further, in the first embodiment of the present application, in order to timely remind a worker whether the moving speed of the target object meets the requirement, a moving speed threshold may be preset and compared with the moving speed; if the moving speed is greater than the moving speed threshold, an alarm mechanism is triggered to remind the worker to handle the target object in time.
A first embodiment of the present application provides a method for detecting a moving speed of a target object, including: obtaining a set of images containing a target object; constructing a target object detection model according to the image set; obtaining a target image containing a target object in two images to be detected at preset time intervals through the target object detection model, and obtaining the corresponding image position difference of the target image on the two images to be detected; obtaining real height information of the target object in a physical world and height information of the target object in an image to be detected; and determining the moving speed of the target object according to the preset time, the corresponding image position difference of the target image on the two images to be detected, the real height information of the target object in the physical world and the height information of the target object in the images to be detected. According to the first embodiment of the application, the target object detection model is constructed from the obtained image set, and the image to be detected is detected through the target object detection model, so that the target image containing the target object in the image to be detected can be determined first. The moving speed of the target object is then determined from the corresponding image position difference of the obtained target image on the two images to be detected, the real height information of the target object in the physical world, the height information of the target object in the image to be detected and the time between obtaining the two images to be detected. In this way, the detection accuracy is improved, and no manual inspection is needed, so the input cost for detecting the moving speed of the object is reduced.
In addition, a target object detection model is constructed based on the obtained image set, and the target object detection model can be applied to other target detection networks with multi-scale feature maps, namely the target object detection model has strong detection universality on target objects.
In the first embodiment described above, a method for detecting the moving speed of a target object is provided, and correspondingly, the present application provides an apparatus for detecting the moving speed of the target object. Fig. 3 is a schematic diagram of an apparatus for detecting a moving speed of a target object according to a second embodiment of the present application. Since the apparatus embodiments are substantially similar to the method embodiments, they are described in a relatively simple manner, and reference may be made to some of the descriptions of the method embodiments for relevant points. The device embodiments described below are merely illustrative.
A second embodiment of the present application provides an apparatus for detecting a moving speed of a target object, including: an image set obtaining unit 301, configured to obtain an image set including a target object; a target object detection model construction unit 302, configured to construct a target object detection model according to the image set; an image position difference obtaining unit 303, configured to obtain, through the target object detection model, a target image including a target object in two images to be detected at a preset time interval, and obtain an image position difference of the target image corresponding to the two images to be detected; a height information obtaining unit 304, configured to obtain real height information of the target object in the physical world and height information in the image to be detected; a moving speed determining unit 305, configured to determine a moving speed of the target object according to the preset time, the corresponding image position difference of the target image between the two images to be detected, and the actual height information of the target object in the physical world and the height information in the images to be detected.
Optionally, the obtaining a target image including a target object in two images to be detected separated by a preset time includes:
setting preset time, and acquiring a first image to be detected corresponding to the starting time of the preset time and a second image to be detected corresponding to the ending time of the preset time through the target object detection model;
obtaining a first prediction result of a target object image pixel by pixel in a first image to be detected, and obtaining a second prediction result of the target object image pixel by pixel in a second image to be detected;
respectively comparing the first prediction result and the second prediction result with the actual result of the target object image marked by the marked frame in the image set, and respectively calculating a first loss value of the first prediction result and the actual result, and a second loss value of the second prediction result and the actual result;
determining the corresponding image when the first loss value is minimum as a target image containing a target object in a first image to be detected; and determining the corresponding image when the second loss value is minimum as a target image containing a target object in the second image to be detected.
Optionally, the obtaining of the image position difference of the target image corresponding to the two images to be detected includes:
acquiring first position information and second position information which respectively correspond to a target image on a first image to be detected and a second image to be detected;
and obtaining the corresponding image position difference of the target image on the two images to be detected according to the first position information and the second position information.
Optionally, the obtaining first position information and second position information of the target image respectively corresponding to the first image to be detected and the second image to be detected includes:
acquiring first characteristic information and second characteristic information which respectively correspond to the target image in the first image to be detected and the second image to be detected;
respectively determining a first pixel coordinate and a second pixel coordinate of the target object in a first image to be detected and a second image to be detected according to the first characteristic information and the second characteristic information;
and determining first position information and second position information which correspond to the target object in the first image to be detected and the second image to be detected respectively according to the first pixel coordinate and the second pixel coordinate.
Optionally, the determining the moving speed of the target object according to the preset time, the image position difference of the target image between the two images to be detected, the real height information of the target object in the physical world, and the height information of the target object in the images to be detected includes:
obtaining the ratio of the real height of the target object in the physical world to the height information of the target object in the image to be detected;
calculating the product of the ratio and the image position difference, and determining the real displacement of the target object in the physical world corresponding to the preset time;
and determining the moving speed of the target object according to the real displacement and the preset time.
Optionally, the method further includes: presetting a moving speed threshold;
and comparing the moving speed threshold with the moving speed, and if the moving speed is greater than the moving speed threshold, triggering an alarm mechanism.
Optionally, the obtaining an image set including a target object includes:
obtaining a plurality of images containing a target object;
preprocessing the plurality of images to obtain a plurality of candidate images;
the plurality of candidate images are labeled to obtain the set of images.
Optionally, the constructing a target object detection model according to the image set includes:
constructing an initial object detection model, initializing parameters of the initial object detection model, and inputting training images in the image set into the initial object detection model;
obtaining a prediction result of the initial object detection model on the training image and a loss value of a marked image;
updating parameters of the initial object detection model by using a back propagation algorithm;
inputting other training images in the image set into the initial object detection model for iterative training, and obtaining the updated initial object detection model as a candidate object detection model when the loss value is minimum;
inputting the test images in the image set into the candidate object detection model to obtain the test result of the candidate object detection model on the test images and the loss value of the marked images;
comparing the loss value with a preset loss value, and if the loss value meets the preset loss value, taking the candidate object detection model as a target object detection model; and otherwise, continuously inputting other training images in the image set into the initial object detection model for iterative training.
The embodiment of the present application further provides a device for detecting a moving speed of a target object, including:
an image set obtaining unit for obtaining an image set containing a target object;
the target object detection model construction unit is used for constructing a target object detection model according to the image set;
an image position difference obtaining unit, configured to obtain, through the target object detection model, a target image including a target object in two images to be detected at a preset time interval, and obtain an image position difference of the target image corresponding to the two images to be detected;
a height information obtaining unit for obtaining real height information of the target object in the physical world and height information in the image to be detected;
and the moving speed determining unit is used for determining the moving speed of the target object according to the preset time, the corresponding image position difference of the target image on the two images to be detected, the real height information of the target object in the physical world and the height information of the target object in the images to be detected.
The first embodiment of the present application provides a method for detecting a moving speed of a target object, and the third embodiment of the present application provides an electronic device corresponding to the method of the first embodiment. Reference is made to fig. 4, which shows a schematic diagram of the electronic device of the present embodiment. A third embodiment of the present application provides an electronic device, including: a processor 401; the memory 402 is used for storing a computer program, which is executed by the processor, and executes the method for detecting the moving speed of the target object according to the first embodiment of the present application.
A fourth embodiment of the present application provides a computer storage medium corresponding to the method of the first embodiment. A fourth embodiment of the present application provides a computer storage medium, which stores a computer program executed by a processor to perform the method for detecting the moving speed of the target object provided in the first embodiment of the present application.
Although the present application has been described with reference to the preferred embodiments, it is not intended to limit the present application, and those skilled in the art can make variations and modifications without departing from the spirit and scope of the present application, therefore, the scope of the present application should be determined by the claims that follow.
In a typical configuration, a computing device includes one or more processors (CPUs), input/output interfaces, network interfaces, and memory.
The memory may include a volatile memory in a computer-readable medium, such as a random access memory (RAM), and/or a non-volatile memory, such as a read-only memory (ROM) or a flash memory (flash RAM). Memory is an example of a computer-readable medium.
Computer-readable media include permanent and non-permanent, removable and non-removable media, and may implement information storage by any method or technology. The information may be computer-readable instructions, data structures, modules of a program, or other data. Examples of computer storage media include, but are not limited to, phase-change memory (PRAM), static random access memory (SRAM), dynamic random access memory (DRAM), other types of random access memory (RAM), read-only memory (ROM), electrically erasable programmable read-only memory (EEPROM), flash memory or other memory technology, compact disc read-only memory (CD-ROM), digital versatile discs (DVD) or other optical storage, magnetic cassettes, magnetic tape, magnetic disk storage or other magnetic storage devices, or any other non-transmission medium that can be used to store information that can be accessed by a computing device. As defined herein, computer-readable media do not include transitory media, such as modulated data signals and carrier waves.
As will be appreciated by one skilled in the art, embodiments of the present application may be provided as a method, system, or computer program product. Accordingly, the present application may take the form of an entirely hardware embodiment, an entirely software embodiment or an embodiment combining software and hardware aspects. Furthermore, the present application may take the form of a computer program product embodied on one or more computer-usable storage media (including, but not limited to, disk storage, CD-ROM, optical storage, and the like) having computer-usable program code embodied therein.

Claims (10)

1. A method of detecting a moving speed of a target object, comprising:
obtaining a set of images containing a target object;
constructing a target object detection model according to the image set;
obtaining, through the target object detection model, a target image containing the target object in each of two images to be detected that are separated by a preset time, and obtaining the image position difference of the target image between the two images to be detected;
obtaining the real height information of the target object in the physical world and the height information of the target object in the image to be detected;
and determining the moving speed of the target object according to the preset time, the image position difference of the target image between the two images to be detected, the real height information of the target object in the physical world, and the height information of the target object in the images to be detected.
2. The method for detecting the moving speed of the target object according to claim 1, wherein obtaining the target image containing the target object in the two images to be detected separated by the preset time comprises:
setting the preset time, and acquiring, through the target object detection model, a first image to be detected corresponding to the starting time of the preset time and a second image to be detected corresponding to the ending time of the preset time;
obtaining a first prediction result of a target object image pixel by pixel in a first image to be detected, and obtaining a second prediction result of the target object image pixel by pixel in a second image to be detected;
comparing the first prediction result and the second prediction result, respectively, with the actual result of the target object image marked by the annotation frame in the image set, and calculating a first loss value between the first prediction result and the actual result and a second loss value between the second prediction result and the actual result;
determining the image corresponding to the minimum first loss value as the target image containing the target object in the first image to be detected; and determining the image corresponding to the minimum second loss value as the target image containing the target object in the second image to be detected.
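As a non-limiting illustration of the minimum-loss selection in claim 2, the sketch below assumes the per-pixel prediction results and the annotated actual result are masks of equal size and uses pixel-wise binary cross-entropy as the loss; the claim does not fix the loss function, and the names (select_target_image, candidate_masks) are hypothetical.

import numpy as np

def select_target_image(candidate_masks, annotated_mask, eps=1e-7):
    # Each candidate is an H x W array of per-pixel probabilities for the target object image;
    # annotated_mask is the actual result marked in the image set (0/1 per pixel).
    losses = []
    for pred in candidate_masks:
        p = np.clip(pred, eps, 1.0 - eps)
        bce = -(annotated_mask * np.log(p) + (1 - annotated_mask) * np.log(1 - p))
        losses.append(float(bce.mean()))          # loss value of this prediction vs. the actual result
    best = int(np.argmin(losses))                 # index of the minimum-loss candidate
    return best, losses[best]                     # the corresponding image is taken as the target image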
3. The method for detecting the moving speed of the target object according to claim 2, wherein obtaining the image position difference of the target image between the two images to be detected comprises:
acquiring first position information and second position information which respectively correspond to a target image on a first image to be detected and a second image to be detected;
and obtaining the image position difference of the target image between the two images to be detected according to the first position information and the second position information.
4. The method of claim 3, wherein obtaining the first position information and the second position information corresponding to the target image on the first image to be detected and the second image to be detected respectively comprises:
acquiring first characteristic information and second characteristic information which respectively correspond to the target image in the first image to be detected and the second image to be detected;
respectively determining a first pixel coordinate and a second pixel coordinate of the target object in a first image to be detected and a second image to be detected according to the first characteristic information and the second characteristic information;
and determining first position information and second position information which correspond to the target object in the first image to be detected and the second image to be detected respectively according to the first pixel coordinate and the second pixel coordinate.
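The application does not specify how the pixel coordinates are derived from the feature information; one common choice, shown here purely as an assumption, is the response-weighted centroid of the target's per-pixel feature or probability map (the name pixel_coordinate is hypothetical).

import numpy as np

def pixel_coordinate(feature_map):
    # feature_map: H x W array of feature responses/probabilities for the target object.
    # Returns the response-weighted centroid (x, y) as the target's pixel coordinate.
    h, w = feature_map.shape
    ys, xs = np.mgrid[0:h, 0:w]
    total = feature_map.sum()
    return float((xs * feature_map).sum() / total), float((ys * feature_map).sum() / total)

The first and second position information can then be formed from the pixel coordinates obtained in the first and second images to be detected, and their difference gives the image position difference of claim 3.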
5. The method for detecting the moving speed of the target object according to claim 4, wherein determining the moving speed of the target object according to the preset time, the image position difference of the target image between the two images to be detected, the real height information of the target object in the physical world, and the height information of the target object in the images to be detected comprises:
obtaining the ratio of the real height of the target object in the physical world to the height of the target object in the image to be detected;
calculating the product of the ratio and the image position difference, and taking the product as the real displacement of the target object in the physical world within the preset time;
and determining the moving speed of the target object according to the real displacement and the preset time.
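A purely illustrative numeric example of this calculation (the figures are not taken from the embodiments): if the target object's real height is 1.7 m and its height in the image to be detected is 170 pixels, the ratio is 1.7 m / 170 px = 0.01 m per pixel; an image position difference of 85 pixels then corresponds to a real displacement of 0.01 × 85 = 0.85 m, and with a preset time of 0.5 s the moving speed is 0.85 m / 0.5 s = 1.7 m/s.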
6. The method of detecting a moving speed of a target object according to claim 5, further comprising:
presetting a moving speed threshold;
and comparing the moving speed threshold with the moving speed, and if the moving speed threshold is greater than the moving speed, triggering an alarm mechanism.
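A minimal sketch of the comparison in claim 6 follows; check_alarm and alarm are hypothetical names, and the comparison direction mirrors the claim wording, which triggers the alarm mechanism when the preset threshold is greater than the measured moving speed (invert the comparison if the intent is to flag speeds above the threshold).

def check_alarm(moving_speed_mps, speed_threshold_mps, alarm):
    # Comparison as literally claimed: trigger when the preset threshold exceeds the measured speed.
    if speed_threshold_mps > moving_speed_mps:
        alarm()   # trigger the alarm mechanism
        return True
    return False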
7. The method of claim 1, wherein obtaining the image set containing the target object comprises:
obtaining a plurality of images containing a target object;
preprocessing the plurality of images to obtain a plurality of candidate images;
and labeling the plurality of candidate images to obtain the image set.
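The claim leaves the preprocessing unspecified beyond producing candidate images; the sketch below assumes it amounts to resizing each raw image to a common resolution, and assumes annotations maps each image path to a manually marked bounding box (both assumptions, as are the names build_image_set and annotations).

from PIL import Image

def build_image_set(image_paths, annotations, size=(640, 640)):
    # Preprocess each raw image into a candidate image and pair it with its annotation
    # to form one labelled entry of the image set.
    image_set = []
    for path in image_paths:
        candidate = Image.open(path).convert("RGB").resize(size)          # preprocessing step
        image_set.append({"image": candidate, "box": annotations[path]})  # labelling step
    return image_set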
8. The method of claim 7, wherein constructing a target object detection model from the image set comprises:
constructing an initial object detection model, initializing parameters of the initial object detection model, and inputting training images in the image set into the initial object detection model;
obtaining the prediction result of the initial object detection model on the training image and its loss value with respect to the marked image;
updating parameters of the initial object detection model by using a back propagation algorithm;
inputting other training images in the image set into the initial object detection model for iterative training, and obtaining the updated initial object detection model as a candidate object detection model when the loss value is minimum;
inputting the test images in the image set into the candidate object detection model to obtain the test result of the candidate object detection model on the test images and its loss value with respect to the marked images;
comparing the loss value with a preset loss value, and if the loss value meets the preset loss value, taking the candidate object detection model as a target object detection model; and otherwise, continuously inputting other training images in the image set into the initial object detection model for iterative training.
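As a non-limiting illustration of the training procedure in claim 8, the PyTorch-style sketch below uses a mean-squared-error loss as a stand-in for whatever detection loss the embodiment actually uses; the function name train_detection_model, the epoch count, and the learning rate are all assumptions.

import torch

def train_detection_model(model, train_loader, test_loader, preset_loss, epochs=50, lr=1e-3):
    optimiser = torch.optim.SGD(model.parameters(), lr=lr)   # parameters initialised by the caller
    loss_fn = torch.nn.MSELoss()                              # stand-in for the actual detection loss
    best_loss, best_state = float("inf"), None
    for _ in range(epochs):
        for images, targets in train_loader:                  # training images and marked images
            optimiser.zero_grad()
            loss = loss_fn(model(images), targets)            # prediction result vs. marked image
            loss.backward()                                   # back-propagation algorithm
            optimiser.step()                                  # update the model parameters
            if loss.item() < best_loss:                       # keep the parameters with minimum loss
                best_loss = loss.item()
                best_state = {k: v.detach().clone() for k, v in model.state_dict().items()}
    model.load_state_dict(best_state)                         # candidate object detection model
    with torch.no_grad():                                     # evaluate on the test images
        test_loss = sum(loss_fn(model(x), y).item() for x, y in test_loader) / len(test_loader)
    # Accept as the target object detection model only if the preset loss is met;
    # otherwise the caller continues iterative training.
    return model if test_loss <= preset_loss else None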
9. An apparatus for detecting a moving speed of a target object, comprising:
an image set obtaining unit for obtaining an image set containing a target object;
a target object detection model construction unit, configured to construct a target object detection model according to the image set;
an image position difference obtaining unit, configured to obtain, through the target object detection model, a target image containing the target object in each of two images to be detected that are separated by a preset time, and to obtain the image position difference of the target image between the two images to be detected;
a height information obtaining unit, configured to obtain the real height information of the target object in the physical world and its height information in the image to be detected;
and a moving speed determining unit, configured to determine the moving speed of the target object according to the preset time, the image position difference of the target image between the two images to be detected, the real height information of the target object in the physical world, and the height information of the target object in the images to be detected.
10. An electronic device, characterized in that the electronic device comprises: a processor; and a memory for storing a computer program for execution by the processor to perform the method of any one of claims 1 to 8.
CN202110886652.XA 2021-08-03 2021-08-03 Method and device for detecting moving speed of target object and electronic equipment Active CN113808200B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110886652.XA CN113808200B (en) 2021-08-03 2021-08-03 Method and device for detecting moving speed of target object and electronic equipment

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202110886652.XA CN113808200B (en) 2021-08-03 2021-08-03 Method and device for detecting moving speed of target object and electronic equipment

Publications (2)

Publication Number Publication Date
CN113808200A true CN113808200A (en) 2021-12-17
CN113808200B CN113808200B (en) 2023-04-07

Family

ID=78942680

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110886652.XA Active CN113808200B (en) 2021-08-03 2021-08-03 Method and device for detecting moving speed of target object and electronic equipment

Country Status (1)

Country Link
CN (1) CN113808200B (en)

Patent Citations (11)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104282020A (en) * 2014-09-22 2015-01-14 中海网络科技股份有限公司 Vehicle speed detection method based on target motion track
US20170248971A1 (en) * 2014-11-12 2017-08-31 SZ DJI Technology Co., Ltd. Method for detecting target object, detection apparatus and robot
CN108556876A (en) * 2018-04-18 2018-09-21 北京交通大学 A kind of new type train tests the speed distance-measuring equipment and method
CN109190470A (en) * 2018-07-27 2019-01-11 北京市商汤科技开发有限公司 Pedestrian recognition methods and device again
US20210019503A1 (en) * 2018-09-30 2021-01-21 Tencent Technology (Shenzhen) Company Limited Face detection method and apparatus, service processing method, terminal device, and storage medium
CN109765397A (en) * 2019-01-29 2019-05-17 天津美腾科技有限公司 Speed-measuring method, apparatus and system
CN111723860A (en) * 2020-06-17 2020-09-29 苏宁云计算有限公司 Target detection method and device
CN112101169A (en) * 2020-09-08 2020-12-18 平安科技(深圳)有限公司 Road image target detection method based on attention mechanism and related equipment
CN112270252A (en) * 2020-10-26 2021-01-26 西安工程大学 Multi-vehicle target identification method for improving YOLOv2 model
CN112418278A (en) * 2020-11-05 2021-02-26 中保车服科技服务股份有限公司 Multi-class object detection method, terminal device and storage medium
CN113192646A (en) * 2021-04-25 2021-07-30 北京易华录信息技术股份有限公司 Target detection model construction method and different target distance monitoring method and device

Non-Patent Citations (5)

* Cited by examiner, † Cited by third party
Title
FABRIZIO LAMBERTI等: "Improving Robustness of Infrared Target Tracking Algorithms Based on Template Matching", 《IEEE TRANSACTIONS ON AEROSPACE AND ELECTRONIC SYSTEMS》 *
罗继曼等: "基于双目视觉原理的混联机器人初始点坐标研究", 《沈阳建筑大学学报(自然科学版)》 *
聂鑫等: "复杂场景下基于增强YOLOv3的船舶目标检测", 《计算机应用》 *
钱森: "基于边缘的活动轮廓模型算法研究", 《中国优秀硕士学位论文全文数据库 (信息科技辑)》 *
黄玉玺: "未知环境下基于视觉的无人机目标跟随与着陆方法研究", 《中国优秀硕士学位论文全文数据库 (工程科技Ⅱ辑)》 *

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2024099068A1 (en) * 2022-11-09 2024-05-16 上海高德威智能交通系统有限公司 Image-based speed determination method and apparatus, and device and storage medium
CN117934805A (en) * 2024-03-25 2024-04-26 腾讯科技(深圳)有限公司 Object screening method and device, storage medium and electronic equipment

Also Published As

Publication number Publication date
CN113808200B (en) 2023-04-07

Similar Documents

Publication Publication Date Title
CN113808200B (en) Method and device for detecting moving speed of target object and electronic equipment
US20130034305A1 (en) Image-based crack quantification
CN101477616B (en) Human face detecting and tracking process
US10366497B2 (en) Image/video editor with automatic occlusion detection and cropping
CN115147403A (en) Method and device for detecting liquid pollutants, electronic equipment and medium
CN111553298B (en) Fire disaster identification method and system based on block chain
CN110737785A (en) picture labeling method and device
CN114140745A (en) Method, system, device and medium for detecting personnel attributes of construction site
CN113298130B (en) Method for detecting target image and generating target object detection model
CN112378333A (en) Method and device for measuring warehoused goods
CN115797350A (en) Bridge disease detection method and device, computer equipment and storage medium
CN109598723B (en) Image noise detection method and device
US20120249837A1 (en) Methods and Systems for Real-Time Image-Capture Feedback
US11423611B2 (en) Techniques for creating, organizing, integrating, and using georeferenced data structures for civil infrastructure asset management
CN110909685A (en) Posture estimation method, device, equipment and storage medium
CN114022804A (en) Leakage detection method, device and system and storage medium
CN111062415B (en) Target object image extraction method and system based on contrast difference and storage medium
CN116152685B (en) Pedestrian detection method and system based on unmanned aerial vehicle visual field
CN115205793B (en) Electric power machine room smoke detection method and device based on deep learning secondary confirmation
JP2009009539A (en) Circular shape detector
CN113643368A (en) Method and device for determining real distance between objects and electronic equipment
CN113807389A (en) Method and device for determining target object dynamic state and electronic equipment
CN115146686B (en) Method, device, equipment and medium for determining installation position of target object
CN114119594A (en) Oil leakage detection method and device based on deep learning
CN114549613A (en) Structural displacement measuring method and device based on deep super-resolution network

Legal Events

Date Code Title Description
PB01 Publication
PB01 Publication
SE01 Entry into force of request for substantive examination
SE01 Entry into force of request for substantive examination
CB02 Change of applicant information

Address after: 100000 rooms 206 and 207 on the ground floor of office building 9, Chaolai high tech Industrial Park, No. a, Laiguangying Middle Road, Chaoyang District, Beijing

Applicant after: Jiayang Smart Security Technology (Beijing) Co.,Ltd.

Address before: 100000 rooms 206 and 207 on the ground floor of office building 9, Chaolai high tech Industrial Park, No. a, Laiguangying Middle Road, Chaoyang District, Beijing

Applicant before: PETROMENTOR INTERNATIONAL EDUCATION (BEIJING) CO.,LTD.

CB02 Change of applicant information
GR01 Patent grant
GR01 Patent grant