CN113537071B - Static and dynamic target detection method and equipment based on event camera - Google Patents

Info

Publication number
CN113537071B
CN113537071B
Authority
CN
China
Prior art keywords
data
event camera
dynamic
static
scene
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202110811885.3A
Other languages
Chinese (zh)
Other versions
CN113537071A (en)
Inventor
张世雄
魏文应
龙仕强
陈智敏
李楠楠
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Guangdong Bohua Ultra Hd Innovation Center Co ltd
Institute Of Intelligent Video Audio Technology Longgang Shenzhen
Original Assignee
Guangdong Bohua Ultra Hd Innovation Center Co ltd
Institute Of Intelligent Video Audio Technology Longgang Shenzhen
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Guangdong Bohua Ultra Hd Innovation Center Co ltd, Institute Of Intelligent Video Audio Technology Longgang Shenzhen filed Critical Guangdong Bohua Ultra Hd Innovation Center Co ltd
Priority to CN202110811885.3A priority Critical patent/CN113537071B/en
Publication of CN113537071A publication Critical patent/CN113537071A/en
Application granted granted Critical
Publication of CN113537071B publication Critical patent/CN113537071B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical


Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 Computing arrangements based on biological models
    • G06N3/02 Neural networks
    • G06N3/04 Architecture, e.g. interconnection topology
    • G06N3/045 Combinations of networks
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 Computing arrangements based on biological models
    • G06N3/02 Neural networks
    • G06N3/08 Learning methods

Abstract

A detection method for static and dynamic object recognition based on an event camera, comprising the steps of: S1, initialization; S2, data sampling; S3, dynamic data evaluation: evaluating whether the sampled event camera data is dynamic data; S4, if the evaluation in step S3 shows the data is not dynamic, the data is static and is sampled by exposure sampling; S5, converting the sampled event camera data into matrix data suitable for feature extraction through data conversion; S6, extracting target features from the converted event camera data with a neural network; S7, feeding the extracted target features to a final fully connected layer to predict the detection result; and S8, outputting the result. The method solves the problem that an event camera cannot capture static and dynamic targets at the same time, and the problem of effectively recognizing the dynamic and static targets captured by the event camera with a neural network.

Description

Static and dynamic target detection method and equipment based on event camera
Technical Field
The invention belongs to the field of artificial intelligence, and particularly relates to a detection method and detection equipment for static and dynamic target recognition based on an event camera.
Background
With the continuous development of chip technology and brain-inspired technology, sensor chips based on neuroscience have been proposed one after another, and the event camera is the most representative bionic device among them. The event camera is a novel sensor device. Unlike a traditional camera, which shoots frame by frame, the event camera focuses on capturing events, that is, on changes in pixel brightness, and it does not update at a fixed frame rate set by the exposure time but at an unfixed, event-triggered rate. An event trigger means that light in the scene changes or an object moves in the scene; when the accumulated event triggers at a point on the event camera's sensing device reach a certain threshold, the camera outputs an event. Compared with a traditional camera, the event camera has the following three advantages. (1) There is almost no motion blur: a traditional camera captures video images at a fixed frame rate with a certain exposure time between frames, so a fast-moving object produces motion blur and leaves a ghosting trail, which is unfavorable for detecting and recognizing fast-moving objects, whereas the event camera updates extremely fast, with intervals between adjacent events that can be under 1 microsecond, so even fast-moving targets can be captured. (2) The dynamic range is high: a traditional camera has a low dynamic range and often yields a blur when the scene is too dark or too bright, whereas the event camera can cope with light that is too dark or too bright, its dynamic range being roughly twice that of an ordinary camera. (3) The power consumption of the event camera is far lower than that of ordinary devices, which gives it an unusually wide range of applicable scenarios.
However, in the application scenarios of artificial intelligence technology, we need to detect not only moving targets but also static targets, whereas at present the event camera can only be used to detect moving targets; the present invention is directed at this disadvantage.
Disclosure of Invention
The invention provides a detection method and apparatus for static and dynamic target recognition based on an event camera, which use deep learning technology together with an improved event camera and, through the training and learning of an artificial neural network, can effectively recognize both static and dynamic targets with the event camera at the same time. The method solves the problem that an event camera cannot capture static and dynamic targets simultaneously, and the problem of effectively recognizing the dynamic and static targets captured by the event camera with a neural network.
The technical scheme of the invention is as follows:
according to one aspect, the present invention provides a detection method for static and dynamic object recognition based on an event camera, comprising the steps of: S1, initialization: performing a start-up initialization operation on the event camera, fixing the event camera in one position, keeping it stationary, and starting it; S2, data sampling: after the event camera is initialized, sampling the event camera data it acquires; S3, dynamic data evaluation: evaluating whether the sampled event camera data is dynamic data; S4, exposure sampling: if the evaluation in step S3 shows the data is not dynamic, the data is static and is sampled by exposure sampling; S5, data conversion: converting the sampled event camera data into matrix data suitable for feature extraction; S6, feature extraction: extracting target features from the event camera data with a neural network; S7, full-connection prediction: feeding the extracted target features into a final fully connected layer to predict the detection result; S8, result output: outputting the prediction with the highest score as the final result.
Preferably, in the above detection method for static and dynamic object recognition based on an event camera, in step S1, the data generated at start-up is deleted and discarded; that is, the information acquired when the event camera starts up is discarded during the initialization stage.
Preferably, in the above detection method for static and dynamic object recognition based on an event camera, in step S2, event camera data of a random duration within 0 to 1 s is selected for sampling.
Preferably, in the above detection method for static and dynamic object recognition based on an event camera, in step S3, the sampled event camera data is input into a dynamic scene detection module for dynamic detection, the dynamic scene detection module being mainly used to distinguish a dynamic scene from a static scene; if the scene is dynamic, the data is transmitted to step S5; if the scene is not dynamic, the data is judged to belong to a static scene and step S4 is entered.
Preferably, in the above detection method for static and dynamic object recognition based on an event camera, in step S4, when the detection result of step S3 shows that the data does not belong to a dynamic scene, the exposure sampling module is used to perform exposure sampling on the static target, and by flashing the exposure lamp the event camera can collect information in the static scene.
Preferably, in the above detection method for static and dynamic object recognition based on an event camera, in step S6, the trained feature extraction module is used to extract target features from the event camera data.
Preferably, in the above detection method for static and dynamic object recognition based on an event camera, in step S8, after the full-connection calculation, the probability that each object belongs to each category and its position are output, and the category with the highest probability is selected as the final prediction result.
Preferably, the detection apparatus for static and dynamic target recognition based on an event camera comprises an exposure sampling module, a dynamic scene detection module, and a data conversion module, wherein: the exposure sampling module is used to perform exposure sampling on a static target, and by flashing the exposure lamp the event camera can collect information in a static scene; the dynamic scene detection module is used to perform dynamic data evaluation so as to distinguish dynamic scenes from static scenes; and the data conversion module is used to convert event camera data into a matrix form from which features can be extracted.
According to the technical scheme of the invention, the beneficial effects are that:
the method and the device for detecting the static and dynamic targets based on the event camera solve the problem of single application scene of the event camera, realize a set of method and device for identifying the event camera which can be simultaneously applied to the dynamic scene and the static scene for target identification, solve the problem that the event camera can only conduct single dynamic scene sensing in the past, improve the original target identification frame flow, enable the original target identification frame flow to be effectively applied to target identification in the event camera, effectively solve the defect that the event camera cannot sense static information, effectively improve the problem of limited application scene of the event camera, expand the application scene of the event camera, provide a low-power-consumption, wide-dynamic and high-speed all-weather hardware sensing device for the industry, develop an artificial-intelligence-based neural network identification algorithm on the basis, and finally realize the target detection method and device with integrated dynamic and static states.
For a better understanding and explanation of the conception, working principle and inventive effect of the present invention, the present invention is described in detail below by way of specific examples with reference to the accompanying drawings, in which:
drawings
In order to more clearly illustrate the embodiments of the present invention or the technical solutions in the prior art, the drawings used in the description of the embodiments or the prior art will be briefly described below.
FIG. 1 is a flow chart of detection of static and dynamic object recognition based on an event camera of the present invention;
fig. 2 is a schematic diagram of an exposure lamp and an event camera lens for the detection apparatus of the present invention based on static and dynamic object recognition of the event camera.
Detailed Description
The method provided by the invention yields a method and apparatus for detecting static and dynamic targets with an event camera. For a dynamic target, the existing event camera can acquire the target's information, which, after data conversion, is input into a neural network for calculation, and the result is then output. For a static target, information is acquired through the improved event camera, then input into the neural network for calculation after data conversion, and the result is finally output. The neural network consists of two parts: feature extraction and full-connection prediction.
The principle of the invention is as follows: the event camera is modified with an exposure lamp; specifically, exposure lamp equipment is mounted around the lens of the event camera, and the flicker of the exposure lamp is controlled by an algorithm according to the current state of the event camera. In this way the event camera can effectively obtain information about a static scene, overcoming the shortcoming that a conventional event camera can only obtain dynamic information; and by analyzing the information obtained by the event camera and designing a recognition algorithm based on a deep neural network, the data recognition problem of the event camera can be effectively solved.
Specifically, the method effectively recognizes the data collected by the event camera mainly by means of a deep learning neural network, and by improving the existing event camera it achieves the goal of collecting dynamic and static targets at the same time and recognizing them.
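As a rough illustration, the overall flow of the method (steps S1 to S8) can be sketched as a pipeline in which every module is a pluggable callable. All names and signatures below are our own placeholders, not part of the patent:

```python
def detect(camera, is_dynamic, exposure_sample, to_matrix, extract_features, fully_connected):
    """Sketch of the S1-S8 pipeline; every argument is a placeholder callable."""
    camera.initialize()                       # S1: discard start-up data
    events = camera.sample()                  # S2: sample a random time window
    if not is_dynamic(events):                # S3: dynamic data evaluation
        events = exposure_sample(camera)      # S4: flash the lamp, sample the static scene
    matrix = to_matrix(events)                # S5: data conversion
    feats = extract_features(matrix)          # S6: feature extraction
    scores = fully_connected(feats)           # S7: full-connection prediction
    return max(scores, key=lambda s: s[1])    # S8: output the highest-scoring result
```

Each placeholder corresponds to one module described below; swapping in real implementations does not change the control flow.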
As shown in fig. 1, a flow chart of the detection method of the present invention based on static and dynamic object recognition of an event camera is shown. The embodiment of the invention discloses a static and dynamic target detection method based on an event camera, which comprises the following specific steps:
s1, initializing: in this step, a start-up initialization operation is performed on the event camera, the event camera is fixed in a position, the event camera is kept stationary, and the event camera is started up.
When application deployment starts, the event camera is initialized. The specific initialization operation is to delete and discard the data generated at start-up: because the camera's response goes from nothing to something at the moment it is switched on, the first data it produces would interfere with later judgments, so the information acquired at start-up is discarded during the initialization stage.
S2, data sampling: after initialization of the event camera is completed, the event camera data it acquires is sampled.
During data sampling, a time period is selected over which to sample the event camera data. The length of this period is random: event camera data of a random duration within 0 to 1 s is selected for sampling. By designing a random time period, event camera data of different motion scenes can be sampled.
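The random-window sampling of step S2 might look like the following sketch. The (t, x, y, polarity) event layout and the seeding parameter are assumptions for illustration; the patent only specifies a random duration within 0 to 1 s:

```python
import random

def sample_events(events, max_window_s=1.0, seed=None):
    """Sample all events falling inside a random-length time window.

    `events` is assumed to be a list of (t, x, y, polarity) tuples with
    timestamps in seconds; this layout is our assumption.
    """
    rng = random.Random(seed)
    window = rng.uniform(0.0, max_window_s)   # random duration within 0..1 s
    if not events:
        return [], window
    t0 = events[0][0]                         # window starts at the first event
    return [e for e in events if t0 <= e[0] < t0 + window], window
```

Repeated calls draw different window lengths, which is what lets the method sample event data from different motion scenes.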
S3, dynamic data evaluation (i.e., "dynamic evaluation" in fig. 1): whether the sampled event camera data is dynamic data is evaluated by a specific method.
The collected event camera data is input into a dynamic scene detection model (i.e., the dynamic scene detection module) for dynamic detection; the dynamic scene detection module is mainly used to distinguish dynamic scenes from static scenes. If the scene is dynamic, the data is passed to step S5; if it is not, the scene is judged to be static and the method proceeds to step S4.
S4, exposure sampling: if the evaluation in step S3 shows that the data does not belong to dynamic data, the data is static data, and the static data is sampled by exposure sampling.
When the detection result of step S3 shows that the data does not belong to a dynamic scene, the exposure sampling module is used to perform exposure sampling on the static target: by flashing the exposure lamp, the event camera can collect information in the static scene.
S5, data conversion: the sampled event camera data is converted into matrix data capable of performing feature extraction through data conversion.
After the event camera data is collected in step S3 or step S4, it is converted: the data conversion module turns the event camera data into a matrix form from which features can be extracted.
S6, extracting features: and extracting target characteristics of the event camera data by using the neural network.
After the data conversion of step S5 is completed, feature extraction can be performed on the data; it mainly uses the trained feature extraction module to extract target features from the event camera data.
S7, full connection prediction: and inputting the extracted target features into a final full-connection layer for predicting a detection result.
After the features are extracted, they are input into a fully connected network for prediction; the full-connection prediction is calculated with the fully connected module obtained in neural network model training and application.
S8, outputting a result: and outputting the final result with the highest scoring of the prediction result.
After the full-connection calculation, the probability that each object belongs to each category and its position are output, and the category with the highest probability is selected as the final prediction result. At this point, targets seen by the event camera can be effectively recognized in both static and dynamic scenes.
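The selection of the highest-probability category in step S8 can be illustrated as follows. Applying a softmax to the fully connected layer's outputs is our assumption, since the patent only specifies that per-category probabilities are produced and the highest one is chosen:

```python
import math

def predict_from_logits(logits_per_object):
    """For each detected object, return (class_index, probability) of the
    highest-probability class. Input: one list of class logits per object
    from the final fully connected layer (softmax is our assumption)."""
    results = []
    for logits in logits_per_object:
        m = max(logits)                              # subtract max for stability
        exps = [math.exp(v - m) for v in logits]
        total = sum(exps)
        probs = [e / total for e in exps]
        best = max(range(len(probs)), key=probs.__getitem__)
        results.append((best, probs[best]))
    return results
```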
The detection apparatus for static and dynamic target recognition based on an event camera according to the invention comprises an exposure sampling module, a dynamic scene detection module, and a data conversion module, wherein:
and an exposure sampling module: the system is used for carrying out exposure sampling on a static target, and the flash event camera of the exposure lamp can be used for collecting information in a static scene. The prior event camera can only be applied to a relatively dynamic scene, and the target always generates relative motion with the event camera, so that the event camera can acquire motion information; the relative motion is generally divided into two cases, one is that the event camera is fixed and stationary while the object to be detected is moving, and the other is that the object is stationary and the event camera is moving, and both cases require that the event camera and the object to be detected relatively far away, which limits the application scenario of the event camera. In the present invention, it is proposed to use an algorithmically controlled exposure light to vary the light of the environment so that the event camera can perceive the environment in a relatively stationary environment. As shown in fig. 2, the exposure lamp and the event camera lens of the detection device based on static and dynamic object recognition of the event camera according to the present invention are shown in fig. 2, but the position of the exposure lamp is not limited to the position shown in fig. 2, when the static object needs to be detected, the exposure lamp will flash according to a certain frequency, at this time, the external light changes, and the event camera starts to acquire external information. Since the frequency of the flicker of the exposure lamps is far higher than the tele-moving speed of the moving object, the tele-moving object information containing the background can be obtained when all the exposure lamps flicker.
Dynamic scene detection module: used for dynamic data evaluation to distinguish dynamic scenes from static scenes. The invention provides a dynamic scene detection algorithm for this purpose. Its principle mainly uses the size of image blocks to detect a dynamic scene: while the exposure lamp is off, the detection of large-area continuous data blocks indicates target movement. Specifically, the acquired event camera data is examined over a time sequence, and whether large-area continuous data blocks appear within a fixed time period is counted. If they do, there is a moving object in the scene; if the data is discontinuous and does not form image blocks of a certain area, the scene is static and the generated data is merely noise from the event camera.
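The "large-area continuous data block" test can be illustrated with a flood fill over an accumulated event frame. The 4-connectivity and the area threshold are our illustrative choices; the patent does not fix either:

```python
def is_dynamic(event_frame, area_threshold=50):
    """Decide dynamic vs static from an accumulated event frame.

    `event_frame` is a 2D list of 0/1 event indicators for a fixed time
    window with the lamp off. If any 4-connected region of active pixels
    exceeds `area_threshold`, the scene is treated as dynamic; isolated
    activations are taken to be sensor noise."""
    h, w = len(event_frame), len(event_frame[0])
    seen = [[False] * w for _ in range(h)]
    for sy in range(h):
        for sx in range(w):
            if event_frame[sy][sx] and not seen[sy][sx]:
                stack, area = [(sy, sx)], 0      # flood fill to measure this blob
                seen[sy][sx] = True
                while stack:
                    y, x = stack.pop()
                    area += 1
                    for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                        ny, nx = y + dy, x + dx
                        if 0 <= ny < h and 0 <= nx < w and event_frame[ny][nx] and not seen[ny][nx]:
                            seen[ny][nx] = True
                            stack.append((ny, nx))
                if area > area_threshold:
                    return True
    return False
```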
Data conversion module: used to convert event camera data into a matrix form from which features can be extracted. Because the event camera collects a series of discrete points along the time axis, training and feature extraction cannot be performed on them directly; to extract effective event camera features, the event camera data must be converted into matrix form. The specific method is to select a particular time period, superimpose all the data perceived by the event camera within that period, form an image-like matrix from the superimposed data, and then perform training and feature extraction on that matrix.
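The superposition described here amounts to accumulating per-pixel event counts over the chosen window. The (t, x, y, polarity) tuple layout is an assumption:

```python
def events_to_matrix(events, width, height, t_start, t_end):
    """Accumulate events within [t_start, t_end) into an image-like count
    matrix, as described for the data conversion module. Events are
    assumed to be (t, x, y, polarity) tuples."""
    frame = [[0] * width for _ in range(height)]
    for t, x, y, _pol in events:
        if t_start <= t < t_end and 0 <= x < width and 0 <= y < height:
            frame[y][x] += 1        # superimpose: count events per pixel
    return frame
```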
Neural network model training and application: the converted data is input into a neural network for learning and training. The network mainly consists of a feature extraction module and a fully connected module, and the specific training process is similar to training for image recognition: the event camera data is labeled, and after labeling is completed the neural network is trained. The training process mainly computes a loss function whose two main components are the predicted value of the neural network and the labeled ground-truth value; the distance between the predicted value and the true value is continually reduced through the loss function until it is minimized, so that the network is fitted to its optimal state, after which the neural network can be deployed and applied; the deployment flow is shown in fig. 2.
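The loss-driven fitting described above can be illustrated with a one-weight linear model trained by gradient descent on a mean-squared-error loss. The real system trains a feature-extraction network plus fully connected layers, so this is only a sketch of the training loop, not the patented network:

```python
def train(inputs, labels, epochs=200, lr=0.1):
    """Fit a single linear weight w so that w * x approaches the labeled
    ground truth: the loss measures the distance between prediction and
    truth, and each step shrinks that distance, as described above."""
    w = 0.0
    for _ in range(epochs):
        preds = [w * x for x in inputs]
        # gradient of mean squared error with respect to w
        grad = sum(2 * (p - y) * x for p, y, x in zip(preds, labels, inputs)) / len(inputs)
        w -= lr * grad
    return w
```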
The method provided by the invention improves and applies the event camera. The improvement effectively raises the event camera's performance and broadens its application range: the improved event camera can effectively perceive both dynamic and static information, and the perceived information is combined with a deep learning method for target recognition.
The above description covers the best mode of carrying out the inventive concept and its working principle. The above examples should not be construed as limiting the scope of the claims; other embodiments and combinations of implementations according to the inventive concept also fall within the scope of the invention.

Claims (8)

1. A method for detecting static and dynamic object recognition based on an event camera, comprising the steps of:
s1, initializing: performing starting initialization operation on an event camera, fixing the event camera at a position, keeping the event camera stationary, and starting the event camera;
s2, data sampling: sampling acquired event camera data of the event camera after the event camera is initialized;
s3, dynamic data evaluation: evaluating whether the sampled event camera data belongs to dynamic data;
s4, exposure sampling: if the evaluation in the step S3 shows that the data do not belong to dynamic data, the data are static data, and sampling processing is carried out on the static data through exposure sampling;
s5, data conversion: converting the sampled event camera data into matrix data capable of performing feature extraction through data conversion;
s6, extracting features: extracting target characteristics of the converted event camera data by using a neural network;
s7, full connection prediction: inputting the extracted target features into a final full-connection layer for predicting a detection result; and
s8, outputting a result: and outputting the final result with the highest scoring of the prediction result.
2. The method according to claim 1, wherein in step S1, the data generated at start-up is deleted and discarded, and the information acquired at the start-up of the event camera is discarded during initialization.
3. The method for detecting static and dynamic object recognition based on an event camera according to claim 1, wherein in step S2, event camera data of a random duration within 0 to 1 s is selected for sampling.
4. The method according to claim 1, wherein in step S3, the sampled event camera data is input into a dynamic scene detection module for dynamic detection, the dynamic scene detection module being mainly used to distinguish a dynamic scene from a static scene; if the scene is dynamic, the data is transmitted to step S5; if the scene is not dynamic, the data is judged to belong to a static scene and step S4 is entered.
5. The method according to claim 1, wherein in step S4, when the detection result of step S3 indicates that the data does not belong to a dynamic scene, the exposure sampling module is used to perform exposure sampling on the static target, and by flashing the exposure lamp the event camera can collect information in the static scene.
6. The method for detecting static and dynamic object recognition based on an event camera according to claim 1, wherein in step S6, a trained feature extraction module is used to extract target features from the event camera data.
7. The method for detecting static and dynamic object recognition based on an event camera according to claim 1, wherein in step S8, after the full-connection calculation, the probability that each object belongs to each category and its position are output, and the category with the highest probability is selected as the final prediction result.
8. A detection apparatus for static and dynamic object recognition based on an event camera, for implementing the detection method for static and dynamic object recognition based on an event camera according to any one of claims 1 to 7, characterized by comprising an exposure sampling module, a dynamic scene detection module, and a data conversion module, wherein:
the exposure sampling module is used to perform exposure sampling on a static target, and by flashing the exposure lamp the event camera can collect information in a static scene;
the dynamic scene detection module is used to perform dynamic data evaluation so as to distinguish dynamic scenes from static scenes; and
the data conversion module is used to convert event camera data into a matrix form from which features can be extracted.
CN202110811885.3A 2021-07-19 2021-07-19 Static and dynamic target detection method and equipment based on event camera Active CN113537071B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110811885.3A CN113537071B (en) 2021-07-19 2021-07-19 Static and dynamic target detection method and equipment based on event camera

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202110811885.3A CN113537071B (en) 2021-07-19 2021-07-19 Static and dynamic target detection method and equipment based on event camera

Publications (2)

Publication Number Publication Date
CN113537071A (en) 2021-10-22
CN113537071B true CN113537071B (en) 2023-08-11

Family

ID=78128627

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110811885.3A Active CN113537071B (en) 2021-07-19 2021-07-19 Static and dynamic target detection method and equipment based on event camera

Country Status (1)

Country Link
CN (1) CN113537071B (en)

Families Citing this family (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN115014293A (en) * 2022-04-20 2022-09-06 中国电子科技南湖研究院 Device and method for adaptive static imaging dynamic sensing of event camera
CN114777764B (en) * 2022-04-20 2023-06-30 中国科学院光电技术研究所 High-dynamic star sensor star point extraction method based on event camera

Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108021228A (en) * 2016-10-31 2018-05-11 意美森公司 Dynamic haptic based on the Video Events detected produces
CN111445414A (en) * 2020-03-27 2020-07-24 北京市商汤科技开发有限公司 Image processing method and device, electronic equipment and storage medium
CN111537072A (en) * 2020-04-22 2020-08-14 中国人民解放军国防科技大学 Polarization information measuring system and method of array type polarization camera
CN111582300A (en) * 2020-03-20 2020-08-25 北京航空航天大学 High-dynamic target detection method based on event camera
CN111868737A (en) * 2018-01-24 2020-10-30 苹果公司 Event camera based gaze tracking using neural networks
CN111931752A (en) * 2020-10-13 2020-11-13 中航金城无人系统有限公司 Dynamic target detection method based on event camera
CN112037269A (en) * 2020-08-24 2020-12-04 大连理工大学 Visual moving target tracking method based on multi-domain collaborative feature expression

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
EP3340103A1 (en) * 2016-12-21 2018-06-27 Axis AB Method for identifying events in a motion video

Patent Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108021228A (en) * 2016-10-31 2018-05-11 意美森公司 Dynamic haptic based on the Video Events detected produces
CN111868737A (en) * 2018-01-24 2020-10-30 苹果公司 Event camera based gaze tracking using neural networks
CN111582300A (en) * 2020-03-20 2020-08-25 北京航空航天大学 High-dynamic target detection method based on event camera
CN111445414A (en) * 2020-03-27 2020-07-24 北京市商汤科技开发有限公司 Image processing method and device, electronic equipment and storage medium
CN111537072A (en) * 2020-04-22 2020-08-14 中国人民解放军国防科技大学 Polarization information measuring system and method of array type polarization camera
CN112037269A (en) * 2020-08-24 2020-12-04 大连理工大学 Visual moving target tracking method based on multi-domain collaborative feature expression
CN111931752A (en) * 2020-10-13 2020-11-13 中航金城无人系统有限公司 Dynamic target detection method based on event camera

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
Target tracking algorithm with adaptive fusion of dynamic and static features; Zhang Lichao; Bi Duyan; Zha Yufei; Wang Yunfei; Ma Shiping; Journal of Xidian University; Vol. 42, No. 6; pp. 164-172 *

Also Published As

Publication number Publication date
CN113537071A (en) 2021-10-22

Similar Documents

Publication Publication Date Title
CN110363140B (en) Human body action real-time identification method based on infrared image
CN109697726B (en) Event camera-based end-to-end target motion estimation method
CN113537071B (en) Static and dynamic target detection method and equipment based on event camera
CN108038452B (en) Household appliance gesture rapid detection and identification method based on local image enhancement
EP2959454B1 (en) Method, system and software module for foreground extraction
Li et al. Foreground object detection in changing background based on color co-occurrence statistics
CN108734107B (en) Multi-target tracking method and system based on human face
CN110427823B (en) Joint target detection method and device based on video frame and pulse array signal
CN105741319B (en) Improvement visual background extracting method based on blindly more new strategy and foreground model
CN107133969A (en) A kind of mobile platform moving target detecting method based on background back projection
EP2549759A1 (en) Method and system for facilitating color balance synchronization between a plurality of video cameras as well as method and system for obtaining object tracking between two or more video cameras
CN111626090B (en) Moving target detection method based on depth frame difference convolutional neural network
CN111191535B (en) Pedestrian detection model construction method based on deep learning and pedestrian detection method
CN110414558A (en) Characteristic point matching method based on event camera
CN114022823A (en) Shielding-driven pedestrian re-identification method and system and storable medium
CN103096117A (en) Video noise detecting method and device
CN111160100A (en) Lightweight depth model aerial photography vehicle detection method based on sample generation
CN113688761B (en) Pedestrian behavior category detection method based on image sequence
CN106780544B (en) The method and apparatus that display foreground extracts
CN114613006A (en) Remote gesture recognition method and device
CN112487926A (en) Scenic spot feeding behavior identification method based on space-time diagram convolutional network
WO2023001110A1 (en) Neural network training method and apparatus, and electronic device
CN110602411A (en) Method for improving quality of face image in backlight environment
CN117561540A (en) System and method for performing computer vision tasks using a sequence of frames
CN114913086A (en) Face image quality enhancement method based on generation countermeasure network

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant