CN103747189A - Digital image processing method - Google Patents

Digital image processing method Download PDF

Info

Publication number
CN103747189A
Authority
CN
China
Prior art keywords
video
image
sample
training
extract
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN201310722906.XA
Other languages
Chinese (zh)
Inventor
Yang Xinfeng (杨新锋)
Yang Yanyan (杨艳燕)
Liu Wenjie (刘文杰)
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Nanyang Institute of Technology
Original Assignee
Individual
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Individual
Priority to CN201310722906.XA priority Critical patent/CN103747189A/en
Publication of CN103747189A publication Critical patent/CN103747189A/en
Pending legal-status Critical Current


Landscapes

  • Image Analysis (AREA)

Abstract

The invention relates to a digital image processing method comprising the following steps: video enhancement, video analysis, and video understanding are performed in succession on highway video images acquired by a camera. In the video-enhancement stage, several low-resolution images of the same scene captured with different exposure parameters are reconstructed into a high-quality image with a high brightness dynamic range and high resolution, providing high-quality video images for the video-analysis layer and raising the reliability of the video-processing results. In the video-analysis stage, moving-object detection, motion estimation, and target-tracking algorithms extract low- and mid-level spatio-temporal object features from the video, providing the inferential basis for event recognition in the high-level video processing. In the video-understanding stage, surveillance-video events are recognized by analyzing and interpreting the spatio-temporal object features supplied by the video-analysis layer. The invention detects a broader range of event types with higher precision, and correct event recognition ensures automatic control of highways.

Description

Digital image processing method
Technical field
The present invention relates to a digital image processing method, and in particular to an HDR (high dynamic range) and HR (high resolution) image reconstruction method for highway video based on sample prediction.
Background technology
Many factors affect image quality, such as spatial resolution, luminance contrast, and noise. A high-quality image should not only render a high-contrast scene effectively but also have high spatial resolution. Much fruitful research has addressed the high-dynamic-range display of images and the reconstruction of spatial resolution, but the two problems have largely been studied independently. Existing super-resolution restoration from image sequences usually assumes that the exposure parameters of the input images are constant and that the noise parameters and camera response function are known. Images acquired in the real world rarely satisfy these assumptions. Reconstructing high-dynamic-range and high-resolution images within a unified technical framework therefore has value both for the theory of image fusion and for practical applications.
Video traffic-event detection techniques generally fall into two classes: indirect detection and direct detection. The former infers the presence of a traffic event from changes in traffic flow. Because of data errors and the complexity of traffic conditions, this approach takes longer to detect an event and is unsuitable when traffic volume is low. In addition, the low resolution and poor contrast of images acquired by current cameras cannot meet the accuracy that indirect detection requires.
The latter searches the acquired video directly, using image-processing techniques, for abnormal vehicle behavior, and can still give good detection results when traffic volume is low. It is, however, likewise limited by the low resolution and poor contrast of images from current cameras.
Summary of the invention
A first object of the digital image processing method of the present invention is to reconstruct several low-resolution images of the same scene, captured with different exposure parameters, into a high-quality image with a high brightness dynamic range and high resolution.
A second object of the digital image processing method of the present invention is to enable an existing highway video system to output high-quality video and accurate video events directly.
A digital image processing method comprises the following steps: video enhancement, video analysis, and video understanding are performed in succession on highway video images acquired by a camera. The video enhancement reconstructs several low-resolution images of the same scene, captured with different exposure parameters, into a high-quality image with a high brightness dynamic range and high resolution, providing high-quality video images for the video-analysis layer and thereby raising the reliability of the video-processing results. The video analysis uses moving-object detection, motion estimation, and target-tracking algorithms to extract low- and mid-level spatio-temporal object features from the video, providing the inferential basis for event recognition in the high-level video processing. The video understanding recognizes surveillance-video events by analyzing and interpreting the spatio-temporal object features supplied by the video-analysis layer.
Further, the video enhancement is realized by an HDR and HR image reconstruction method based on sample prediction, which comprises step 1, an off-line training part, and step 2, an on-line reconstruction part.
Further, the off-line training part of step 1 consists of the following sub-steps. Step 1.1, collect training samples: the training images are several low-resolution images of the same scene with different exposure parameters, together with one corresponding target image; pairs of co-located LDR-LR and HDR-HR image blocks are extracted from the training images as training samples. Step 1.2, organize the training samples: a clustering method sorts the sample sets drawn from the different background-luminance regions. Step 1.3, train one predictor per clustered sample set, yielding three class predictors corresponding to the bright, dark, and moderate regions.
Further, the on-line reconstruction part of step 2 consists of the following sub-steps. Step 2.1, segment the scene luminance of the input image sequence into three differently exposed regions. Step 2.2, estimate the base layer of the input image sequence. Step 2.3, according to the luminance classification of the input image, predict the detail of each image block with the trained class predictors to obtain the detail layer of the scene. Step 2.4, fuse the base-layer and detail-layer estimates by addition. Step 2.5, constrain the fused image with the low-resolution image observation model to obtain the reconstruction result.
Further, the sample-collection step 1.1 comprises: step 1.1.1, divide the samples into three regions according to scene luminance: bright, dark, and moderate; step 1.1.2, detail computation: a bilateral filter extracts the detail of each sample image as sample data; step 1.1.3, sample collection: each sample is a pair of image blocks extracted at corresponding positions of the input LDR-LR and target HDR-HR detail images. According to the scene-luminance classification, bright-region samples are extracted from the detail image of the short-exposure LDR-LR image and the corresponding target image; dark-region samples from the detail image of the long-exposure LDR-LR image and the corresponding target image; and moderate-region samples from the detail image of the moderately exposed LDR-LR image and the corresponding target image. Three training sample sets are thus extracted, one per luminance region.
Further, in the video understanding, the output video events comprise road traffic incidents (including stopping, queuing, speeding, and wrong-way driving) and traffic parameters (including flow, vehicle speed, and vehicle class).
The HDR and HR image reconstruction method based on sample prediction is divided into an off-line training part and an on-line reconstruction part. The off-line part comprises sample collection, sample organization, and class-predictor training. The on-line part performs HDR-HR (high-dynamic-range, high-resolution) reconstruction from several input LDR-LR (low-dynamic-range, low-resolution) images with different exposure parameters. First, the background luminance of the scene is classified from the average of the input images; then, according to the luminance classification, the class predictors trained off-line predict the high-dynamic-range and high-resolution detail of the input images, and the HDR-HR image is finally reconstructed.
This digital image processing method has the following beneficial effects:
(1) Through video enhancement, video analysis, and video understanding, the invention detects a broader range of event types with higher precision, and correct event recognition ensures automatic control of highways.
(2) By learning from example samples, the invention establishes the mapping between LDR-LR (Low Dynamic Range-Low Resolution) and HDR-HR (High Dynamic Range-High Resolution) images. Through strategies such as rational sample organization and class-wise training of the learning model, the joint reconstruction of HDR-HR images is achieved without manual interaction.
(3) The method is divided into an off-line part and an on-line part. The off-line part completes the collection and organization of example samples and the training of the class predictors; the on-line part completes the joint reconstruction of the image with the class predictors trained off-line. Joint reconstruction of high dynamic range and super-resolution is carried out within a sample-prediction learning framework, so the method can reconstruct a high-dynamic-range and high-resolution target image simultaneously.
Brief description of the drawings
Fig. 1: flow chart of the digital image processing method of the present invention;
Fig. 2: flow chart of the off-line training part of the present invention;
Fig. 3: flow chart of the on-line reconstruction part of the present invention;
Fig. 4: sample-extraction correspondence of the present invention: a) HDR-HR detail image; b) LDR-LR detail image.
Embodiment
The present invention is further described below with reference to Figs. 1 to 4.
As shown in Figure 1, a digital image processing method comprises the following steps: video enhancement, video analysis, and video understanding are performed in succession on highway video images acquired by a camera. The video enhancement reconstructs several low-resolution images of the same scene, captured with different exposure parameters, into a high-quality image with a high brightness dynamic range and high resolution, providing high-quality video images for the video-analysis layer and thereby raising the reliability of the video-processing results. The video analysis uses moving-object detection, motion estimation, and target-tracking algorithms to extract low- and mid-level spatio-temporal object features from the video, providing the inferential basis for event recognition in the high-level video processing. The video understanding recognizes surveillance-video events by analyzing and interpreting the spatio-temporal object features supplied by the video-analysis layer.
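The three-stage pipeline above can be sketched as follows. This is a minimal illustration only: the function names are mine, and simple placeholders (exposure averaging, frame differencing, a motion-area threshold) stand in for the patent's sample-prediction reconstruction, motion-estimation, and event-recognition algorithms.

```python
import numpy as np

def video_enhance(frames):
    # Placeholder for the sample-prediction HDR-HR reconstruction:
    # fuse differently exposed frames by simple averaging.
    return np.mean(np.stack(frames).astype(np.float64), axis=0)

def video_analyze(prev_frame, frame, thresh=10.0):
    # Placeholder for motion estimation / target tracking:
    # frame differencing yields a boolean motion mask.
    return np.abs(frame - prev_frame) > thresh

def video_understand(motion_mask, area_thresh=0.05):
    # Placeholder for event recognition: label the frame pair by the
    # fraction of pixels in motion.
    return "incident" if motion_mask.mean() > area_thresh else "normal"

def process(frame_groups):
    # Enhance each exposure bracket, then analyze consecutive enhanced
    # frames and classify the result into an event label.
    enhanced = [video_enhance(g) for g in frame_groups]
    return [video_understand(video_analyze(a, b))
            for a, b in zip(enhanced, enhanced[1:])]
```

A caller would feed one list of bracketed exposures per time step and receive one event label per consecutive pair of enhanced frames.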
The proposed method is divided into an off-line training part and an on-line reconstruction part. The flow of the off-line training part is shown in Figure 2 and comprises sample collection, sample organization, and class-predictor training. Sample collection proceeds separately for three classes according to scene luminance. A clustering method organizes the sample files. Then the class predictors are trained with a linear or nonlinear predictor-learning method.
The flow of the on-line reconstruction part is shown in Figure 3; it performs HDR-HR reconstruction on the three input LDR-LR images with different exposure parameters. First, the background luminance of the scene is classified from the average of the input images; then, according to the luminance classification, the class predictors trained off-line predict the high-dynamic-range and high-resolution detail of the input images, and the high-frequency information is finally reconstructed.
The method is elaborated below with an example.
(1) Off-line training part.
The training images are chosen from several groups of HDR scenes. Each HDR training scene consists of three LDR-LR images — over-exposed I₁, moderately exposed I₀, and under-exposed I₋₁ — and one corresponding HDR-HR target image I_HDR-HR. Before sample collection, the HDR-HR scene is first classified by background luminance. Various schemes can be used; for example, K-means clustering of the average image I_average of the three LDR-LR images into three classes divides the image into bright, moderate, and dark regions. According to this classification, bright-region samples are extracted from the detail image of the short-exposure LDR-LR image and the corresponding target image; dark-region samples from the detail image of the long-exposure LDR-LR image and the corresponding target image; and moderate-region samples from the detail image of the moderately exposed LDR-LR image and the corresponding target image. The example samples collected in each region form three training sample sets.
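The K-means background-luminance classification of the average image can be sketched as below. This is an assumption-laden illustration: the function names, the quantile-based center initialization, and the label convention (0=dark, 1=moderate, 2=bright) are mine, not the patent's.

```python
import numpy as np

def kmeans_1d(values, k=3, iters=20):
    # Tiny 1-D k-means over pixel intensities; centers start at
    # spread-out quantiles so the k brightness classes are stable.
    centers = np.quantile(values, np.linspace(0.05, 0.95, k))
    for _ in range(iters):
        labels = np.argmin(np.abs(values[:, None] - centers[None, :]), axis=1)
        for j in range(k):
            if np.any(labels == j):
                centers[j] = values[labels == j].mean()
    return labels, centers

def brightness_regions(frames):
    # Cluster the average image of the exposure bracket into three
    # classes and relabel them so 0=dark, 1=moderate, 2=bright.
    avg = np.mean(np.stack(frames, axis=0).astype(np.float64), axis=0)
    labels, centers = kmeans_1d(avg.ravel())
    remap = np.empty(3, dtype=int)
    remap[np.argsort(centers)] = np.arange(3)
    return remap[labels].reshape(avg.shape)
```

The returned region map is what later steps use to decide which exposure's detail image supplies each sample.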
Each example sample consists of a pair of image blocks: an HDR-HR image block and the corresponding LDR-LR image block. Before sample extraction, the LDR-LR and HDR-HR training images are each filtered with a bilateral filter, and the filtered image is subtracted from the original to obtain the detail. Paired example samples are then collected on the corresponding detail images according to the correspondence shown in Figure 4, which takes a sampling factor of 2 as an example. Each sample is extracted as a pair of 16-dimensional vectors.
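The detail extraction and paired-block collection can be sketched as follows. The bilateral-filter parameters and the 4x4 block size (giving the 16-dimensional vectors the patent mentions) are assumptions, and the LR detail image is assumed already interpolated to the HR grid so co-located blocks line up.

```python
import numpy as np

def bilateral(img, radius=2, sigma_s=2.0, sigma_r=25.0):
    # Naive bilateral filter: each pixel becomes a weighted average over
    # a window, with weights from both spatial distance and intensity
    # difference, so edges survive while texture is smoothed away.
    img = img.astype(np.float64)
    pad = np.pad(img, radius, mode="edge")
    ys, xs = np.mgrid[-radius:radius + 1, -radius:radius + 1]
    spatial = np.exp(-(ys**2 + xs**2) / (2 * sigma_s**2))
    out = np.empty_like(img)
    for i in range(img.shape[0]):
        for j in range(img.shape[1]):
            win = pad[i:i + 2 * radius + 1, j:j + 2 * radius + 1]
            w = spatial * np.exp(-(win - img[i, j])**2 / (2 * sigma_r**2))
            out[i, j] = (w * win).sum() / w.sum()
    return out

def detail_layer(img):
    # Detail = original minus its bilateral-smoothed base layer.
    return img.astype(np.float64) - bilateral(img)

def paired_patches(lr_detail, hr_detail, size=4):
    # Collect co-located size x size blocks from the LDR-LR and HDR-HR
    # detail images as paired flattened sample vectors.
    X, Y = [], []
    for i in range(0, lr_detail.shape[0] - size + 1, size):
        for j in range(0, lr_detail.shape[1] - size + 1, size):
            X.append(lr_detail[i:i + size, j:j + size].ravel())
            Y.append(hr_detail[i:i + size, j:j + size].ravel())
    return np.array(X), np.array(Y)
```

In practice an optimized bilateral filter (e.g. from an image-processing library) would replace the double loop.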
The three collected training sample databases are each organized by clustering; K-means clustering can be applied to the LDR-LR part of the samples.
A class predictor is trained for each sample database. A class predictor consists of a group of sub-predictors, one per clustered sample class. When a sub-predictor is trained, all samples of the corresponding class serve as training samples, with the LDR-LR part as input and the HDR-HR part as target. The purpose of a predictor is to describe the mapping between the LDR-LR and HDR-HR parts of similar samples; this mapping then guides HDR-HR reconstruction of LDR-LR image sequences outside the training set. A sub-predictor can be a simple least-mean-square-error (Least Mean Squares, LMS) predictor.
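A linear LMS sub-predictor of this kind can be fit in closed form by least squares, one per cluster; the class names and the bias column are my own conventions.

```python
import numpy as np

class LMSPredictor:
    # Linear least-squares map from LDR-LR detail vectors to HDR-HR
    # detail vectors; one such sub-predictor is fit per sample cluster.
    def fit(self, X, Y):
        Xb = np.hstack([X, np.ones((len(X), 1))])  # append a bias column
        self.W, *_ = np.linalg.lstsq(Xb, Y, rcond=None)
        return self

    def predict(self, X):
        Xb = np.hstack([X, np.ones((len(X), 1))])
        return Xb @ self.W

def train_class_predictor(clustered_samples):
    # One sub-predictor per cluster label; together they form the class
    # predictor for one luminance region.
    return {label: LMSPredictor().fit(X, Y)
            for label, (X, Y) in clustered_samples.items()}
```

At prediction time, an input block is first assigned to its cluster (the "codebook" step described later) and then passed through that cluster's sub-predictor.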
The object of the off-line training part is to train as many class predictors as there are background-luminance classes, representing the LDR-LR-to-HDR-HR mappings of the different training sample sets. The class predictors are used for detail prediction during on-line reconstruction.
(2) On-line reconstruction part.
Take as an example three input images that do not belong to the training set: I₋₁ with a shorter exposure time, I₁ with a longer exposure time, and I₀ with a normal exposure time. To preserve the overall luminance dynamic range of the scene, the average of the three input images is selected as the LDR-LR initial image; the initial image is enlarged to the target size by bilinear interpolation to serve as the base-layer image. K-means clustering of the gray-level image of I₀ yields the scene-luminance classification, segmenting the bright, dark, and moderate regions.
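The bilinear enlargement of the initial image to the base layer can be written out directly; a minimal sketch, assuming an integer scale factor and half-pixel-centered sample coordinates (a common convention the patent does not specify).

```python
import numpy as np

def bilinear_upscale(img, scale=2):
    # Enlarge the LDR-LR initial image to the HR grid by bilinear
    # interpolation; the result serves as the base-layer image.
    img = img.astype(np.float64)
    h, w = img.shape
    # Half-pixel-centered source coordinates for each output pixel.
    ys = np.clip((np.arange(h * scale) + 0.5) / scale - 0.5, 0, h - 1)
    xs = np.clip((np.arange(w * scale) + 0.5) / scale - 0.5, 0, w - 1)
    y0 = np.floor(ys).astype(int); x0 = np.floor(xs).astype(int)
    y1 = np.minimum(y0 + 1, h - 1); x1 = np.minimum(x0 + 1, w - 1)
    wy = (ys - y0)[:, None]; wx = (xs - x0)[None, :]
    top = img[np.ix_(y0, x0)] * (1 - wx) + img[np.ix_(y0, x1)] * wx
    bot = img[np.ix_(y1, x0)] * (1 - wx) + img[np.ix_(y1, x1)] * wx
    return top * (1 - wy) + bot * wy
```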
Detail is extracted from I₋₁, I₀, and I₁ respectively: the difference between each original image and its bilateral-filtered smoothed image serves as the LDR-LR detail image.
According to the luminance classification, pixels in each region are predicted during reconstruction with the class predictor of the corresponding class. During prediction, the input data are first encoded — that is, classified — with the codebook produced by the sample-clustering process; the corresponding sub-predictor is then selected by class to predict the detail.
Among the three input images, the short-exposure image I₋₁ carries the most detail in bright regions, the long-exposure image I₁ carries the most detail in dark regions, and the normally exposed image I₀ carries the most detail in moderate regions. Therefore, when the class predictors are used to predict the high-frequency information, a different input image guides the detail prediction in each luminance region, and the estimated detail is finally added to the initial estimate to form the fused HDR-HR image.
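The region-guided fusion of the detail estimates onto the base layer amounts to a masked selection followed by an addition; a sketch with my own function name and region-label convention (0=dark, 1=moderate, 2=bright).

```python
import numpy as np

def fuse_layers(base, region_details, region_map):
    # region_details[r] is the detail estimate predicted from the
    # exposure best suited to region r; at each pixel, keep the detail
    # from that pixel's own brightness region, then add it to the base.
    detail = np.zeros_like(base, dtype=np.float64)
    for r, d in enumerate(region_details):
        mask = region_map == r
        detail[mask] = d[mask]
    return base + detail
```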
Finally, the fused image is constrained by the image observation model against the interpolation-upscaled base-layer image, and the reconstruction result is obtained by iterative optimization.
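One common way to enforce such an observation-model constraint is iterative back-projection; this sketch assumes a box-average-and-decimate degradation model and a uniform back-projection kernel, neither of which the patent specifies.

```python
import numpy as np

def downsample(img, scale=2):
    # Assumed observation model: box-average each scale x scale block,
    # then decimate to the low-resolution grid.
    h, w = img.shape
    return img.reshape(h // scale, scale, w // scale, scale).mean(axis=(1, 3))

def observation_constraint(hr_est, lr_obs, scale=2, iters=10):
    # Iterative back-projection: re-degrade the current estimate,
    # compare with the observed LR image, and spread the residual back
    # onto the HR grid until the estimate reproduces the observation.
    hr = hr_est.astype(np.float64).copy()
    for _ in range(iters):
        err = lr_obs - downsample(hr, scale)             # LR-domain residual
        hr += np.kron(err, np.ones((scale, scale)))      # project back to HR
    return hr
```

With this particular kernel pair the LR residual vanishes after one iteration; richer kernels need the full loop.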
The learning-based joint high-dynamic-range and super-resolution reconstruction method of the present invention can image high-contrast scenes effectively, achieving the goal of reconstructing a high-resolution and high-dynamic-range image simultaneously. The off-line training need be performed only once and can then be applied repeatedly. The on-line reconstruction is effective and fast.
The present invention has been described above by way of example with reference to the accompanying drawings. Its realization is evidently not limited by the above description: any improvement made with the method concept and technical scheme of the present invention, and any direct application of that concept and scheme to other occasions without improvement, falls within the scope of protection of the present invention.

Claims (6)

1. A digital image processing method, comprising the following steps: video enhancement, video analysis, and video understanding are performed in succession on highway video images acquired by a camera, wherein the video enhancement reconstructs several low-resolution images of the same scene, captured with different exposure parameters, into a high-quality image with a high brightness dynamic range and high resolution, providing high-quality video images for the video-analysis layer and thereby raising the reliability of the video-processing results; the video analysis uses moving-object detection, motion estimation, and target-tracking algorithms to extract low- and mid-level spatio-temporal object features from the video, providing the inferential basis for event recognition in the high-level video processing; and the video understanding recognizes surveillance-video events by analyzing and interpreting the spatio-temporal object features supplied by the video-analysis layer.
2. The digital image processing method according to claim 1, characterized in that the video enhancement is realized by an HDR and HR image reconstruction method based on sample prediction, which comprises step 1, an off-line training part, and step 2, an on-line reconstruction part.
3. The digital image processing method according to claim 2, characterized in that the off-line training part of step 1 consists of the following sub-steps: step 1.1, collect training samples: the training images are several low-resolution images of the same scene with different exposure parameters, together with one corresponding target image; pairs of co-located LDR-LR and HDR-HR image blocks are extracted from the training images as training samples; step 1.2, organize the training samples: a clustering method sorts the sample sets drawn from the different background-luminance regions; step 1.3, train one predictor per clustered sample set, yielding three class predictors corresponding to the bright, dark, and moderate regions.
4. The digital image processing method according to claim 3, characterized in that the on-line reconstruction part of step 2 consists of the following sub-steps: step 2.1, segment the scene luminance of the input image sequence into three differently exposed regions; step 2.2, estimate the base layer of the input image sequence; step 2.3, according to the luminance classification of the input image, predict the detail of each image block with the trained class predictors to obtain the detail layer of the scene; step 2.4, fuse the base-layer and detail-layer estimates by addition; step 2.5, constrain the fused image with the low-resolution image observation model to obtain the reconstruction result.
5. The digital image processing method according to claim 3 or 4, characterized in that the sample-collection step 1.1 comprises: step 1.1.1, divide the samples into three regions according to scene luminance: bright, dark, and moderate; step 1.1.2, detail computation: a bilateral filter extracts the detail of each sample image as sample data; step 1.1.3, sample collection: each sample is a pair of image blocks extracted at corresponding positions of the input LDR-LR and target HDR-HR detail images; according to the scene-luminance classification, bright-region samples are extracted from the detail image of the short-exposure LDR-LR image and the corresponding target image; dark-region samples from the detail image of the long-exposure LDR-LR image and the corresponding target image; and moderate-region samples from the detail image of the moderately exposed LDR-LR image and the corresponding target image; three training sample sets are thus extracted, one per luminance region.
6. The digital image processing method according to any one of claims 1 to 5, characterized in that, in the video understanding, the output video events comprise road traffic incidents (including stopping, queuing, speeding, and wrong-way driving) and traffic parameters (including flow, vehicle speed, and vehicle class).
CN201310722906.XA 2013-11-27 2013-12-25 Digital image processing method Pending CN103747189A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201310722906.XA CN103747189A (en) 2013-11-27 2013-12-25 Digital image processing method

Applications Claiming Priority (3)

Application Number Priority Date Filing Date Title
CN201310609601.8 2013-11-27
CN201310609601 2013-11-27
CN201310722906.XA CN103747189A (en) 2013-11-27 2013-12-25 Digital image processing method

Publications (1)

Publication Number Publication Date
CN103747189A true CN103747189A (en) 2014-04-23

Family

ID=50504175

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201310722906.XA Pending CN103747189A (en) 2013-11-27 2013-12-25 Digital image processing method

Country Status (1)

Country Link
CN (1) CN103747189A (en)



Patent Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US6724941B1 (en) * 1998-09-30 2004-04-20 Fuji Photo Film Co., Ltd. Image processing method, image processing device, and recording medium
CN102945603A (en) * 2012-10-26 2013-02-27 青岛海信网络科技股份有限公司 Method for detecting traffic event and electronic police device
CN103208184A (en) * 2013-04-03 2013-07-17 昆明联诚科技有限公司 Traffic incident video detection method for highway

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
Li Xiaoguang, "Research Progress on Joint Reconstruction of High-Resolution and High-Dynamic-Range Images" (高分辨率与高动态范围图像联合重建研究进展), Measurement & Control Technology (《测控技术》) *

Cited By (13)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106570849A (en) * 2016-10-12 2017-04-19 成都西纬科技有限公司 Image optimization method
CN110475072B (en) * 2017-11-13 2021-03-09 Oppo广东移动通信有限公司 Method, device, terminal and storage medium for shooting image
CN110475072A (en) * 2017-11-13 2019-11-19 Oppo广东移动通信有限公司 Shoot method, apparatus, terminal and the storage medium of image
US11412153B2 (en) 2017-11-13 2022-08-09 Guangdong Oppo Mobile Telecommunications Corp., Ltd. Model-based method for capturing images, terminal, and storage medium
CN108960084A (en) * 2018-06-19 2018-12-07 清华大学深圳研究生院 Target tracking method, system, readable storage medium storing program for executing and electronic equipment
CN110620857A (en) * 2018-06-20 2019-12-27 Zkw集团有限责任公司 Method and apparatus for creating high contrast images
CN109360163A (en) * 2018-09-26 2019-02-19 深圳积木易搭科技技术有限公司 A kind of fusion method and emerging system of high dynamic range images
WO2020093694A1 (en) * 2018-11-07 2020-05-14 华为技术有限公司 Method for generating video analysis model, and video analysis system
CN113228660A (en) * 2018-12-18 2021-08-06 杜比实验室特许公司 Machine learning based dynamic synthesis in enhanced standard dynamic range video (SDR +)
CN113228660B (en) * 2018-12-18 2023-12-12 杜比实验室特许公司 Machine learning based dynamic synthesis in enhanced standard dynamic range video (SDR+)
US12086969B2 (en) 2018-12-18 2024-09-10 Dolby Laboratories Licensing Corporation Machine learning based dynamic composing in enhanced standard dynamic range video (SDR+)
CN113273180A (en) * 2019-02-27 2021-08-17 华为技术有限公司 Image processing apparatus and method
WO2024179510A1 (en) * 2023-02-28 2024-09-06 华为技术有限公司 Image processing method and related device


Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
ASS Succession or assignment of patent right

Free format text: FORMER OWNER: YANG YANYAN LIU WENJIE

Effective date: 20140530

Owner name: NANYANG INSTITUTE OF TECHNOLOGY

Free format text: FORMER OWNER: YANG XINFENG

Effective date: 20140530

C41 Transfer of patent application or patent right or utility model
C53 Correction of patent of invention or patent application
CB03 Change of inventor or designer information

Inventor after: Yang Xinfeng

Inventor after: Yang Yanyan

Inventor after: Liu Wenjie

Inventor after: Lu Yingying

Inventor after: Quan Shangke

Inventor after: Liu Xiaohui

Inventor before: Yang Xinfeng

Inventor before: Yang Yanyan

Inventor before: Liu Wenjie

COR Change of bibliographic data

Free format text: CORRECT: INVENTOR; FROM: YANG XINFENG YANG YANYAN LIU WENJIE TO: YANG XINFENG YANG YANYAN LIU WENJIE LU YINGYING QUAN SHANGKE LIU XIAOHUI

TA01 Transfer of patent application right

Effective date of registration: 20140530

Address after: No. 80 Changjiang Road, Wancheng District, Nanyang, Henan 473004

Applicant after: Nanyang Science Technology College

Address before: No. 80 Changjiang Road, Wancheng District, Nanyang, Henan 473004

Applicant before: Yang Xinfeng

Applicant before: Yang Yanyan

Applicant before: Liu Wenjie

RJ01 Rejection of invention patent application after publication
RJ01 Rejection of invention patent application after publication

Application publication date: 20140423