CN110458025A - A person identification and localization method based on a binocular camera - Google Patents
A person identification and localization method based on a binocular camera Download PDF Info
- Publication number
- CN110458025A (application number CN201910625272.3A)
- Authority
- CN
- China
- Prior art keywords
- personnel
- information
- picture
- identification
- depth
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Granted
Links
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F18/00—Pattern recognition
- G06F18/20—Analysing
- G06F18/21—Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
- G06F18/214—Generating training patterns; Bootstrap methods, e.g. bagging or boosting
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F18/00—Pattern recognition
- G06F18/20—Analysing
- G06F18/24—Classification techniques
- G06F18/241—Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches
- G06F18/2411—Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches based on the proximity to a decision surface, e.g. support vector machines
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- G06N3/04—Architecture, e.g. interconnection topology
- G06N3/045—Combinations of networks
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V40/00—Recognition of biometric, human-related or animal-related patterns in image or video data
- G06V40/10—Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
- G06V40/16—Human faces, e.g. facial parts, sketches or expressions
- G06V40/172—Classification, e.g. identification
Abstract
The invention discloses a target identification and localization method based on a binocular camera. A face-picture training database is used to perform classification learning with a convolutional neural network (CNN), yielding a classification model based on face recognition. The depth information of pictures is used to perform regression learning with a support vector machine (SVM), yielding a distance regression function that relates picture depth information to the distance between the target and the camera. The camera photographs the target, and the captured face photo is passed through the face classification model to identify the personnel target. At the same time, the depth information of the captured picture is used to compute the target's distance from the camera, realizing personnel localization. The method has the advantages of low recognition time overhead and high recognition accuracy.
Description
Technical field:
The present invention relates to a target identification and localization method based on a binocular camera, and belongs to the field of localization and navigation technology.
Background technique:
In recent years, demand for indoor location services has grown steadily, driving the continuous development of indoor positioning technology. Conventional satellite positioning systems such as the Global Positioning System (GPS) and the BeiDou Navigation Satellite System achieve high positioning accuracy in open outdoor environments, but satellite signals are easily blocked or interfered with indoors, making satellite positioning inaccurate or even impossible there. In indoor environments, therefore, image-based methods have attracted wide attention for their rich information content, freedom from electromagnetic interference, and environmental friendliness.
The prior art includes an LED visible-light indoor positioning method based on image matching and a fingerprint database (patent No. CN201610125773.1). Its online stage uses the SIFT algorithm, so online processing is slow and the time overhead of location estimation is large. In contrast, the online stage of the present invention uses a pre-trained model, so the location-estimation time overhead is small and the time required for positioning is significantly reduced.
Current research on image-based indoor positioning, both domestic and international, uses either monocular or binocular cameras, with binocular cameras offering clear advantages. Before stereo vision matured, monocular vision dominated the fields of person identification and localization tracking thanks to its high computational efficiency and small data volume. As stereo vision has continued to develop, however, the inability of monocular vision to obtain image depth information, and hence to accurately identify and localize personnel targets, has become increasingly apparent. Binocular stereo vision mimics human eyes with a binocular camera: it extracts disparity information from the binocular image pair and uses the depth information and features of objects in the picture for further identification and localization. In the left and right images captured by a binocular camera, the disparity is the horizontal distance between the center pixels of two matched blocks; equal disparity (i.e., the same color in a disparity map) means the same distance from the camera. With the development of hardware, especially embedded systems, the advantages of binocular techniques in image processing of moving objects are increasingly evident. Applying binocular vision technology to intelligent transportation and video surveillance is therefore of great significance.
The information disclosed in this background section is intended only to enhance understanding of the general background of the invention, and should not be taken as an acknowledgment or any form of suggestion that the information constitutes prior art already known to a person of ordinary skill in the art.
Summary of the invention:
The purpose of the present invention is to provide a person identification and localization method based on a binocular camera, so as to overcome the above defects of the prior art.
To achieve the above object, the present invention provides a target identification and localization method for a binocular camera, comprising the following steps:
Step 1: using a face-picture training database, perform classification learning based on a convolutional neural network (CNN) to obtain a classification model based on face recognition;
Step 2: using the depth information of pictures, perform regression learning based on a support vector machine (SVM) to obtain a distance regression function that relates picture depth information to the distance between the target and the camera;
Step 3: photograph the target with the camera, and pass the captured face photo through the face classification model to identify the personnel target;
Step 4: at the same time, use the depth information of the captured picture to compute the target's distance from the camera, realizing personnel localization.
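The four steps can be arranged into a minimal pipeline skeleton, sketched below under stated assumptions: the `face_classifier` and `distance_regressor` callables are hypothetical stand-ins for the trained CNN classification model (step 1) and the SVM distance regression function (step 2), here replaced by toy functions so the sketch is self-contained.

```python
import numpy as np

# Skeleton of the four-step method. The model objects are placeholders
# standing in for the trained CNN face classifier and SVM distance
# regressor; both stand-ins below are assumptions for illustration only.

class Pipeline:
    def __init__(self, face_classifier, distance_regressor):
        self.face_classifier = face_classifier        # step 1 output (CNN)
        self.distance_regressor = distance_regressor  # step 2 output (SVM)

    def process(self, face_crop, depth_patch):
        # Step 3: identify the person from the cropped face image.
        person_id = self.face_classifier(face_crop)
        # Step 4: estimate distance to the camera from depth information.
        distance_m = self.distance_regressor(depth_patch.mean())
        return person_id, distance_m

# Toy stand-ins: a nearest-template "classifier" and a linear "regressor".
templates = {"alice": 0.2, "bob": 0.8}
classify = lambda img: min(templates, key=lambda k: abs(img.mean() - templates[k]))
regress = lambda mean_depth: 3.0 * (1.0 - mean_depth)   # toy mapping

pipe = Pipeline(classify, regress)
pid, dist = pipe.process(np.full((8, 8), 0.25), np.full((8, 8), 0.5))
```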
The present invention is further defined by the following technical solutions:
Preferably, in the above technical solution, the method is specifically as follows:
Step 1: shoot face image information of the personnel to be identified with the binocular camera, and build a database of personnel target labels and face pictures; perform offline classification learning on this database with a convolutional neural network to obtain a personnel target identification classification model;
Step 2: record the position information of each shooting point, extract the depth information of the captured pictures from the depth pictures taken by the binocular camera using image-processing techniques in OpenCV, and build a database of position information and depth picture information; train on this database with a support vector machine to learn the correspondence between position information and depth picture information, obtaining a regression model based on position information;
Step 3: capture the target's motion information with the binocular camera; after the binocular camera takes a picture of a person, use a face detection algorithm, crop the detected face image, and input it into the personnel target identification classification model of step 1 to identify the person;
Step 4: after the system detects the target, automatically process the collected depth picture information of the target and input it into the position-information regression model of step 2, obtaining the target's specific distance from the camera and hence the target position, realizing personnel localization.
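The offline regression of step 2 can be sketched as follows. A plain least-squares line fit stands in for the support vector machine the method actually uses, and the (mean depth, distance) training pairs are synthetic, so this is only an illustration of the learned mapping, not the patented model.

```python
import numpy as np

# Step-2 sketch: learn a mapping from a depth-picture feature (here, the
# mean normalized depth value) to the person's distance from the camera.
# A least-squares line stands in for the SVM regressor of the patent;
# the training pairs below are synthetic and noiseless.

rng = np.random.default_rng(0)
mean_depth = rng.uniform(0.1, 0.9, size=50)   # one feature per picture
distance_m = 4.0 * mean_depth + 0.5           # synthetic ground truth

# Fit distance = w * feature + b by least squares.
A = np.column_stack([mean_depth, np.ones_like(mean_depth)])
w, b = np.linalg.lstsq(A, distance_m, rcond=None)[0]

predict = lambda f: w * f + b
est = predict(0.5)   # estimated distance for a picture with mean depth 0.5
```

In the online phase (steps 3 and 4) only `predict` is evaluated, which is why the location-estimation time overhead is small once training is done offline.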
Preferably, steps 1 and 2 constitute the offline phase, and steps 3 and 4 the online phase.
Preferably, the face detection algorithm is the cascade classifier (Cascade Classifier) algorithm.
A person identification and localization system based on a binocular camera comprises a person identification system and a personnel localization system, characterized in that each process in turn comprises two phases, namely an offline phase and an online phase.
(1) Person identification process:
Function of this module: first call the face detection module; after a face is detected, crop the detected face image and input it into the trained personnel target identification classification model, which automatically identifies the detected person and outputs the recognition result. The CNN-based personnel target identification classification model is divided into two phases: an offline phase and an online phase.
Offline phase: shoot face image information of the personnel to be identified with the binocular camera, and build a (personnel target label, face picture) database. Perform offline classification learning on this database with a convolutional neural network to obtain the personnel target identification classification model.
Online phase: after the binocular camera takes a picture of a person, use the face detection algorithm (the cascade classifier algorithm), crop the detected face image, and apply the personnel target identification classification model to identify the person.
(2) Personnel localization process:
The SVM-based personnel localization regression model is divided into two phases: an offline phase and an online phase.
Offline phase: record the position information of each shooting point, extract the depth information of the captured pictures from the depth pictures taken by the binocular camera, and build a (position information, depth picture information) database. Train on this database with a support vector machine to learn the correspondence between position information and depth picture information, obtaining a regression model based on position information.
Online phase: collect video information with the binocular camera at an unknown position; after a person is detected, process the corresponding depth-information picture, normalize its pixel information, and apply the position-information regression model to obtain the target's specific distance from the camera, and hence the target position.
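The patent does not specify how the pixel information is normalized in the online phase; a common choice, shown here as an assumption, is min-max scaling of the depth picture to [0, 1] before it is fed to the regression model.

```python
import numpy as np

# Min-max normalization of a depth picture's pixel information to [0, 1].
# The patent states that pixel information is normalized but not how;
# min-max scaling is an assumed, commonly used choice.

def normalize_depth(depth_img):
    d = np.asarray(depth_img, dtype=float)
    lo, hi = d.min(), d.max()
    if hi == lo:                 # flat image: map everything to 0
        return np.zeros_like(d)
    return (d - lo) / (hi - lo)

patch = np.array([[100.0, 150.0], [200.0, 300.0]])
norm = normalize_depth(patch)    # values now span exactly [0, 1]
```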
The person identification and localization system based on a binocular camera, comprising a person identification system and a personnel localization system, is characterized in that the hardware of the present invention mainly comprises: an image acquisition device, an algorithm processing device, and a display device.
Image acquisition device: this project uses a MYNT EYE ("Xiaomi"/小觅) binocular camera for image acquisition, model S series (S1030-IR-120/MONO). Camera parameters: replaceable standard M12 lens, USB 3.0 interface. The device is shown in Fig. 6:
Fig. 6: MYNT EYE binocular camera
Algorithm processing device: the acquired image data requires further processing for person identification and localization; all simulation experiments are carried out on a computer. The video processing device is a computer with the following configuration: Intel(R) Core(TM) i5-7300HQ CPU @ 2.50 GHz, 7.89 GB of RAM.
System display device: the results of the algorithm processing need to be displayed and output; in this project this function is performed by the computer's display.
Compared with the prior art, the present invention has the following beneficial effects:
1. The present invention converts the personnel target identification problem into a classification problem based on a convolutional neural network. The online phase uses the personnel target identification classification model trained in the offline phase to identify the person, with the advantages of low recognition time overhead and high recognition accuracy.
2. The present invention converts the personnel position estimation problem into a regression problem based on a support vector machine. The online phase uses the depth information of the image and the position-information regression model to estimate the target position, with the advantages of low location-estimation time overhead and high positioning accuracy.
3. In the position-estimation process for personnel targets, the present invention realizes localization using the depth information of pictures, which reduces the complexity of image-based localization methods while improving positioning accuracy.
Detailed description of the invention:
Fig. 1 is the overview diagram of the invention.
Fig. 2 is the personnel target identification flowchart of the invention.
Fig. 3 is the offline learning flowchart of the convolutional neural network.
Fig. 4 is the personnel target location estimation flowchart of the invention.
Fig. 5 is the image depth information captured by the binocular camera.
Fig. 6 is the personnel location estimation result.
Fig. 7 is the personnel target identification result.
Fig. 8 is the personnel location estimation error simulation result.
Fig. 9 is the personnel target identification accuracy simulation result.
Specific embodiments:
Specific embodiments of the present invention are described in detail below. It should be understood that the protection scope of the present invention is not limited to the specific embodiments.
Unless otherwise expressly stated, throughout the specification and claims the term "comprise" and its variants such as "comprises" or "comprising" will be understood to include the stated elements or components without excluding other elements or components.
As shown in Fig. 1, the invention mainly comprises two aspects, namely person identification and personnel localization, each divided into two phases: an offline phase and an online phase. In the offline phase, face image information of the personnel to be identified is shot with the binocular camera to build a (personnel target label, face picture) database; the position information of each shooting point is recorded, the depth information of the captured pictures is extracted from the depth pictures taken by the binocular camera, and a (position information, depth picture information) database is built. Offline classification learning is performed on the (personnel target label, face picture) database with a convolutional neural network to obtain the personnel target identification classification model. The (position information, depth picture information) database is trained with a support vector machine to learn the correspondence between position information and depth information, obtaining a regression model based on position information. In the online phase, after the binocular camera takes a picture of a person, the face detection algorithm is applied, the detected face image is processed, and the personnel target identification classification model identifies the person. The corresponding depth-information picture is processed and input into the position-information regression model to obtain the target's specific distance from the camera, and hence the target position.
As shown in Fig. 2, in the offline phase the present invention collects face data and performs offline classification learning on the (personnel target label, face picture) database with a convolutional neural network to obtain the personnel target identification classification model. In the online phase, when the PC detects face data, the collected face data is preprocessed and then passed to the trained personnel target identification classification model to identify the person.
As shown in Fig. 3, this is the offline learning flowchart of the personnel target identification classification model. Face image information of the personnel to be identified is shot with the binocular camera to build a (personnel target label, face picture) database; after preprocessing, the data is input into the convolutional neural network for training, yielding the personnel target identification classification model. The convolutional neural network used here applies four convolutional layers, three pooling layers, and one fully connected layer.
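The layer counts stated above (four convolutional layers, three pooling layers, one fully connected layer) can be shape-traced as below. The kernel sizes, strides, interleaving order, and the 64x64 input resolution are illustrative assumptions not given in the patent.

```python
# Trace the feature-map size through a CNN with four 3x3 convolutions
# (stride 1, no padding), three 2x2 max-pool layers, and one fully
# connected layer. Layer counts follow the description; kernel sizes,
# strides, and the 64x64 grayscale input are illustrative assumptions.

def conv_out(size, kernel=3, stride=1, pad=0):
    return (size + 2 * pad - kernel) // stride + 1

def pool_out(size, kernel=2, stride=2):
    return (size - kernel) // stride + 1

size = 64                                    # assumed face-crop resolution
layers = ["conv", "pool", "conv", "pool", "conv", "pool", "conv"]
for layer in layers:
    size = conv_out(size) if layer == "conv" else pool_out(size)

fc_inputs = size * size * 64                 # assuming 64 final feature maps
# 'size' is the final spatial resolution feeding the fully connected layer.
```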
As shown in Fig. 4, in the offline phase the present invention records the position information of each shooting point, extracts the depth information of the captured pictures from the depth pictures taken by the binocular camera using image-processing techniques in OpenCV, and builds a (position information, depth picture information) database. The database is trained with a support vector machine to learn the correspondence between position information and depth picture information, obtaining a regression model based on position information. In the online phase, video information is collected with the binocular camera at an unknown position; after a person is detected, the corresponding depth-information picture is processed, its pixel information is normalized, and the position-information regression model yields the target's specific distance from the camera, and hence the target position.
As shown in Fig. 5, this is an image depth information picture captured with the binocular camera used by the present invention.
As shown in Fig. 6, this is the personnel position estimation result on the PC side, displaying the target's distance from the camera.
As shown in Fig. 7, this is the result of person identification and position estimation on the PC side, displaying the identification result for the target person and the target's distance from the camera.
As shown in Fig. 8, with the maximum number of training samples, the mean distance-estimation error of the present invention is no more than 5 cm.
As shown in Fig. 9, with the maximum number of training samples, the person identification accuracy of the present invention reaches 91%.
The foregoing descriptions of specific exemplary embodiments of the present invention are for purposes of illustration and description. They are not intended to limit the invention to the precise forms disclosed, and obviously many modifications and variations are possible in light of the above teachings. The exemplary embodiments were chosen and described in order to explain the specific principles of the invention and their practical application, thereby enabling those skilled in the art to realize and utilize various exemplary embodiments of the invention as well as various alternatives and modifications thereof. The scope of the invention is intended to be defined by the claims and their equivalents.
Claims (7)
1. A target identification and localization method for a binocular camera, characterized in that the method comprises the following steps:
Step 1: using a face-picture training database, perform classification learning based on a convolutional neural network (CNN) to obtain a classification model based on face recognition;
Step 2: using the depth information of pictures, perform regression learning based on a support vector machine (SVM) to obtain a distance regression function that relates picture depth information to the distance between the target and the camera;
Step 3: photograph the target with the camera, and pass the captured face photo through the face classification model to identify the personnel target;
Step 4: at the same time, use the depth information of the captured picture to compute the target's distance from the camera, realizing personnel localization.
2. The target identification and localization method for a binocular camera according to claim 1, characterized in that the method is specifically:
Step 1: shoot face image information of the personnel to be identified with the binocular camera, and build a database of personnel target labels and face pictures; perform offline classification learning on this database with a convolutional neural network to obtain a personnel target identification classification model;
Step 2: record the position information of each shooting point, extract the depth information of the captured pictures from the depth pictures taken by the binocular camera using image-processing techniques in OpenCV, and build a database of position information and depth picture information; train on this database with a support vector machine to learn the correspondence between position information and depth picture information, obtaining a regression model based on position information;
Step 3: capture the target's motion information with the binocular camera; after the binocular camera takes a picture of a person, use a face detection algorithm, crop the detected face image, and input it into the personnel target identification classification model of step 1 to identify the person;
Step 4: after the system detects the target, automatically process the collected depth picture information of the target and input it into the position-information regression model of step 2, obtaining the target's specific distance from the camera and hence the target position, realizing personnel localization.
3. The target identification and localization method for a binocular camera according to claim 1, characterized in that steps 1 and 2 constitute the offline phase, and steps 3 and 4 the online phase.
4. The target identification and localization method for a binocular camera according to claim 1, characterized in that the face detection algorithm is the cascade classifier (Cascade Classifier) algorithm.
5. The target identification and localization method for a binocular camera according to claim 3, characterized in that the offline and online phases are specifically:
(1) Person identification process:
Function of this module: first call the face detection module; after a face is detected, crop the detected face image and input it into the trained personnel target identification classification model, which automatically identifies the detected person and outputs the recognition result; the CNN-based personnel target identification classification model is divided into two phases: an offline phase and an online phase;
Offline phase: shoot face image information of the personnel to be identified with the binocular camera, and build a database of personnel target labels and face pictures; perform offline classification learning on this database with a convolutional neural network to obtain the personnel target identification classification model;
Online phase: after the binocular camera takes a picture of a person, use the face detection algorithm, crop the detected face image, and apply the personnel target identification classification model to identify the person;
(2) Personnel localization process:
The SVM-based personnel localization regression model is divided into two phases: an offline phase and an online phase.
6. Offline phase: record the position information of each shooting point, extract the depth information of the captured pictures from the depth pictures taken by the binocular camera, and build a database of position information and depth picture information; train on this database with a support vector machine to learn the correspondence between position information and depth picture information, obtaining a regression model based on position information;
Online phase: collect video information with the binocular camera at an unknown position; after a person is detected, process the corresponding depth-information picture, normalize its pixel information, and apply the position-information regression model to obtain the target's specific distance from the camera, and hence the target position.
7. A person identification and localization system based on a binocular camera, comprising a person identification system and a personnel localization system, characterized in that the hardware mainly comprises: an image acquisition device, an algorithm processing device, and a display device;
Image acquisition device: image acquisition is performed with a MYNT EYE (小觅) binocular camera, model S series; camera parameters: replaceable standard M12 lens, USB 3.0 interface;
Algorithm processing device: the acquired image data requires further processing for person identification and localization; all simulation experiments are carried out on a computer with the following configuration: Intel Core i5-7300HQ CPU @ 2.50 GHz, 7.89 GB of RAM;
System display device: the results of the algorithm processing need to be displayed and output; this function is performed by the computer's display.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201910625272.3A CN110458025B (en) | 2019-07-11 | 2019-07-11 | Target identification and positioning method based on binocular camera |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201910625272.3A CN110458025B (en) | 2019-07-11 | 2019-07-11 | Target identification and positioning method based on binocular camera |
Publications (2)
Publication Number | Publication Date |
---|---|
CN110458025A true CN110458025A (en) | 2019-11-15 |
CN110458025B CN110458025B (en) | 2022-10-14 |
Family
ID=68482685
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201910625272.3A Active CN110458025B (en) | 2019-07-11 | 2019-07-11 | Target identification and positioning method based on binocular camera |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN110458025B (en) |
Cited By (7)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN111461092A (en) * | 2020-06-19 | 2020-07-28 | 支付宝(杭州)信息技术有限公司 | Method, device and equipment for brushing face, measuring temperature and checking body |
CN111462227A (en) * | 2020-03-27 | 2020-07-28 | 海信集团有限公司 | Indoor personnel positioning device and method |
CN111476126A (en) * | 2020-03-27 | 2020-07-31 | 海信集团有限公司 | Indoor positioning method and system and intelligent equipment |
CN112153736A (en) * | 2020-09-14 | 2020-12-29 | 南京邮电大学 | Personnel action identification and position estimation method based on channel state information |
CN112164111A (en) * | 2020-09-10 | 2021-01-01 | 南京邮电大学 | Indoor positioning method based on image similarity and BPNN regression learning |
CN112184705A (en) * | 2020-10-28 | 2021-01-05 | 成都智数医联科技有限公司 | Human body acupuncture point identification, positioning and application system based on computer vision technology |
CN113705376A (en) * | 2021-08-11 | 2021-11-26 | 中国科学院信息工程研究所 | Personnel positioning method and system based on RFID and camera |
Citations (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN107066942A (en) * | 2017-03-03 | 2017-08-18 | 上海斐讯数据通信技术有限公司 | A kind of living body faces recognition methods and system |
CN107145546A (en) * | 2017-04-26 | 2017-09-08 | 北京环境特性研究所 | Monitor video personnel's fuzzy retrieval method based on deep learning |
CN107844744A (en) * | 2017-10-09 | 2018-03-27 | 平安科技(深圳)有限公司 | With reference to the face identification method, device and storage medium of depth information |
CN108038455A (en) * | 2017-12-19 | 2018-05-15 | 中国科学院自动化研究所 | Bionic machine peacock image-recognizing method based on deep learning |
Application Events (1)
- 2019-07-11: CN CN201910625272.3A patent/CN110458025B/en, active
Patent Citations (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN107066942A (en) * | 2017-03-03 | 2017-08-18 | 上海斐讯数据通信技术有限公司 | A kind of living body faces recognition methods and system |
CN107145546A (en) * | 2017-04-26 | 2017-09-08 | 北京环境特性研究所 | Monitor video personnel's fuzzy retrieval method based on deep learning |
CN107844744A (en) * | 2017-10-09 | 2018-03-27 | 平安科技(深圳)有限公司 | With reference to the face identification method, device and storage medium of depth information |
CN108038455A (en) * | 2017-12-19 | 2018-05-15 | 中国科学院自动化研究所 | Bionic machine peacock image-recognizing method based on deep learning |
Cited By (10)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN111462227A (en) * | 2020-03-27 | 2020-07-28 | 海信集团有限公司 | Indoor personnel positioning device and method |
CN111476126A (en) * | 2020-03-27 | 2020-07-31 | 海信集团有限公司 | Indoor positioning method and system and intelligent equipment |
CN111476126B (en) * | 2020-03-27 | 2024-02-23 | 海信集团有限公司 | Indoor positioning method, system and intelligent device |
CN111461092A (en) * | 2020-06-19 | 2020-07-28 | 支付宝(杭州)信息技术有限公司 | Method, device and equipment for brushing face, measuring temperature and checking body |
CN112164111A (en) * | 2020-09-10 | 2021-01-01 | 南京邮电大学 | Indoor positioning method based on image similarity and BPNN regression learning |
CN112164111B (en) * | 2020-09-10 | 2022-09-06 | 南京邮电大学 | Indoor positioning method based on image similarity and BPNN regression learning |
CN112153736A (en) * | 2020-09-14 | 2020-12-29 | 南京邮电大学 | Personnel action identification and position estimation method based on channel state information |
CN112184705A (en) * | 2020-10-28 | 2021-01-05 | 成都智数医联科技有限公司 | Human body acupuncture point identification, positioning and application system based on computer vision technology |
CN113705376A (en) * | 2021-08-11 | 2021-11-26 | 中国科学院信息工程研究所 | Personnel positioning method and system based on RFID and camera |
CN113705376B (en) * | 2021-08-11 | 2024-02-06 | 中国科学院信息工程研究所 | Personnel positioning method and system based on RFID and camera |
Also Published As
Publication number | Publication date |
---|---|
CN110458025B (en) | 2022-10-14 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN110458025A (en) | A kind of personal identification and localization method based on binocular camera | |
CN109506658B (en) | Robot autonomous positioning method and system | |
CN107545302B (en) | Eye direction calculation method for combination of left eye image and right eye image of human eye | |
CN110135249B (en) | Human behavior identification method based on time attention mechanism and LSTM (least Square TM) | |
CN108109174A (en) | A kind of robot monocular bootstrap technique sorted at random for part at random and system | |
CN107660039B (en) | A kind of lamp control system of identification dynamic gesture | |
CN109048926A (en) | A kind of intelligent robot obstacle avoidance system and method based on stereoscopic vision | |
CN108573221A (en) | A kind of robot target part conspicuousness detection method of view-based access control model | |
CN109598242B (en) | Living body detection method | |
CN110443898A (en) | A kind of AR intelligent terminal target identification system and method based on deep learning | |
CN110969644B (en) | Personnel track tracking method, device and system | |
Choi et al. | Human body orientation estimation using convolutional neural network | |
Chiang et al. | A stereo vision-based self-localization system | |
CN103162682A (en) | Indoor path navigation method based on mixed reality | |
CN110414381A (en) | Tracing type face identification system | |
CN112207821B (en) | Target searching method of visual robot and robot | |
US11361534B2 (en) | Method for glass detection in real scenes | |
CN109409250A (en) | A kind of across the video camera pedestrian of no overlap ken recognition methods again based on deep learning | |
CN110276251A (en) | A kind of image-recognizing method, device, equipment and storage medium | |
CN108259764A (en) | Video camera, image processing method and device applied to video camera | |
Li et al. | Deep-trained illumination-robust precision positioning for real-time manipulation of embedded objects | |
CN111832542A (en) | Three-eye visual identification and positioning method and device | |
CN109492513B (en) | Face space duplication eliminating method for light field monitoring | |
CN113449566A (en) | Intelligent image tracking method and system for low-speed small target in human-in-loop | |
Wu et al. | Multi-video temporal synchronization by matching pose features of shared moving subjects |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||