CN105700402A - Personnel detection-based embedded control system, device and method - Google Patents

Personnel detection-based embedded control system, device and method

Info

Publication number
CN105700402A
CN105700402A (application CN201610133706.4A)
Authority
CN
China
Prior art keywords
personnel
image
model
embedded
image sequence
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN201610133706.4A
Other languages
Chinese (zh)
Other versions
CN105700402B (en)
Inventor
Yang Ming (杨铭)
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Nanjing Horizon Robotics Technology Co Ltd
Beijing Horizon Robotics Technology Research and Development Co Ltd
Original Assignee
Beijing Horizon Robotics Technology Research and Development Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Beijing Horizon Robotics Technology Research and Development Co Ltd filed Critical Beijing Horizon Robotics Technology Research and Development Co Ltd
Priority to CN201610133706.4A priority Critical patent/CN105700402B/en
Publication of CN105700402A publication Critical patent/CN105700402A/en
Application granted granted Critical
Publication of CN105700402B publication Critical patent/CN105700402B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • G PHYSICS
    • G05 CONTROLLING; REGULATING
    • G05B CONTROL OR REGULATING SYSTEMS IN GENERAL; FUNCTIONAL ELEMENTS OF SUCH SYSTEMS; MONITORING OR TESTING ARRANGEMENTS FOR SUCH SYSTEMS OR ELEMENTS
    • G05B 19/00 Programme-control systems
    • G05B 19/02 Programme-control systems electric
    • G05B 19/04 Programme control other than numerical control, i.e. in sequence controllers or logic controllers
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 3/00 Computing arrangements based on biological models
    • G06N 3/02 Neural networks
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 20/00 Scenes; Scene-specific elements
    • G06V 20/50 Context or environment of the image
    • G06V 20/52 Surveillance or monitoring of activities, e.g. for recognising suspicious objects
    • G06V 20/53 Recognition of crowd images, e.g. recognition of crowd congestion

Abstract

The invention discloses an embedded control system, device, and method based on person detection. The system comprises: a monocular camera for acquiring an image sequence of the current scene and sending the image sequence to a person detection unit; the person detection unit, connected with the monocular camera, for detecting, through image recognition with a multi-layer deep neural network, whether a person is present in the scene represented by the image sequence and the position information of any detected person, and sending the detection result to a device control unit; and the device control unit, connected with the person detection unit, for generating an operation instruction according to the detection result and a preset control strategy and controlling an embedded smart device to execute the operation instruction. The technical scheme of the invention is applicable to the field of smart appliances and household equipment, such as smart air conditioners and home cameras. By applying multi-layer deep neural network image recognition to the image sequence acquired by the monocular camera, a person can be detected quickly and accurately and the device controlled and adjusted accordingly, at a relatively low hardware cost.

Description

Embedded control system, device, and method based on person detection
Technical field
The present application relates to the technical field of embedded smart devices, and in particular to an embedded control system, device, and method based on person detection.
Background technology
Current embedded systems are all expected to create innovative user experiences on devices such as home appliances, for example realizing fast and accurate person sensing and adaptively adjusting control according to the user's position and distance. The technical difficulties are as follows:
First, it is difficult to detect persons in various different postures quickly and accurately;
Second, the cameras adopted in current hardware solutions are relatively costly. For example, some game consoles and home-appliance solutions adopt a depth camera such as Kinect, or a binocular camera system, to obtain depth or stereoscopic information of a person for detection and recognition; such cameras are expensive and consume considerable computing resources. Image processing algorithms based on a monocular camera, on the other hand, usually need to run multiple complex models to detect persons in different postures and under occlusion, which occupies a large amount of memory and has very high computational complexity.
Summary of the invention
In view of the above defects or deficiencies in the prior art, it is desirable to provide an embedded control system, device, and method based on person detection that can detect persons in different postures quickly and accurately while keeping the overall hardware cost relatively low.
In a first aspect, the present invention provides an embedded control system based on person detection, the system comprising:
a monocular camera, for acquiring an image sequence of the current scene and sending the acquired image sequence to a person detection unit;
the person detection unit, connected with the monocular camera, for detecting, through image recognition with a multi-layer deep neural network, whether a person is present in the scene represented by the image sequence and the position information of any detected person, and sending the detection result to a device control unit;
the device control unit, connected with the person detection unit, for generating an operation instruction according to the detection result and a preset control strategy, and controlling the embedded smart device to execute the operation instruction.
In a second aspect, the present invention provides an embedded smart device in which the above embedded control system based on person detection is arranged.
In a third aspect, the present invention provides an embedded control method based on person detection, suitable for the above system or device, the method comprising:
S10: acquiring an image sequence of the current scene;
S30: detecting, through image recognition with a multi-layer deep neural network, whether a person is present in the scene represented by the image sequence and the position information of any detected person, to obtain a detection result;
S50: generating an operation instruction according to the detection result and a preset control strategy, and controlling the embedded smart device to execute the operation instruction.
The embedded control system, device, and method based on person detection provided by many embodiments of the present invention apply image recognition with a multi-layer deep neural network to the image sequence acquired by a monocular camera, generate an operation instruction according to the detection result and a preset control strategy, and finally control the embedded smart device to execute the operation instruction, thereby achieving fast and accurate person sensing and device control and adjustment at a relatively low hardware cost.
The embedded control system, device, and method based on person detection provided by some embodiments of the present invention further detect persons in different postures and at different angles by means of a multi-stage detection convolutional neural network model, and classify person models of different postures and angles by means of a multi-model classification convolutional neural network, thereby achieving fast and accurate detection of persons in various different postures.
The embedded control system, device, and method based on person detection provided by some embodiments of the present invention further down-sample the acquired images, thereby significantly relieving the computational load on the system with virtually no effect on the accuracy of the detection result.
The embedded control system, device, and method based on person detection provided by some embodiments of the present invention further pre-set multiple operation modes and control strategies in one-to-one correspondence, so that the system can automatically apply different strategies to flexibly control the device in its different operation modes.
Brief description of the drawings
Other features, objects, and advantages will become more apparent by reading the following detailed description of non-limiting embodiments, made with reference to the accompanying drawings:
Fig. 1 is a structural schematic diagram of an embedded control system based on person detection in an embodiment of the present invention.
Fig. 2 is a schematic diagram of the workflow of the detection subunit in an embodiment of the present invention.
Fig. 3 is a schematic diagram of person models in different postures and at different angles in an embodiment of the present invention.
Fig. 4 is a flow chart of an embedded control method based on person detection in an embodiment of the present invention.
Fig. 5 is a flow chart of step S30 in a preferred implementation of the embedded control method shown in Fig. 4.
Fig. 6 is a flow chart of a preferred implementation of step S30 shown in Fig. 5.
Description of reference numerals:
10 embedded control system
30 embedded smart device
11 monocular camera
13 person detection unit
15 device control unit
51 head model
52 upper-body model
53 whole-body model
Detailed description of the invention
The present application will be described in further detail below with reference to the accompanying drawings and embodiments. It should be understood that the specific embodiments described herein are only used to explain the related invention, rather than to limit the invention. It should also be noted that, for ease of description, only the parts related to the invention are shown in the accompanying drawings.
It should be noted that the embodiments in the present application and the features in the embodiments can be combined with each other as long as they do not conflict. The present application will be described in detail below with reference to the accompanying drawings and in conjunction with the embodiments.
Fig. 1 is a structural schematic diagram of the embedded control system based on person detection provided by an embodiment of the present invention.
As shown in Fig. 1, in this embodiment, the embedded control system 10 based on person detection provided by the present invention is arranged in an embedded smart device 30, and the system includes:
a monocular camera 11, for acquiring an image sequence of the current scene and sending the acquired image sequence to a person detection unit 13;
the person detection unit 13, connected with the monocular camera 11, for detecting, through image recognition with a multi-layer deep neural network, whether a person is present in the scene represented by the image sequence and the position information of any detected person, and sending the detection result to a device control unit 15;
the device control unit 15, connected with the person detection unit 13, for generating an operation instruction according to the detection result and a preset control strategy, and controlling the embedded smart device 30 to execute the operation instruction.
The above embodiment applies image recognition with a multi-layer deep neural network to the image sequence acquired by the monocular camera, generates an operation instruction according to the detection result and the preset control strategy, and finally controls the embedded smart device to execute the operation instruction, thereby achieving fast and accurate person sensing and device control and adjustment at a relatively low hardware cost.
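For illustration only, the following Python sketch shows how the three units above might cooperate in a minimal capture-detect-control loop. The interfaces `grab_frame`, `detect_persons`, `make_instruction`, and `execute` are hypothetical placeholders and do not appear in the patent itself.

```python
import time

def control_loop(camera, detector, device, policy, interval_s=0.2):
    """Minimal capture -> detect -> control loop (illustrative sketch only).

    `camera`, `detector`, `policy`, and `device` stand in for the monocular
    camera 11, the person detection unit 13, the preset control strategy,
    and the device control unit 15 / embedded smart device 30.
    """
    while True:
        frame = camera.grab_frame()                          # one image of the sequence
        detections = detector.detect_persons(frame)          # presence + position of persons
        instruction = policy.make_instruction(detections)    # apply the preset control strategy
        device.execute(instruction)                          # device carries out the instruction
        time.sleep(interval_s)                               # pace the loop
```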
In a preferred embodiment, the person detection unit 13 includes a detection subunit and a classification subunit.
The detection subunit is used to perform a sliding-window search on each image of the image sequence to obtain candidate regions that may contain a partial or full person model. The sliding-window search consists of extracting, at intervals of a predetermined number of pixels, an image block of a preset size from the image, and inputting the image block into a multi-stage detection convolutional neural network model to detect whether it contains a partial or full person model. The multi-stage detection convolutional neural network model is a convolutional neural network model shared by all person models.
The classification subunit is used to input the candidate regions into a multi-model classification convolutional neural network, which accurately determines the type of person model contained in each candidate region and classifies it accordingly; to merge the classified regions according to the type of the person models and their relative sizes and positions in the image, obtaining a detection result in units of persons; and to calculate the position information of the detected persons, such as the distance and relative angle to the camera, according to calibration information of the monocular camera obtained in advance, such as the camera mounting height and the intrinsic and extrinsic parameter matrices.
Specifically, taking a 720p high-definition monocular camera as an example, the resolution of each image in the image sequence is 1280*720, and the detection subunit extracts a 40*40 image block every 3 pixels in the image. The preset multi-stage detection convolutional neural network model simultaneously detects 3 partial person models and 1 full-body person model; the detection subunit inputs each image block into the multi-stage detection convolutional neural network model to detect whether it contains a person model, and if so, the image block is a candidate region.
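A minimal sketch of such a sliding-window search, using the example parameters above (1280*720 frame, 40*40 blocks, 3-pixel stride), might look as follows; `detect_cnn` is a placeholder for the multi-stage detection convolutional neural network model and is assumed to return True when a block contains a partial or full person model.

```python
import numpy as np

def sliding_window_candidates(image, detect_cnn, block=40, stride=3):
    """Extract fixed-size blocks on a regular grid and keep those that the
    detection CNN flags as containing a person model (candidate regions)."""
    h, w = image.shape[:2]                        # e.g. 720 x 1280 for a 720p camera
    candidates = []
    for y in range(0, h - block + 1, stride):
        for x in range(0, w - block + 1, stride):
            patch = image[y:y + block, x:x + block]
            if detect_cnn(patch):                 # shared CNN over all person models
                candidates.append((x, y, block, block))
    return candidates
```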
The classification subunit inputs each candidate region into the multi-model classification convolutional neural network, which accurately determines whether the person model contained in the candidate region is a head model, an upper-body model, a lower-body model, or a whole-body model, and classifies it accordingly. The regions are then merged according to the type of the person models and their relative sizes and positions in the image, yielding a detection result in units of persons. For example, if model a is a head model, model b an upper-body model, model g a lower-body model, and model h a whole-body model, and the relative sizes and positions of models a, b, g, and h in image A match, it is confirmed that the detection result of image A contains person α. The classification subunit then calculates the position information of person α from the calibration information of the monocular camera obtained in advance and the position of person α in image A.
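How the position of a merged detection might be derived from the camera calibration is sketched below under a simple pinhole-camera assumption: the distance follows from the apparent height of the whole-body region, and the horizontal angle from the offset of the region centre from the image centre. The 1.7 m reference height and the function names are illustrative assumptions, not values given in the patent.

```python
import math

def person_position(bbox, image_width, focal_px, ref_height_m=1.7):
    """Rough distance/angle estimate for one merged person detection.

    `bbox` is (x, y, w, h) in pixels for the whole-body region, `focal_px`
    is the focal length in pixels taken from the calibration obtained in
    advance, and `ref_height_m` is an assumed typical person height.
    """
    x, y, w, h = bbox
    distance_m = ref_height_m * focal_px / max(h, 1)     # pinhole similar triangles
    cx = x + w / 2.0
    angle_deg = math.degrees(math.atan2(cx - image_width / 2.0, focal_px))
    return distance_m, angle_deg
```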
In further embodiments, the camera resolution, the image resolution, the size of the image blocks, the pixel interval of the sliding-window search, and the types of person models in the above embodiment may all be given different parameter values according to actual requirements; as long as the detection subunit selects candidate regions that may contain a person model on the principle of a sliding-window search, the same technical effect can be achieved without departing from the design concept and the scope of protection of the above technical scheme.
The above embodiment further detects persons in different postures and at different angles by means of the multi-stage detection convolutional neural network model and classifies person models of different postures and angles by means of the multi-model classification convolutional neural network, thereby achieving fast and accurate detection of persons in various different postures.
Fig. 2 is a schematic diagram of the workflow of the detection subunit in an embodiment of the present invention.
As shown in Fig. 2, in a preferred embodiment, the detection subunit is further used to down-sample each image of the image sequence to obtain an image pyramid containing images of different resolutions, and to perform the sliding-window search on each resolution image.
Specifically, for a camera with a higher resolution, such as a 720p or 1080p high-definition camera or an even higher-resolution ultra-high-definition camera, the acquired images are large; accordingly, the number of image blocks extracted by the detection subunit increases substantially, the number of computations needed to judge candidate regions increases substantially, and the computational load on the system becomes very high. Reducing the image resolution by down-sampling greatly reduces the number of image blocks and, accordingly, the number of candidate-region computations, relieving the computational load on the system. At the same time, the judgement, classification, and merging performed by the classification subunit ensure a high accuracy of the detection result, so reducing the image resolution has a very small effect on the accuracy of the final detection result.
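A coarse sketch of such an image pyramid is shown below; it subsamples by simple slicing purely for brevity (a real pipeline would low-pass filter before down-sampling), and the minimum size is an illustrative assumption.

```python
import numpy as np

def build_pyramid(image, min_size=80):
    """Repeatedly halve the resolution to form an image pyramid; each level
    reduces the number of sliding-window blocks by roughly a factor of four."""
    pyramid = [image]
    while min(pyramid[-1].shape[:2]) // 2 >= min_size:
        pyramid.append(pyramid[-1][::2, ::2])    # nearest-neighbour subsampling
    return pyramid
```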
The above embodiment further down-samples the acquired images, thereby significantly relieving the computational load on the system with virtually no effect on the accuracy of the detection result.
Fig. 3 is a schematic diagram of person models in different postures and at different angles in an embodiment of the present invention.
In a preferred embodiment, the types of person models include partial or full person models in different postures and at different angles. As shown in Fig. 3, this embodiment covers the frontal standing posture (which, after rotation, also covers lying postures and the like) and the lateral sitting posture (which, after rotation, also covers kneeling postures and the like); further embodiments may also include models of more postures and angles.
In a preferred embodiment, the partial models include a head model or an upper-body model, and the full model includes a whole-body model. As also shown in Fig. 3, each person model respectively includes a head model 51, an upper/lower-body model 52, and a whole-body model 53. In further embodiments, different classification schemes may also be set according to actual requirements.
In a preferred embodiment, the position information of a person includes the distance and angle of the person relative to the monocular camera. Specifically, in this embodiment the monocular camera is chosen as the reference object for the person's position; in more specific embodiments, another position determined according to actual requirements, or a reference object with a fixed position relative to the monocular camera, may be chosen instead.
In an embodiment, the present invention also provides an embedded smart device in which any of the above embedded control systems based on person detection is arranged.
The device is preset with several operation modes in one-to-one correspondence with the control strategies, and the device control unit is further used to select the corresponding control strategy according to the current operation mode of the device, so as to generate the operation instruction.
Specifically, taking an embedded smart fan as an example, a rotation mode, an energy-saving rotation mode, a tracking mode, and a wind-avoiding mode can be preset as its operation modes. Accordingly, when the embedded control system detects a person in the scene, it selects a different control strategy for each mode and generates the corresponding operation instruction (a control-strategy sketch follows the list below):
in the rotation mode, the fan keeps blowing within a predetermined rotation angle range;
in the energy-saving rotation mode, the fan rotates within a predetermined rotation angle range, blowing at normal power when it faces a person and at reduced power when it turns away from the person;
in the tracking mode, the fan continuously tracks the person and blows towards them;
in the wind-avoiding mode, the fan continuously avoids blowing towards the person.
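As a hedged illustration of how a device control unit might map these modes and the detection result to an operation instruction, the sketch below uses a hypothetical dictionary-based instruction format; the angle threshold and field names are assumptions, not part of the patent.

```python
def fan_instruction(mode, detections, fan_angle_deg):
    """Pick a fan operation instruction from the current operation mode and the
    detection result; `detections` is a list of (distance_m, angle_deg) tuples."""
    if mode == "rotate":
        return {"sweep": True, "power": "normal"}
    if mode == "eco_rotate":
        facing = any(abs(angle - fan_angle_deg) < 15 for _, angle in detections)
        return {"sweep": True, "power": "normal" if facing else "low"}
    if mode == "track" and detections:
        _, angle = min(detections, key=lambda d: d[0])     # follow the nearest person
        return {"sweep": False, "aim_deg": angle, "power": "normal"}
    if mode == "avoid" and detections:
        mean_angle = sum(angle for _, angle in detections) / len(detections)
        return {"sweep": False, "aim_deg": -30.0 if mean_angle >= 0 else 30.0,
                "power": "normal"}                         # steer away from the group
    return {"sweep": True, "power": "normal"}              # default behaviour
```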
In further embodiments, different operation modes and control strategies can be set for different embedded smart devices according to actual requirements.
The above embodiment further pre-sets multiple operation modes and control strategies in one-to-one correspondence, so that the system can automatically apply different strategies to flexibly control the device in its different operation modes.
Taking the airflow of a smart air conditioner as an example, a blow-at-person mode can be designed: when no person is detected in the scene, the wind speed is reduced; when one person is detected, the wind direction is adjusted to blow towards that person; when several persons are detected, the sweep angle is adjusted so that the airflow sweeps across all of them. A wind-avoiding mode can additionally be designed, in which the wind speed is reduced and blowing directly at people is avoided according to the person detection result. Further, the distance of a person from the air conditioner can be estimated from the person's size in the image according to the detection result, and the airflow speed adjusted accordingly.
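The size-to-distance estimate and the resulting wind-speed choice could look like the sketch below; the distance thresholds and field names are illustrative assumptions.

```python
def ac_wind_speed(detections):
    """Choose an airflow speed from the nearest estimated person distance;
    `detections` is a list of (distance_m, angle_deg), empty when nobody is seen."""
    if not detections:
        return "low"                               # nobody in the scene: save energy
    nearest = min(d for d, _ in detections)
    if nearest < 1.5:
        return "low"
    if nearest < 3.0:
        return "medium"
    return "high"
```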
Taking a home surveillance camera as an example, after it is set to an away-from-home mode, once a person is detected in the scene, a notification or alarm is sent to the user's mobile phone, and at the same time a snapshot or a short video clip is saved and uploaded to a cloud storage server for record keeping.
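A sketch of that away-mode reaction is given below; `notifier.push`, `cloud.upload`, and `recorder.record` are hypothetical interfaces standing in for the phone notification service, the cloud storage server, and an optional short-clip recorder.

```python
import datetime

def on_person_detected(frame, notifier, cloud, recorder=None):
    """Away-mode handler for a home camera (illustrative sketch only)."""
    stamp = datetime.datetime.now().strftime("%Y%m%d-%H%M%S")
    notifier.push("Person detected at home (%s)" % stamp)        # alert the user's phone
    cloud.upload("snapshot-%s.jpg" % stamp, frame)               # keep a snapshot on record
    if recorder is not None:
        cloud.upload("clip-%s.mp4" % stamp, recorder.record(seconds=10))
```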
Fig. 4 is a flow chart of an embedded control method based on person detection in an embodiment of the present invention.
As shown in Fig. 4, in this embodiment, the embedded control method based on person detection provided by the present invention includes:
S10: acquiring an image sequence of the current scene;
S30: detecting, through image recognition with a multi-layer deep neural network, whether a person is present in the scene represented by the image sequence and the position information of any detected person, to obtain a detection result;
S50: generating an operation instruction according to the detection result and a preset control strategy, and controlling the embedded smart device to execute the operation instruction.
Fig. 5 is a flow chart of step S30 in a preferred implementation of the embedded control method shown in Fig. 4.
As shown in Fig. 5, in a preferred embodiment, step S30 specifically includes:
S33: performing a sliding-window search on each image of the image sequence to obtain candidate regions that may contain a partial or full person model;
S35: inputting the candidate regions into a multi-model classification convolutional neural network to accurately determine the type of person model contained in each candidate region and classify it accordingly;
S37: merging the classified regions according to the type of the person models and their relative sizes and positions in the image, to obtain a detection result in units of persons;
S39: calculating the position information of the detected persons according to calibration information of the monocular camera obtained in advance.
Fig. 6 is a flow chart of a preferred implementation of step S30 shown in Fig. 5.
As shown in Fig. 6, in a preferred embodiment, the following step is also included before step S33:
S31: down-sampling each image of the image sequence to obtain an image pyramid containing images of different resolutions.
Accordingly, step S33 includes: performing the sliding-window search on each resolution image of each image pyramid to obtain candidate regions that may contain a partial or full person model.
The above description is only a preferred embodiment of the present application and an explanation of the technical principles applied. Those skilled in the art will appreciate that the scope of the invention involved in the present application is not limited to technical solutions formed by the particular combination of the above technical features, and should also cover other technical solutions formed by any combination of the above technical features or their equivalents without departing from the described inventive concept, for example solutions in which the above features are replaced by (but not limited to) technical features with similar functions disclosed in the present application.

Claims (10)

1. An embedded control system based on person detection, arranged in an embedded smart device, characterized in that the system comprises:
a monocular camera, for acquiring an image sequence of the current scene and sending the acquired image sequence to a person detection unit;
the person detection unit, connected with the monocular camera, for detecting, through image recognition with a multi-layer deep neural network, whether a person is present in the scene represented by the image sequence and the position information of any detected person, and sending the detection result to a device control unit;
the device control unit, connected with the person detection unit, for generating an operation instruction according to the detection result and a preset control strategy, and controlling the embedded smart device to execute the operation instruction.
2. The embedded control system according to claim 1, characterized in that the person detection unit includes a detection subunit and a classification subunit;
the detection subunit is used to perform a sliding-window search on each image of the image sequence to obtain candidate regions that may contain a partial or full person model; the sliding-window search consists of extracting, at intervals of a predetermined number of pixels, an image block of a preset size from the image, and inputting the image block into a multi-stage detection convolutional neural network model to detect whether it contains a partial or full person model; the multi-stage detection convolutional neural network model is a convolutional neural network model shared by all person models;
the classification subunit is used to input the candidate regions into a multi-model classification convolutional neural network, which accurately determines the type of person model contained in each candidate region and classifies it accordingly, to merge the classified regions according to the type of the person models and their relative sizes and positions in the image to obtain a detection result in units of persons, and to calculate the position information of the detected persons according to calibration information of the monocular camera obtained in advance.
3. The embedded control system according to claim 2, characterized in that the detection subunit is further used to down-sample each image of the image sequence to obtain an image pyramid containing images of different resolutions, and to perform the sliding-window search on each resolution image.
4. The embedded control system according to claim 2, characterized in that the types of person models include partial or full person models in different postures and at different angles.
5. The embedded control system according to claim 4, characterized in that the partial models include a head model or an upper-body model, and the full model includes a whole-body model.
6. The embedded control system according to claim 2, characterized in that the position information of a person includes the distance and angle of the person relative to the monocular camera.
7. An embedded smart device, characterized in that the embedded control system based on person detection according to any one of claims 1-6 is arranged in the device;
the device is preset with several operation modes in one-to-one correspondence with the control strategies, and the device control unit is further used to select the corresponding control strategy according to the current operation mode of the device, so as to generate the operation instruction.
8. A control method suitable for the embedded control system based on person detection according to any one of claims 1-6 or the embedded smart device according to claim 7, characterized in that the method comprises:
S10: acquiring an image sequence of the current scene;
S30: detecting, through image recognition with a multi-layer deep neural network, whether a person is present in the scene represented by the image sequence and the position information of any detected person, to obtain a detection result;
S50: generating an operation instruction according to the detection result and a preset control strategy, and controlling the embedded smart device to execute the operation instruction.
9. The control method according to claim 8, characterized in that step S30 includes:
S33: performing a sliding-window search on each image of the image sequence to obtain candidate regions that may contain a partial or full person model;
S35: inputting the candidate regions into a multi-model classification convolutional neural network to accurately determine the type of person model contained in each candidate region and classify it accordingly;
S37: merging the classified regions according to the type of the person models and their relative sizes and positions in the image, to obtain a detection result in units of persons;
S39: calculating the position information of the detected persons according to calibration information of the monocular camera obtained in advance.
10. The control method according to claim 9, characterized in that the following step is also included before step S33:
S31: down-sampling each image of the image sequence to obtain an image pyramid containing images of different resolutions;
step S33 includes: performing the sliding-window search on each resolution image of each image pyramid to obtain candidate regions that may contain a partial or full person model.
CN201610133706.4A 2016-03-09 2016-03-09 Embedded control system, apparatus and method based on personnel's detection Active CN105700402B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201610133706.4A CN105700402B (en) 2016-03-09 2016-03-09 Embedded control system, apparatus and method based on personnel's detection

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201610133706.4A CN105700402B (en) 2016-03-09 2016-03-09 Embedded control system, apparatus and method based on personnel's detection

Publications (2)

Publication Number Publication Date
CN105700402A true CN105700402A (en) 2016-06-22
CN105700402B CN105700402B (en) 2018-03-27

Family

ID=56220194

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201610133706.4A Active CN105700402B (en) 2016-03-09 2016-03-09 Embedded control system, apparatus and method based on personnel's detection

Country Status (1)

Country Link
CN (1) CN105700402B (en)

Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106152408A (en) * 2016-07-06 2016-11-23 北京地平线机器人技术研发有限公司 Intelligent air conditioner controller, control method and air-conditioner
CN110345610A (en) * 2019-07-23 2019-10-18 珠海格力电器股份有限公司 The control method and device of air conditioner, air-conditioning equipment
CN111623490A (en) * 2020-05-29 2020-09-04 重庆大学 Intelligent wind sweeping system and method based on SLAM technology and human body recognition technology

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN1423228A (en) * 2002-10-17 2003-06-11 南开大学 Apparatus and method for identifying gazing direction of human eyes and its use
CN1838848A (en) * 2006-01-20 2006-09-27 大连水产学院 Classroom electricity-saving system based on image recognition technique
CN101969718A (en) * 2010-09-08 2011-02-09 无锡中星微电子有限公司 Intelligent lighting control system and control method
CN105100724A (en) * 2015-08-13 2015-11-25 电子科技大学 Remote and safe intelligent household monitoring method and device based on visual analysis
CN105353634A (en) * 2015-11-30 2016-02-24 北京地平线机器人技术研发有限公司 Household appliance and method for controlling operation by gesture recognition

Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN1423228A (en) * 2002-10-17 2003-06-11 南开大学 Apparatus and method for identifying gazing direction of human eyes and its use
CN1838848A (en) * 2006-01-20 2006-09-27 大连水产学院 Classroom electricity-saving system based on image recognition technique
CN101969718A (en) * 2010-09-08 2011-02-09 无锡中星微电子有限公司 Intelligent lighting control system and control method
CN105100724A (en) * 2015-08-13 2015-11-25 电子科技大学 Remote and safe intelligent household monitoring method and device based on visual analysis
CN105353634A (en) * 2015-11-30 2016-02-24 北京地平线机器人技术研发有限公司 Household appliance and method for controlling operation by gesture recognition

Cited By (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106152408A (en) * 2016-07-06 2016-11-23 北京地平线机器人技术研发有限公司 Intelligent air conditioner controller, control method and air-conditioner
CN106152408B (en) * 2016-07-06 2019-11-05 北京地平线机器人技术研发有限公司 Intelligent air conditioner controller, control method and air conditioner
CN110345610A (en) * 2019-07-23 2019-10-18 珠海格力电器股份有限公司 The control method and device of air conditioner, air-conditioning equipment
CN111623490A (en) * 2020-05-29 2020-09-04 重庆大学 Intelligent wind sweeping system and method based on SLAM technology and human body recognition technology

Also Published As

Publication number Publication date
CN105700402B (en) 2018-03-27

Similar Documents

Publication Publication Date Title
Shih A robust occupancy detection and tracking algorithm for the automatic monitoring and commissioning of a building
JP6295645B2 (en) Object detection method and object detection apparatus
EP3029604B1 (en) Area information estimating device, area information estimating method, and air conditioning apparatus
CN109076198B (en) Video-based object tracking occlusion detection system, method and equipment
CN108269269A (en) Method for tracking target and device
CN108805900B (en) Method and device for determining tracking target
Varcheie et al. Adaptive fuzzy particle filter tracker for a PTZ camera in an IP surveillance system
CN105276760A (en) Room information inferring apparatus, room information inferring method, and air conditioning apparatus
WO2019129255A1 (en) Target tracking method and device
EP2915333A1 (en) Depth map generation from a monoscopic image based on combined depth cues
CN108121332A (en) Indoor mobile robot positioner and method based on Quick Response Code
CN104574321A (en) Image correction method and device and video system
CN105678241B (en) A kind of cascade two dimensional image face pose estimation
Vosters et al. Background subtraction under sudden illumination changes
CN103593641B (en) Object detecting method and device based on stereo camera
CN105022999A (en) Man code company real-time acquisition system
CN105427345B (en) Three-dimensional stream of people's method of motion analysis based on camera projection matrix
CN109981972A (en) A kind of method for tracking target of robot, robot and storage medium
CN110264495A (en) A kind of method for tracking target and device
CN106682619A (en) Object tracking method and device
CN105700402A (en) Personnel detection-based embedded control system, device and method
Lu et al. Thermal Fault Diagnosis of Electrical Equipment in Substations Based on Image Fusion.
CN111240217B (en) State detection method and device, electronic equipment and storage medium
Bo et al. PhD forum: Illumination-robust foreground detection for multi-camera occupancy mapping
CN107194954A (en) The sportsman's method for tracing and device of multi-angle video

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
CB03 Change of inventor or designer information

Inventor after: Yang Ming

Inventor after: Huang Chang

Inventor after: Yu Dienan

Inventor after: Yu Kai

Inventor before: Yang Ming

COR Change of bibliographic data
GR01 Patent grant
GR01 Patent grant
TR01 Transfer of patent right

Effective date of registration: 20181101

Address after: 100085 No. 1 Shangdi Information Road, Haidian District, Beijing (No. 1-1, No. 1-2, Beijing Shichuang High-Tech Development Corporation), No. 02-114, 2-storey Building A, 1-1

Co-patentee after: Nanjing horizon Robot Technology Co., Ltd.

Patentee after: BEIJING HORIZON ROBOTICS TECHNOLOGY RESEARCH AND DEVELOPMENT CO., LTD.

Address before: 100085 No. 1 Shangdi Information Road, Haidian District, Beijing (No. 1-1, No. 1-2, Beijing Shichuang High-Tech Development Corporation) No. 02-114, 1-1, 2-storey Building A

Patentee before: BEIJING HORIZON ROBOTICS TECHNOLOGY RESEARCH AND DEVELOPMENT CO., LTD.

TR01 Transfer of patent right