CN105979230A - Monitoring method and device realized through images by use of robot - Google Patents

Monitoring method and device realized through images by use of robot

Info

Publication number
CN105979230A
Authority
CN
China
Prior art keywords
image
image information
profile
robot
target
Prior art date
Application number
CN201610516972.5A
Other languages
Chinese (zh)
Inventor
黄�俊
白艳君
朱孔斌
王立涛
Original Assignee
上海思依暄机器人科技股份有限公司
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 上海思依暄机器人科技股份有限公司 filed Critical 上海思依暄机器人科技股份有限公司
Priority to CN201610516972.5A priority Critical patent/CN105979230A/en
Publication of CN105979230A publication Critical patent/CN105979230A/en

Links

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING; COUNTING
    • G06KRECOGNITION OF DATA; PRESENTATION OF DATA; RECORD CARRIERS; HANDLING RECORD CARRIERS
    • G06K9/00Methods or arrangements for reading or recognising printed or written characters or for recognising patterns, e.g. fingerprints
    • G06K9/00624Recognising scenes, i.e. recognition of a whole field of perception; recognising scene-specific objects
    • G06K9/00664Recognising scenes such as could be captured by a camera operated by a pedestrian or robot, including objects at substantially different ranges from the camera
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N7/00Television systems
    • H04N7/18Closed circuit television systems, i.e. systems in which the signal is not broadcast
    • H04N7/183Closed circuit television systems, i.e. systems in which the signal is not broadcast for receiving images from a single remote source
    • H04N7/185Closed circuit television systems, i.e. systems in which the signal is not broadcast for receiving images from a single remote source from a mobile camera, e.g. for remote control

Abstract

The invention discloses a monitoring method and device realized through images by use of a robot. The method comprises the steps of: obtaining first image information and second image information of the same position, captured a preset time interval apart; performing image preprocessing on the first image information and the second image information; comparing the first image information with the second image information and extracting the differing part as third image information; and extracting the outline of a target in the third image information and identifying the target within the outline. A monitoring camera can be mounted on the head, face, or chest of a household smart robot, and the robot can move about the room. According to the method and device provided by the invention, after the user goes out or falls asleep, the robot can move toward the balcony or window according to a preset instruction and start a monitoring mode, preventing strangers from peeping into or intruding on the room and safeguarding the user's life and property.

Description

Method and device for a robot to perform monitoring through images

Technical field

The present invention relates to the technical field of monitoring, and in particular to a method and device for a robot to perform monitoring through images.

Background technology

Monitoring systems are among the most widely applied systems in the security field, and video monitoring is the mainstream monitoring technology today. In recent years, intelligent household monitoring robots have become an indispensable part of the security industry.

How to automatically and effectively perform monitoring using the image information acquired by a robot is a technical problem in this field that urgently needs to be solved.

Summary of the invention

The object of the present invention is to provide a method and device for a robot to perform monitoring through images, with the aim of using the image information collected by the robot to perform automatic and effective monitoring, preventing strangers from peeping or intruding, thereby safeguarding the user's life and property.

To solve the above technical problem, the present invention provides a method for a robot to perform monitoring through images, including:

obtaining first image information and second image information of the same position, captured a preset time interval apart;

performing image preprocessing on the first image information and the second image information;

comparing the first image information with the second image information, and extracting the differing part as third image information;

extracting the outline of a target in the third image information, and identifying the target within the outline.

Optionally, performing image preprocessing on the first image information and the second image information includes:

converting the first image information and the second image information from color images to grayscale images;

performing Gaussian blur processing on the grayscale images.

Optionally, before the outline of the target in the third image information is extracted, the method further includes:

comparing the brightness value of each pixel in the third image information with a preset brightness threshold, and when the brightness value is lower than the preset brightness threshold, filtering out the corresponding pixel to obtain a filtered image.

Optionally, before the outline of the target in the third image information is extracted, the method further includes:

performing a dilation operation on the filtered image.

Optionally, after the outline of the target in the third image information is extracted, the method further includes:

calculating the area of the outline;

comparing the area of the outline with a preset threshold, and when the area of the outline is greater than the preset threshold, determining that the corresponding target is a monitoring target.

The present invention also provides a device for a robot to perform monitoring through images, including:

an acquisition module, configured to obtain first image information and second image information of the same position, captured a preset time interval apart;

a preprocessing module, configured to perform image preprocessing on the first image information and the second image information;

an extraction module, configured to compare the first image information with the second image information and extract the differing part as third image information;

an outline extraction module, configured to extract the outline of a target in the third image information and identify the target within the outline.

Optionally, the preprocessing module includes:

a conversion unit, configured to convert the first image information and the second image information from color images to grayscale images;

a Gaussian blur processing unit, configured to perform Gaussian blur processing on the grayscale images.

Optionally, the device further includes:

a filtering module, configured to, before the outline of the target in the third image information is extracted, compare the brightness value of each pixel in the third image information with a preset brightness threshold, and when the brightness value is lower than the preset brightness threshold, filter out the corresponding pixel to obtain a filtered image.

Optionally, the device further includes:

a dilation module, configured to perform a dilation operation on the filtered image before the outline of the target in the third image information is extracted.

Optionally, the device further includes:

an outline screening module, configured to, after the outline of the target in the third image information is extracted, calculate the area of the outline, compare it with a preset threshold, and when the area of the outline is greater than the preset threshold, determine that the corresponding target is a monitoring target.

With the method and device for a robot to perform monitoring through images provided by the present invention, first image information and second image information of the same position, captured a preset time interval apart, are obtained; image preprocessing is performed on the first image information and the second image information; the two are compared and the differing part is extracted as third image information; and the outline of the target in the third image information is extracted and the target within it is identified. A monitoring camera can be mounted on the head, face, or chest of a household smart robot, which can move about the room. With the method and device provided by the present invention, after the owner goes out or falls asleep, the robot can, according to a preset instruction, move to a position facing the balcony or window and start a monitoring mode, preventing strangers from peeping or intruding and safeguarding the user's life and property.

Brief description of the drawings

To more clearly explain the technical solutions of the embodiments of the present invention or of the prior art, the drawings required in the description of the embodiments or of the prior art are briefly introduced below. Obviously, the drawings described below are only some embodiments of the present invention; for a person of ordinary skill in the art, other drawings can be obtained from them without creative effort.

Fig. 1 is a flow chart of one specific embodiment of the method for a robot to perform monitoring through images provided by the present invention;

Fig. 2 is a flow chart of another specific embodiment of the method for a robot to perform monitoring through images provided by the present invention;

Fig. 3 is a flow chart of a further specific embodiment of the method for a robot to perform monitoring through images provided by the present invention;

Fig. 4 is a structural block diagram of the device for a robot to perform monitoring through images provided by an embodiment of the present invention.

Detailed description of the invention

To enable those skilled in the art to better understand the solutions of the present invention, the present invention is described in further detail below in conjunction with the drawings and specific embodiments. Obviously, the described embodiments are only some, not all, of the embodiments of the present invention. Based on the embodiments of the present invention, all other embodiments obtained by a person of ordinary skill in the art without creative effort fall within the scope of protection of the present invention.

A flow chart of one specific embodiment of the method for a robot to perform monitoring through images provided by the present invention is shown in Fig. 1. The method includes:

Step S101: obtaining first image information and second image information of the same position, captured a preset time interval apart;

Specifically, when facing the balcony, a window, or another region to be monitored, the robot may first take a picture of the initial state as the first image information; after the preset time interval has elapsed, it takes another picture as the second image information.

Step S102: performing image preprocessing on the first image information and the second image information;

Image preprocessing turns the captured pictures, after algorithmic processing, into images that can be processed directly. This process may specifically include:

graying, which converts the captured images from color images to grayscale images; and denoising, which removes noise points from the images to improve the accuracy of subsequent image processing.
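The patent does not tie the graying step to any particular library or formula; a minimal numpy sketch, assuming the common ITU-R BT.601 luma weights (the weight values are an illustrative choice, not from the patent):

```python
import numpy as np

def to_grayscale(rgb):
    """Convert an H x W x 3 color image to a grayscale image using
    BT.601 luma weights (an assumed choice; the patent only states
    that the color image is converted to grayscale)."""
    weights = np.array([0.299, 0.587, 0.114])
    return (rgb.astype(np.float64) @ weights).astype(np.uint8)

# A tiny 1x2 "image": one pure-red pixel, one pure-white pixel.
img = np.array([[[255, 0, 0], [255, 255, 255]]], dtype=np.uint8)
gray = to_grayscale(img)  # white stays 255, red drops to 76
```

In this sketch the denoising half of the preprocessing is the Gaussian blur discussed later in the description.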

Step S103: comparing the first image information with the second image information, and extracting the differing part as third image information;

The first image information and the second image information are compared, and the content of the second image information that differs from the first is extracted; that is, the difference image serves as the third image information. For example, suppose both pictures are 1024*768-resolution images of a floor-to-ceiling window and balcony, taken by the robot from the same place at the same angle, and in the later picture a person and a puppy have appeared. Comparing the second image information with the first, a local region shows a human figure and the puppy beside it, so the result obtained in this step is a picture of the human figure and the puppy.
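The patent does not fix the comparison operator; one plausible realization of this step is a per-pixel absolute difference of the two (already grayed) frames:

```python
import numpy as np

def difference_image(first, second):
    """Per-pixel absolute difference of two equal-size grayscale
    frames: unchanged background cancels to 0, while regions where
    something appeared (a person, a dog) keep large values."""
    return np.abs(second.astype(np.int16) - first.astype(np.int16)).astype(np.uint8)

frame1 = np.full((4, 4), 120, dtype=np.uint8)   # initial-state picture
frame2 = frame1.copy()
frame2[1:3, 1:3] = 30                           # a dark figure enters the scene
third = difference_image(frame1, frame2)        # nonzero only where the figure is
```

The intermediate cast to int16 avoids uint8 wrap-around when the second frame is darker than the first.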

Step S104: extracting the outline of the target in the third image information, and identifying the target within the outline.

With the method for a robot to perform monitoring through images provided by the present invention, first image information and second image information of the same position, captured a preset time interval apart, are obtained; image preprocessing is performed on the first image information and the second image information; the two are compared and the differing part is extracted as third image information; and the outline of the target in the third image information is extracted and the target within it is identified. A monitoring camera can be mounted on the head, face, or chest of a household smart robot, which can move about the room. With the method provided by the present invention, after the owner goes out or falls asleep, the robot can, according to a preset instruction, move to a position facing the balcony or window and start a monitoring mode, preventing strangers from peeping or intruding and safeguarding the user's life and property.

On the basis of the above embodiment, in the method for a robot to perform monitoring through images provided by the present invention, the process of performing image preprocessing on the first image information and the second image information may specifically include:

converting the first image information and the second image information from color images to grayscale images;

performing Gaussian blur processing on the grayscale images.

The benefit of graying is that deeply colored objects are all converted into roughly the same range of grayscale values, avoiding the discrepancies that arise when different colors are compared by their color values. For example, one object may be navy blue and another dark gray: their values in the color domain are likely quite different, while their values in the grayscale domain are close, which permits further analysis and judgment later.

Gaussian blurring applies blur processing to the first image information and the second image information; this removes some fine-detail pixels and noise points, improving the accuracy of the subsequent comparison.
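A self-contained numpy sketch of the Gaussian blur step (kernel size and sigma are illustrative assumptions; the patent gives no parameters). The blur is applied separably, rows then columns, with reflected borders:

```python
import numpy as np

def gaussian_kernel(size=5, sigma=1.0):
    """1-D Gaussian kernel, normalized to sum to 1."""
    ax = np.arange(size) - size // 2
    k = np.exp(-(ax ** 2) / (2 * sigma ** 2))
    return k / k.sum()

def gaussian_blur(gray, size=5, sigma=1.0):
    """Separable Gaussian blur: convolve rows, then columns."""
    k = gaussian_kernel(size, sigma)
    pad = size // 2
    h, w = gray.shape
    img = gray.astype(np.float64)
    # horizontal pass over reflected-padded rows
    padded = np.pad(img, ((0, 0), (pad, pad)), mode="reflect")
    img = sum(k[i] * padded[:, i:i + w] for i in range(size))
    # vertical pass over reflected-padded columns
    padded = np.pad(img, ((pad, pad), (0, 0)), mode="reflect")
    img = sum(k[i] * padded[i:i + h, :] for i in range(size))
    return img.astype(np.uint8)

noisy = np.zeros((9, 9), dtype=np.uint8)
noisy[4, 4] = 255                 # an isolated noise spike
smoothed = gaussian_blur(noisy)   # the spike is spread out and attenuated
```

The separable form gives the same result as a full 2-D Gaussian kernel at lower cost, which is why it is the usual implementation choice.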

On the basis of the previous embodiment, the method for a robot to perform monitoring through images provided by the present invention further includes, before the outline of the target in the third image information is extracted: performing filtering and dilation operations on the third image information.

As shown in Fig. 2, a flow chart of another specific embodiment of the method for a robot to perform monitoring through images provided by the present invention, the method includes:

Step S201: obtaining first image information and second image information of the same position, captured a preset time interval apart;

Step S202: performing image preprocessing on the first image information and the second image information;

Step S203: comparing the first image information with the second image information, and extracting the differing part as third image information;

Step S204: comparing the brightness value of each pixel in the third image information with a preset brightness threshold, and when the brightness value is lower than the preset brightness threshold, filtering out the corresponding pixel to obtain a filtered image.

This process removes the parts with high (whitish) pixel values and keeps the deeply colored pixels, so that the parts where the two images differ strongly are retained while the parts with little difference are filtered out.
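Following the claim language (pixels whose brightness falls below the preset threshold are filtered out), step S204 can be sketched as a simple mask. The threshold value 100 is an assumed example; the patent leaves the threshold preset and unspecified:

```python
import numpy as np

def filter_by_brightness(img, threshold=100):
    """Zero out pixels whose brightness is below the preset threshold,
    keeping only the strongly differing parts of the difference image.
    The value 100 is illustrative, not taken from the patent."""
    out = img.copy()
    out[img < threshold] = 0
    return out

diff = np.array([[5, 200], [150, 40]], dtype=np.uint8)
filtered = filter_by_brightness(diff)   # weak pixels (5, 40) are removed
```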

Step S205: performing a dilation operation on the filtered image.

It should be pointed out that dilation is an operation of taking a local maximum. The dilation operation convolves the image (or a region A of the image) with a kernel (denoted B). The kernel can be of any shape and size, and it has a separately defined reference point called the anchor point. In most cases, the kernel is a small solid square or disk with the reference point at its center; the kernel can be regarded as a template or mask.

Dilation is precisely this local-maximum operation: as the kernel B is convolved over the image, the maximum pixel value in the region covered by B is computed and assigned to the pixel specified by the reference point. This gradually enlarges the highlighted regions in the image.

The dilation operation can filter out tiny speckles and amplify feature points, making the differing parts more distinct.
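The local-maximum dilation described above can be sketched directly in numpy with a solid square kernel (a 3x3 kernel is an assumed size):

```python
import numpy as np

def dilate(img, ksize=3):
    """Morphological dilation: each output pixel becomes the maximum of
    its ksize x ksize neighborhood (the local maximum described above),
    so bright regions grow and fragments of one target merge together."""
    pad = ksize // 2
    padded = np.pad(img, pad, mode="constant", constant_values=0)
    h, w = img.shape
    out = np.zeros_like(img)
    for dy in range(ksize):
        for dx in range(ksize):
            out = np.maximum(out, padded[dy:dy + h, dx:dx + w])
    return out

img = np.zeros((5, 5), dtype=np.uint8)
img[2, 2] = 255            # a single bright feature pixel
grown = dilate(img)        # the pixel expands into a 3x3 bright block
```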

Step S206: extracting the outline of the target in the third image information, and identifying the target within the outline.

The prior art mostly judges any object that appears in the image and cannot restrict its judgment to objects with larger outlines. For example, of two pictures taken one after the other, the later one shows a person and a puppy, and the prior art would identify both. In practice, however, an object with a large outline is more likely to be a person, while objects with smaller outlines, such as the puppy, can be ignored.

In view of this, on the basis of any of the above embodiments, the method for a robot to perform monitoring through images provided by the present invention may further include, after the outline of the target in the third image information is extracted: a process of filtering out outlines with small areas.

As shown in Fig. 3, a flow chart of a further specific embodiment of the method for a robot to perform monitoring through images provided by the present invention, the method includes:

Step S301: obtaining first image information and second image information of the same position, captured a preset time interval apart;

Step S302: performing image preprocessing on the first image information and the second image information;

Step S303: comparing the first image information with the second image information, and extracting the differing part as third image information;

Step S304: extracting the outline of the target in the third image information, and identifying the target within the outline;

Step S305: calculating the area of the outline;

Step S306: comparing the area of the outline with a preset threshold, and when the area of the outline is greater than the preset threshold, determining that the corresponding target is a monitoring target.

In this embodiment, by calculating the areas of the outlines, those with large areas are retained as the outline-localization result and those with small areas are filtered out, finally yielding the large-outline objects in the monitored image.
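The patent does not name a contour-extraction algorithm, so as a stand-in sketch the area screening of steps S305/S306 is performed here on 4-connected components of the binarized difference image, with component pixel count standing in for outline area (both the representation and the threshold value are assumptions):

```python
import numpy as np
from collections import deque

def large_components(mask, min_area):
    """Label 4-connected foreground regions of a binary mask and keep
    only those whose pixel count exceeds min_area -- a stand-in for the
    contour-area screening of steps S305/S306."""
    h, w = mask.shape
    seen = np.zeros((h, w), dtype=bool)
    kept = []
    for y in range(h):
        for x in range(w):
            if mask[y, x] and not seen[y, x]:
                # breadth-first flood fill to collect one component
                comp, queue = [], deque([(y, x)])
                seen[y, x] = True
                while queue:
                    cy, cx = queue.popleft()
                    comp.append((cy, cx))
                    for ny, nx in ((cy-1, cx), (cy+1, cx), (cy, cx-1), (cy, cx+1)):
                        if 0 <= ny < h and 0 <= nx < w and mask[ny, nx] and not seen[ny, nx]:
                            seen[ny, nx] = True
                            queue.append((ny, nx))
                if len(comp) > min_area:
                    kept.append(comp)
    return kept

mask = np.zeros((8, 8), dtype=bool)
mask[1:5, 1:5] = True    # large blob: the "person", area 16
mask[6, 6] = True        # tiny blob: the "puppy", area 1
targets = large_components(mask, min_area=4)   # only the large blob survives
```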

The device for a robot to perform monitoring through images provided by an embodiment of the present invention is introduced below. The device described below and the method described above may be referred to in correspondence with each other.

Fig. 4 is a structural block diagram of the device for a robot to perform monitoring through images provided by an embodiment of the present invention. Referring to Fig. 4, the device may include:

an acquisition module 100, configured to obtain first image information and second image information of the same position, captured a preset time interval apart;

a preprocessing module 200, configured to perform image preprocessing on the first image information and the second image information;

an extraction module 300, configured to compare the first image information with the second image information and extract the differing part as third image information;

an outline extraction module 400, configured to extract the outline of a target in the third image information and identify the target within the outline.

With the device for a robot to perform monitoring through images provided by the present invention, first image information and second image information of the same position, captured a preset time interval apart, are obtained; image preprocessing is performed on the first image information and the second image information; the two are compared and the differing part is extracted as third image information; and the outline of the target in the third image information is extracted and the target within it is identified. A monitoring camera can be mounted on the head, face, or chest of a household smart robot, which can move about the room. With the device provided by the present invention, after the owner goes out or falls asleep, the robot can, according to a preset instruction, move to a position facing the balcony or window and start a monitoring mode, preventing strangers from peeping or intruding and safeguarding the user's life and property.

As a specific embodiment, in the device for a robot to perform monitoring through images provided by the present invention, the above preprocessing module 200 specifically includes:

a conversion unit, configured to convert the first image information and the second image information from color images to grayscale images;

a Gaussian blur processing unit, configured to perform Gaussian blur processing on the grayscale images.

As a specific embodiment, the device for a robot to perform monitoring through images provided by the present invention may further include:

a filtering module, configured to, before the outline of the target in the third image information is extracted, compare the brightness value of each pixel in the third image information with a preset brightness threshold, and when the brightness value is lower than the preset brightness threshold, filter out the corresponding pixel to obtain a filtered image.

As a specific embodiment, the device for a robot to perform monitoring through images provided by the present invention may further include:

a dilation module, configured to perform a dilation operation on the filtered image before the outline of the target in the third image information is extracted.

On the basis of any of the above embodiments, the device for a robot to perform monitoring through images provided by the present invention may further include:

an outline screening module, configured to, after the outline of the target in the third image information is extracted, calculate the area of the outline, compare it with a preset threshold, and when the area of the outline is greater than the preset threshold, determine that the corresponding target is a monitoring target.

The embodiments in this specification are described in a progressive manner; each embodiment focuses on its differences from the other embodiments, and the same or similar parts of the embodiments may be referred to one another. Since the device disclosed in the embodiments corresponds to the method disclosed in the embodiments, its description is relatively brief, and the relevant parts may refer to the description of the method.

Professionals will further appreciate that the units and algorithm steps of the examples described in conjunction with the embodiments disclosed herein can be implemented in electronic hardware, computer software, or a combination of the two. To clearly illustrate the interchangeability of hardware and software, the composition and steps of each example have been described above generally in terms of function. Whether these functions are performed in hardware or in software depends on the specific application and the design constraints of the technical solution. Professionals may use different methods to implement the described functions for each specific application, but such implementations should not be regarded as exceeding the scope of the present invention.

The steps of the methods or algorithms described in conjunction with the embodiments disclosed herein can be implemented directly in hardware, in a software module executed by a processor, or in a combination of the two. A software module may reside in random access memory (RAM), memory, read-only memory (ROM), electrically programmable ROM, electrically erasable ROM, registers, a hard disk, a removable disk, a CD-ROM, or any other form of storage medium known in the technical field.

The method and device for a robot to perform monitoring through images provided by the present invention have been described in detail above. Specific cases have been used herein to expound the principles and embodiments of the present invention; the description of the above embodiments is only intended to help understand the method of the present invention and its core idea. It should be pointed out that, for a person of ordinary skill in the art, several improvements and modifications can also be made to the present invention without departing from the principles of the present invention, and these improvements and modifications also fall within the scope of protection of the claims of the present invention.

Claims (10)

1. A method for a robot to perform monitoring through images, characterized by comprising:
obtaining first image information and second image information of the same position, captured a preset time interval apart;
performing image preprocessing on the first image information and the second image information;
comparing the first image information with the second image information, and extracting the differing part as third image information;
extracting the outline of a target in the third image information, and identifying the target within the outline.
2. The method for a robot to perform monitoring through images according to claim 1, characterized in that performing image preprocessing on the first image information and the second image information comprises:
converting the first image information and the second image information from color images to grayscale images;
performing Gaussian blur processing on the grayscale images.
3. The method for a robot to perform monitoring through images according to claim 2, characterized in that, before extracting the outline of the target in the third image information, the method further comprises:
comparing the brightness value of each pixel in the third image information with a preset brightness threshold, and when the brightness value is lower than the preset brightness threshold, filtering out the corresponding pixel to obtain a filtered image.
4. The method for a robot to perform monitoring through images according to claim 3, characterized in that, before extracting the outline of the target in the third image information, the method further comprises:
performing a dilation operation on the filtered image.
5. The method for a robot to perform monitoring through images according to any one of claims 1 to 4, characterized in that, after extracting the outline of the target in the third image information, the method further comprises:
calculating the area of the outline;
comparing the area of the outline with a preset threshold, and when the area of the outline is greater than the preset threshold, determining that the corresponding target is a monitoring target.
6. A device for a robot to perform monitoring through images, characterized by comprising:
an acquisition module, configured to obtain first image information and second image information of the same position, captured a preset time interval apart;
a preprocessing module, configured to perform image preprocessing on the first image information and the second image information;
an extraction module, configured to compare the first image information with the second image information and extract the differing part as third image information;
an outline extraction module, configured to extract the outline of a target in the third image information and identify the target within the outline.
7. The device for a robot to perform monitoring through images according to claim 6, characterized in that the preprocessing module comprises:
a conversion unit, configured to convert the first image information and the second image information from color images to grayscale images;
a Gaussian blur processing unit, configured to perform Gaussian blur processing on the grayscale images.
8. The device for a robot to perform monitoring through images according to claim 7, characterized by further comprising:
a filtering module, configured to, before the outline of the target in the third image information is extracted, compare the brightness value of each pixel in the third image information with a preset brightness threshold, and when the brightness value is lower than the preset brightness threshold, filter out the corresponding pixel to obtain a filtered image.
9. The device for a robot to perform monitoring through images according to claim 8, characterized by further comprising:
a dilation module, configured to perform a dilation operation on the filtered image before the outline of the target in the third image information is extracted.
10. The device for a robot to perform monitoring through images according to any one of claims 6 to 9, characterized by further comprising:
an outline screening module, configured to, after the outline of the target in the third image information is extracted, calculate the area of the outline, compare it with a preset threshold, and when the area of the outline is greater than the preset threshold, determine that the corresponding target is a monitoring target.
CN201610516972.5A 2016-07-04 2016-07-04 Monitoring method and device realized through images by use of robot CN105979230A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201610516972.5A CN105979230A (en) 2016-07-04 2016-07-04 Monitoring method and device realized through images by use of robot


Publications (1)

Publication Number Publication Date
CN105979230A true CN105979230A (en) 2016-09-28

Family

ID=56954408

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201610516972.5A CN105979230A (en) 2016-07-04 2016-07-04 Monitoring method and device realized through images by use of robot

Country Status (1)

Country Link
CN (1) CN105979230A (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107087139A (en) * 2017-03-31 2017-08-22 思依暄机器人科技(深圳)有限公司 A kind of removable monitoring system

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20090010546A1 (en) * 2005-12-30 2009-01-08 Telecom Italia S P.A. Edge-Guided Morphological Closing in Segmentation of Video Sequences
CN102521578A (en) * 2011-12-19 2012-06-27 中山爱科数字科技股份有限公司 Method for detecting and identifying intrusion
CN103984315A (en) * 2014-05-15 2014-08-13 成都百威讯科技有限责任公司 Domestic multifunctional intelligent robot
CN104836990A (en) * 2015-04-30 2015-08-12 武汉理工大学 Pier anti-collision image monitoring system and monitoring method


Similar Documents

Publication Publication Date Title
KR101808587B1 (en) Intelligent integration visual surveillance control system by object detection and tracking and detecting abnormal behaviors
Park et al. Continuous localization of construction workers via integration of detection and tracking
CN106022209B Method and device for distance estimation and processing based on face detection
EP3043329B1 (en) Image processing apparatus, image processing method, and program
CN106331492B Image processing method and terminal
EP1683105B1 (en) Object detection in images
US7903141B1 (en) Method and system for event detection by multi-scale image invariant analysis
Toreyin et al. Contour based smoke detection in video using wavelets
EP1395945B1 (en) Method for detecting falsity in fingerprint recognition by classifying the texture of grey-tone differential values
JP4746050B2 (en) Method and system for processing video data
CN101236606B (en) Shadow cancelling method and system in vision frequency monitoring
US9633265B2 (en) Method for improving tracking in crowded situations using rival compensation
CN103714648B Monitoring and early-warning method and apparatus
JP4708343B2 Method for modeling background and foreground regions
US20140369567A1 (en) Authorized Access Using Image Capture and Recognition System
JP4569190B2 (en) Suspicious person countermeasure system and suspicious person detection device
CN104052905B (en) Method and apparatus for handling image
CN103558996B (en) Photo processing method and system
Huddar et al. Novel algorithm for segmentation and automatic identification of pests on plants using image processing
US20130163823A1 (en) Image Capture and Recognition System Having Real-Time Secure Communication
JP3714350B2 (en) Human candidate region extraction method, human candidate region extraction system, and human candidate region extraction program in image
JP6273685B2 (en) Tracking processing apparatus, tracking processing system including the tracking processing apparatus, and tracking processing method
CN104504369B Method for detecting safety helmet wearing condition
JP2009031939A (en) Image processing apparatus, method and program
AU2011201953B2 (en) Fault tolerant background modelling

Legal Events

Date Code Title Description
PB01 Publication
C06 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
CB02 Change of applicant information

Address after: Floor 3, Building 1, No. 400 Fanchun Road, China (Shanghai) Free Trade Zone, Pudong New Area, 201207

Applicant after: SHANGHAI SIYIXUAN ROBOT TECHNOLOGY CO., LTD.

Address before: Room F21-22, Building 4, No. 18 Guiping Road, Xuhui District, Shanghai, 200233

Applicant before: SHANGHAI SIYIXUAN ROBOT TECHNOLOGY CO., LTD.

RJ01 Rejection of invention patent application after publication

Application publication date: 20160928