WO2015172445A1 - Household multi-function intelligent robot - Google Patents

Household multi-function intelligent robot

Info

Publication number
WO2015172445A1
Authority
WO
WIPO (PCT)
Prior art keywords
robot
module
gesture
recognition
image
Prior art date
Application number
PCT/CN2014/084138
Other languages
English (en)
Chinese (zh)
Inventor
黄鹏宇
周建雄
何跃凯
彭元华
郭振中
Original Assignee
成都百威讯科技有限责任公司
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 成都百威讯科技有限责任公司 filed Critical 成都百威讯科技有限责任公司
Publication of WO2015172445A1

Classifications

    • G — PHYSICS
    • G05 — CONTROLLING; REGULATING
    • G05B — CONTROL OR REGULATING SYSTEMS IN GENERAL; FUNCTIONAL ELEMENTS OF SUCH SYSTEMS; MONITORING OR TESTING ARRANGEMENTS FOR SUCH SYSTEMS OR ELEMENTS
    • G05B 19/00 — Programme-control systems
    • G05B 19/02 — Programme-control systems, electric
    • G05B 19/418 — Total factory control, i.e. centrally controlling a plurality of machines, e.g. direct or distributed numerical control [DNC], flexible manufacturing systems [FMS], integrated manufacturing systems [IMS] or computer integrated manufacturing [CIM]

Definitions

  • The present invention relates to intelligent recognition by machines, and more particularly to a household multi-function intelligent robot. Background Art
  • The so-called smart home central control terminal is in essence a low-cost, self-learning "universal" remote control. It is usually deployed in the various rooms of a home and uses short-range wireless control technology; through its human-machine interface, people can control
  • the room lighting, electric curtains, electrical appliances, security alarms, background music, home theater, and so on. It can also connect to the public network via 3G/4G or a wired network, so that people can remotely control their household appliances through mobile terminals such as mobile phones.
  • A home service robot disclosed in Patent Document 1 (201310079408.8) employs a background server (1) and various devices connected to it, as shown in FIG. 1: a sensing device (2), a motion and expression device (3), a power supply and charging device (4), and a robot (5). The sensing device (2) includes a smoke alarm (21), a gas concentration monitor (22), a temperature sensor (23), an infrared sensor (24), an infrared signal receiver (25), and an ultrasonic sensor;
  • the motion and expression device (3) includes a basic motion control device (31), a balance control device (32), an expression control device (33), and a body language control device (34); the power supply and charging device (4) comprises a wired charging device (41), a wireless charging device (42), and an automatic charging device (43); the background server (1) is further connected with an audiovisual device (6) and a wireless network connection device (7);
  • the audiovisual device (6) includes a camera (61), a microphone (62), and a speaker (63).
  • Although this service robot can collect a wide range of information about its surroundings through its rich set of sensing devices, it cannot use the collected information to intelligently recognize the surrounding environment and make judgments and comprehensive decisions accordingly.
  • In other words, the robot is merely a complex that aggregates various sensors; it cannot perform intelligent analysis, recognition, or comprehensive judgment.
  • Patent Document 2 (201110191167.7) proposes a home mobile security device based on target recognition: a robot is bound to a mobile phone or a remote computer, and labels are affixed at target points; the robot learns and finds the position of each label; the monitoring mode, time, and frequency of the robot are set; the robot then periodically starts the monitoring program according to the set method, detects the marked points, and sends images and other information to the owner's mobile phone or computer. While performing fixed-point monitoring, the robot continuously collects smoke, sound, and person information and sends abnormal information to the owner's mobile phone or computer; the robot, smart home appliances, and fixed monitoring systems communicate with each other through the Internet of Things; and the robot receives control information from the owner's mobile phone or computer, interrupting already-set tasks or performing tasks outside the routine.
  • Although the robot claims to be able to recognize targets, such target recognition is clearly limited to a pre-set collection of targets and cannot cope with the complex situations and various emergencies of a real environment.
  • In essence, the target-recognition robot proposed by this patent is a fixed-point monitor, i.e. a sensor with remote information transmission.
  • Patent Document 3 (201210156595.0) proposes an all-weather home robot which, like Patent Documents 1 and 2, is equipped with a large number of sensors on a movable device: an infrared camera, infrared lamps, and optical cameras with LED lights on a rotatable head; electromagnetic sensors, infrared sensors, and proximity sensors on the vehicle body, together with CO2, formaldehyde, smoke, CO, infrared heat, dust, and temperature sensors arranged on the left and right; and an ultrasonic nebulizer, dehumidifier, air purifier, vacuum cleaner, control board, touch screen, and flip keyboard inside the vehicle body.
  • Patent Document 4 (201310135363.1) proposes a home robot system comprising a home service mobile robot and a wireless remote control terminal. The home service mobile robot is provided with a wireless transmitting and receiving unit, and the wireless remote control terminal is likewise provided with a wireless transmitting and receiving unit; the wireless remote control terminal receives and stores the remote control signal of each household appliance through its wireless receiving unit. The home service robot sends a control signal to the wireless remote control terminal through its wireless transmitting unit, and the wireless remote control terminal sends the control signal to each household appliance through its wireless transmitting unit.
  • The robot in Patent Document 4 adds functions for information exchange, reading, and relaying with smart home appliances on top of the aforementioned robots, but it still lacks the capability of automatic recognition and judgment and is not competent to play the role of a little helper in modern smart family life.
  • Moreover, since the shooting angle of the camera unit covers the surrounding space in all directions, it inevitably raises the risk of personal privacy leakage;
  • and the shooting angle and orientation of the camera unit cannot be adjusted in real time according to changes in environmental conditions.
  • The present invention provides a household multi-function intelligent robot including a processing control system and a plurality of functional subsystems; the plurality of functional
  • subsystems includes a mobile system, a data acquisition system, and a communication system, characterized in that:
  • the mobile system includes a positioning module and a driving module, the positioning module identifies and locates an environment in which the robot is located, and controls the robot to move through the driving module;
  • the data acquisition system includes a vision module, a voice module, and a data collection module
  • the vision module includes a camera device and captures video images through the camera device;
  • the voice module includes a pickup, and the pickup collects audio information
  • the communication system includes a wireless communication module that implements remote communication of the robot;
  • the processing control system includes a processing module and a control module; the processing module receives data from the plurality of functional subsystems in real time and processes it according to a predetermined algorithm, and based on the processing result the control module issues control commands to each functional subsystem to control the action of the robot.
  • the plurality of functional subsystems further includes a power system, and the power system includes a power smart management module and a charging module;
  • the power intelligent management module detects the robot's current remaining power in real time and, when the power is insufficient, issues a charging command; the charging module then starts charging the robot.
  • the robot as described above is characterized in that:
  • the charging command is transmitted to the mobile system.
  • the driving module controls the robot to move to the charging position by itself, and the charging module starts charging the robot.
  • the robot is charged using contact charging or non-contact charging.
  • the contact charging includes setting a socket or a charging post at the charging position, and the charging module is electrically connected to the socket or the charging post after the robot moves to the charging position;
  • the non-contact charging includes providing an electromagnetic charging device at the charging position, and the charging module and the electromagnetic charging device are electromagnetically coupled after the robot moves to the charging position.
  • said plurality of functional subsystems further comprises an interactive system comprising one or more of a touch module, a remote control module and an I/O interface module.
  • the positioning module includes a self state sensing unit and an environment sensing unit;
  • the self-state sensing unit includes one or more of an acceleration sensor, an electronic compass, and a cliff sensor for determining a state of the robot itself;
  • the environment sensing unit includes one or more of a ranging sensor, a collision avoidance sensor, and an intelligent positioning system for determining an external state of the robot.
  • the drive module includes a drive motor and a wheeled mobile device.
  • the wheeled mobile device consists of three wheels, or more than three wheels, symmetrically fixed to the bottom of the robot body via universal-joint supports.
  • the drive motor drives at least one of the wheels for independent movement.
  • the vision module further includes a fill-light device, which is automatically turned on when ambient light is insufficient to ensure that the camera device can obtain clear images under different illumination conditions.
  • the vision module further includes a pan/tilt mechanism, and the camera device is mounted on the robot body through the pan/tilt mechanism.
  • the pan/tilt mechanism includes a control motor and a transmission mechanism that rotate freely in the horizontal and vertical directions, ensuring that the processing control system can freely control the shooting angle of the camera device.
  • the fill-light device surrounds the camera lens of the camera device and is linked with the camera device.
  • the vision module includes three camera devices, and the three camera devices are cameras arranged in a 品-shaped (triangular) layout.
  • the voice module further includes a speaker that outputs the output information of the robot in a manner simulating a person's voice.
  • the voice module includes two symmetrically arranged pickups and two symmetrically arranged speakers.
  • the data collection module interacts with a smart device in a surrounding environment, reads data in the smart device, and communicates control instructions to the smart device.
  • the interaction is in the form of a wireless local area network, including one or more of Bluetooth, Zigbee, and WiFi.
  • the smart device includes various smart sensors and home smart medical devices, as well as various remote control home appliances.
  • the smart sensor includes at least one of a smoke sensor, a gas sensor, and an infrared sensor;
  • the home smart medical device includes one or more of a sphygmomanometer, an oximeter, a blood glucose meter, and other wearable smart medical devices;
  • the various remote control home appliances include one or more of a smart lighting, a refrigerator, a television, a washing machine, a smart cooking device, and an air conditioner.
  • the robot as described above is characterized in that:
  • the wireless communication module includes a wireless communication unit, and communicates with the terminal device through the wireless communication unit;
  • the terminal device includes a handheld device and a server
  • after the processing control system processes the data in the plurality of function subsystems, it sends the security-related data through the wireless communication unit to the background server for storage and filing and to the intelligent terminal for an alarm prompt.
  • the intelligent terminal and the server issue an operational command to the robot via a wireless communication system.
  • the wireless communication unit includes one or more of the GSM, CDMA, WCDMA, CDMA2000, TD-SCDMA, LTE, 4G, and WiFi standard communication modules.
  • the processing module includes one or more of a CPU, a DSP, an ARM, and a PPC chip.
  • the control module comprehensively analyzes the processing results produced by the processing module and issues control commands accordingly.
  • the processing control system processes the video images collected by the vision module, intelligently recognizes and determines events occurring in the surrounding environment, and issues corresponding action instructions based on the determination.
  • the robot as described above is characterized in that the recognition and determination comprise one or more of illegal intrusion detection, limb conflict detection and recognition, gesture detection and recognition, fall detection and determination, flame detection, and smoke detection.
  • the illegal intrusion detection and identification includes the following steps:
  • human body capture: a machine-learning-based video detection method is used, with the HOG feature as the shape description of the human body; a linear SVM boosted by the Adaboost algorithm traverses the video image to realize capture of the human body;
  • human body tracking: a human body tracking model is established, the LBP feature is used to measure similarity, the similarity is calculated by the Bhattacharyya distance, and tracking of the human body is realized based on mean shift;
  • the processing control system identifies the person in the video image, and if the person is determined to be non-registered, the control system confirms an illegal intrusion and issues an alert command.
  • the limb conflict detection and identification includes the following steps:
  • human body capture: a machine-learning-based video detection method is used, with the HOG feature as the shape description of the human body; a linear SVM boosted by the Adaboost algorithm traverses the video image to realize capture of the human body;
  • human body tracking: a human body tracking model is established, the LBP feature is used to measure similarity, the similarity is calculated by the Bhattacharyya distance, and tracking of the human body is realized based on mean shift;
  • optical flow calculation: the Kanade-Lucas-Tomasi (KLT) algorithm is used to compute the optical flow field of the tracked region;
  • the regional entropy of the optical flow vectors is used to characterize severe irregular motion; when the regional entropy exceeds a threshold, the control system confirms that a limb conflict has occurred and issues an alarm command.
  • a robot as described above, characterized in that face recognition is performed on persons in the video image and/or voiceprint recognition is performed on persons' voices.
  • the limb conflict detecting step further includes performing audio analysis on the audio information acquired by the pickup in the voice module;
  • when both the video analysis and the audio analysis indicate a conflict, the control system confirms that a limb conflict has occurred and issues an alarm command.
  • the voiceprint recognition includes offline model training and online voiceprint recognition;
  • the offline model training obtains the voiceprint feature model of a specific person;
  • the online voiceprint recognition matches the incoming sound against the voiceprint feature models to achieve voiceprint recognition.
  • the voiceprint recognition includes the following steps:
  • a GMM model is used to train the voiceprint feature model of each person, which is stored in the database to complete voiceprint registration.
  • the gesture detection and recognition includes four steps of image preprocessing, gesture tracking, feature extraction, and gesture recognition.
  • the robot as described above is characterized in that: the image preprocessing performs smoothing and denoising on the collected image, obtains the gesture image, and segments the gesture image to obtain a gesture binary map;
  • the gesture tracking uses the CamShift tracking algorithm in HSV space: the gesture binary
  • image is converted into a color probability distribution map, and tracking of the characteristically colored object is realized through multiple iterations;
  • the feature extraction extracts the Hu invariant moment features of the gesture from the gesture contour in the gesture binary image; these features have translation, rotation, and scale invariance, and are fed to a BP neural network for training;
  • the gesture recognition sends the extracted features to the trained BP neural network for matching and recognition, thereby completing gesture recognition.
  • the fall detection determination includes human body contour point extraction, adjacent frame contour point matching, and deformation analysis.
  • the human body contour point extraction uses edge information to establish a single-Gaussian background model; the background subtraction method extracts the contour of the human body, and the contour points are thinned by equally spaced sampling;
  • the contour point matching between adjacent frames uses the shape context (SC) feature to match the contour points;
  • the shape context describes the spatial distribution relationship between a feature point and its adjacent points;
  • any contour point of the previous frame
  • can match any contour point of the next frame, and the matching selects the best matching arrangement by a bidirectional matching algorithm;
  • the deformation analysis quantifies the deformation using the average matching cost of the best matching points, defined as $f = \frac{1}{N}\sum_{n=1}^{N} c(n)$;
  • during a fall, the average matching cost $f$ is a large value; after the fall, the human body remains still or moves only slightly for a short time, so the average matching cost $f$ is a small value;
  • the Dynamic Time Warping (DTW) algorithm is used to match the time series of the average matching cost $f$ over a short window before and after the event against a set threshold template, so as to accurately detect and determine fall behavior.
  • the flame detection analyzes the temporal and spatial variation of image regions to realize extraction of the flame region in the image, thereby implementing detection of flame in the video image.
  • the smoke detection process is as follows:
  • Two-dimensional discrete wavelet transform is performed on the potential smoke region to obtain the high-frequency portion of the image. If the high-frequency energy is smaller than the corresponding background, it is further confirmed as the smoke region.
  • the data collection module interacts with a smart device in a surrounding environment to read data in the smart device;
  • the data is uploaded to the processing module in real time;
  • the processing module processes the data according to a predetermined algorithm, and the control module issues a corresponding prompt instruction based on the processing result.
  • the processing module simultaneously receives the video image and the audio information in real time and processes, analyzes, and identifies the video image and audio information according to a predetermined algorithm
  • the control module synthesizes the result of processing the data based on the processing, analysis, and recognition results, and issues a corresponding prompt instruction.
  • the processing control system receives and extracts the monitoring data of the wearable smart medical device worn by a person, and the control system determines the person's current physical
  • condition based on the monitoring data and issues a physical-condition prompt instruction accordingly.
  • the control system receives the monitoring data of the gas sensors set in the surrounding environment, determines the current fire situation based on the monitoring data, and issues a fire-situation prompt instruction accordingly.
  • the robot is provided with a carrying platform for carrying articles for convenient transportation.
  • the carrying platform is placed on top of the robot.
  • the processing and comprehensive analysis of various environmental information, including video images, audio information, and various sensing data, is realized in an actual home environment, thereby intelligently recognizing the current environment.
  • the robot of the present invention is capable of recognizing a person's voice, images, and actions, so tasks can conveniently be assigned to the robot, which thereby assists the owner in performing some work;
  • the robot of the present invention can recognize its own state and position by using the mobile system, and is thereby able to self-locate; combined with the robot's power system, when power is insufficient it can move to the charging position and charge itself.
  • the camera devices of the robot in the present invention are mounted on pan/tilt mechanisms that can rotate in all directions; combined with the drive motors, the shooting angle of each camera device can be set freely.
  • For example, the shooting height can be set to 30 cm or less, or even within 20 cm.
  • the robot in the present invention can serve as a small butler at the center of the family: the user can issue various commands to the robot through rich voice commands and through local or remote control commands; a timed inspection mode can be set for when no one is in the house, and if abnormal conditions such as gas leakage, fire, stranger intrusion, personnel hijacking, violent conflict, theft, or an elderly person falling are found, a local home alarm is raised immediately and a remote alarm is pushed to the owner's preset intelligent terminal.
  • FIG. 1 is a block diagram of a robot in the prior art;
  • FIG. 2 is a block diagram of the relative positions (top view) of the robot system modules of the present invention;
  • FIG. 3 is a block diagram of the circuit connections of the robot system modules of the present invention;
  • FIG. 4 is a functional block diagram of the robot of the present invention;
  • FIG. 5 is a flow chart of the robot system of the present invention.
  • the robot cooperates with the three symmetrically disposed wheels on its bottom and the distributed ranging sensors on its surface to achieve autonomous movement within the room.
  • As shown in the block diagram of the robot system modules, the robot body is fitted with
  • 3 cameras arranged in a 品-shaped (triangular) layout, each matched with its own fill light, together with 2 symmetrically arranged pickups, 2 speakers, 1 HD display, and 1 sensor panel integrating multi-channel sensors. Through Bluetooth, WiFi, and Zigbee, the robot communicates over short range with infrared pyroelectric alarms, smoke detectors, home smart medical equipment such as oximeters and blood pressure meters, smart health wearable devices, and household appliances (lights, TVs, water heaters, home theaters, etc.). This information is efficiently processed in real time by the on-board high-performance CPU, PPC, DSP, ARM, and other processors, and through 3G/4G
  • wireless mobile communication or a wired network the robot interacts in real time with various intelligent terminals such as smart phones, thereby realizing the robot's various powerful intelligent functions.
  • Interface module: includes a touch display, indicator lights, and infrared remote control; the robot's working parameters are set through the touch screen, which also displays the robot's working state;
  • the core control board module, i.e. the processing control system: mainly consists of embedded processor chips, including CPU, PPC, DSP, ARM, and other processors, a storage system such as an eMMC file system and TF memory card, and peripheral interfaces.
  • A Freescale quad-core 1 GHz processor is used to comprehensively analyze and process the various information received, and the robot is instructed to perform the corresponding actions; the core control module adopts an embedded operating system and runs the robot control
  • application software, including robot autonomous walking, item transportation control, task setting, various signal acquisition, audio and video acquisition, alarm communication, face recognition, speech recognition and analysis, human behavior analysis, and target tracking algorithms.
  • the wireless monitoring module that is, the data acquisition module that interacts with various surrounding smart devices: uses Bluetooth, Zigbee or WIFI as the communication method to receive wireless signal data of home appliances and other devices in real time, and transmits them to the core control module for processing.
  • the following two types of equipment are mainly supported:
  • smart sensors, such as smoke sensors, gas leak detectors, glass break detectors, and various other special gas sensors, including carbon dioxide, carbon monoxide, formaldehyde, and air quality sensors;
  • smart medical devices such as sphygmomanometers, oximeters, blood glucose meters, and other smart medical or wearable devices;
  • a wireless remote control unit is further provided, comprising a wireless receiving unit and a wireless transmitting unit; the wireless receiving unit automatically receives and stores the remote control signals of each household appliance, and when the robot is required to control a household appliance, the robot sends the corresponding control command to the wireless remote control terminal, which then transmits the simulated wireless remote control command of the appliance to achieve control of the household appliance.
  • the positioning and driving module, i.e. the mobile system of the robot: composed of the robot positioning module and the driving module; the robot positioning module transmits information about the robot's position and external surroundings to the core control module for processing, completing the identification and positioning of the environment in which the robot is located, and the motor drive module drives the robot to walk under the control of a high-precision motion control device.
  • the robot positioning module is composed of a self-awareness sensing unit and an environment sensing unit;
  • the self-state sensing unit includes an acceleration sensor, an electronic compass and a cliff sensor for determining the state of the robot itself;
  • the environment sensing unit is composed of a ranging sensor, an anti-collision sensor, and an intelligent positioning system, and determines the external state of the robot.
  • the visual module, i.e. the video capture module: consists of three miniature cameras distributed in different locations, together with pan/tilt mechanisms and fill-light devices.
  • the fill light device can be a near-infrared fill light system that provides video data for face recognition, identification, behavior analysis, indoor environmental recording and environmental monitoring 24 hours a day.
  • the three miniature cameras use CMOS video sensors to capture face recognition video, target tracking video, and ambient video.
  • the pan/tilt mechanism for controlling the angle at which the camera is photographed includes a control motor and a corresponding transmission mechanism that rotates in the horizontal and vertical directions to enable the camera to capture images of various corners.
  • the control motor can receive an instruction from the processing control system to adjust the shooting angle of the camera in real time.
  • the near-infrared fill light system consists of a near-infrared fill light that surrounds the camera, ensuring a stable and clear image under different lighting conditions.
  • the voice module includes 2 pickups and 2 speakers.
  • the pickups receive various voice inputs, such as the owner's voice commands and abnormal indoor sounds; the speakers output the robot's voice.
  • the wireless data transmission module, i.e. the communication system: uses today's mobile communication, including the 2G, 3G/4G, and WiFi communication standards, to realize communication with intelligent terminals (such as mobile phones) and with the APP application server.
  • the intelligent power management module detects the robot's power in real time. When the power is not enough, the robot will automatically move to the charging position and charge itself.
  • Charging can be done by contact charging or by the latest non-contact charging method.
  • Non-contact charging works on the principle of electromagnetic induction: as in a transformer, there is a coil at both the transmitting and receiving ends; the transmitting coil is connected to a wired power source and generates an electromagnetic signal, and the receiving coil senses the transmitting end's electromagnetic signal to generate a current that charges the battery. This charging method reduces manual intervention: when the robot's battery is low, the charging sensor automatically finds the charging pile for charging, realizing self-management of the robot.
  • the robot of the present invention sets a process control system to process the above data and make a corresponding judgment.
  • At startup, initialization of the main chip is completed first, then the peripheral acquisition devices and the communication devices are initialized, and the main process of the system software is started; the signal acquisition thread, the intelligent audio/video analysis thread, the service and control work thread, and the communication thread are created.
  • Signal acquisition thread: monitors peripheral device signals, preprocesses each signal, and sends it to the corresponding signal processing thread. For example, when the cameras and pickups collect audio and video data, the audio and video signals are first preprocessed, and the preprocessed data is then passed to the intelligent audio/video analysis thread.
  • Intelligent audio/video analysis thread: after receiving the audio and video data, this thread analyzes the frame data and runs detection algorithms for falls, gestures, person hijacking, personnel intrusion, violent conflict, and so on; if, for example, an intrusion is detected for longer than the duration threshold and is confirmed by face recognition, voice recognition, etc., an alarm signal is passed to the alarm thread; at the same time, encoding, compression, and storage of the audio and video data are completed.
  • Robot service and control thread: includes robot walking control and the processing of various command responses.
  • Communication thread: establishes communication connections with remote devices; if an alarm signal is received, it organizes the relevant alarm information, such as pre-recorded audio/video and pictures, and sends it to the predetermined mobile phone terminal. A minimal sketch of this thread layout follows.
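  • The sketch below illustrates the thread layout described above in Python; the handler names and stub bodies are illustrative assumptions, not the patent's actual implementation.

```python
import threading
import queue
import time

# Hand-off queues between the acquisition, analysis, and communication threads.
av_q, alarm_q = queue.Queue(), queue.Queue()

def capture_and_preprocess():            # stand-in for camera/pickup acquisition
    time.sleep(0.1)
    return b"frame", b"audio"

def detect_events(frame, audio):         # stand-in for fall/gesture/intrusion detection
    return False

def acquire_signals():                   # signal acquisition thread
    while True:
        av_q.put(capture_and_preprocess())

def analyze_av():                        # intelligent audio/video analysis thread
    while True:
        frame, audio = av_q.get()
        if detect_events(frame, audio):
            alarm_q.put((frame, audio))  # hand off to the communication thread

def communicate():                       # communication/alarm thread
    while True:
        alarm = alarm_q.get()            # would push alarm info to the preset phone

for fn in (acquire_signals, analyze_av, communicate):
    threading.Thread(target=fn, daemon=True).start()
```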
  • Intelligent video surveillance technology automatically analyzes video, extracts key information from it, and discovers and identifies abnormal events of interest; it can thus replace or assist human monitoring.
  • Video content analysis and recognition involves complex software algorithms that can be programmed to identify strange and anomalous behaviors;
  • video content analysis and recognition software can detect suspicious activities, events, or behavioral patterns by analyzing live or recorded video streams;
  • the intelligentization of the video surveillance system means that the system can automatically detect, identify, and analyze anomalies in the monitored scene without human intervention and raise pre-alarms/alarms in time.
  • the intelligent recognition of video images mainly includes the following aspects.
  • HOG (Histograms of Oriented Gradients) features:
  • the HOG feature is invariant to small changes in local regions;
  • the HOG feature divides the picture into N units, called cells; a block is composed of several adjacent cells, and blocks may or may not overlap;
  • a HOG feature block is extracted by counting the gradient direction distribution of each cell in the block; by changing the cell
  • division, a large number of HOG feature blocks can be generated quickly; a classifier is learned from a series of training image data through machine learning and can then be used to detect the human body; the classifier selects the Support Vector Machine (SVM), and the classification performance of the linear SVM classifier is improved by the adaptive boosting learning algorithm (Adaboost);
  • Adaboost performs T rounds of screening to select the T optimal weak classifiers:
  • each weak classifier is a linear SVM, and each feature is a HOG feature block of a given size and position; the human-body recognition classifier is trained by the above method, and the detection window of the classifier is traversed over the video image to realize detection of the human body (see the sketch below).
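  • As an illustration of this detection step, the sketch below uses OpenCV's pre-trained HOG pedestrian detector; the patent trains its own Adaboost-selected linear SVMs, so the default detector here is only a stand-in.

```python
import cv2

# HOG descriptor with OpenCV's bundled linear-SVM pedestrian detector.
hog = cv2.HOGDescriptor()
hog.setSVMDetector(cv2.HOGDescriptor_getDefaultPeopleDetector())

frame = cv2.imread("frame.jpg")          # one video frame (path is illustrative)
rects, weights = hog.detectMultiScale(
    frame,
    winStride=(8, 8),                    # stride of the sliding detection window
    padding=(8, 8),
    scale=1.05,                          # image pyramid scale factor
)
for (x, y, w, h) in rects:               # mark each captured human body
    cv2.rectangle(frame, (x, y), (x + w, y + h), (0, 255, 0), 2)
```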
  • the texture feature selects the Local Binary Pattern (LBP) feature;
  • LBP is an effective texture description operator with strong texture recognition ability and is insensitive to changes in brightness.
  • the definition of LBP is: $LBP_{P,R} = \sum_{p=0}^{P-1} s(g_p - g_c)\,2^p$, with $s(x) = 1$ for $x \ge 0$ and $s(x) = 0$ otherwise;
  • $P$ represents the number of pixels in the neighborhood, $g_c$ is the gray value of the center pixel,
  • and $g_p$ is the gray value of the $p$-th equidistant point on the ring of radius $R$ around the center;
  • an LBP histogram can be formed from the LBP values of all pixels in the statistical region; here the LBP histogram is quantized to 32 bins;
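  • A minimal sketch of the 32-bin LBP texture histogram, assuming a grayscale region as input; scikit-image's `local_binary_pattern` is used as a stand-in for the operator defined above.

```python
import numpy as np
from skimage.feature import local_binary_pattern

def lbp_histogram(region, P=8, R=1, bins=32):
    # LBP code per pixel: P neighbors on a ring of radius R around each center.
    codes = local_binary_pattern(region, P, R, method="default")
    hist, _ = np.histogram(codes, bins=bins, range=(0, 2 ** P))
    return hist / max(hist.sum(), 1)     # normalized 32-bin texture histogram
```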
  • the color features select the H component and the V component: the H component reflects the color of the target, and the V component reflects its brightness; each color component is quantized to 32 bins;
  • the final target features are expressed as a three-dimensional feature histogram, comprising two color dimensions and one texture dimension, with 32 quantization bins per dimension;
  • the weighted feature histogram is selected as the target model, and the weighted feature histogram reflects the statistical characteristics of the target region.
  • the kernel function selects the Epanechnikov kernel:
  • $K(x) = \frac{1}{2} c_d^{-1} (d + 2)(1 - \|x\|^2)$ for $\|x\| \le 1$, and $K(x) = 0$ otherwise;
  • the weighted feature histogram of the target model is $p(n) = C \sum_{i=1}^{N} K\!\left(\left\|\frac{x_i - x_0}{h}\right\|^2\right) \delta[b(x_i) - n]$, where $C$ is the normalization coefficient and $p(n)$ represents the weight of the $n$-th histogram bin centered at $x_0$;
  • $N$ represents the number of pixels in the region;
  • $K(\cdot)$ is the Epanechnikov kernel; for any point $x_i$ within the region, $\left\|\frac{x_i - x_0}{h}\right\|$ is its normalized distance to the center, $\delta$ is the unit impulse function, $h$ is the kernel bandwidth, and $b(x_i)$ is the bin index of $x_i$ in the three-dimensional feature histogram;
  • the similarity at position $x$ is measured by the Bhattacharyya coefficient $\rho[p, q] = \sum_{n=1}^{m} \sqrt{p(n)\, q(n)}$: the larger the coefficient between the weighted feature histogram $p(n)$ established as the target model and the pre-established template $q(n)$, the higher the degree of similarity;
  • $m$ represents the number of bins of the histogram;
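  • A sketch of the kernel-weighted histogram and Bhattacharyya similarity under the definitions above; `features` holds each pixel's quantized bin index in the 3-D feature histogram and `coords` its image coordinates, both assumed precomputed.

```python
import numpy as np

def weighted_histogram(features, coords, center, h, m):
    # Squared normalized distance of every pixel to the region center.
    d2 = np.sum(((coords - center) / h) ** 2, axis=1)
    k = np.where(d2 <= 1.0, 1.0 - d2, 0.0)              # Epanechnikov profile
    hist = np.bincount(features, weights=k, minlength=m)
    return hist / max(hist.sum(), 1e-12)                # C: normalization

def bhattacharyya(p, q):
    return float(np.sum(np.sqrt(p * q)))                # rho in [0, 1]; 1 = identical
```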
  • Target tracking includes position prediction, mean shift search and feature update.
  • the position prediction of the target is implemented by gray-template matching;
  • the position prediction finds the approximate position of the target in the current frame;
  • the precise position of the target is then obtained by mean shift search; the mean shift position search is
  • $y_1 = \dfrac{\sum_{i} x_i\, w_i\, g\!\left(\left\|\frac{y_0 - x_i}{h}\right\|^2\right)}{\sum_{i} w_i\, g\!\left(\left\|\frac{y_0 - x_i}{h}\right\|^2\right)}$
  • where $W$ and $H$ represent the width and height of the target template, $y_0$ represents the geometric center coordinates of the current target position, $x_i$ is a sample point within the template region, $g(\cdot)$ is the derivative of the kernel function, $h$ is the kernel bandwidth, and $w_i$ is a weighting coefficient;
  • updating the target model is necessary to achieve stable and accurate target tracking;
  • blind updating may incorporate external interference into the model, so that the model can no longer fully describe the characteristics of the target;
  • the model then deviates more and more from the actual state of the target, and the tracking accuracy decreases;
  • $\rho_k$ represents the Bhattacharyya coefficient at the optimal position of the $k$-th frame;
  • $p_k$ represents the target model acquired from the $k$-th frame image (see the sketch below).
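  • The patent does not spell out the update rule itself; a common choice consistent with its caution against blind updates is linear blending gated on the Bhattacharyya score, sketched here with illustrative constants.

```python
def update_model(q_old, p_k, rho_k, rho_min=0.8, alpha=0.1):
    # Weak match (low rho_k) suggests occlusion or interference: skip the update
    # so the interference is not blended into the target model.
    if rho_k < rho_min:
        return q_old
    q = (1.0 - alpha) * q_old + alpha * p_k   # conservative linear blending
    return q / q.sum()                        # renormalize the histogram
```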
  • the position of the person in the image can be obtained by human body detection and tracking.
  • face detection is turned on next; if a face is detected, face recognition is performed, and if face recognition determines a non-registered person,
  • an alarm is triggered and a frontal photo of the intruder is captured.
  • Limb conflict: a limb conflict is accompanied by intense, irregular movement and loud shouting, so it can be detected by optical flow vector analysis combined with audio analysis; when both methods detect a limb conflict, the limb conflict alarm is triggered and a photo of the scene is captured.
  • the target tracking described above can be used to obtain the region where the target is located;
  • the Kanade-Lucas-Tomasi (KLT) algorithm is used to calculate the optical flow field of that region;
  • a direction histogram $H(i)$ of the regional optical flow vectors is formed by statistical analysis;
  • the regional entropy $E_H$ is used to measure violent irregular motion;
  • the expression for $E_H$ is $E_H = -\sum_{i=1}^{n} H(i)\,\log H(i)$, where $H(i)$ is the normalized frequency of the $i$-th direction bin.
  • analysis of the simultaneously collected audio can be combined with this to further improve recognition accuracy;
  • the audio analysis rests on the fact that a limb conflict is accompanied by intense speech and loud shouting, so detecting these characteristic sounds through audio analysis indicates whether a physical conflict is occurring; a sketch of the optical-flow entropy test follows.
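  • A sketch of the KLT optical-flow and regional-entropy test, with illustrative parameter values (feature count, bin count, threshold) not taken from the patent.

```python
import cv2
import numpy as np

def flow_entropy(prev_gray, cur_gray, bins=16):
    # Track KLT corner features between consecutive frames.
    pts = cv2.goodFeaturesToTrack(prev_gray, maxCorners=200,
                                  qualityLevel=0.05, minDistance=5)
    if pts is None:
        return 0.0
    nxt, status, _ = cv2.calcOpticalFlowPyrLK(prev_gray, cur_gray, pts, None)
    flow = (nxt - pts)[status.ravel() == 1].reshape(-1, 2)
    angles = np.arctan2(flow[:, 1], flow[:, 0])          # motion directions
    hist, _ = np.histogram(angles, bins=bins, range=(-np.pi, np.pi))
    p = hist / max(hist.sum(), 1)
    p = p[p > 0]
    return float(-(p * np.log(p)).sum())                 # regional entropy E_H

# if flow_entropy(f0, f1) > THRESHOLD: possible limb conflict in this region
```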
  • the vision-based gesture recognition system mainly includes image preprocessing, gesture tracking, feature extraction and gesture recognition.
  • the collected image is subjected to median filtering smoothing and denoising.
  • the gesture region is segmented by thresholding in the HSV color space, where:
  • f(x, y) is the pixel value at image coordinate (x, y);
  • R, G, B are the color components of the RGB space;
  • H is the hue component of the HSV space;
  • morphological filtering is then used to remove holes and rough edges in the segmented image to obtain a closed and complete gesture image.
  • the contour of the gesture binary image is extracted by the 8-neighborhood search method, and the chain code of the gesture contour is obtained.
  • tracking of the gesture is implemented by the CamShift (Continuously Adaptive Mean Shift) tracking algorithm based on HSV space.
  • the algorithm first establishes the color histogram model of the target, converts the image into a color probability distribution map, and moves the search window toward the centroid through multiple iterations until convergence, thereby achieving tracking of the characteristically colored object (sketched below);
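  • A minimal sketch of HSV-based CamShift tracking with OpenCV, assuming an initial bounding box `track_win` around the detected hand.

```python
import cv2

def make_hist(frame_bgr, track_win):
    # Hue histogram of the initial hand region: the target color model.
    x, y, w, h = track_win
    roi = cv2.cvtColor(frame_bgr[y:y + h, x:x + w], cv2.COLOR_BGR2HSV)
    hist = cv2.calcHist([roi], [0], None, [180], [0, 180])
    return cv2.normalize(hist, hist, 0, 255, cv2.NORM_MINMAX)

def track(frame_bgr, hist, track_win):
    hsv = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2HSV)
    prob = cv2.calcBackProject([hsv], [0], hist, [0, 180], 1)  # color probability map
    crit = (cv2.TERM_CRITERIA_EPS | cv2.TERM_CRITERIA_COUNT, 10, 1)
    box, track_win = cv2.CamShift(prob, track_win, crit)       # iterate to centroid
    return box, track_win
```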
  • the contour image of the gesture is obtained by image preprocessing, and the Hu invariant moment features of the gesture are extracted from it; these features have translation, rotation, and scale invariance. Hu's seven invariant moments are defined in terms of the normalized central moments $\eta_{pq}$, e.g. $\phi_1 = \eta_{20} + \eta_{02}$, $\phi_2 = (\eta_{20} - \eta_{02})^2 + 4\eta_{11}^2$, ..., $\phi_7 = (3\eta_{21} - \eta_{03})(\eta_{30} + \eta_{12})\left[(\eta_{30} + \eta_{12})^2 - 3(\eta_{21} + \eta_{03})^2\right] - (\eta_{30} - 3\eta_{12})(\eta_{21} + \eta_{03})\left[3(\eta_{30} + \eta_{12})^2 - (\eta_{21} + \eta_{03})^2\right]$;
  • $\mu_{pq}$ is the central moment, defined as $\mu_{pq} = \sum_{x}\sum_{y} (x - \bar{x})^p (y - \bar{y})^q f(x, y)$.
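  • A sketch of Hu-moment feature extraction from the segmented gesture binary image, using OpenCV's implementation of the moments defined above.

```python
import cv2

def hu_features(binary):
    # Extract contours of the gesture binary map and keep the largest blob (the hand).
    contours, _ = cv2.findContours(binary, cv2.RETR_EXTERNAL,
                                   cv2.CHAIN_APPROX_SIMPLE)
    contour = max(contours, key=cv2.contourArea)
    hu = cv2.HuMoments(cv2.moments(contour))   # the seven invariant moments
    return hu.ravel()                          # 7-D feature vector for the BP network
```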
  • the BP neural network is a forward multi-layer network whose training consists of forward propagation and back propagation; in forward propagation, input information is processed layer by layer from the input layer through the hidden
  • layers and passed to the output layer;
  • the state of each layer of neurons affects only the state of the next layer of neurons. If the output layer does not produce the expected output, the process switches to back propagation, and the error signal is returned along the connection channels; by modifying the weights of each layer of neurons, the error signal is minimized. The error back-propagation algorithm uses the error between the actual output value and the expected value to correct the multi-layer connection weights of the network from back to front;
  • when actually training the neural network, the number of output nodes is first determined according to the number of gesture categories; the number of input nodes is the number of features; the hidden layer generally has one or two layers, and the number of hidden-layer nodes is determined according to the training results;
  • the gesture pictures of each meaning are put through image preprocessing and feature extraction, and the resulting features are sent into the neural network for training; the number of hidden-layer nodes is adjusted during training to achieve the highest classification accuracy;
  • for recognition, the extracted gesture features are sent to the network, and the category corresponding to the output-layer node with the largest response is the recognized gesture category (sketched below).
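  • A sketch of the BP-network gesture classifier using a standard MLP trained by error back-propagation; the hidden-layer size and the synthetic stand-in data are illustrative assumptions.

```python
import numpy as np
from sklearn.neural_network import MLPClassifier

# X holds 7-D Hu-moment vectors and y the gesture category labels
# (random stand-ins here so the sketch runs end to end).
X = np.random.rand(200, 7)
y = np.random.randint(0, 5, size=200)      # 5 hypothetical gesture classes

clf = MLPClassifier(hidden_layer_sizes=(16,), activation="logistic",
                    solver="sgd", learning_rate_init=0.1, max_iter=2000)
clf.fit(X, y)                              # trained by error back-propagation
print(clf.predict(X[:3]))                  # largest-response output node wins
```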
  • the edge information is used to establish a single-Gaussian background model; compared with using color information to establish the background model,
  • edge information is not sensitive to illumination changes and shadows, which ensures the accuracy of the contour extraction;
  • the background subtraction method is used to extract the human body contour; since the extracted contour points are highly redundant, equally spaced sampling is adopted here to thin the contour points;
  • a log-polar histogram is obtained by placing each feature point at the origin of the coordinates; it is used to characterize the matching cost of two contour points, where a $\chi^2$ statistic is used to represent the cost;
  • any contour point of the previous frame can match any contour point of the next frame, and a bidirectional matching algorithm selects the best matching arrangement;
  • $c(n)$ represents the matching cost of the $n$-th best matching point;
  • $N$ is the number of best matching points;
  • during a fall, $f$ is a large value, while within a short period after the fall the human body remains still or moves only slightly, so $f$ is a small value (see the DTW sketch below).
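  • A minimal DTW sketch for comparing the average-matching-cost series $f(t)$ around a candidate fall with a reference template; the template values and threshold are illustrative assumptions, not values from the patent.

```python
import numpy as np

def dtw(a, b):
    # Classic dynamic time warping distance between two 1-D series.
    n, m = len(a), len(b)
    D = np.full((n + 1, m + 1), np.inf)
    D[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = abs(a[i - 1] - b[j - 1])
            D[i, j] = cost + min(D[i - 1, j], D[i, j - 1], D[i - 1, j - 1])
    return D[n, m]

# Illustrative fall template: cost rises sharply during the fall, then stillness.
template = np.array([0.1, 0.2, 0.9, 1.0, 0.8, 0.1, 0.05])
# if dtw(f_series, template) < THRESHOLD: report a fall
```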
  • for flame detection, the temporal and spatial variation of image regions is analyzed to realize extraction of the flame region in the image and thereby detection of flame in the video image.
  • smoke has a fuzzy occlusion characteristic, that is, smoke obscures objects and the edges of obscured objects become blurred;
  • the smoke detection process designed according to this characteristic is as follows:
  • the image is divided into non-overlapping sub-blocks;
  • the motion direction of each sub-block is judged, and the sub-blocks conforming to the direction of smoke movement are clustered to obtain a potential smoke area;
  • Two-dimensional discrete wavelet transform is performed on the potential smoke region to obtain the high-frequency portion of the image. If the high-frequency energy is smaller than the corresponding background, it is further confirmed as the smoke region.
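  • A sketch of the wavelet high-frequency energy test, assuming PyWavelets and an illustrative energy ratio: smoke blurs edges, so the detail-band energy of a smoky block drops relative to the same block in the background model.

```python
import pywt

def is_smoke_block(block, background_block, ratio=0.6):
    def hf_energy(img):
        # 2-D discrete wavelet transform; keep the high-frequency detail bands.
        _, (lh, hl, hh) = pywt.dwt2(img.astype(float), "db1")
        return (lh ** 2).sum() + (hl ** 2).sum() + (hh ** 2).sum()
    # Lower detail energy than the background confirms the block as smoke.
    return hf_energy(block) < ratio * hf_energy(background_block)
```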
  • voiceprint recognition mainly includes two parts: offline model training and online voiceprint recognition.
  • through offline model training, the voiceprint feature model of a specific person is obtained.
  • for online recognition, the audio features are sent to the different feature models for scoring, and the model with the highest similarity is selected as the final match to complete the recognition.
  • the voiceprint recognition process includes the following steps:
  • MFCC (Mel-frequency cepstral coefficient) features are extracted;
  • MPEG7 low-level audio descriptors are also used: MPEG7 provides a rich description of audio data, such as the spectral envelope, audio objects, timbre, harmony, frequency characteristics, amplitude envelope, and temporal structure (including rhythm).
  • the GMM model can describe any distribution well through a linear combination of multiple Gaussian distributions;
  • the model parameters are $\lambda = \{\omega_i, \mu_i, \Sigma_i\}$, where $\omega_i$ represents the weight (probability) of each Gaussian component, and $\mu_i$ and $\Sigma_i$ represent its mean vector and covariance matrix respectively;
  • the EM (Expectation-Maximization) algorithm consists of two steps: the E step computes the expectation, i.e. the auxiliary function $Q(\lambda, \bar{\lambda})$, and the M step maximizes $Q$; the E step and M step are iterated until the algorithm converges.
  • each person's model obtained through GMM training is entered into the database to complete voiceprint registration.
  • for recognition, features are extracted from the voice clip, the similarity to each GMM model in the database is calculated, and the registered person corresponding to the GMM model with the highest similarity is taken as the speaker (sketched below).
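  • A sketch of GMM voiceprint enrollment and identification, assuming per-speaker WAV recordings; the MFCC dimension and component count are illustrative choices, not values from the patent.

```python
import numpy as np
import librosa
from sklearn.mixture import GaussianMixture

def mfcc_features(path):
    y, sr = librosa.load(path, sr=16000)
    return librosa.feature.mfcc(y=y, sr=sr, n_mfcc=13).T   # frames x 13

def enroll(paths):
    # EM training of lambda = {w, mu, Sigma} on one speaker's recordings.
    feats = np.vstack([mfcc_features(p) for p in paths])
    gmm = GaussianMixture(n_components=16, covariance_type="diag")
    return gmm.fit(feats)

def identify(path, registry):        # registry: {name: fitted GMM}
    feats = mfcc_features(path)
    # Highest average log-likelihood = best-matching registered speaker.
    return max(registry, key=lambda name: registry[name].score(feats))
```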
  • the above speech recognition technologies include natural speech recognition technology and characteristic sound recognition technology.
  • Instruction-category speech recognition technology is used for human-computer interaction;
  • characteristic sound recognition technology enhances the robot's sensitivity to sounds such as a door lock opening or glass breaking;
  • and voiceprint recognition technology can be used under certain conditions, such as when face recognition fails, to distinguish family members from strangers.
  • sound source recognition technology is also built into the processing control system of the robot, enabling the robot to independently discover and investigate suspicious situations in its security function: based on sound source localization, it independently determines the direction from which a sound comes and then turns its head toward that direction, so that the camera, combined with video image recognition technology, can make further identification and judgment.
  • the robot of the present invention adopts video image analysis, voiceprint recognition, and sound source recognition technologies; its processing control system can comprehensively process the results from the various sensors it carries, automatically assess the current environment, and react accordingly, making it a highly intelligent multi-function robot. Its multiple functions are mainly reflected in the following aspects:
  • real-time audio interception and capture: voice commands and sounds such as a door lock opening or glass breaking are intelligently distinguished; when an abnormality such as glass breakage occurs, the direction of the sound is determined automatically, the environment camera is aligned with that direction,
  • and further identification and judgment can be made; when face recognition fails, voiceprint recognition is used to distinguish family members from strangers;
  • remote preview/playback: when the robot receives an audio/video preview or playback command from a remote terminal such as a mobile phone, it launches the preview or playback software and streams the live feed to the remote terminal in real time;
  • unattended mode: when no one is at home, this mode can be turned on, and the robot will regularly patrol the house or focus on monitoring a designated room or area;
  • remote control: the robot can be manipulated remotely, for example by sending robot control commands or modifying robot parameters;
  • the touch screen dynamically displays the robot's working status information and receives touch control commands;
  • the robot can also accompany patients, the elderly, and children who stay at home, regularly reminding them to take medicine, study, and so on; it can also provide reminder, voice dialogue, and communication functions.

Landscapes

  • Engineering & Computer Science (AREA)
  • General Engineering & Computer Science (AREA)
  • Manufacturing & Machinery (AREA)
  • Quality & Reliability (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Automation & Control Theory (AREA)
  • Alarm Systems (AREA)

Abstract

Disclosed is a household multi-function intelligent robot comprising a processing control system and a plurality of functional subsystems. The plurality of functional subsystems comprises a mobile system, a data acquisition system, and a communication system. The robot is characterized in that the mobile system comprises a positioning module and a driving module, the positioning module recognizes and locates the environment in which the robot is situated, and the driving module controls the movement of the robot; the data acquisition system comprises a vision module, a voice module, and a data collection module, the vision module comprising a camera device that collects video images and the voice module comprising a pickup that collects audio information; the communication system comprises a wireless communication module that implements remote communication of the robot; and the processing control system comprises a processing module and a control module, the processing module receiving the data from the plurality of functional subsystems in real time and processing it according to a predetermined algorithm, and the control module issuing control instructions to all functional subsystems based on the processing result to control the action of the robot.
PCT/CN2014/084138 2014-05-15 2014-08-11 Household multi-function intelligent robot WO2015172445A1 (fr)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN201410205535.2 2014-05-15
CN201410205535.2A CN103984315A (zh) 2014-05-15 2014-05-15 一种家用多功能智能机器人 (A household multi-function intelligent robot)

Publications (1)

Publication Number Publication Date
WO2015172445A1 (fr) 2015-11-19

Family

ID=51276330

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2014/084138 WO2015172445A1 (fr) 2014-05-15 2014-08-11 Household multi-function intelligent robot

Country Status (2)

Country Link
CN (1) CN103984315A (fr)
WO (1) WO2015172445A1 (fr)

Cited By (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN105835069A (zh) * 2016-06-06 2016-08-10 李志华 智能家用保健机器人
CN106346491A (zh) * 2016-10-25 2017-01-25 塔米智能科技(北京)有限公司 一种基于人脸信息的智能会员服务机器人系统
CN109889772A (zh) * 2017-12-06 2019-06-14 东莞华南设计创新院 一种智能玩具用监视系统
TWI713947B (zh) * 2018-03-01 2020-12-21 日商歐姆龍股份有限公司 判定裝置以及判定裝置的控制方法
US11188810B2 (en) 2018-06-26 2021-11-30 At&T Intellectual Property I, L.P. Integrated assistance platform

Families Citing this family (146)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104317199A (zh) * 2014-09-16 2015-01-28 江苏大学 移动式智能管家
CN104269016A (zh) * 2014-09-22 2015-01-07 北京奇艺世纪科技有限公司 一种报警方法及装置
CN105656953A (zh) * 2014-11-11 2016-06-08 沈阳新松机器人自动化股份有限公司 一种基于互联网大数据机器人物联网系统
CN104503419B (zh) * 2015-01-24 2016-02-03 康群 一种用于病房数据采集方法
CN104699101A (zh) * 2015-01-30 2015-06-10 深圳拓邦股份有限公司 可定制割草区域的机器人割草系统及其控制方法
CN108334109B (zh) * 2015-03-30 2021-02-12 绵阳硅基智能科技有限公司 一种语音控制装置
US9990917B2 (en) * 2015-04-13 2018-06-05 Intel Corporation Method and system of random access compression of transducer data for automatic speech recognition decoding
CN106155050A (zh) * 2015-04-15 2016-11-23 小米科技有限责任公司 智能清洁设备的工作模式调整方法及装置、电子设备
CN104808670B (zh) * 2015-04-29 2017-10-20 成都陌云科技有限公司 一种智能互动机器人
CN104853165A (zh) * 2015-05-13 2015-08-19 许金兰 一种基于WiFi技术的多媒体传感器网络系统
CN104932534B (zh) * 2015-05-22 2017-11-21 广州大学 一种云机器人清扫物品的方法
CN106325142A (zh) * 2015-06-30 2017-01-11 芋头科技(杭州)有限公司 一种机器人系统及其控制方法
CN105022400B (zh) * 2015-07-22 2018-06-22 上海思依暄机器人科技股份有限公司 受控机器人、遥控设备、机器人系统及其控制方法
CN105549399B (zh) * 2015-07-29 2018-09-07 宇龙计算机通信科技(深圳)有限公司 一种室内环境监控方法及物联网终端
CN105204349B (zh) * 2015-08-19 2017-11-07 杨珊珊 一种用于智能家居控制的无人飞行器及其控制方法
DE112015006877B4 (de) 2015-09-03 2024-05-02 Mitsubishi Electric Corporation Verhaltens-Identifizierungseinrichtung, Klimaanlage und Robotersteuerung
CN105182983A (zh) * 2015-10-22 2015-12-23 深圳创想未来机器人有限公司 基于移动机器人的人脸实时跟踪方法和跟踪系统
CN106685026A (zh) * 2015-11-09 2017-05-17 江苏嘉钰新能源技术有限公司 一种带无线充电功能的电动汽车充电桩
CN105468145B (zh) * 2015-11-18 2019-05-28 北京航空航天大学 一种基于手势和语音识别的机器人人机交互方法和装置
CN105291113A (zh) * 2015-11-27 2016-02-03 深圳市神州云海智能科技有限公司 一种家庭看护机器人系统
CN105364915B (zh) * 2015-12-11 2017-06-30 齐鲁工业大学 基于三维机器视觉的智能家庭服务机器人
CN105380575B (zh) * 2015-12-11 2018-03-23 美的集团股份有限公司 扫地机器人的控制方法、系统、云服务器和扫地机器人
CN105415384A (zh) * 2015-12-30 2016-03-23 天津市安卓公共设施服务有限公司 一种变电站清扫巡视一体化作业机器人
CN105653037A (zh) * 2015-12-31 2016-06-08 张小花 一种基于行为分析的交互系统及方法
CN105563494A (zh) * 2016-01-29 2016-05-11 江西智能无限物联科技有限公司 智能陪护机器人
CN105760824B (zh) * 2016-02-02 2019-02-01 北京进化者机器人科技有限公司 一种运动人体跟踪方法和系统
CN105578058A (zh) * 2016-02-03 2016-05-11 北京光年无限科技有限公司 一种面向智能机器人的拍摄控制方法、装置及机器人
CN105511477A (zh) * 2016-02-16 2016-04-20 江苏美的清洁电器股份有限公司 清扫机器人系统和清扫机器人
CN107197196A (zh) * 2016-03-15 2017-09-22 群耀光电科技(苏州)有限公司 实时监控载具系统
CN107292908A (zh) * 2016-04-02 2017-10-24 上海大学 基于klt特征点跟踪算法的行人跟踪方法
CN105773615B (zh) * 2016-04-06 2018-05-29 成都令可科技有限公司 一种机器人系统
CN105798931B (zh) * 2016-04-26 2018-03-09 南京玛锶腾智能科技有限公司 智能机器人唤醒方法及装置
CN105788161A (zh) * 2016-04-27 2016-07-20 深圳前海勇艺达机器人有限公司 一种智能报警机器人
US10377042B2 (en) * 2016-06-17 2019-08-13 Intel Corporation Vision-based robot control system
CN105979230A (zh) * 2016-07-04 2016-09-28 上海思依暄机器人科技股份有限公司 一种机器人通过图像进行监控的方法及装置
CN106203361A (zh) * 2016-07-15 2016-12-07 苏州宾果智能科技有限公司 一种机器人跟踪方法和装置
CN106249711A (zh) * 2016-08-03 2016-12-21 海南警视者科技开发有限公司 一种多功能智能机器人
CN106200564A (zh) * 2016-08-03 2016-12-07 苏州见真物联科技有限公司 一种智能家居用全移动式一体化操作终端
CN106297790A (zh) * 2016-08-22 2017-01-04 深圳市锐曼智能装备有限公司 机器人的声纹服务系统及其服务控制方法
US10592805B2 (en) * 2016-08-26 2020-03-17 Ford Global Technologies, Llc Physics modeling for radar and ultrasonic sensors
CN106227216B (zh) * 2016-08-31 2019-11-12 朱明 面向居家老人的家庭服务机器人
CN106341477A (zh) * 2016-09-12 2017-01-18 国网辽宁省电力有限公司电力科学研究院 一种实验过程信息自动采录上传系统及方法
CN106891337A (zh) * 2016-09-21 2017-06-27 摩瑞尔电器(昆山)有限公司 多功能服务机器人
CN106325282A (zh) * 2016-09-28 2017-01-11 中国人民解放军国防科学技术大学 一种智能机器人安防与服务综合方法
CN106558052A (zh) * 2016-10-10 2017-04-05 北京光年无限科技有限公司 一种用于智能机器人的交互数据处理输出方法及机器人
CN106371441A (zh) * 2016-10-13 2017-02-01 安徽翔龙电气有限公司 一种具有语音输入功能的智能扫地机器人系统
CN106440229B (zh) * 2016-10-24 2019-07-30 美的集团武汉制冷设备有限公司 智能扫地机器人及其系统和检测空气状况的方法
CN106325171A (zh) * 2016-10-31 2017-01-11 河池学院 一种可充电服务机器人
CN106408852A (zh) * 2016-10-31 2017-02-15 河池学院 一种基于移动互联网的机器人报警系统
CN106297172A (zh) * 2016-10-31 2017-01-04 河池学院 一种基于移动互联网的机器人报警方法
CN106408853A (zh) * 2016-10-31 2017-02-15 河池学院 一种基于移动互联网的机器人视频安防系统
CN106529460A (zh) * 2016-11-03 2017-03-22 贺江涛 一种基于机器人端的物体分类识别系统及识别方法
CN106774317A (zh) * 2016-12-13 2017-05-31 安徽乐年健康养老产业有限公司 一种辅助式机器人的操控方法
CN106448027A (zh) * 2016-12-15 2017-02-22 湖南纽思曼导航定位科技有限公司 一种智能安防设备
CN106682602B (zh) * 2016-12-16 2020-01-21 深圳市华尊科技股份有限公司 一种驾驶员行为识别方法及终端
CN106597903A (zh) * 2016-12-26 2017-04-26 刘震 一种定点位置环境感知系统
CN106625711A (zh) * 2016-12-30 2017-05-10 华南智能机器人创新研究院 一种定位机器人智能互动的方法
CN106791681A (zh) * 2016-12-31 2017-05-31 深圳市优必选科技有限公司 视频监控和人脸识别方法、装置及系统
CN108326875B (zh) * 2017-01-20 2023-03-24 松下知识产权经营株式会社 通信控制方法与装置、远程呈现机器人以及存储介质
CN106781332A (zh) * 2017-02-14 2017-05-31 上海斐讯数据通信技术有限公司 通过扫地机器人实现报警的方法及系统
WO2018152723A1 (fr) * 2017-02-23 2018-08-30 深圳市前海中康汇融信息技术有限公司 Robot de sécurité et de protection domestique et son procédé de commande
CN111844046A (zh) * 2017-03-11 2020-10-30 陕西爱尚物联科技有限公司 一种机器人硬件系统及其机器人
CN106950973A (zh) * 2017-05-19 2017-07-14 苏州寅初信息科技有限公司 一种基于教学机器人的智能道路巡逻方法及其系统
CN107030714A (zh) * 2017-05-26 2017-08-11 深圳市天益智网科技有限公司 一种医用看护机器人
CN107168079A (zh) * 2017-06-05 2017-09-15 百色学院 一种基于无线通信的家居安防机器人系统
CN107168174B (zh) * 2017-06-15 2019-08-09 重庆柚瓣科技有限公司 一种使用机器人做居家养老的方法
CN107199572B (zh) * 2017-06-16 2020-02-14 山东大学 一种基于智能声源定位与语音控制的机器人系统及方法
CN109093627A (zh) * 2017-06-21 2018-12-28 富泰华工业(深圳)有限公司 智能机器人
CN109116740A (zh) * 2017-06-23 2019-01-01 美的智慧家居科技有限公司 应用于智能家居的移动装置
CN107280588A (zh) * 2017-06-24 2017-10-24 武汉洁美雅科技有限公司 一种基于物联网的吸尘器红外遥控控制系统
CN107195153A (zh) * 2017-07-06 2017-09-22 陈旭东 一种智能家居安防系统
CN107248410A (zh) * 2017-07-19 2017-10-13 浙江联运知慧科技有限公司 声纹识别垃圾箱开门的方法
CN107665398A (zh) * 2017-09-06 2018-02-06 安徽乐金环境科技有限公司 空气净化器的净化中心调整定位方法
CN107610766A (zh) * 2017-09-12 2018-01-19 合肥矽智科技有限公司 一种病房医护机器人物联网系统
CN107633644A (zh) * 2017-09-30 2018-01-26 河南职业技术学院 一种基于计算机的安防系统
CN109719736B (zh) * 2017-10-31 2024-03-26 科沃斯机器人股份有限公司 自移动机器人及其控制方法
CN107864078A (zh) * 2017-11-10 2018-03-30 刘永新 一种可移动遥控家居装置
CN108182379B (zh) * 2017-11-28 2020-06-16 珠海格力电器股份有限公司 家电设备的防盗追踪方法、装置和系统
CN108053606A (zh) * 2017-12-28 2018-05-18 深圳市国华光电科技有限公司 一种智能家居防盗系统
CN108132606A (zh) * 2018-02-02 2018-06-08 宁夏慧百通赢科技有限公司 基于无线传输的家居控制方法及装置
CN108363490A (zh) * 2018-03-01 2018-08-03 深圳大图科创技术开发有限公司 一种交互效果良好的智能机器人系统
CN108237536A (zh) * 2018-03-16 2018-07-03 重庆鲁班机器人技术研究院有限公司 机器人控制系统
CN108765921A (zh) * 2018-04-04 2018-11-06 昆山市工研院智能制造技术有限公司 基于视觉语意分析应用于巡逻机器人的智能巡逻方法
CN108806013A (zh) * 2018-04-04 2018-11-13 昆山市工研院智能制造技术有限公司 巡逻机器人生态系统
CN108527382A (zh) * 2018-04-09 2018-09-14 上海方立数码科技有限公司 一种巡检机器人
CN108724178B (zh) * 2018-04-13 2022-03-29 顺丰科技有限公司 特定人自主跟随方法及装置、机器人、设备和存储介质
CN112352244B (zh) * 2018-04-23 2024-04-09 尚科宁家运营有限公司 控制系统和更新存储器中的地图的方法
CN108551355A (zh) * 2018-04-23 2018-09-18 王宏伟 一种安防巡逻用机器人
CN108712404B (zh) * 2018-05-04 2020-11-06 重庆邮电大学 一种基于机器学习的物联网入侵检测方法
CN108748143A (zh) * 2018-05-04 2018-11-06 安徽三弟电子科技有限责任公司 一种基于物联网的生活提示机器人控制系统
CN108960109B (zh) * 2018-06-26 2020-01-21 哈尔滨拓博科技有限公司 一种基于两个单目摄像头的空间手势定位装置及定位方法
CN108874142B (zh) * 2018-06-26 2019-08-06 哈尔滨拓博科技有限公司 一种基于手势的无线智能控制装置及其控制方法
CN108921218B (zh) * 2018-06-29 2022-06-24 炬大科技有限公司 一种目标物体检测方法及装置
CN108806142A (zh) * 2018-06-29 2018-11-13 炬大科技有限公司 一种无人安保系统,方法及扫地机器人
CN109003262B (zh) * 2018-06-29 2022-06-21 炬大科技有限公司 顽固污渍清洁方法及装置
CN108898108B (zh) * 2018-06-29 2022-04-26 炬大科技有限公司 一种基于扫地机器人的用户异常行为监测系统及方法
CN108574804A (zh) * 2018-07-04 2018-09-25 珠海市微半导体有限公司 一种用于视觉机器人的光源补偿系统及方法
CN109118703A (zh) * 2018-07-19 2019-01-01 苏州菲丽丝智能科技有限公司 一种智能家居安防系统及其工作方法
CN109190456B (zh) * 2018-07-19 2020-11-20 中国人民解放军战略支援部队信息工程大学 基于聚合通道特征和灰度共生矩阵的多特征融合俯视行人检测方法
CN109005432A (zh) * 2018-07-24 2018-12-14 上海常仁信息科技有限公司 一种基于健康机器人的网络电视系统
CN108919809A (zh) * 2018-07-25 2018-11-30 智慧式控股有限公司 智慧式安保机器人及商业模式
CN109117055A (zh) * 2018-07-26 2019-01-01 深圳市商汤科技有限公司 智能终端及控制方法
CN109257563A (zh) * 2018-08-30 2019-01-22 浙江祥生建设工程有限公司 工地远程监控系统
CN109191768A (zh) * 2018-09-10 2019-01-11 天津大学 一种基于深度学习的家庭成员安全隐患监测方法
CN109445427A (zh) * 2018-09-26 2019-03-08 北京洪泰同创信息技术有限公司 智能家具、家具定位装置及家具定位系统
CN109147277A (zh) * 2018-09-30 2019-01-04 桂林海威科技股份有限公司 一种老人看护系统及方法
CN109333548A (zh) * 2018-10-18 2019-02-15 何勇 一种具有智力培训功能的智能服务聊天机器人
CN109634129B (zh) * 2018-11-02 2022-07-01 深圳慧安康科技有限公司 主动关怀的实现方法、系统及装置
CN109691090A (zh) * 2018-12-05 2019-04-26 珊口(深圳)智能科技有限公司 移动目标的监控方法、装置、监控系统及移动机器人
CN109739097A (zh) * 2018-12-14 2019-05-10 武汉城市职业学院 一种基于嵌入式web的智能家居机器人及其用途
CN109740461B (zh) * 2018-12-21 2020-12-25 北京智行者科技有限公司 目标跟随后的处理方法
CN109547771A (zh) * 2019-01-07 2019-03-29 中国人民大学 一种具备裸眼3d显示装置的家用智能机器人
CN109800802A (zh) * 2019-01-10 2019-05-24 深圳绿米联创科技有限公司 视觉传感器及应用于视觉传感器的物体检测方法和装置
JP7358051B2 (ja) * 2019-01-28 2023-10-10 株式会社日立製作所 移動型空気清浄機
CN110164538B (zh) * 2019-01-29 2024-06-18 浙江瑞华康源科技有限公司 一种医用物流系统及方法
CN109887515B (zh) * 2019-01-29 2021-07-09 北京市商汤科技开发有限公司 音频处理方法及装置、电子设备和存储介质
CN109917666B (zh) * 2019-03-28 2023-03-24 深圳慧安康科技有限公司 智慧家庭的实现方法及智能装置
CN109993945A (zh) * 2019-04-04 2019-07-09 清华大学 用于渐冻症患者监护的报警系统及报警方法
CN110161903B (zh) * 2019-05-05 2022-02-22 宁波财经学院 一种智能家居机器人及智能家居机器人的控制方法
CN110162044A (zh) * 2019-05-18 2019-08-23 珠海格力电器股份有限公司 一种自动无线充电装置及充电方法
CN110209483A (zh) * 2019-05-28 2019-09-06 福州瑞芯微电子股份有限公司 扫地机控制系统及控制方法、存储介质及控制终端
CN110765895A (zh) * 2019-09-30 2020-02-07 北京鲲鹏神通科技有限公司 一种机器人辨别物体方法
CN110891352B (zh) * 2019-11-26 2021-09-28 珠海格力电器股份有限公司 一种用于智能灯的控制方法及控制系统
CN111491004A (zh) * 2019-11-28 2020-08-04 赵丽侠 基于云存储的信息更新方法
CN111464776A (zh) * 2020-01-19 2020-07-28 浙江工贸职业技术学院 一种物联网安全报警设备及考核方法
CN111322718A (zh) * 2020-03-16 2020-06-23 北京云迹科技有限公司 一种数据处理方法和送货机器人
CN111300429A (zh) * 2020-03-25 2020-06-19 深圳市天博智科技有限公司 机器人控制系统、方法及可读存储介质
CN111428666A (zh) * 2020-03-31 2020-07-17 齐鲁工业大学 基于快速人脸检测的智能家庭陪伴机器人系统及方法
CN111508184A (zh) * 2020-04-10 2020-08-07 扬州大学 一种建筑物内智能防火系统
CN113571054B (zh) * 2020-04-28 2023-08-15 中国移动通信集团浙江有限公司 语音识别信号预处理方法、装置、设备及计算机存储介质
CN111611904B (zh) * 2020-05-15 2023-12-01 新石器慧通(北京)科技有限公司 基于无人车行驶过程中的动态目标识别方法
CN111618856B (zh) * 2020-05-27 2021-11-05 山东交通学院 基于视觉兴奋点的机器人控制方法、系统及机器人
CN113822095B (zh) * 2020-06-02 2024-01-12 苏州科瓴精密机械科技有限公司 基于图像识别工作位置的方法、系统,机器人及存储介质
CN111862524B (zh) * 2020-07-10 2022-08-05 广州博冠智能科技有限公司 一种基于智能家居系统的监控报警方法及装置
CN111898524A (zh) * 2020-07-29 2020-11-06 江苏艾什顿科技有限公司 一种5g边缘计算网关及其应用
CN111915851A (zh) * 2020-08-11 2020-11-10 山西应用科技学院 一种燃气智能开关
CN111964154B (zh) * 2020-08-28 2021-09-21 邯郸美的制冷设备有限公司 空调器室内机、控制方法、运行控制装置及空调器
CN112101145B (zh) * 2020-08-28 2022-05-17 西北工业大学 基于svm分类器的移动机器人位姿估计方法
CN113035374B (zh) * 2021-03-16 2024-03-12 深圳市南山区慢性病防治院 一种结核病综合管理系统及管理方法
CN113119118A (zh) * 2021-03-24 2021-07-16 智能移动机器人(中山)研究院 一种智能室内巡检机器人系统
CN113143165A (zh) * 2021-04-26 2021-07-23 上海甄徽网络科技发展有限公司 一种具有消毒功能的智能安防家居机器人
CN113177972A (zh) * 2021-05-20 2021-07-27 杭州华橙软件技术有限公司 一种对象跟踪方法、装置、存储介质及电子装置
CN113362563B (zh) * 2021-06-03 2022-07-15 国网北京市电力公司 电力隧道异常情况的确定方法和装置
CN113341812A (zh) * 2021-06-11 2021-09-03 深圳风角智能科技有限公司 一种环保蓄电式物联网终端能耗省电管理系统和方法
CN114018253B (zh) * 2021-10-25 2024-05-03 珠海一微半导体股份有限公司 具有视觉定位功能的机器人及定位方法
CN117064255A (zh) * 2022-05-10 2023-11-17 神顶科技(南京)有限公司 扫地机器人和扫地机器人识别跌倒的方法

Family Cites Families (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2010176177A (ja) * 2009-01-27 2010-08-12 Panasonic Electric Works Co Ltd Load control system
CN101947788A (zh) * 2010-06-23 2011-01-19 焦利民 Intelligent robot
CN203259876U (zh) * 2013-03-01 2013-10-30 李冀 Smart home control system based on a mobile robot
CN103198605A (zh) * 2013-03-11 2013-07-10 成都百威讯科技有限责任公司 Indoor sudden abnormal event alarm system
CN103273982B (zh) * 2013-04-27 2018-01-19 深圳市英倍达科技有限公司 Multifunctional all-terrain bionic robot
CN103593680B (zh) * 2013-11-19 2016-09-14 南京大学 Dynamic gesture recognition method based on self-incremental learning of hidden Markov models

Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2000222576A (ja) * 1999-01-29 2000-08-11 Nec Corp Person identification method and device, recording medium storing a person identification program, and robot device
WO2005098729A2 (fr) * 2004-03-27 2005-10-20 Harvey Koselka Autonomous domestic robot
CN101957194A (zh) * 2009-07-16 2011-01-26 北京石油化工学院 Embedded fast visual positioning and remote monitoring system and method for a mobile robot
CN101786272A (zh) * 2010-01-05 2010-07-28 深圳先进技术研究院 Multi-sensing robot for intelligent home monitoring services
CN103419203A (zh) * 2012-05-21 2013-12-04 李坚 All-day household robot

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
LI, LIN ET AL.: "Human detection based on extended histogram of oriented gradient and multi-scale detection", JOURNAL OF COMPUTER APPLICATIONS, vol. S2, 31 December 2012 (2012-12-31) *
LIANG, CHENHUA ET AL.: "Human Recognition Based on Random Forest Classifier of HOG", VIDEO ENGINEERING, vol. 37, no. 15, 31 August 2013 (2013-08-31) *

Cited By (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN105835069A (zh) * 2016-06-06 2016-08-10 李志华 Intelligent household healthcare robot
CN106346491A (zh) * 2016-10-25 2017-01-25 塔米智能科技(北京)有限公司 Intelligent member service robot system based on facial information
CN109889772A (zh) * 2017-12-06 2019-06-14 东莞华南设计创新院 Monitoring system for smart toys
TWI713947B (zh) * 2018-03-01 2020-12-21 日商歐姆龍股份有限公司 Determination device and control method of determination device
US11614525B2 (en) 2018-03-01 2023-03-28 Omron Corporation Determination device and control method of determination device
US11188810B2 (en) 2018-06-26 2021-11-30 At&T Intellectual Property I, L.P. Integrated assistance platform

Also Published As

Publication number Publication date
CN103984315A (zh) 2014-08-13

Similar Documents

Publication Publication Date Title
WO2015172445A1 (fr) Multifunctional domestic intelligent robot
CN103839373B (zh) Intelligent recognition and alarm device and alarm system for sudden abnormal events
CN103839346B (zh) Intelligent door and window anti-intrusion device and system, and intelligent access control system
US10475311B2 (en) Dynamic assessment using an audio/video recording and communication device
Shojaei-Hashemi et al. Video-based human fall detection in smart homes using deep learning
US10963681B2 (en) Face concealment detection
US20190347916A1 (en) Electronic devices capable of communicating over multiple networks
Charfi et al. Optimized spatio-temporal descriptors for real-time fall detection: comparison of support vector machine and Adaboost-based classification
Planinc et al. Introducing the use of depth data for fall detection
CN110291489A (zh) Computationally efficient human-identifying smart assistant computer
US11341825B1 (en) Implementing deterrent protocols in response to detected security events
CN106327738B (zh) Intelligent hierarchical monitoring system
Shoaib et al. View-invariant fall detection for elderly in real home environment
US10943442B1 (en) Customized notifications based on device characteristics
US11164435B1 (en) Audio/video recording and communication doorbell devices with supercapacitors
WO2017098265A1 (fr) Method and apparatus for monitoring
US10733857B1 (en) Automatic alteration of the storage duration of a video
US10791607B1 (en) Configuring and controlling light emitters
US20240184868A1 (en) Reference image enrollment and evolution for security systems
JPH08257017A (ja) Condition monitoring device and method
Yun et al. Recognition of emergency situations using audio–visual perception sensor network for ambient assistive living
US11550276B1 (en) Activity classification based on multi-sensor input
US10834366B1 (en) Audio/video recording and communication doorbell devices with power control circuitry
US11032762B1 (en) Saving power by spoofing a device
US12014611B1 (en) Temporal motion zones for audio/video recording devices

Legal Events

Date Code Title Description
121 Ep: the EPO has been informed by WIPO that EP was designated in this application

Ref document number: 14891813

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

32PN Ep: public notification in the EP Bulletin as the address of the addressee cannot be established

Free format text: NOTING OF LOSS OF RIGHTS PURSUANT TO RULE 112(1) EPC (EPO FORM 1205N DATED 20-01-2017)

122 Ep: PCT application non-entry in the European phase

Ref document number: 14891813

Country of ref document: EP

Kind code of ref document: A1