CN110765895A - Method for distinguishing object by robot

Method for distinguishing object by robot

Info

Publication number
CN110765895A
Authority
CN
China
Prior art keywords
robot
image
database
objects
distinguishing
Prior art date
Legal status
Pending
Application number
CN201910947267.4A
Other languages
Chinese (zh)
Inventor
赵志鹏
Current Assignee
Beijing Roc Theurgy Technology Co Ltd
Original Assignee
Beijing Roc Theurgy Technology Co Ltd
Priority date
Filing date
Publication date
Application filed by Beijing Roc Theurgy Technology Co Ltd filed Critical Beijing Roc Theurgy Technology Co Ltd
Priority to CN201910947267.4A
Publication of CN110765895A

Classifications

    • G06V20/10 Scenes; Scene-specific elements; Terrestrial scenes
    • G06F18/22 Pattern recognition; Analysing; Matching criteria, e.g. proximity measures
    • G06F18/24 Pattern recognition; Analysing; Classification techniques
    • G06N20/00 Machine learning

Abstract

The invention provides a method for distinguishing an object by a robot, which comprises the following steps: the robot captures image data of objects in the environment with an installed camera while moving; the robot filters the image data to remove the background, extracts the object image, and then compares and matches the processed object image with object templates in a built-in database; if the matching is successful, the robot determines the name and function of the object from the comparison result and generates an object identification result; and the robot sends the identification result to a bound upper computer or terminal device for the user to view. The method adopts real-time machine learning and comparison, distinguishing various objects through machine memory learning with high efficiency and high accuracy. By adding background human-assisted real-time comparison to the machine learning, the robot finally achieves intelligent object identification, ensuring that no object is missed and covering a wide range of identification fields.

Description

Method for distinguishing object by robot
Technical Field
The invention relates to the technical field of robots, in particular to a method for distinguishing objects by a robot.
Background
Existing object detection and comparison still relies on traditional manual comparison. This approach has the following problems: the comparison efficiency is low, the accuracy is poor, and during long working hours personnel easily become fatigued, which further increases the error rate.
In addition, robots are developing rapidly, and machine-learning-based comparison is a future development trend. How to apply machine learning to object identification is a technical problem that remains to be solved in the object identification field.
Disclosure of Invention
The object of the present invention is to solve at least one of the technical drawbacks mentioned above.
Therefore, the invention aims to provide a method for identifying an object by a robot.
In order to achieve the above object, an embodiment of the present invention provides a method for distinguishing an object by a robot, comprising the following steps:
step S1, the robot captures the image data of the objects in the environment during the moving process by using the installed camera;
step S2, the robot filters the image data, removes the background image, extracts the object image, and then compares and matches the processed object image with the object template in the built-in database; wherein, the database stores standard images of various objects and corresponding function descriptions;
step S3, if the matching is successful, the robot judges the name and the function of the object according to the comparison result and generates an object identification result;
and step S4, the robot sends the identification result to a bound upper computer or terminal device for the user to view.
Further, the camera is a long-range high-definition camera, and its capture angles cover the front-back, left-right and up-down directions.
Further, in step S3, if the matching is unsuccessful, the robot sends the object image to a remote server via remote wireless communication and presents it to a human operator for viewing; the object is identified with manual assistance, the identification result is entered into the remote server through a keyboard, and the remote server further forwards it to the bound upper computer or terminal device for the user to view.
Further, the robot adopts a 4G/5G mobile communication mode or a WIFI communication mode.
Further, the built-in database of the robot is updated in real time: the robot receives object data updated by the human operator, learns the object data set and stores it in the database for subsequent identification.
Furthermore, a walking route is preset in the robot; using a built-in positioning module, the robot navigates according to the preset route and the set destination, and when it deviates from the route it issues an alarm notification so that the route can be corrected in time.
Further, the robot uses an infrared sensor to detect and identify objects encountered on the travelling route; when an obstacle is found on the route, the robot automatically bypasses it and returns to the preset route afterwards.
Furthermore, the database of the robot stores the main category, sub-category, application field and function description of each object in the form of a classification table, so that when the acquired image data of an object is compared with the template data in the database, the category of the object can be located quickly once its name has been identified.
Further, in step S1, the robot captures a still image and a motion video of the object at the same time, recognizes the name of the object from the still image, recognizes the motion trajectory of the object from the video, and provides them to the user for reference.
Further, in step S4, the robot further receives the user's correction feedback on the identification result as well as tracking instructions:
when the robot receives correction feedback, it corrects the identification result to the object name provided in the user's feedback;
and when the robot receives a tracking instruction, it tracks and captures images of the object specified in that instruction.
According to the method for distinguishing an object by a robot provided by the embodiment of the invention, the robot captures image data of objects in the environment, processes the image data to extract an accurate object image, and then identifies the name of the object through machine-learning comparison supplemented by human-assisted comparison. Compared with the traditional manual mode, distinguishing objects by machine allows faster, more efficient data processing with higher accuracy. The invention adopts real-time machine learning and comparison, and uses machine memory learning to distinguish various objects with high efficiency and high precision. By adding background human real-time auxiliary comparison to the machine learning, the robot finally achieves intelligent object identification, ensuring that no object is missed and covering a wide range of identification fields. When a case is too difficult for the machine to identify, manual identification and manual processing are used, so omissions are avoided.
Additional aspects and advantages of the invention will be set forth in part in the description which follows and, in part, will be obvious from the description, or may be learned by practice of the invention.
Drawings
The above and/or additional aspects and advantages of the present invention will become apparent and readily appreciated from the following description of the embodiments, taken in conjunction with the accompanying drawings of which:
fig. 1 is a flow chart of a method of identifying an object by a robot according to an embodiment of the invention;
fig. 2 is a schematic diagram of a method for identifying an object by a robot according to an embodiment of the invention.
Detailed Description
Reference will now be made in detail to embodiments of the present invention, examples of which are illustrated in the accompanying drawings, wherein like or similar reference numerals refer to the same or similar elements or elements having the same or similar function throughout. The embodiments described below with reference to the drawings are illustrative and intended to be illustrative of the invention and are not to be construed as limiting the invention.
As shown in fig. 1 and fig. 2, a method for distinguishing an object by a robot according to an embodiment of the present invention includes the following steps:
In step S1, the robot captures image data of objects within the environment while moving, using the mounted camera.
In this step, the camera is a long-range high-definition camera, and its capture angles cover the front-back, left-right and up-down directions. For example, the camera can be installed on top of the robot and can rotate 360 degrees as well as tilt up and down, so that it covers all directions.
In an embodiment of the present invention, the robot captures a still image and a motion video of the object at the same time, recognizes the name of the object from the still image, recognizes the motion trajectory of the object from the video, and provides them to the user for reference.
Specifically, the captured still image is useful for accurately analyzing the overall characteristics of the object, while the captured video helps analyze the movement trajectory of the object, which in turn helps analyze its functional role.
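As an illustrative sketch only (not part of the patent disclosure), the trajectory analysis described above could be approximated by locating the moving object's centroid in each video frame. The sketch assumes Python with OpenCV 4; the `extract_trajectory` name and the `min_area` threshold are invented for illustration.

```python
import cv2

def extract_trajectory(video_path, min_area=500):
    """Track the centroid of the largest moving region frame by frame.

    Illustrative sketch only; a real system would use a dedicated tracker.
    """
    capture = cv2.VideoCapture(video_path)
    subtractor = cv2.createBackgroundSubtractorMOG2()
    trajectory = []
    while True:
        ok, frame = capture.read()
        if not ok:
            break
        mask = subtractor.apply(frame)
        contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL,
                                       cv2.CHAIN_APPROX_SIMPLE)
        contours = [c for c in contours if cv2.contourArea(c) > min_area]
        if contours:
            largest = max(contours, key=cv2.contourArea)
            m = cv2.moments(largest)
            if m["m00"] > 0:
                trajectory.append((m["m10"] / m["m00"], m["m01"] / m["m00"]))
    capture.release()
    return trajectory  # list of (x, y) centroids, one per frame with motion
```

The resulting list of centroids is one simple stand-in for the "motion trajectory" that the description says is presented to the user.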
Step S2, the robot filters the image data, removes the background image, extracts the object image, and then compares and matches the processed object image with the object template in the built-in database; wherein the database stores standard images of various objects and corresponding functional descriptions.
Specifically, the robot filters the image data mainly to remove the invalid background image and retain the useful object image, which benefits subsequent image analysis and comparison.
It should be noted that the built-in database of the robot is updated in real time: the robot receives object data updated by the human operator, learns the object data set and stores it in the database for subsequent identification.
In the embodiment of the invention, the database of the robot stores the main category, sub-category, application field and function description of each object in the form of a classification table; when the acquired image data of an object is compared with the template data in the database, the category of the object can be located quickly once its name has been identified.
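A minimal sketch of this filter-then-match step, assuming Python with OpenCV and simple normalized cross-correlation against stored template images; the helper names, the availability of a background frame, and the fixed difference threshold are illustrative assumptions rather than the implementation specified in the patent.

```python
import cv2

def isolate_object(frame, background):
    """Remove the static background and keep the object region.

    Assumes a background frame of the same scene, captured with no object present.
    """
    diff = cv2.absdiff(frame, background)
    gray = cv2.cvtColor(diff, cv2.COLOR_BGR2GRAY)
    _, mask = cv2.threshold(gray, 30, 255, cv2.THRESH_BINARY)
    return cv2.bitwise_and(frame, frame, mask=mask)

def match_against_database(object_image, template_db):
    """Compare the extracted object image with each stored template image.

    template_db: dict mapping object name -> grayscale template (numpy array).
    Returns (best_name, best_score); the caller decides whether the score
    counts as a successful match.
    """
    query = cv2.cvtColor(object_image, cv2.COLOR_BGR2GRAY)
    best_name, best_score = None, -1.0
    for name, template in template_db.items():
        if (template.shape[0] > query.shape[0]
                or template.shape[1] > query.shape[1]):
            continue  # the template must fit inside the query image
        result = cv2.matchTemplate(query, template, cv2.TM_CCOEFF_NORMED)
        _, score, _, _ = cv2.minMaxLoc(result)
        if score > best_score:
            best_name, best_score = name, score
    return best_name, best_score
```

Whether the returned best score counts as a successful match in step S3 would be decided against an acceptance threshold chosen by the implementer.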
For example: main category: food; sub-category: beverage; object: Coca-Cola.
When the robot detects a Coca-Cola, it can quickly categorize the object as a beverage (a kind of food) whose function is to be drunk by people.
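The classification table and the real-time database update described above could be represented, in a purely hypothetical sketch, as a keyed record store; the field names mirror the text, while `ObjectRecord`, `ObjectDatabase` and the example values are assumptions for illustration.

```python
from dataclasses import dataclass
from typing import Dict, Optional

@dataclass
class ObjectRecord:
    name: str
    main_category: str     # e.g. "food"
    sub_category: str      # e.g. "beverage"
    application_field: str
    function: str

class ObjectDatabase:
    """Classification-table database: object name -> record, updatable at run time."""

    def __init__(self) -> None:
        self._records: Dict[str, ObjectRecord] = {}

    def update(self, record: ObjectRecord) -> None:
        # Insert or overwrite an entry, e.g. with operator-supplied data.
        self._records[record.name] = record

    def lookup(self, name: str) -> Optional[ObjectRecord]:
        return self._records.get(name)

db = ObjectDatabase()
db.update(ObjectRecord("Coca-Cola", "food", "beverage",
                       "daily consumption", "can be drunk by people"))
print(db.lookup("Coca-Cola").sub_category)  # prints "beverage"
```

Once the name has been matched, a single lookup like this is enough to recover the category and function description mentioned in the text.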
In step S3, if the matching is successful, the robot determines the name and function of the object from the comparison result, and generates an object identification result.
In addition, if the matching is unsuccessful, the robot sends the object image to a remote server via remote wireless communication and presents it to a human operator for viewing; the object is identified with manual assistance, the identification result is entered into the remote server through a keyboard, and the remote server further forwards it to the bound upper computer or terminal device for the user to view.
In the embodiment of the invention, the remote server may be a central computer or a computer server.
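For illustration, the escalation path for an unsuccessful match might look like the following sketch; the server URL, the JSON payload layout and the `identify_or_escalate` helper are invented placeholders, `match_against_database` is the sketch shown earlier, and the `requests` library is assumed to be available.

```python
import base64
import cv2
import requests

REMOTE_SERVER_URL = "https://example-review-server/api/identify"  # placeholder only

def identify_or_escalate(object_image, template_db, threshold=0.8):
    """Try the local database first; fall back to human-assisted review."""
    name, score = match_against_database(object_image, template_db)
    if score >= threshold:
        return {"name": name, "source": "database", "score": score}
    # Matching failed: send the image to the remote server so that a human
    # operator can view it, identify the object and key in the result.
    ok, encoded = cv2.imencode(".jpg", object_image)
    payload = {"image_jpeg_base64": base64.b64encode(encoded.tobytes()).decode()}
    response = requests.post(REMOTE_SERVER_URL, json=payload, timeout=30)
    return {"name": response.json().get("name"), "source": "human_operator"}
```

The threshold value of 0.8 is likewise an assumption; the patent only distinguishes successful from unsuccessful matching.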
In step S4, the robot sends the identification result to the bound upper computer or terminal device for the user to view.
It should be noted that both the remote server and the upper computer are equipped with high-speed 5G wireless modules, each of which includes a transmitting unit and a receiving unit for remote data transmission.
In this step, the robot further receives the user's correction feedback on the identification result as well as tracking instructions. For example, when the user receives the identification result and judges it to be wrong, the user sends correction feedback containing the object name the user believes is correct. In addition, if after receiving the identification result the user considers the object important and in need of tracking, the user sends a tracking instruction.
When the robot receives correction feedback, it corrects the identification result to the object name provided in the user's feedback;
when the robot receives a tracking instruction, it tracks and captures images of the object specified in that instruction.
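A hypothetical dispatcher for these two kinds of user feedback is sketched below; the message format and the tracker interface are assumptions, not part of the patent.

```python
def handle_user_feedback(message, results, tracker):
    """Apply a correction or start tracking, depending on the feedback type.

    message: dict such as {"type": "correction", "result_id": 3, "name": "kettle"}
             or {"type": "track", "result_id": 3} (assumed format).
    results: dict mapping result_id -> identification result (mutable).
    tracker: object exposing start_tracking(object_image) (assumed interface).
    """
    result = results[message["result_id"]]
    if message["type"] == "correction":
        # Overwrite the machine result with the name supplied by the user.
        result["name"] = message["name"]
        result["source"] = "user_correction"
    elif message["type"] == "track":
        # Begin continuous tracking and image capture of this object.
        tracker.start_tracking(result["object_image"])
```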
In the embodiment of the invention, the robot adopts a 4G/5G mobile communication mode or a WIFI communication mode.
In addition, a walking route is preset in the robot; using a built-in positioning module, the robot navigates according to the preset route and the set destination, and issues an alarm notification when it deviates from the route so that the route can be corrected in time.
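One possible way to implement the deviation alarm, shown only as a sketch: measure the distance from the current position to the nearest segment of the preset route and raise an alarm when it exceeds a tolerance. The 0.5 m tolerance and the helper names are assumptions.

```python
import math

DEVIATION_THRESHOLD_M = 0.5  # assumed tolerance, not specified in the patent

def distance_to_segment(p, a, b):
    """Shortest distance from point p to segment a-b (2-D coordinates)."""
    (px, py), (ax, ay), (bx, by) = p, a, b
    dx, dy = bx - ax, by - ay
    if dx == 0 and dy == 0:
        return math.hypot(px - ax, py - ay)
    t = max(0.0, min(1.0, ((px - ax) * dx + (py - ay) * dy) / (dx * dx + dy * dy)))
    return math.hypot(px - (ax + t * dx), py - (ay + t * dy))

def check_route_deviation(position, route):
    """route: ordered list of (x, y) waypoints. Returns True if an alarm is raised."""
    nearest = min(distance_to_segment(position, route[i], route[i + 1])
                  for i in range(len(route) - 1))
    if nearest > DEVIATION_THRESHOLD_M:
        print("ALARM: deviated %.2f m from the preset route" % nearest)
        return True
    return False
```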
The robot uses an infrared sensor to detect and identify objects encountered on the travelling route; when an obstacle is found on the route, it automatically bypasses the obstacle and then returns to the preset travelling route.
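The obstacle-avoidance behaviour could be sketched as a single control step like the one below; the infrared reading, the motion primitives and the 0.3 m trigger distance are assumed interfaces and values, not anything specified in the patent.

```python
OBSTACLE_DISTANCE_M = 0.3  # assumed trigger distance for the infrared sensor

def step_along_route(robot, route):
    """One control step: bypass an obstacle if present, otherwise follow the route.

    robot is assumed to expose infrared_distance(), turn(), move_forward()
    and navigate_to(waypoint); route is assumed to expose next_waypoint().
    These are hypothetical interfaces used only for illustration.
    """
    if robot.infrared_distance() < OBSTACLE_DISTANCE_M:
        # Obstacle detected: simple bypass manoeuvre, then rejoin the route.
        robot.turn(degrees=45)
        robot.move_forward(meters=0.5)
        robot.turn(degrees=-45)
    robot.navigate_to(route.next_waypoint())
```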
According to the method for distinguishing an object by a robot provided by the embodiment of the invention, the robot captures image data of objects in the environment, processes the image data to extract an accurate object image, and then identifies the name of the object through machine-learning comparison supplemented by human-assisted comparison. Compared with the traditional manual mode, distinguishing objects by machine allows faster, more efficient data processing with higher accuracy. The invention adopts real-time machine learning and comparison, and uses machine memory learning to distinguish various objects with high efficiency and high precision. By adding background human real-time auxiliary comparison to the machine learning, the robot finally achieves intelligent object identification, ensuring that no object is missed and covering a wide range of identification fields. When a case is too difficult for the machine to identify, manual identification and manual processing are used, so omissions are avoided.
In the description herein, references to the description of the term "one embodiment," "some embodiments," "an example," "a specific example," or "some examples," etc., mean that a particular feature, structure, material, or characteristic described in connection with the embodiment or example is included in at least one embodiment or example of the invention. In this specification, the schematic representations of the terms used above do not necessarily refer to the same embodiment or example. Furthermore, the particular features, structures, materials, or characteristics described may be combined in any suitable manner in any one or more embodiments or examples.
Although embodiments of the present invention have been shown and described above, it is understood that the above embodiments are exemplary and should not be construed as limiting the present invention, and that variations, modifications, substitutions and alterations can be made in the above embodiments by those of ordinary skill in the art without departing from the principle and spirit of the present invention. The scope of the invention is defined by the appended claims and equivalents thereof.

Claims (10)

1. A method for identifying objects by a robot, comprising the steps of:
step S1, the robot captures the image data of the objects in the environment during the moving process by using the installed camera;
step S2, the robot filters the image data, removes the background image, extracts the object image, and then compares and matches the processed object image with the object template in the built-in database; wherein, the database stores standard images of various objects and corresponding function descriptions;
step S3, if the matching is successful, the robot judges the name and the function of the object according to the comparison result and generates an object identification result;
and step S4, the robot sends the identification result to a bound upper computer or terminal device for a user to view.
2. The method for identifying an object by a robot according to claim 1, wherein the camera is a long-range high-definition camera, and the capture angles include the front-back, left-right and up-down directions.
3. The method for identifying an object by a robot according to claim 1, wherein in step S3, if the matching is unsuccessful, the robot sends the image of the object to a remote server via remote wireless communication and presents it to a human operator for viewing; the object is identified with human assistance, the identification result is entered into the remote server via a keyboard, and the identification result is further sent by the remote server to a bound upper computer or terminal device for viewing by a user.
4. The method for identifying an object by a robot according to claim 3, wherein the robot employs a 4G/5G mobile communication mode or a WIFI communication mode.
5. The method for identifying an object by a robot according to claim 1, wherein the built-in database of the robot is updated in real time, wherein the robot receives the object data updated by the human operator, learns the object data set and stores it in the database for subsequent use in identification.
6. The method for identifying an object by a robot according to claim 1, wherein a walking route is preset in the robot; the robot navigates according to the preset walking route and a set destination using a built-in positioning module, and issues an alarm notification to correct the route in time when it deviates from the walking route.
7. The method for identifying an object by a robot according to claim 6, wherein the robot uses an infrared sensor to detect and identify objects encountered on the travel route, automatically bypasses an obstacle when it finds one on the travel route, and automatically returns to the preset travel route after bypassing the obstacle.
8. The method for identifying an object by a robot according to claim 1, wherein the database of the robot stores the main categories, sub-categories, application fields and function descriptions of the respective objects in the form of a classification table, and when the acquired image data of an object is compared with the template data in the database, the category of the object is quickly located after its name has been identified.
9. The method for identifying an object by a robot according to claim 1, wherein in step S1, the robot simultaneously captures a still image and a motion video of the object, recognizes the name of the object from the still image, recognizes the motion trajectory of the object from the video, and provides them to the user for reference.
10. The method for identifying an object by a robot according to claim 1, wherein in step S4, the robot further receives the user's correction feedback on the identification result and a tracking instruction;
when the robot receives the correction feedback, the identification result is corrected to the object name provided in the user's feedback;
and when the robot receives a tracking instruction, the robot tracks and captures images of the object specified in the tracking instruction.
CN201910947267.4A 2019-09-30 2019-09-30 Method for distinguishing object by robot Pending CN110765895A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201910947267.4A CN110765895A (en) 2019-09-30 2019-09-30 Method for distinguishing object by robot

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201910947267.4A CN110765895A (en) 2019-09-30 2019-09-30 Method for distinguishing object by robot

Publications (1)

Publication Number Publication Date
CN110765895A true CN110765895A (en) 2020-02-07

Family

ID=69330973

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201910947267.4A Pending CN110765895A (en) 2019-09-30 2019-09-30 Method for distinguishing object by robot

Country Status (1)

Country Link
CN (1) CN110765895A (en)

Patent Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103984315A (en) * 2014-05-15 2014-08-13 成都百威讯科技有限责任公司 Domestic multifunctional intelligent robot
US20160203391A1 (en) * 2014-05-20 2016-07-14 International Business Machines Corporation Information Technology Asset Type Identification Using a Mobile Vision-Enabled Robot
CN106926247A (en) * 2017-01-16 2017-07-07 深圳前海勇艺达机器人有限公司 With the robot looked for something in automatic family
CN108647588A (en) * 2018-04-24 2018-10-12 广州绿怡信息科技有限公司 Goods categories recognition methods, device, computer equipment and storage medium
CN109345733A (en) * 2018-09-07 2019-02-15 杭州物宜网络科技有限公司 The pricing method and system of intelligent scale
CN110232326A (en) * 2019-05-20 2019-09-13 平安科技(深圳)有限公司 A kind of D object recognition method, device and storage medium

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114532923A (en) * 2022-02-11 2022-05-27 珠海格力电器股份有限公司 Health detection method and device, sweeping robot and storage medium
CN114532923B (en) * 2022-02-11 2023-09-12 珠海格力电器股份有限公司 Health detection method and device, sweeping robot and storage medium

Similar Documents

Publication Publication Date Title
CN109887040B (en) Moving target active sensing method and system for video monitoring
CN111496770B (en) Intelligent carrying mechanical arm system based on 3D vision and deep learning and use method
JP3927980B2 (en) Object detection apparatus, object detection server, and object detection method
CN110135249B (en) Human behavior identification method based on time attention mechanism and LSTM (least Square TM)
CN111079600A (en) Pedestrian identification method and system with multiple cameras
CN111814752B (en) Indoor positioning realization method, server, intelligent mobile device and storage medium
CN110560373B (en) Multi-robot cooperation sorting and transporting method and system
US11371851B2 (en) Method and system for determining landmarks in an environment of a vehicle
Momeni-k et al. Height estimation from a single camera view
CN110853073A (en) Method, device, equipment and system for determining attention point and information processing method
TWI759767B (en) Motion control method, equipment and storage medium of the intelligent vehicle
Kim et al. Eye-in-hand stereo visual servoing of an assistive robot arm in unstructured environments
CN112518748B (en) Automatic grabbing method and system for visual mechanical arm for moving object
CN106584516A (en) Intelligent photographing robot for tracing specified object
CN115346256A (en) Robot searching method and system
Gori et al. Robot-centric activity recognition ‘in the wild’
CN108881846B (en) Information fusion method and device and computer readable storage medium
CN101859376A (en) Fish-eye camera-based human detection system
CN110765895A (en) Method for distinguishing object by robot
CN114511592A (en) Personnel trajectory tracking method and system based on RGBD camera and BIM system
Fahn et al. A high-definition human face tracking system using the fusion of omni-directional and PTZ cameras mounted on a mobile robot
CN113158766A (en) Pedestrian behavior recognition method facing unmanned driving and based on attitude estimation
CN108074264A (en) A kind of classification multi-vision visual localization method, system and device
Naser et al. Infrastructure-free NLoS obstacle detection for autonomous cars
CN113743380A (en) Active tracking method based on video image dynamic monitoring

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
RJ01 Rejection of invention patent application after publication (application publication date: 20200207)