CN110370273A - Robot obstacle avoidance method, device and system - Google Patents


Info

Publication number
CN110370273A
CN110370273A
Authority
CN
China
Prior art keywords
robot
moment
information
sample information
sample
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN201910566538.1A
Other languages
Chinese (zh)
Other versions
CN110370273B (en)
Inventor
徐学军
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Huawei Technologies Co Ltd
Original Assignee
Huawei Technologies Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Huawei Technologies Co Ltd filed Critical Huawei Technologies Co Ltd
Priority to CN201910566538.1A priority Critical patent/CN110370273B/en
Publication of CN110370273A publication Critical patent/CN110370273A/en
Priority to PCT/CN2020/097871 priority patent/WO2020259524A1/en
Application granted granted Critical
Publication of CN110370273B publication Critical patent/CN110370273B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • B PERFORMING OPERATIONS; TRANSPORTING
    • B25 HAND TOOLS; PORTABLE POWER-DRIVEN TOOLS; MANIPULATORS
    • B25J MANIPULATORS; CHAMBERS PROVIDED WITH MANIPULATION DEVICES
    • B25J9/00 Programme-controlled manipulators
    • B25J9/16 Programme controls
    • B25J9/1656 Programme controls characterised by programming, planning systems for manipulators
    • B25J9/1664 Programme controls characterised by programming, planning systems for manipulators characterised by motion, path, trajectory planning
    • B25J9/1666 Avoiding collision or forbidden zones
    • B25J9/1679 Programme controls characterised by the tasks executed
    • B25J9/1694 Programme controls characterised by use of sensors other than normal servo-feedback from position, speed or acceleration sensors, perception control, multi-sensor controlled systems, sensor fusion
    • B25J9/1697 Vision controlled systems

Landscapes

  • Engineering & Computer Science (AREA)
  • Robotics (AREA)
  • Mechanical Engineering (AREA)
  • Manipulator (AREA)

Abstract

Embodiments of the present application disclose a robot obstacle avoidance method, device, and system, relating to the field of electronic devices, and solving the problems of high labor cost and low trained-model accuracy when predicament recognition is performed using machine learning methods. The scheme is as follows: a robot obtains sample information for N moments during its movement. The sample information for each moment includes: environment image information indicating the environment of the robot at that moment, location information indicating the robot's position at that moment, and label information indicating whether the robot is in a trapped state at that moment. The robot sends the sample information for M of the N moments to a server. The robot receives from the server a predicament recognition model trained on sample information collected by one or more robots. The robot performs obstacle avoidance during movement according to the predicament recognition model. The embodiments of the present application apply to robot movement.

Description

Robot obstacle avoidance method, device and system
Technical field
This application relates to the field of electronic devices, and in particular to a robot obstacle avoidance method, device, and system.
Background art
Robots (for example, sweeping robots) are widely used in daily life. Because a robot's work is almost entirely autonomous, it frequently encounters trapped situations in the course of its work. Currently, a robot can be trained to recognize predicaments, and thereby avoid obstacles, by means of machine learning.
Typical machine learning methods include supervised learning and reinforcement learning. In supervised learning, a large amount of sample data must be collected manually, and each sample must be manually labeled as trapped or not trapped. A supervised learning model is then trained on the labeled sample data to obtain a corresponding model, so that the robot can judge, according to the model, whether obstacle avoidance is needed.
For example, take the robot to be a sweeping robot and the sample data to be the environment information around the sweeping robot. In the environment shown in FIG. 1A, the sweeping robot is trapped by the cables of a power strip and cannot move freely, so the sample can be manually labeled as trapped. In the environment shown in FIG. 1B, the sweeping robot is not trapped by the power strip and its cables and can move freely, so the sample can be manually labeled as not trapped. In the environment shown in FIG. 2A, the sweeping robot is in a corner formed by a wall and a fixed object and cannot continue to move freely, so the sample can be manually labeled as trapped. In the environment shown in FIG. 2B, the sweeping robot is far from the wall and the fixed object and can continue to move freely, so the sample can be manually labeled as not trapped. In the environment shown in FIG. 3A, the sweeping robot is in a gap between a fixed object and the ground and cannot continue to move freely, so the sample can be manually labeled as trapped. Correspondingly, as shown in FIG. 3B, the robot is away from the fixed object and the gap between the fixed object and the ground and can move freely, so the sample can be manually labeled as not trapped. By performing supervised learning on a large number of labeled samples similar to those shown in FIGS. 1A-3B, a corresponding model can be obtained for obstacle avoidance of the sweeping robot.
Reinforcement learning refers to a method in which the robot learns by trial and error. Based on a large amount of manually collected sample data, manually labeled as trapped or not trapped, the robot determines through continuous trial and error the behavior that maximizes the reward fed back by the environment. The reward and the trial-and-error action corresponding to it are recorded as a reference for subsequent tasks.
It can be seen that a large amount of sample data is required for predicament recognition, whether supervised learning or reinforcement learning is used. At present this sample data must be collected manually, for example by searching with a web crawler. To distinguish the state of the robot corresponding to each collected sample (for example, whether it is trapped), the collected samples must be manually inspected and labeled, which requires a large investment of labor. In addition, because there is no unified standard for sample collection, the quality of the collected samples is hard to control, which leads to low labeling quality, such as labeling errors. Moreover, sample data collected by a web crawler, such as the environment information described above, mostly consists of environment pictures taken from a user's viewpoint and cannot intuitively and accurately reflect the robot's environment. All of this leads to an inaccurate trained model.
Summary of the invention
Embodiments of the present application provide a robot obstacle avoidance method, device, and system, solving the problems of high labor cost and low trained-model accuracy when predicament recognition is performed using machine learning methods.
To achieve the above objectives, the embodiments of the present application adopt the following technical solutions.
A first aspect of the embodiments of the present application provides a robot obstacle avoidance method. The method comprises: a robot obtains sample information for N moments during its movement, where N is an integer greater than 0. The sample information for each of the N moments includes: environment image information indicating the environment of the robot at that moment; location information indicating the robot's position, at that moment, in the environment indicated by the environment image information; and label information indicating the state of the robot at that moment. The label information includes a first label or a second label, the first label indicating that the robot is not trapped and the second label indicating that the robot is trapped. The robot sends the sample information for M of the N moments to a server, where M is a positive integer less than or equal to N. The robot receives from the server a predicament recognition model trained on sample information collected by one or more robots. The robot performs obstacle avoidance during movement according to the predicament recognition model.
In this way, the robot autonomously collects environment image information and location information, and derives from its own current state label information that correctly indicates whether it is trapped, guaranteeing the accuracy of the sample information while avoiding a large investment of labor. The robot sends the sample information for the N moments to the server, which trains and returns a predicament recognition model for guiding the robot's obstacle avoidance during movement. Because the accuracy of the sample information is guaranteed, the accuracy of the predicament recognition model is also greatly improved, so the robot can be guided to avoid obstacles more accurately.
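The per-moment sample record and autonomous collection loop described above can be sketched as follows. This is a minimal illustration, not the patent's implementation; the field names (`env_image`, `position`, `trapped`) and the sensor callbacks are hypothetical:

```python
from dataclasses import dataclass
from typing import Callable, List, Tuple

@dataclass
class MomentSample:
    """Sample information the robot collects autonomously at one moment."""
    env_image: bytes               # environment image information at this moment
    position: Tuple[float, float]  # robot's position in the imaged environment
    trapped: bool                  # label: True = second label (trapped),
                                   #        False = first label (not trapped)

def collect_samples(n: int,
                    read_camera: Callable[[], bytes],
                    read_odometry: Callable[[], Tuple[float, float]],
                    is_trapped: Callable[[], bool]) -> List[MomentSample]:
    """Collect sample information for N moments during movement,
    labeling each moment from the robot's own state."""
    samples = []
    for _ in range(n):
        samples.append(MomentSample(env_image=read_camera(),
                                    position=read_odometry(),
                                    trapped=is_trapped()))
    return samples
```

Because the label comes from the robot's own trapped-state detection rather than a human annotator, no manual labeling step appears in this loop.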
With reference to the first aspect and the foregoing possible implementations, in another possible implementation, the environment image information for each moment includes: image information of objects on and around the robot's movement route in the environment of the robot at that moment. In this way, the environment image information can be guaranteed to include, as far as possible, information about objects that may become obstacles to the robot's movement.
With reference to the first aspect and the foregoing possible implementations, in another possible implementation, the location information for each moment includes: relative position information between the robot and the objects on and around its movement route in the environment indicated by the environment image information for that moment. In this way, the robot's specific position in its environment at that moment can be determined, so that the robot can accurately predict its position in the environment at the next moment.
With reference to the first aspect and the foregoing possible implementations, in another possible implementation, the robot sending the sample information for M of the N moments to the server comprises: the robot determines that, among the sample information for the N moments, the label information included in the sample information for the N-th moment indicates that the robot is trapped at the N-th moment; the robot then sends the sample information for the M-1 moments preceding the N-th moment, together with the sample information for the N-th moment, to the server. Because, for training the predicament recognition model, the sample information at the trapped moment and during the period immediately preceding it is of higher quality, this ensures that high-quality sample information is transmitted while reducing the total amount of information transmitted and the communication load of the system.
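The sample-selection rule in this implementation (keep the M-1 moments immediately before a trapped moment, plus the trapped moment itself) can be sketched as below. The tuple layout `(env_image, position, trapped)` is an assumption for illustration, not the patent's data format:

```python
def select_samples(samples, m):
    """samples: list of (env_image, position, trapped) tuples for N moments,
    in time order, where the N-th (last) moment is labeled trapped.
    Returns the sample information sent to the server: the M-1 moments
    immediately preceding the trapped moment plus the trapped moment itself."""
    if not samples or not samples[-1][2]:
        raise ValueError("expected the N-th moment to be labeled trapped")
    return samples[-m:]
```

Only M of the N collected samples are transmitted, which is what reduces the communication load described above.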
With reference to the first aspect and the foregoing possible implementations, in another possible implementation, the robot performing obstacle avoidance during movement according to the predicament recognition model comprises: the robot obtains the environment image information of the current moment and the location information of the current moment; based on the environment image information of the current moment, the location information of the current moment, and the robot's current moving direction and speed, the robot derives the environment image information of the next moment and the location information of the next moment; the robot then determines, from the environment image information of the next moment, the location information of the next moment, and the predicament recognition model, the probability that the robot will be trapped at the next moment; if the robot determines that this probability is greater than a preset threshold, the robot changes its motion strategy and performs obstacle avoidance. By predicting the environment image information and location information of the robot's environment at the next moment and combining them with the predicament recognition model, the robot can accurately judge whether it will be trapped at the next moment and avoid obstacles accordingly, effectively preventing the robot from becoming trapped during movement.
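One obstacle-avoidance step of this implementation can be sketched as follows. This is a simplified sketch under stated assumptions: the next-moment position is extrapolated linearly from direction and speed, the imaged environment is assumed static over one step, and `model` is any callable mapping (environment, position) to a trapped probability; none of these specifics come from the patent:

```python
def avoidance_step(model, env_now, pos_now, direction, speed, dt, threshold=0.5):
    """Predict the next-moment state from the current motion, score it with
    the predicament recognition model, and decide whether to change the
    motion strategy. Returns True when avoidance is needed."""
    # extrapolate the next-moment position from direction (unit vector) and speed
    pos_next = (pos_now[0] + direction[0] * speed * dt,
                pos_now[1] + direction[1] * speed * dt)
    env_next = env_now  # simplifying assumption: environment unchanged in one step
    p_trapped = model(env_next, pos_next)
    return p_trapped > threshold
```

A caller would run this each control tick and, when it returns True, switch to an alternative motion strategy instead of continuing on the current route.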
A second aspect of the embodiments of the present application provides an obstacle avoidance device. The device may include: an acquiring unit, a communication unit, and an obstacle avoidance unit. The acquiring unit is configured to obtain sample information for N moments during the robot's movement, where N is an integer greater than 0; the sample information for each of the N moments includes: environment image information indicating the environment of the robot at that moment, location information indicating the robot's position at that moment in the environment indicated by the environment image information, and label information indicating the state of the robot at that moment; the label information includes a first label or a second label, the first label indicating that the robot is not trapped and the second label indicating that the robot is trapped. The communication unit is configured to send the sample information for M of the N moments to a server, where M is a positive integer less than or equal to N; the communication unit is further configured to receive from the server a predicament recognition model trained on sample information collected by one or more robots. The obstacle avoidance unit is configured to perform obstacle avoidance during the robot's movement according to the predicament recognition model.
In combination with the second aspect and the foregoing possible implementations, in another possible implementation, the environment image information for each moment includes: image information of objects on and around the robot's movement route in the environment of the robot at that moment.
In combination with the second aspect and the foregoing possible implementations, in another possible implementation, the location information for each moment includes: relative position information between the robot and the objects on and around its movement route in the environment indicated by the environment image information for that moment.
In combination with the second aspect and the foregoing possible implementations, in another possible implementation, a determination unit is configured to determine that, among the sample information for the N moments, the label information included in the sample information for the N-th moment indicates that the robot is trapped at the N-th moment; and the communication unit sending the sample information for M of the N moments to the server comprises: the communication unit sending the sample information for the M-1 moments preceding the N-th moment, together with the sample information for the N-th moment, to the server.
In combination with the second aspect and the foregoing possible implementations, in another possible implementation, the acquiring unit is further configured to obtain the environment image information and location information of the current moment, and to derive the environment image information and location information of the next moment from the environment image information of the current moment, the location information of the current moment, and the robot's current moving direction and speed. The obstacle avoidance unit performing obstacle avoidance during the robot's movement according to the predicament recognition model comprises: the obstacle avoidance unit determining, from the environment image information of the next moment, the location information of the next moment, and the predicament recognition model, the probability that the robot will be trapped at the next moment; and, upon determining that this probability is greater than a preset threshold, changing the robot's motion strategy to perform obstacle avoidance.
A third aspect of the embodiments of the present application provides a robot obstacle avoidance system. The system may include: one or more robots and a server. A robot is configured to obtain sample information for N moments during its movement and send the sample information for the N moments to the server, where N is an integer greater than 0; the sample information for each of the N moments includes: environment image information indicating the environment of the robot at that moment, location information indicating the robot's position at that moment in the environment indicated by the environment image information, and label information indicating the state of the robot at that moment; the label information includes a first label or a second label, the first label indicating that the robot is not trapped and the second label indicating that the robot is trapped. The server is configured to receive the sample information from the one or more robots, train on the received sample information to obtain a predicament recognition model, and send the predicament recognition model to the robots. The robot is further configured to receive the predicament recognition model and perform obstacle avoidance during movement according to it.
In combination with the third aspect and the foregoing possible implementations, in another possible implementation, the sample information received by the server includes first-class sample information and second-class sample information. The server training on the received sample information from the one or more robots to obtain the predicament recognition model comprises: the server trains on the first-class sample information to obtain an initial model, and verifies the accuracy of the initial model against the second-class sample information. If the verification accuracy is greater than a preset threshold, the initial model is determined to be the predicament recognition model. If the verification accuracy is less than the preset threshold, the server receives new sample information sent by the robots, continues training from the initial model with the new sample information, and verifies the resulting model against the second-class sample information, until the verification accuracy is greater than the preset threshold; the model whose verification accuracy exceeds the preset threshold is then determined to be the predicament recognition model. For a sample in the second-class sample information: if the trapped/not-trapped state indicated by the result of inputting the sample's environment image information and location information into the model is the same as the state indicated by the sample's label information, the verification of that sample is determined to be accurate; if the states differ, the verification of that sample is determined to be inaccurate. The verification accuracy is determined from the verification results of all samples in the second-class sample information. In this way, through training and verification with the first-class and second-class sample information, the server can obtain an accurate predicament recognition model and send it to the robots for effective obstacle avoidance.
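The server-side train-verify loop of this implementation can be sketched as below. This is an assumption-laden sketch, not the patent's algorithm: `train_fn` stands in for whatever learning procedure the server uses, `model` is a callable mapping (environment, position) to a trapped/not-trapped prediction, and the threshold and round limit are illustrative:

```python
def train_predicament_model(train_fn, initial_samples, validation_set,
                            get_new_samples, accuracy_threshold=0.95,
                            max_rounds=10):
    """Train an initial model on first-class samples, verify it against the
    second-class samples, and keep training on freshly received samples until
    the verification accuracy exceeds the preset threshold."""
    model = train_fn(None, initial_samples)
    for _ in range(max_rounds):
        # a sample verifies accurately when the model's trapped/not-trapped
        # output matches the sample's own label
        correct = sum(1 for env, pos, label in validation_set
                      if model(env, pos) == label)
        accuracy = correct / len(validation_set)
        if accuracy > accuracy_threshold:
            return model  # verified: use as the predicament recognition model
        model = train_fn(model, get_new_samples())  # continue training
    return model
```

The `max_rounds` cap is a practical safeguard added here; the text itself only says training continues until the accuracy threshold is exceeded.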
In a fourth aspect, an embodiment of the present application provides a robot. The robot may include a processor connected to a memory, the processor calling a program stored in the memory to execute the robot obstacle avoidance method of the first aspect or any of its possible implementations.
In a fifth aspect, an embodiment of the present application provides a computer-readable storage medium comprising computer software instructions; when the computer software instructions run on an obstacle avoidance device, the obstacle avoidance device is caused to execute the robot obstacle avoidance method of the first aspect or any of its possible implementations.
In a sixth aspect, an embodiment of the present application provides a computer program product; when the computer program product runs on a computer, the computer is caused to execute the robot obstacle avoidance method of the first aspect or any of its possible implementations.
It should be understood that the obstacle avoidance device of the second aspect, the robot obstacle avoidance system of the third aspect, the robot of the fourth aspect, the computer-readable storage medium of the fifth aspect, and the computer program product of the sixth aspect provided above are all configured to execute the corresponding methods presented above; therefore, for the beneficial effects they can achieve, reference may be made to the beneficial effects of the corresponding methods presented above, and details are not repeated here.
Brief description of the drawings
FIG. 1A is a schematic diagram of one environment of a robot during movement;
FIG. 1B is a schematic diagram of another environment of a robot during movement;
FIG. 2A is a schematic diagram of another environment of a robot during movement;
FIG. 2B is a schematic diagram of another environment of a robot during movement;
FIG. 3A is a schematic diagram of another environment of a robot during movement;
FIG. 3B is a schematic diagram of another environment of a robot during movement;
FIG. 4 is a simplified schematic diagram of a robot obstacle avoidance system according to an embodiment of the present application;
FIG. 5 is a schematic composition diagram of a robot provided by the present application;
FIG. 6 is a schematic flowchart of a robot obstacle avoidance method according to an embodiment of the present application;
FIG. 7 is a schematic flowchart of another robot obstacle avoidance method according to an embodiment of the present application;
FIG. 8 is a schematic composition diagram of a predicament recognition model according to an embodiment of the present application;
FIG. 9 is a schematic diagram of robot obstacle avoidance according to an embodiment of the present application;
FIG. 10 is a simplified schematic diagram of an obstacle avoidance device according to an embodiment of the present application.
Detailed description of embodiments
Existing methods by which a robot (for example, a sweeping robot) recognizes predicaments require sample data to be collected manually, and the collected sample data must also be labeled manually, which brings a large labor cost. The manual labeling process may also go wrong. For example, take the robot to be a sweeping robot. When the sweeping robot is in the position shown in FIG. 3A, a person viewing it from the angle shown in FIG. 3A may believe that the sweeping robot is stuck in the gap between the fixed object and the ground and cannot continue to move, and may therefore manually label the sample as trapped. However, the sweeping robot may in fact not be blocked by the gap and may continue to work within it; in that case the label information of the sample is mislabeled. In addition, the sample data currently collected mostly consists of environment pictures taken from a user's viewpoint, which cannot intuitively and accurately reflect the robot's environment. All of this leads to an inaccurate trained predicament recognition model.
The present application provides a robot obstacle avoidance method, the basic principle of which is: the robot obtains sample information for N (N is an integer greater than 0) moments during its own movement and sends the sample information for the N moments to a server. The robot receives a predicament recognition model from the server, where the predicament recognition model is trained on sample information collected by one or more robots. The robot can then perform obstacle avoidance during movement according to the predicament recognition model.
The sample information corresponding to each of the N moments includes environment image information of the robot's environment at that moment and location information of the robot's position, at that moment, in the environment indicated by the environment image information. This information is collected by the robot itself and can accurately identify the robot's environment at the time. Each piece of sample information also includes label information indicating whether the robot is in a trapped state at that moment. Likewise, the label information is a state label determined by the robot itself at the time, and can accurately reflect the robot's current state. By sending such sample information to the server for training the predicament recognition model, the accuracy of the model can be improved. At the same time, because there is no need to manually collect environment image information during the robot's work, nor to manually label that information, a large investment of labor is saved.
The robot obstacle avoidance method provided by the present application is described in detail below with reference to the accompanying drawings.
Referring to FIG. 4, which is a simplified schematic diagram of a robot obstacle avoidance system according to an embodiment of the present application. As shown in FIG. 4, the robot obstacle avoidance system may include: robots 401 and a server 402. It should be noted that the robots 401 may include one or more robots, such as robot 1 through robot n shown in FIG. 4.
Each of the robots 401 may be configured to obtain, during its own movement, sample information for N moments, and to send the respectively obtained sample information for the N moments to the server 402 via wireless communication. The sample information for each moment may include: the environment image information of that moment, the location information of that moment, and the label information of that moment. The environment image information indicates the environment of the robot at that moment. The location information indicates the robot's position in the environment indicated by the environment image information of that moment. The label information indicates whether the robot is trapped at that moment.
The server 402 may train on the sample information received from the one or more robots to obtain a predicament recognition model, and send the predicament recognition model to each robot, so that the robots 401 can perform obstacle avoidance during movement according to the received predicament recognition model.
For example, the process by which the server 402 obtains the predicament recognition model from the sample information may be implemented as follows: the server 402 may divide the acquired sample information into two classes, referred to, for example, as first-class sample information and second-class sample information. The first-class sample information may be used to train the predicament recognition model, and the second-class sample information may be used to test the trained predicament recognition model in order to determine its accuracy. The process by which the server 402 obtains the predicament recognition model is described in the embodiments below and is not repeated here.
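The division into first-class (training) and second-class (validation) sample information could be sketched as a simple random holdout split. The patent does not specify how the split is made, so the shuffle-based split, the 20% fraction, and the fixed seed below are all assumptions for illustration:

```python
import random

def split_samples(samples, validation_fraction=0.2, seed=0):
    """Divide collected samples into first-class (training) and
    second-class (validation) sample information."""
    rng = random.Random(seed)  # fixed seed: reproducible split for this sketch
    shuffled = samples[:]
    rng.shuffle(shuffled)
    k = int(len(shuffled) * validation_fraction)
    return shuffled[k:], shuffled[:k]  # (first-class, second-class)
```

Holding the second class out of training is what lets it serve as an unbiased check on the trained model's accuracy.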
The robots 401 may be sweeping robots, robots providing self-service in service industries such as banking, or other electronic devices capable of autonomous movement.
Referring to FIG. 5, which is a schematic composition diagram of a robot 500 provided by the present application. As shown in FIG. 5, the robot 500 may include an image capture module 501, a sensor 502, a processor 503, a memory 504, and a communication module 505.
The processor 503 may include one or more processing units. For example, the processor 503 may include an application processor (AP), a modem processor, a graphics processing unit (GPU), an image signal processor (ISP), a controller, a memory, a video codec, a digital signal processor (DSP), a baseband processor, and/or a neural-network processing unit (NPU), etc. The different processing units may be independent devices or may be integrated in one or more processors.
The controller may be the nerve center and command center of the robot 500. The controller can generate operation control signals according to instruction operation codes and timing signals, completing the control of instruction fetching and instruction execution.
A memory may also be provided in processor 503 for storing instructions and data. In some embodiments, the memory in the processor is a cache, which may hold instructions or data that the processor has just used or uses repeatedly. If the processor needs the instructions or data again, it can call them directly from this memory, avoiding repeated accesses and reducing the waiting time of processor 503, thereby improving system efficiency.
In some embodiments, processor 503 may include one or more interfaces. The interfaces may include an inter-integrated circuit (I2C) interface, a universal asynchronous receiver/transmitter (UART) interface, a mobile industry processor interface (MIPI), a general-purpose input/output (GPIO) interface, a subscriber identity module (SIM) interface, and/or a universal serial bus (USB) interface, among others.
The I2C interface is a bidirectional synchronous serial bus that includes a serial data line (SDA) and a serial clock line (SCL). In some embodiments, the processor may include multiple sets of I2C buses, and may couple to the sensor, the camera, and so on through different I2C bus interfaces.
The UART interface is a universal serial bus used for asynchronous communication. The bus may be a bidirectional communication bus, converting the data to be transmitted between serial and parallel form. In some embodiments, the UART interface is typically used to connect processor 503 and communication module 505. For example, processor 503 communicates with the Bluetooth module in the wireless communication module through the UART interface to implement the Bluetooth function.
The MIPI interface may be used to connect the processor to peripheral components such as the camera. The MIPI interface includes a camera serial interface (CSI) and the like. In some embodiments, the processor and the camera communicate through a CSI interface to implement the shooting function of robot 500.
The GPIO interface can be configured by software. The GPIO interface may be configured as a control signal or as a data signal. In some embodiments, the GPIO interface may be used to connect the processor to the camera, the wireless communication module, the sensor module, and so on. The GPIO interface may also be configured as an I2C interface, a UART interface, a MIPI interface, etc.
The USB interface is an interface conforming to the USB standard specification, and may specifically be a Mini USB interface, a Micro USB interface, a USB Type-C interface, or the like. The USB interface may be used to connect a charger to charge robot 500, or to transfer data between robot 500 and a peripheral device. The interface may also be used to connect to another robot 500, and so on.
Image capture module 501 can collect image information around robot 500, for example by taking photos or recording video. Robot 500 can implement the image capture function through the ISP, the camera, the video codec, the GPU, the application processor, and the like.
The ISP processes the data fed back by the camera. For example, when taking a photo, the shutter opens, light reaches the camera's photosensitive element through the lens, the optical signal is converted into an electrical signal, and the photosensitive element passes the electrical signal to the ISP for processing, converting it into a visible image. The ISP can also apply algorithmic optimization to the image's noise, brightness, and other characteristics, and can optimize parameters such as the exposure and color temperature of the shooting scene. In some embodiments, the ISP may be located in the camera.
The camera captures still images or video. An object passes through the lens to form an optical image that is projected onto the photosensitive element. The photosensitive element may be a charge-coupled device (CCD) or a complementary metal-oxide-semiconductor (CMOS) phototransistor. The photosensitive element converts the optical signal into an electrical signal, which is then passed to the ISP to be converted into a digital image signal. The ISP outputs the digital image signal to the DSP for processing, and the DSP converts it into an image signal in a standard format such as RGB or YUV. In some embodiments, robot 500 may include 1 or N cameras, where N is a positive integer greater than 1.
Sensor 502 can obtain information such as the movement speed and moving direction of robot 500, and its distance to surrounding objects. Illustratively, sensor 502 may include a gyro sensor, a velocity sensor, an acceleration sensor, a range sensor, and so on.
The gyro sensor may be used to determine the motion posture of robot 500. In some embodiments, the angular velocity of robot 500 about three axes (that is, the x, y, and z axes) can be determined by the gyro sensor. The gyro sensor can also be used for image stabilization during shooting. Illustratively, when robot 500 captures an image, the gyro sensor detects the angle at which robot 500 shakes, calculates the distance the lens module needs to compensate according to that angle, and allows the lens to cancel the shake of robot 500 through reverse motion, achieving stabilization. The gyro sensor can also be used for navigation, and for judging scenarios such as whether robot 500 is trapped.
The velocity sensor measures movement speed. In some embodiments, the movement speed of robot 500 at the current moment, as measured by the velocity sensor, can be combined with the range sensor and with the environment of robot 500 at the current moment to predict the environment of robot 500 at the next moment, and so on.
The acceleration sensor can detect the magnitude of acceleration of robot 500 in all directions (generally along three axes). When robot 500 is stationary, it can detect the magnitude and direction of gravity.
The range sensor is used to measure distance. Robot 500 can measure distance by infrared or laser. In some embodiments, when shooting a scene, robot 500 can use the range sensor to measure distance in order to achieve fast focusing.
Memory 504 may include an external memory and an internal storage. An external memory interface may be used to connect an external memory card, such as a Micro SD card, to expand the storage capacity of robot 500. The external memory card communicates with the processor through the external memory interface to implement a data storage function, for example storing sample information files on the external memory card.
The internal storage may be used to store computer-executable program code, which includes instructions. By running the instructions stored in the internal storage, the processor executes the various functional applications and data processing of robot 500. The internal storage may include a program storage area and a data storage area. The program storage area can store an operating system and the application programs required by at least one function. The data storage area can store data created during the use of robot 500. In addition, the internal storage may include high-speed random access memory, and may also include non-volatile memory, such as at least one disk storage device, a flash memory device, a universal flash storage (UFS), and so on.
The wireless communication function of robot 500 can be implemented through communication module 505. For example, through communication module 505, robot 500 can communicate with other devices, such as with the server. As an example, communication module 505 may include antenna 1, antenna 2, a mobile communication module, a wireless communication module, a modem processor, a baseband processor, and so on.
Antenna 1 and antenna 2 are used to transmit and receive electromagnetic wave signals. Each antenna in robot 500 may be used to cover a single communication band or multiple communication bands. Different antennas may also be multiplexed to improve antenna utilization. For example, antenna 1 may be multiplexed as a diversity antenna for a wireless local area network. In other embodiments, the antennas may be used in combination with a tuning switch.
The mobile communication module can provide wireless communication solutions, including 2G/3G/4G/5G, applied on robot 500. The mobile communication module may include at least one filter, switch, power amplifier, low noise amplifier (LNA), and the like. The mobile communication module can receive electromagnetic waves through antenna 1, perform processing such as filtering and amplification on the received electromagnetic waves, and send them to the modem processor for demodulation. The mobile communication module can also amplify the signal modulated by the modem processor and radiate it as an electromagnetic wave through antenna 1. In some embodiments, at least some functional modules of the mobile communication module may be provided in the processor. In some embodiments, at least some functional modules of the mobile communication module and at least some modules of the processor may be provided in the same device.
The wireless communication module can provide wireless communication solutions applied on robot 500, including wireless local area networks (WLAN) (such as wireless fidelity (Wi-Fi) networks), Bluetooth (BT), the global navigation satellite system (GNSS), and infrared (IR) technology. The wireless communication module may be one or more devices integrating at least one communication processing module. The wireless communication module receives electromagnetic waves via antenna 2, performs frequency modulation and filtering on the electromagnetic wave signal, and sends the processed signal to the processor. The wireless communication module can also receive a signal to be sent from the processor, perform frequency modulation and amplification on it, and radiate it as an electromagnetic wave through antenna 2.
In some embodiments, antenna 1 of robot 500 is coupled to the mobile communication module and antenna 2 is coupled to the wireless communication module, so that robot 500 can communicate with the server and other devices through wireless communication technology. The wireless communication technology may include the global system for mobile communications (GSM), general packet radio service (GPRS), code division multiple access (CDMA), wideband code division multiple access (WCDMA), time-division code division multiple access (TD-SCDMA), long term evolution (LTE), BT, GNSS, WLAN, NFC, FM, and/or IR technology. The GNSS may include the global positioning system (GPS), the global navigation satellite system (GLONASS), the BeiDou navigation satellite system (BDS), the quasi-zenith satellite system (QZSS), and/or satellite-based augmentation systems (SBAS).
It can be understood that the structure illustrated in this embodiment does not constitute a specific limitation on robot 500. In other embodiments, robot 500 may include more or fewer components than illustrated, may combine certain components, may split certain components, or may arrange the components differently. The illustrated components may be implemented in hardware, in software, or in a combination of software and hardware.
The obstacle avoidance method provided by the embodiments of this application can be implemented in a robot such as the one in the system shown in FIG. 4 and in FIG. 5.
FIG. 6 is a schematic flowchart of an obstacle avoidance method provided by an embodiment of this application. Referring to FIG. 6, the method may include S601-S605.
S601: The robot obtains sample information for N moments during its movement, where N is an integer greater than 0.
The sample information for each moment includes environment image information indicating the environment the robot is in at that moment, location information indicating the robot's position at that moment within the environment indicated by the environment image information, and label information indicating the robot's state at that moment. The label information includes a first label or a second label: the first label indicates that the robot is not trapped, and the second label indicates that the robot is trapped.
Illustratively, the robot can obtain the environment image information and location information of its environment through the modules provided in the robot. Meanwhile, the robot can judge whether it is trapped according to its movement state at the current moment, thereby obtaining the label information for that moment. For example, when the robot cannot continue to move, it judges that it is trapped at the current moment, and accordingly the label information obtained for that moment is the second label. Conversely, if the robot determines that it is not trapped at the current moment, the label information obtained for that moment is the first label. By obtaining these items of information in real time during movement, the robot can obtain the sample information for N moments during its movement, where N is an integer greater than 0.
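As a minimal sketch, the per-moment sample record described in S601 might be represented as follows. The field names and the 1/2 label encoding are illustrative assumptions, not identifiers from this application:

```python
from dataclasses import dataclass
from typing import Tuple


@dataclass
class SampleInfo:
    # One moment's sample record from S601; field names are illustrative.
    image: bytes                    # environment image information for this moment
    location: Tuple[float, float]   # position within the imaged environment
    label: int                      # 1 = first label (not trapped), 2 = second label (trapped)


def make_label(is_trapped: bool) -> int:
    # Map the robot's self-judged movement state to the first/second label.
    return 2 if is_trapped else 1


sample = SampleInfo(image=b"", location=(0.0, 0.0), label=make_label(False))
print(sample.label)  # prints 1: the robot was not trapped at this moment
```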
S602: The robot sends the sample information of M of the N moments to the server, where M is a positive integer less than or equal to N.
The robot may send all of the sample information for the N moments to the server, that is, M equals N. Illustratively, the robot may send each moment's sample information to the server in real time as soon as it is obtained, or it may send the sample information for all N moments to the server together.
It can be understood that the robot can obtain a large amount of sample information during movement, some of which is of higher quality. The robot may therefore send only the higher-quality portion of the obtained sample information to the server, that is, send the sample information of M of the N moments, where M is a positive integer less than N. Illustratively, the robot may send to the server the sample information from when it was trapped together with the not-yet-trapped sample information from before it became trapped. This improves information quality while ensuring sufficient sample information, reducing the system's information load.
S603: The server trains on the sample information collected from one or more robots to obtain the predicament identification model.
Since training the predicament identification model requires a large amount of sample information, in this embodiment multiple robots may report the sample information they have collected to the server. In this way, the server can perform model training on the received sample information collected by the multiple robots, thereby obtaining a more accurate predicament identification model.
S604: The robot receives the predicament identification model from the server.
The server may send the predicament identification model obtained by training on the sample information from one or more robots to the robot in real time, or may send the predicament identification model to the robot at preset time intervals. The robot can receive the predicament identification model through the communication module and store it in the memory.
S605: The robot performs obstacle avoidance according to the predicament identification model during movement.
During movement, the robot can obtain its moving direction and movement speed at the current moment, and predict the environment image information and location information of the environment the robot will be in at the next moment according to the environment image information and location information of the current moment. The robot then determines, according to the predicament identification model, whether it will be trapped at the next moment, or determines the probability that it will be trapped at the next moment. In this way, the robot can decide according to the determined result whether obstacle avoidance is needed at the next moment.
Illustratively, in some embodiments, with the environment image information and location information of the robot's next-moment environment as input, the predicament identification model can determine whether the robot will be trapped at the next moment. When the robot determines that it will be trapped at the next moment, it can perform obstacle avoidance at the current moment to avoid being trapped at the next moment. In other embodiments, with the environment image information and location information of the robot's next-moment environment as input, the predicament identification model can determine the probability that the robot will be trapped at the next moment. When the robot determines that this probability is greater than a preset threshold, it can perform an obstacle avoidance operation.
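The probability-threshold variant above can be sketched as follows, assuming the model maps (environment image, location) to a trapped probability in [0, 1]; both that interface and the 0.8 threshold are illustrative assumptions not given in the text:

```python
def should_avoid(model, next_image, next_location, threshold=0.8):
    # Query the predicament identification model with the predicted next-moment
    # inputs; trigger avoidance when the trapped probability exceeds the
    # preset threshold.
    p_trapped = model(next_image, next_location)
    return p_trapped > threshold


# usage with a stand-in model that always reports a 0.9 trapped probability
print(should_avoid(lambda img, loc: 0.9, None, None))  # prints True
```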
With the obstacle avoidance method provided by the embodiments of this application, the robot collects the environment image information and location information of N moments during its own movement, which accurately identifies the robot's environment at each of the N moments. At the same time, the robot autonomously judges whether it is trapped, so as to automatically label the environment of the corresponding moment according to the judgment result, which accurately reflects whether the robot was trapped at that moment. By sending these items of information to the server as sample information for the server to train the predicament identification model, the accuracy of the predicament identification model can be improved. Meanwhile, since there is no need to manually collect environment image information during the robot's operation, nor to manually label that environment image information, a large investment in labor cost is saved.
Referring to FIG. 7, another obstacle avoidance method provided by an embodiment of this application is shown. As shown in FIG. 7, the method may include S701-S706.
S701: The robot obtains sample information for N moments during its movement. The sample information for each of the N moments includes the environment image information of that moment, the location information of that moment, and the label information of that moment.
The environment image information of a moment indicates the environment the robot is in at that moment. The location information of a moment indicates the robot's position at that moment within the environment indicated by that moment's environment image information. The label information of a moment indicates the robot's state at that moment. The label information includes a first label or a second label: the first label indicates that the robot is not trapped, and the second label indicates that the robot is trapped.
Since the robot moves continuously while working, at different moments it may be in different environments, its position may differ, and whether it is trapped may also differ. In order for the server to obtain sufficient sample information to train the predicament identification model, the robot can obtain the sample information for N moments during its own movement according to the environment it is in, its position, and whether it is trapped, where N is an integer greater than 0.
Illustratively, the process of obtaining the sample information for a moment is described below using the example of the robot obtaining the sample information for a first moment during movement, where the first moment is any one of the N moments above.
During movement, the robot obtains the environment image information of the environment it is in at the first moment.
Illustratively, with reference to FIG. 5, robot 500 can collect the environment image information of its environment through image capture module 501, which includes the camera provided in robot 500.
In some embodiments, the environment image information may be a photo of the robot's environment taken by the robot at the first moment.
In general, a robot may record video during operation to log its working state or to feed back job information to the user. Therefore, in some other embodiments, the robot may use the image at the first moment in the recorded video as the environment image information of the robot's environment at the first moment. It can be understood that a video can be decomposed into multiple temporally consecutive images, and a segment of video corresponds to a segment of playback time; therefore, each image in a segment of video corresponds to a moment in that playback time. Each image can thus serve as the environment image information of the robot's environment at the corresponding moment. For example, suppose the video recorded by the robot includes image A at moment 1, image B at moment 2, and image C at moment 3. The robot can then use image A as environment image information A of its environment at the first moment, use image B as environment image information B of its environment at another moment such as the second moment, and use image C as environment image information C of its environment at yet another moment such as the third moment.
It should be noted that, since the environment image information is collected directly by the robot, it can reflect the robot's environment more accurately. In addition, during the robot's movement, objects on or near its movement trajectory are the most likely to obstruct its movement; therefore, the environment image information obtained by the robot may include image information of objects on or near its movement trajectory. In this way, the environment image information can provide a more accurate reference for whether the robot may become trapped. The robot's movement trajectory can be determined according to the robot's moving direction and movement speed at the current moment, such as the first moment above. In some scenarios, the robot's movement trajectory is planned in advance, in which case the robot can also obtain the trajectory information by querying the pre-planned movement trajectory.
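Determining the trajectory from the current moving direction and speed can be sketched as simple dead reckoning. The planar heading/speed representation and constant-velocity assumption here are illustrative, not the application's stated method:

```python
import math


def predict_next_position(x, y, heading_rad, speed_m_s, dt_s):
    # Extrapolate the position one time step ahead from the current moving
    # direction (heading) and movement speed, assuming both stay constant.
    dx = speed_m_s * dt_s * math.cos(heading_rad)
    dy = speed_m_s * dt_s * math.sin(heading_rad)
    return (x + dx, y + dy)


# moving along the x-axis at 0.5 m/s for 2 s from the origin
print(predict_next_position(0.0, 0.0, 0.0, 0.5, 2.0))  # prints (1.0, 0.0)
```

Sampling this extrapolation over several time steps yields the trajectory points whose surrounding objects the image information should cover.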
During movement, the robot obtains the location information of the robot in its environment at the first moment.
The location information of the robot in its environment at the first moment may be the relative position information between the robot and other objects in its environment at the first moment. The robot's environment at the first moment may refer to the environment indicated by the environment image information of the first moment.
The environment the robot is in differs across moments, and therefore the robot's distance to the same object also differs across moments. Illustratively, with reference to FIG. 5, robot 500 can obtain the relative position information between the robot and other objects in its environment at the first moment through sensor 502 provided in robot 500. For example, the relative position information may be distance information between robot 500 and the objects above, and may include the robot's distance at the first moment to objects on or near its movement trajectory.
Of course, in some embodiments, robot 500 can also obtain, through sensor 502, the latitude and longitude information and the elevation or altitude information of the robot at the current moment (such as the first moment); these items of information can also serve as the location information. In this way, the robot can determine its position at that moment more accurately according to the location information. Then, when the location information obtained at another moment (such as a second moment) is identical to the location information above, it can be accurately determined that the robot's environment at the second moment is the same as its environment at the first moment. Since the predicament identification model that guides the robot's obstacle avoidance is obtained by training on information that includes the sample information collected by the robot at the first moment, the robot can use the predicament identification model to make a more accurate judgment about its environment at the second moment (that is, to judge whether the robot will be trapped at the second moment).
During movement, the robot obtains the label information indicating the robot's state at the first moment. For example, the label information may include the first label or the second label described above.
It can be understood that the robot can accurately identify whether it is trapped at the current moment (such as the first moment). For example, when a wheel of the robot gets stuck, when the robot is hemmed in by its surroundings, or when the robot spins in one place for a certain period of time, the robot can consider itself to be trapped. The trapped-state information thus obtained can accurately identify the robot's current state. Likewise, when the robot is not trapped, for example when none of the above trapped situations occurs at the first moment, the robot can accurately judge its state at the current moment. After the robot determines that it is trapped or not trapped at the first moment, it can obtain the label information corresponding to the robot's state at the first moment. We can then consider that the robot has automatically labeled the environment image information (such as the surrounding picture information) and location information (such as the distance information) it collected at the first moment.
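The self-judgment described above (wheels turning while the robot stays in place) might be approximated by a heuristic like the following; the sensor readings and threshold values are illustrative assumptions, not specified by this application:

```python
def is_trapped(wheel_speed, displacement_m,
               min_wheel_speed=0.1, max_displacement_m=0.05):
    # The robot is driving its wheels (speed above a small threshold) yet its
    # measured displacement over the observation window stays near zero,
    # e.g. a jammed wheel or the robot spinning in one place.
    return wheel_speed > min_wheel_speed and displacement_m < max_displacement_m


print(is_trapped(0.5, 0.01))  # prints True: wheels turn but the robot barely moves
```

The boolean result maps directly onto the first/second label attached to the moment's sample information.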
After obtaining the environment image information of the first moment, the location information of the first moment, and the label information of the first moment, the robot can take these items of information as the sample information of the first moment. Similarly, by repeating the above process during movement, the sample information for N moments during the robot's movement can be obtained.
S702: The robot sends the sample information of M of the N moments to the server, where M is a positive integer less than or equal to N.
In some embodiments, M may equal N. The robot may send all of the sample information for the N moments to the server. This ensures that the server can train on sufficient sample information and obtain an accurate predicament identification model.
It can be understood that, during movement, the robot is not trapped most of the time; therefore, the number of sample information items with the first label may be far larger than the number with the second label. In addition, the not-trapped sample information from the period just before the robot becomes trapped has higher value. Therefore, in the embodiments of this application, the robot screens the sample information of the N moments before sending it to the server. That is, the robot may select the sample information of M of the N moments to send to the server, where the robot is trapped at the M-th moment and is not trapped at the M-1 moments before it.
Therefore, in some other embodiments, M may be less than N. The robot may send the higher-value sample information of M of the N moments to the server. This effectively reduces the amount of data communicated between the robot and the server while still ensuring that the server obtains an accurate predicament identification model.
Illustratively, the robot determines that, among the sample information of the N moments, the label information included in the sample information of the M-th moment indicates that the robot is trapped at the M-th moment; that is, the sample information of the M-th moment may include the second label identifying that the robot is trapped at that moment. The robot then sends the sample information of the M-1 moments preceding the M-th moment together with the sample information of the M-th moment to the server.
For example, the robot's memory can be a buffer. During movement, the robot can store the obtained sample information in the buffer and monitor whether the label information in the sample information obtained at the current moment is the second label. If it is, the robot sends the sample information in the buffer to the server. If the label information in the sample information obtained at the current moment does not include the second label, the robot continues to obtain sample information and store it in the buffer, until it obtains sample information with the second label. For example, suppose the robot's buffer can store at most 10 sample information items. If the 10th sample information item in the buffer includes the second label, the robot can send the 10 sample information items in the buffer to the server together. If none of the 10 sample information items stored in the buffer includes the second label, the robot can continue to obtain sample information, delete the sample information of the earliest moment among the 10 already in the buffer, store the new sample information in the buffer, and cycle in this way until sample information including the second label appears.
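The 10-slot buffer behavior in the example above can be sketched with a bounded deque; the label encoding (2 for the second, trapped label) and function names are illustrative assumptions:

```python
from collections import deque

SECOND_LABEL = 2       # label value standing in for the second (trapped) label
BUFFER_CAPACITY = 10   # matches the 10-sample example above

sample_buffer = deque(maxlen=BUFFER_CAPACITY)  # oldest entry dropped automatically


def collect(sample, send_to_server):
    # sample is (image, location, label); flush the buffer to the server as
    # soon as the current moment's label is the second label.
    sample_buffer.append(sample)
    if sample[2] == SECOND_LABEL:
        send_to_server(list(sample_buffer))
        sample_buffer.clear()  # optional emptying after sending
```

With this sketch, a trapped moment is shipped together with up to nine preceding not-trapped samples, matching the screening described in S702.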
Optionally, the robot can empty the buffer after sending the sample information to the server.
When the robot and the server are in a normal communication state, the robot can send the obtained sample information to the server in various ways. For example, the robot can send multiple sample information items to the server together. As another example, the robot can send multiple sample information items to the server in several batches. As yet another example, the robot can send multiple sample information items to the server at certain time intervals.
When the robot and the server cannot communicate, for example when the robot or the server is offline, the robot can store the sample information in the memory provided in the robot. After communication between the robot and the server is restored, the robot can send the sample information stored in the memory to the server.
S703: The server trains on the sample information collected from one or more robots to obtain the predicament identification model.
Illustratively, the server can divide the obtained sample information into two parts (such as first-class sample information and second-class sample information). The first-class sample information can be used to train the predicament identification model, and the second-class sample information can be used to check whether the predicament identification model's detection results are accurate.
For example, suppose the sample information obtained by the server includes X first-class samples and Y second-class samples, where X and Y are integers greater than 0 and X + Y = N. Taking the environment image information and location information contained in each sample as input, the X first-class samples are trained so that the output is the probability that the robot will be trapped in the current environment, or a definite result of whether it will be trapped. The output result is compared with the label information contained in the sample, and training loops in this way until the comparison results converge to the expected level, yielding an initial model. After the initial model is obtained by training on the X first-class samples, the environment image information and location information of each of the Y second-class samples can be input into the initial model to verify the accuracy of the initial model. For example, if, for a certain second-class sample, the trapped/not-trapped state of the robot indicated by the result the initial model produces from that sample's environment image information and location information is the same as the state indicated by the label information in that sample, the verification of the initial model by that sample is determined to be correct; if the two states differ, the verification result for that sample is determined to be incorrect. Since there can be multiple second-class samples, the verification accuracy rate of the initial model can be obtained after each round of testing. When the accuracy rate is greater than a preset threshold, the initial model can be used as the predicament identification model. When the accuracy rate is less than the preset threshold, the initial model is considered insufficiently accurate and training needs to continue. The server can then add the new sample information from the robot to the training set and further train the initial model to obtain a model with higher accuracy. In this way, the server can determine a model whose accuracy rate is greater than the preset threshold and send that model to the robot as the predicament identification model.
Illustratively, the X first-class samples can be trained according to a loss function or cost function. A loss function (or cost function) is a function that maps a random event, or the value of a related random variable, to a non-negative real number representing the "risk" or "loss" of that event. In the embodiments of this application, training can be performed by minimizing the loss function. When solving for the minimum of the loss function, gradient descent can be used to iteratively solve, step by step, for the minimized loss function and the model parameter values, thereby obtaining the predicament identification model (initial model).
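Illustratively, gradient descent as described above can be sketched on a toy scalar loss. This sketch is not the embodiment's actual training procedure; the loss, learning rate, and step count are illustrative assumptions.

```python
def gradient_descent(grad, theta, lr=0.1, steps=200):
    """Iteratively move the parameter against the gradient of the loss."""
    for _ in range(steps):
        theta = theta - lr * grad(theta)
    return theta

# Toy loss L(theta) = (theta - 3)^2, whose gradient is 2 * (theta - 3);
# the minimizer is theta = 3.
theta_star = gradient_descent(lambda t: 2 * (t - 3), theta=0.0)
```

Each step shrinks the distance to the minimizer by a constant factor here, which is the step-by-step convergence the paragraph refers to.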
Illustratively, the predicament identification model obtained by training can be a model based on a convolutional neural network. For example, as shown in Fig. 8, the predicament identification model may include an input layer, convolutional layers, pooling layers, a fully connected layer, and an output layer. It should be noted that the figure illustrates only two convolutional layers, two pooling layers, and one fully connected layer; the predicament identification model in the embodiments of this application may include multiple convolutional layers, multiple pooling layers, and multiple fully connected layers.
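Illustratively, how feature-map sizes flow through such a conv/pool/fc stack can be traced as follows. The input size, kernel sizes, pooling windows, and channel count are illustrative assumptions, not values from the embodiment; only the two-conv, two-pool, one-FC structure mirrors Fig. 8.

```python
def conv2d_size(size, kernel, stride=1, padding=0):
    """Output side length of a square convolution."""
    return (size + 2 * padding - kernel) // stride + 1

def pool_size(size, window):
    """Output side length of non-overlapping pooling."""
    return size // window

# Assumed stack matching Fig. 8: two convolutional layers, two pooling
# layers, one fully connected layer.
size = 64                      # assumed input image side length
size = conv2d_size(size, 5)    # conv 1: 64 -> 60
size = pool_size(size, 2)      # pool 1: 60 -> 30
size = conv2d_size(size, 5)    # conv 2: 30 -> 26
size = pool_size(size, 2)      # pool 2: 26 -> 13
flat = size * size * 16        # 16 assumed channels feed the FC layer
```

The final flattened length is what the fully connected layer consumes before the output layer produces the trapped/not-trapped prediction.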
It should be noted that the greater the variety and quantity of sample information, the more accurate the predicament identification model obtained by training. Therefore, the sample information may be acquired during a single movement of one robot, during multiple movements of one robot, or during multiple movements of multiple robots. The embodiments of this application place no restriction on this.
S704: The robot receives the predicament identification model from the server.
The server trains the predicament identification model on the multiple samples sent by one or more robots and delivers the predicament identification model to the robot. With reference to Fig. 5, the robot 500 can receive the predicament identification model through the communication module 505 arranged in the robot and store it in the memory 504, so as to use the stored predicament identification model for obstacle avoidance, for example by performing the following S705-S706.
S705: The robot obtains its moving direction and moving speed at the current moment.
While moving, the robot can predict the environment it will be in at the next moment. Illustratively, the robot can obtain its current moving direction and moving speed through the sensors arranged in the robot, and acquire the environment image information and location information of the current moment. The processor of the robot can then, according to the current moving direction, moving speed, environment image information, and location information, obtain the environment image information of the environment the robot will be in at the next moment and the robot's location information within that environment.
Suppose the robot is currently at the position shown in (a) of Fig. 9 and is moving along direction 1. Then, according to the first location information of the robot at the current moment, such as the distance information between the robot and object 1 and between the robot and object 2, together with the moving direction and moving speed, the second location information of the robot at the next moment can be obtained. A triangulation-based conversion of this location information and the environment image information of the robot's current environment then yields the environment image information of the environment at the second moment, as shown for example in (b) of Fig. 9.
For example, at the current moment the robot obtains, through the image acquisition module, the environment image information on its moving route, P1 (such as a photo of the robot's environment), and obtains through the sensors the location information S1 (such as the distance between the robot and objects in the environment). The robot can also obtain its current moving direction and moving speed through the sensors. It can then calculate the location information S2 of the next moment according to the formula S1 - S2 = t*V, where S1 is the distance between the robot and the object in front of it at the current moment, S2 is the distance between the robot and the object in front of it at the next moment, t is the time interval between the current moment and the next moment, and V is the moving speed of the robot at the current moment.
In addition, according to the distance moved from the current moment to the next moment (for example, S1 - S2), the robot can also calculate, using a triangulation-based conversion, the image information P2 of the environment at the next moment.
In this way, the robot obtains the environment image information P2 of the environment at the next moment and the location information S2 of the robot within the environment at the next moment.
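Illustratively, the distance prediction in S705 follows directly from rearranging S1 - S2 = t*V to S2 = S1 - t*V; a minimal sketch (the function name and guard are illustrative additions):

```python
def next_distance(s1, v, t):
    """Distance to the object in front at the next moment,
    from the formula S1 - S2 = t * V, i.e. S2 = S1 - t * V."""
    s2 = s1 - t * v
    if s2 < 0:
        # the robot would already have reached or passed the object
        raise ValueError("time step too large for current speed")
    return s2
```

For example, with S1 = 5 m, V = 2 m/s, and t = 1 s, the predicted distance at the next moment is S2 = 3 m.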
S706: The robot predicts, according to the environment image information of the next moment, the location information of the next moment, and the predicament identification model, whether it will be trapped at the next moment, and performs obstacle avoidance.
Continuing the example in S705 above, the robot can use the image information P2 and the location information S2 of the environment at the next moment as the input of the predicament identification model and, by computation, obtain the probability that the robot will be trapped at the next moment. The robot can compare this probability with a preset threshold. When the robot determines that the probability of being trapped at the next moment is greater than the preset threshold, it can change its current motion strategy (for example, the current direction of motion) to perform obstacle avoidance, preventing the robot from becoming trapped while moving. When the robot determines that the probability of being trapped at the next moment is less than the preset threshold, it can be considered that the robot will not be trapped at the next moment, and the robot can continue to move according to its current motion strategy.
Illustratively, with reference to Fig. 9, suppose the robot is currently in the environment shown in (a) of Fig. 9. When the robot judges, according to the predicament identification model, that it will be trapped at the next moment, it can change its original motion strategy at that moment, for example changing from moving along direction 1 to moving along direction 2 as shown in (c) of Fig. 9, so that the robot avoids being trapped at the next moment.
It should be noted that S706 above is explained taking the case where the predicament identification model outputs the probability of being trapped at the next moment as an example. In the embodiments of this application, the predicament identification model can also output a definite prediction of whether the robot will be trapped at the next moment, that is, either that it will be trapped at the next moment or that it will not. When the robot determines, according to the predicament identification model, that it will be trapped at the next moment, it can change its current motion strategy (for example, the current direction of motion) to perform obstacle avoidance, preventing the robot from becoming trapped while moving. When the robot determines, according to the predicament identification model, that it will not be trapped, it can continue to move according to its current motion strategy.
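Illustratively, the probability-threshold decision of S706 can be sketched as follows. The model is abstracted as any callable returning a trapped probability; the threshold value and all names are illustrative assumptions.

```python
PRESET_THRESHOLD = 0.5   # assumed probability threshold

def choose_strategy(model, image_next, position_next,
                    current_direction, alternative_direction,
                    threshold=PRESET_THRESHOLD):
    """Keep the current motion strategy unless the predicament identification
    model predicts the robot is likely to be trapped at the next moment."""
    p_trapped = model(image_next, position_next)  # model outputs a probability
    if p_trapped > threshold:
        return alternative_direction  # change strategy to avoid being trapped
    return current_direction          # continue the current motion strategy
```

In the Fig. 9 example, `current_direction` would be direction 1 and `alternative_direction` direction 2; the variant where the model outputs a definite trapped/not-trapped result replaces the probability comparison with that boolean.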
In the robot obstacle avoidance method provided by the embodiments of this application, the process in which the robot performs the above S701-S706 is not one-way but can be cyclic. For example, the robot can continually acquire sample information at different moments and send it to the server, so that the server, according to the updated sample information and the predicament identification model, obtains an updated predicament identification model through incremental training. The process of incremental training is similar to that of the initial training, except that during incremental training the loaded model is the previously trained model and the weights of the loaded model are also trained, whereas in the initial training the weights of the loaded model are default values. After incremental training is completed, the model is tested with second-class sample information; if the detected accuracy rate is higher than before, the model produced by incremental training is of higher quality. The server can then deliver the newly trained model to the robot, so that the robot can use the updated predicament identification model to guide its subsequent movement and perform more effective obstacle avoidance.
In this way, the robot can autonomously acquire the sample information of N moments, where the sample information corresponding to each of the N moments includes the environment image information of the environment the robot was in at that moment and the location information of the robot's position at that moment within the environment indicated by the environment image information. Since this information is collected by the robot itself, it can accurately identify the environment the robot was in at the time. Meanwhile, each sample also includes label information indicating whether the robot was in a trapped state at that moment. Likewise, the label information is marked by the robot according to the state it identified at the time, so it can also accurately reflect the robot's state. By sending such sample information to the server for training the predicament identification model, the accuracy of the predicament identification model can be improved. At the same time, since there is no need to manually collect environment image information during the robot's operation, nor to manually label that image information, a great deal of labor cost is saved. The server can also perform incremental training on the predicament identification model according to updated sample information to obtain a more accurate predicament identification model and send it to the robot. In this way, the robot can continually acquire sample information while moving and send it to the server, thereby obtaining an updated predicament identification model that provides more accurate obstacle avoidance guidance for the robot's movement.
The above mainly describes the solutions provided by the embodiments of this application from the perspective of the robot. It can be understood that, in order to realize the above functions, the robot includes the hardware structures and/or software modules corresponding to each function, and these hardware structures and/or software modules may constitute a robot. Those skilled in the art should readily appreciate that, in combination with the algorithm steps of the examples described in the embodiments disclosed herein, this application can be implemented in the form of hardware or a combination of hardware and computer software. Whether a certain function is executed by hardware or by computer-software-driven hardware depends on the specific application and design constraints of the technical solution. A skilled person may use different methods to implement the described functions for each specific application, but such implementation should not be considered as going beyond the scope of this application.
The embodiments of this application can divide the robot into functional modules according to the above method examples. For example, the robot may include an obstacle avoidance apparatus, and the apparatus may have one functional module for each function, or may integrate two or more functions into one processing module. The above integrated module can be implemented in the form of hardware or in the form of a software functional module. It should be noted that the division of modules in the embodiments of this application is illustrative and is only a division of logical functions; there may be other division manners in actual implementation.
In the case where each functional module is divided according to each function, Fig. 10 shows a possible composition schematic of the obstacle avoidance apparatus involved in the above embodiments. As shown in Fig. 10, the obstacle avoidance apparatus includes: an acquiring unit 1001, a communication unit 1003, and an avoidance unit 1004. For example, the acquiring unit 1001 may be the image acquisition module 501 and the sensor 502 shown in Fig. 5, the communication unit 1003 may be the communication module 505 shown in Fig. 5, and the avoidance unit 1004 may be the processor 503 shown in Fig. 5.
The acquiring unit 1001 is configured to acquire the sample information of N moments while the robot is moving, N being an integer greater than 0. The sample information of each of the N moments includes: environment image information indicating the environment the robot is in at that moment; location information indicating the robot's position at that moment within the environment indicated by the environment image information; and label information indicating the robot's state at that moment. The label information includes a first label or a second label; the first label indicates that the robot is in a non-trapped state, and the second label indicates that the robot is in a trapped state. Illustratively, the acquiring unit 1001 can be configured to perform S601 of the robot obstacle avoidance method shown in Fig. 6 above, and can also be configured to perform S701 of the robot obstacle avoidance method shown in Fig. 7 above.
The communication unit 1003 is configured to send the sample information of M of the N moments to the server, M being a positive integer less than or equal to N. Illustratively, the communication unit 1003 can be configured to perform S602 of the robot obstacle avoidance method shown in Fig. 6 above, and can also be configured to perform S702 of the robot obstacle avoidance method shown in Fig. 7 above.
The communication unit 1003 is also configured to receive the predicament identification model from the server, the predicament identification model being trained from sample information acquired by one or more robots. Illustratively, the communication unit 1003 can also be configured to perform S604 of the robot obstacle avoidance method shown in Fig. 6 above, and S704 of the robot obstacle avoidance method shown in Fig. 7 above.
The avoidance unit 1004 is configured to perform obstacle avoidance according to the predicament identification model while the robot is moving. Illustratively, the avoidance unit 1004 can also be configured to perform S605 of the robot obstacle avoidance method shown in Fig. 6 above, and S706 of the robot obstacle avoidance method shown in Fig. 7 above.
Further, the above obstacle avoidance apparatus may also include a determination unit 1002.
The determination unit 1002 is configured to determine that, among the sample information of the N moments, the label information included in the sample information of the Nth moment indicates that the robot is in a trapped state at the Nth moment.
It should be noted that for all relevant content of each step involved in the above method embodiments, reference can be made to the functional description of the corresponding functional module; details are not repeated here. The obstacle avoidance apparatus provided by the embodiments of this application is used to perform the above robot obstacle avoidance method and can therefore achieve the same effects as the above robot obstacle avoidance method.
In the case of using integrated units, another possible composition of the obstacle avoidance apparatus involved in the above embodiments may include: an acquisition module, a processing module, and a communication module.
The acquisition module is used to acquire the information needed by the robot for obstacle avoidance; for example, the acquisition module supports the robot in performing S601 in Fig. 6, S701 and S705 in Fig. 7, and/or other processes for the techniques described herein. The processing module is used to control and manage the movement of the robot; for example, the processing module supports the robot in performing S605 in Fig. 6, S706 in Fig. 7, and/or other processes for the techniques described herein. The communication module is used to support the communication of the robot with other network entities, such as the communication with the server shown in Fig. 6 or Fig. 7, and with other functional modules or network entities. The robot may also include a memory module for storing the robot's sample information, the predicament identification model, and other program code and data.
The obstacle avoidance apparatus provided by the embodiments of this application is used to perform the above robot obstacle avoidance method and can therefore achieve the same effects as the above robot obstacle avoidance method.
Through the above description of the embodiments, those skilled in the art can clearly understand that, for convenience and brevity of description, only the division of the above functional modules is illustrated by example. In practical applications, the above functions can be allocated to different functional modules as needed; that is, the internal structure of the device is divided into different functional modules to complete all or part of the functions described above.
In the several embodiments provided in this application, it should be understood that the disclosed devices and methods can be implemented in other ways. For example, the device embodiments described above are merely illustrative; the division into modules or units is only a division of logical functions, and there may be other division manners in actual implementation: multiple units or components may be combined or integrated into another device, or some features may be ignored or not executed. In addition, the mutual coupling, direct coupling, or communication connection shown or discussed may be through some interfaces, and the indirect coupling or communication connection of devices or units may be electrical, mechanical, or in other forms.
Units described as separate components may or may not be physically separated, and components shown as units may be one physical unit or multiple physical units; they may be located in one place or distributed across multiple different places. Some or all of the units may be selected according to actual needs to achieve the purpose of the solution of this embodiment.
In addition, the functional units in each embodiment of this application may be integrated into one processing unit, or each unit may exist physically alone, or two or more units may be integrated into one unit. The above integrated unit can be implemented in the form of hardware or in the form of a software functional unit.
If the integrated unit is implemented in the form of a software functional unit and sold or used as an independent product, it can be stored in a readable storage medium. Based on this understanding, the technical solutions of the embodiments of this application, in essence, or the part that contributes to the prior art, or all or part of the technical solutions, can be embodied in the form of a software product. The software product is stored in a storage medium and includes several instructions for causing a device (which may be a single-chip microcomputer, a chip, or the like) or a processor to execute all or part of the steps of the methods of each embodiment of this application. The aforementioned storage medium includes various media that can store program code, such as a USB flash drive, a removable hard disk, a read-only memory (ROM), a random access memory (RAM), a magnetic disk, or an optical disc.
The above is only a specific embodiment of this application, but the protection scope of this application is not limited thereto. Any change or replacement within the technical scope disclosed in this application shall be covered by the protection scope of this application. Therefore, the protection scope of this application shall be subject to the protection scope of the claims.

Claims (12)

1. A robot obstacle avoidance method, characterized in that the method comprises:
the robot acquiring sample information of N moments while the robot is moving, N being an integer greater than 0; wherein the sample information of each of the N moments comprises: environment image information indicating the environment the robot is in at that moment; location information indicating the robot's position at that moment within the environment indicated by the environment image information; and label information indicating the robot's state at that moment; the label information comprising a first label or a second label, the first label indicating that the robot is in a non-trapped state, and the second label indicating that the robot is in a trapped state;
the robot sending the sample information of M of the N moments to a server, M being a positive integer less than or equal to N;
the robot receiving a predicament identification model from the server, the predicament identification model being trained from sample information acquired by one or more robots; and
the robot performing obstacle avoidance while moving according to the predicament identification model.
2. The method according to claim 1, characterized in that
the environment image information of each moment comprises: image information of objects on and around the moving route of the robot in the environment the robot is in at that moment.
3. The method according to claim 1 or 2, characterized in that
the location information of each moment comprises: relative position information of the robot with respect to the objects on and around the robot's moving route, within the environment indicated by the environment image information of that moment.
4. The method according to any one of claims 1-3, characterized in that the robot sending the sample information of M of the N moments to the server comprises:
the robot determining that, among the sample information of the N moments, the label information included in the sample information of the Nth moment indicates that the robot is in a trapped state at the Nth moment; and
the robot sending the sample information of the M-1 moments preceding the Nth moment and the sample information of the Nth moment to the server.
5. The method according to any one of claims 1-4, characterized in that the robot performing obstacle avoidance while moving according to the predicament identification model comprises:
the robot acquiring the environment image information of the current moment and the location information of the current moment;
the robot obtaining the environment image information of the next moment and the location information of the next moment according to the environment image information of the current moment, the location information of the current moment, and the moving direction and moving speed of the robot at the current moment;
the robot determining the probability that the robot will be trapped at the next moment according to the environment image information of the next moment, the location information of the next moment, and the predicament identification model; and
the robot determining that the probability that the robot will be trapped at the next moment is greater than a preset threshold, and the robot changing its motion strategy to perform obstacle avoidance.
6. An obstacle avoidance apparatus, characterized in that it is applied to a robot, the apparatus comprising: an acquiring unit, a communication unit, and an avoidance unit;
the acquiring unit being configured to acquire sample information of N moments while the robot is moving, N being an integer greater than 0; wherein the sample information of each of the N moments comprises: environment image information indicating the environment the robot is in at that moment; location information indicating the robot's position at that moment within the environment indicated by the environment image information; and label information indicating the robot's state at that moment; the label information comprising a first label or a second label, the first label indicating that the robot is in a non-trapped state, and the second label indicating that the robot is in a trapped state;
the communication unit being configured to send the sample information of M of the N moments to a server, M being a positive integer less than or equal to N;
the communication unit being further configured to receive a predicament identification model from the server, the predicament identification model being trained from sample information acquired by one or more robots; and
the avoidance unit being configured to perform obstacle avoidance according to the predicament identification model while the robot is moving.
7. The apparatus according to claim 6, characterized in that
the environment image information of each moment comprises: image information of objects on and around the moving route of the robot in the environment the robot is in at that moment.
8. The apparatus according to claim 6 or 7, characterized in that
the location information of each moment comprises: relative position information of the robot with respect to the objects on and around the robot's moving route, within the environment indicated by the environment image information of that moment.
9. The apparatus according to any one of claims 6-8, characterized in that the apparatus further comprises: a determination unit;
the determination unit being configured to determine that, among the sample information of the N moments, the label information included in the sample information of the Nth moment indicates that the robot is in a trapped state at the Nth moment;
wherein the communication unit being configured to send the sample information of M of the N moments to the server comprises:
the communication unit being configured to send the sample information of the M-1 moments preceding the Nth moment and the sample information of the Nth moment to the server.
10. The apparatus according to any one of claims 6-9, characterized in that
the acquiring unit is further configured to acquire the environment image information of the current moment and the location information of the current moment, and to obtain the environment image information of the next moment and the location information of the next moment according to the environment image information of the current moment, the location information of the current moment, and the moving direction and moving speed of the robot at the current moment;
wherein the avoidance unit being configured to perform obstacle avoidance according to the predicament identification model while the robot is moving comprises:
the avoidance unit being configured to determine the probability that the robot will be trapped at the next moment according to the environment image information of the next moment, the location information of the next moment, and the predicament identification model, determine that the probability that the robot will be trapped at the next moment is greater than a preset threshold, change the robot's motion strategy, and perform obstacle avoidance.
11. A robot obstacle avoidance system, comprising one or more robots and a server;
The robot is configured to obtain sample information at N moments during movement of the robot and to send the sample information at the N moments to the server, wherein N is an integer greater than 0, and the sample information at each of the N moments comprises: ambient image information indicating the environment of the robot at that moment, location information indicating the position of the robot at that moment within the environment indicated by the ambient image information, and label information indicating the state of the robot at that moment; the label information comprises a first label or a second label, the first label indicating that the robot is in a non-trapped state, and the second label indicating that the robot is in a trapped state;
The server is configured to receive the sample information from the one or more robots, train on the received sample information of the one or more robots to obtain a predicament identification model, and send the predicament identification model to the robot;
The robot is further configured to receive the predicament identification model and to perform obstacle avoidance during movement according to the predicament identification model.
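The per-moment sample record claim 11 has the robot send to the server can be sketched as a simple data structure. This is a hypothetical illustration; the field names, types, and the label encoding are assumptions, since the claim only specifies what each sample must indicate.

```python
from dataclasses import dataclass

NOT_TRAPPED = 0  # the "first label" of claim 11
TRAPPED = 1      # the "second label" of claim 11

@dataclass
class Sample:
    image: bytes      # ambient image information at the moment
    position: tuple   # robot position within the environment the image indicates
    label: int        # NOT_TRAPPED or TRAPPED

def collect_samples(robot_states):
    """Build the N-moment sample batch the robot sends to the server.

    robot_states is a hypothetical list of dicts with 'image', 'position'
    and 'is_trapped' keys recorded during movement.
    """
    return [Sample(s["image"], s["position"],
                   TRAPPED if s["is_trapped"] else NOT_TRAPPED)
            for s in robot_states]
```

Each sample thus pairs the perceptual state (image plus position) with a binary supervision signal, which is what lets the server train the predicament identification model as a classifier.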
12. The system according to claim 11, wherein the sample information received by the server comprises first-class sample information and second-class sample information;
The server being configured to train on the received sample information of the one or more robots to obtain the predicament identification model comprises:
the server being configured to train on the first-class sample information to obtain an initial model, and to verify the accuracy of the initial model according to the second-class sample information to obtain a verification accuracy rate; if the verification accuracy rate is greater than a preset threshold, the initial model is determined to be the predicament identification model; if the verification accuracy rate is less than the preset threshold, the server receives new sample information sent by the robot, continues training the initial model with the new sample information, and verifies, according to the second-class sample information, the accuracy of the model obtained by the continued training, until the verification accuracy rate of a currently obtained model is greater than the preset threshold, whereupon that model is determined to be the predicament identification model;
wherein, if the trapped-or-not state of the robot indicated by the result obtained by inputting into the initial model the ambient image information and the location information of a sample contained in the second-class sample information is the same as the trapped-or-not state indicated by the label information of that sample, the initial model is determined to be accurate for that sample; if the two states are different, the initial model is determined to be inaccurate for that sample; and the verification accuracy rate of the initial model is determined according to the check results, for each sample in the second-class sample information, of the accuracy of the initial model.
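The server-side loop of claim 12 (train on first-class samples, validate on second-class samples, keep absorbing new samples until the verification accuracy rate clears the threshold) can be sketched as below. This is an illustrative sketch under stated assumptions: `train_step`, `request_new_samples`, the sample dict layout, and the model's call signature are all hypothetical placeholders for elements the claim does not pin down.

```python
def verification_accuracy(model, second_class_samples):
    """Fraction of second-class samples whose predicted trapped/not-trapped
    state matches the sample's label information (the check in claim 12)."""
    correct = sum(1 for s in second_class_samples
                  if model(s["image"], s["position"]) == s["label"])
    return correct / len(second_class_samples)

def train_until_accurate(train_step, request_new_samples,
                         first_class, second_class, threshold):
    """Train an initial model on first-class samples, then keep requesting
    new samples and continuing training until the verification accuracy
    rate on the second-class samples exceeds the preset threshold."""
    model = train_step(None, first_class)  # initial model
    while verification_accuracy(model, second_class) <= threshold:
        model = train_step(model, request_new_samples())  # continue training
    return model
```

Note the asymmetry the claim describes: the second-class (validation) set stays fixed across rounds, while only the training data grows with each batch of new samples from the robots.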
CN201910566538.1A 2019-06-27 2019-06-27 Robot obstacle avoidance method, device and system Active CN110370273B (en)

Priority Applications (2)

Application Number Priority Date Filing Date Title
CN201910566538.1A CN110370273B (en) 2019-06-27 2019-06-27 Robot obstacle avoidance method, device and system
PCT/CN2020/097871 WO2020259524A1 (en) 2019-06-27 2020-06-24 Robot obstacle avoidance method, apparatus, and system

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201910566538.1A CN110370273B (en) 2019-06-27 2019-06-27 Robot obstacle avoidance method, device and system

Publications (2)

Publication Number Publication Date
CN110370273A true CN110370273A (en) 2019-10-25
CN110370273B CN110370273B (en) 2021-04-09

Family

ID=68250960

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201910566538.1A Active CN110370273B (en) 2019-06-27 2019-06-27 Robot obstacle avoidance method, device and system

Country Status (2)

Country Link
CN (1) CN110370273B (en)
WO (1) WO2020259524A1 (en)

Cited By (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111339901A (en) * 2020-02-21 2020-06-26 北京容联易通信息技术有限公司 Intrusion detection method and device based on image, electronic equipment and storage medium
CN111633686A (en) * 2020-05-19 2020-09-08 华为技术有限公司 Robot safety protection method and device and robot
WO2020259524A1 (en) * 2019-06-27 2020-12-30 华为技术有限公司 Robot obstacle avoidance method, apparatus, and system
CN112286191A (en) * 2020-10-28 2021-01-29 深圳拓邦股份有限公司 Robot escaping control method and device and sweeping robot
CN114518762A (en) * 2022-04-20 2022-05-20 长沙小钴科技有限公司 Robot obstacle avoidance model, obstacle avoidance control method and robot

Citations (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103822625A (en) * 2013-12-01 2014-05-28 兰州大学 Line-tracking navigation method and device for intelligent robot
CN104793620A (en) * 2015-04-17 2015-07-22 中国矿业大学 Obstacle avoidance robot based on visual feature binding and reinforcement learning theory
CN105511478A (en) * 2016-02-23 2016-04-20 百度在线网络技术(北京)有限公司 Robot cleaner, control method applied to same and terminal
CN107808123A (en) * 2017-09-30 2018-03-16 杭州迦智科技有限公司 The feasible area detecting method of image, electronic equipment, storage medium, detecting system
CN108921119A (en) * 2018-07-12 2018-11-30 电子科技大学 A kind of barrier real-time detection and classification method
CN108958263A (en) * 2018-08-03 2018-12-07 江苏木盟智能科技有限公司 A kind of Obstacle Avoidance and robot
CN108968811A (en) * 2018-06-20 2018-12-11 四川斐讯信息技术有限公司 A kind of object identification method and system of sweeping robot
CN109446970A (en) * 2018-10-24 2019-03-08 西南交通大学 A kind of Intelligent Mobile Robot road scene recognition methods based on deep learning
CN109696913A (en) * 2018-12-13 2019-04-30 中国航空工业集团公司上海航空测控技术研究所 A kind of sweeping robot intelligent barrier avoiding system and method based on deep learning
CN109934121A (en) * 2019-02-21 2019-06-25 江苏大学 A kind of orchard pedestrian detection method based on YOLOv3 algorithm

Family Cites Families (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2013169221A (en) * 2012-02-17 2013-09-02 Sharp Corp Self-propelled cleaner
JP6083520B2 (en) * 2013-04-02 2017-02-22 株式会社Ihi Robot guidance method and apparatus
CN110370273B (en) * 2019-06-27 2021-04-09 华为技术有限公司 Robot obstacle avoidance method, device and system


Cited By (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2020259524A1 (en) * 2019-06-27 2020-12-30 华为技术有限公司 Robot obstacle avoidance method, apparatus, and system
CN111339901A (en) * 2020-02-21 2020-06-26 北京容联易通信息技术有限公司 Intrusion detection method and device based on image, electronic equipment and storage medium
CN111633686A (en) * 2020-05-19 2020-09-08 华为技术有限公司 Robot safety protection method and device and robot
WO2021232933A1 (en) * 2020-05-19 2021-11-25 华为技术有限公司 Safety protection method and apparatus for robot, and robot
CN111633686B (en) * 2020-05-19 2022-04-12 华为技术有限公司 Robot safety protection method and device and robot
CN112286191A (en) * 2020-10-28 2021-01-29 深圳拓邦股份有限公司 Robot escaping control method and device and sweeping robot
CN114518762A (en) * 2022-04-20 2022-05-20 长沙小钴科技有限公司 Robot obstacle avoidance model, obstacle avoidance control method and robot
CN114518762B (en) * 2022-04-20 2022-07-22 长沙小钴科技有限公司 Robot obstacle avoidance device, obstacle avoidance control method and robot

Also Published As

Publication number Publication date
WO2020259524A1 (en) 2020-12-30
CN110370273B (en) 2021-04-09

Similar Documents

Publication Publication Date Title
CN110370273A Robot obstacle avoidance method, device and system
CN104574386B (en) Indoor positioning method based on three-dimensional environment model matching
CN100369487C (en) Object detection device, object detection server, and object detection method
CN106650705A (en) Region labeling method and device, as well as electronic equipment
CN106211803A (en) Territory perception camera chain
CN109863500A (en) Event driven area-of-interest management
CN103926927A (en) Binocular vision positioning and three-dimensional mapping method for indoor mobile robot
CN108965609A (en) The recognition methods of mobile terminal application scenarios and device
US20170307393A1 (en) Information processing apparatus, information processing method, and program
CN105827959B (en) Method for processing video frequency based on geographical location
CN109190648A (en) Simulated environment generation method, device, mobile terminal and computer-readable storage medium
CN205158401U (en) Portable motion of traceable timing wearing system
CN109357679B (en) Indoor positioning method based on significance characteristic recognition
CN103426172A (en) Vision-based target tracking method and device
CN101471993A (en) Handhold intelligent mobile and fixed radio devices
CN109522996A (en) Constant mark object automatic identification control system
CN108121350B (en) Method for controlling aircraft to land and related device
CN109711504A (en) Constant mark object automatic recognition control method
CN112950673A (en) Target object detection and tracking system and method
WO2023211499A1 (en) Machine learning real property object detection and analysis apparatus, system, and method
CN107193820A (en) Location information acquisition method, device and equipment
CN113330278B (en) Navigation device, navigation method and related equipment
CN107454609A (en) The device and method of telecommunication is carried out by mobile network and unmanned plane
CN109903308B (en) Method and device for acquiring information
CN105554400A (en) Method for achieving automatic jumping photographing through intelligent wristband

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant