WO2019037517A1 - Mobile electronic device and method for processing a task of a task area - Google Patents

Mobile electronic device and method for processing a task of a task area Download PDF

Info

Publication number
WO2019037517A1
WO2019037517A1 (PCT/CN2018/090585)
Authority
WO
WIPO (PCT)
Prior art keywords
electronic device
mobile electronic
picture
module
processor
Prior art date
Application number
PCT/CN2018/090585
Other languages
English (en)
French (fr)
Inventor
潘景良
陈灼
李腾
陈嘉宏
高鲁
Original Assignee
炬大科技有限公司
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 炬大科技有限公司
Publication of WO2019037517A1

Classifications

    • G PHYSICS
    • G05 CONTROLLING; REGULATING
    • G05D SYSTEMS FOR CONTROLLING OR REGULATING NON-ELECTRIC VARIABLES
    • G05D 1/00 Control of position, course, altitude or attitude of land, water, air or space vehicles, e.g. using automatic pilots
    • G05D 1/02 Control of position or course in two dimensions
    • G05D 1/021 Control of position or course in two dimensions specially adapted to land vehicles
    • G05D 1/0276 Control of position or course in two dimensions specially adapted to land vehicles using signals provided by a source external to the vehicle

Definitions

  • The present invention relates to the field of electronic devices.
  • More specifically, the invention relates to the field of intelligent robot systems.
  • A traditional sweeping robot either localizes and moves autonomously on a scanned map or wanders randomly, changing direction on collision rebound, sweeping the floor as it goes. Because its mapping and localization technology is immature or imprecise, a traditional sweeping robot cannot fully judge complex floor conditions while working and easily loses track of its position and heading.
  • Moreover, some models, lacking any localization capability, can change direction only by the physics of collision rebound, which can damage household items or the robot itself, even cause personal injury, and disturb the user.
  • An embodiment of the present invention proposes taking pictures with a mobile phone terminal and, on the phone, defining picture-subspace names for the photos or for selected specific target regions. Via the APP, or via the robot's microphone, the robot uses speech recognition to associate a voice instruction with a named region and completes the task in the region the instruction indicates.
  • An embodiment of the present invention sends instructions to the robot by voice or App, so that the robot automatically reaches the named picture subspace and completes the task there, which facilitates automatic cleaning by the robot.
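To make this flow concrete, here is a minimal Python sketch of how a recognized voice or App string could be resolved to a named picture subspace before the task is dispatched. Everything in it (the subspace dictionary, the function name, the file names) is an illustrative assumption, not an identifier from the patent:

```python
# Hypothetical picture library: subspace names -> photos. A photo may
# belong to several subspaces at once, as the text notes below.
picture_subspaces = {
    "living room": ["living_room_1.jpg", "living_room_2.jpg"],
    "bedroom": ["bedroom_1.jpg"],
    "entire home": ["living_room_1.jpg", "bedroom_1.jpg"],
}

def resolve_subspace(instruction: str) -> str:
    """Resolve an instruction like 'clean the living room' to a subspace name."""
    for name in picture_subspaces:
        if name in instruction.lower():
            return name
    raise ValueError(f"no picture subspace matches: {instruction!r}")

print(resolve_subspace("Please clean the living room"))  # -> living room
```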
  • According to an embodiment of one aspect, a mobile electronic device for processing a task of a task area includes a first wireless signal transceiver, a processor, a positioning module, a path planning module, and a motion module. The first wireless signal transceiver is communicably coupled to a second mobile electronic device and configured to obtain an instruction from the second mobile electronic device; the instruction includes the name of a destination task area to be processed by the mobile electronic device, and the name of the task area is associated with a picture subspace of a picture library in the mobile electronic device. The processor is communicably coupled to the first wireless signal transceiver and configured to determine the environment space corresponding to the name of the destination task area. The positioning module is communicably connected to the processor and configured to record the distance range between the current location of the mobile electronic device and the environment space. The path planning module is communicably connected to the processor and configured to generate a path planning scheme according to the name of the task area. The motion module is communicably coupled to the path planning module and the positioning module and configured to perform the task according to the path planning scheme and the distance range recorded by the positioning module.
  • According to an embodiment of another aspect, a method for processing a task of a task area in a mobile electronic device is provided, the mobile electronic device including a first wireless signal transceiver, a processor, a positioning module, a path planning module, and a motion module. An instruction from a second mobile electronic device is obtained through the first wireless signal transceiver communicatively coupled to the second mobile electronic device, the instruction including the name of a destination task area to be processed by the mobile electronic device, the name being associated with a picture subspace in a picture library in the mobile electronic device; the environment space corresponding to the name of the destination task area is determined by the processor communicatively connected to the first wireless signal transceiver; the distance range between the current location of the mobile electronic device and the environment space is recorded by the positioning module communicably connected to the processor; a path planning scheme is generated, according to the name of the task area, by the path planning module communicatively coupled to the processor; and the task is performed, according to the path planning scheme and the recorded distance range, by the motion module communicably connected to the path planning module and the positioning module.
  • FIG. 1 shows a schematic diagram of a system in which a mobile electronic device is located, in accordance with one embodiment of the present invention.
  • Figure 2 illustrates a method flow diagram in accordance with one embodiment of the present invention.
  • FIG. 1 shows a schematic diagram of a system in which a mobile electronic device is located, in accordance with one embodiment of the present invention.
  • the mobile electronic device 100 includes, but is not limited to, a cleaning robot, an industrial automation robot, a service robot, a disaster relief robot, an underwater robot, a space robot, a drone, and the like. It can be understood that the mobile electronic device 100 can also be referred to as the first mobile electronic device 100 in order to distinguish it from the following second mobile electronic device 140.
  • the second mobile electronic device 140 includes, but is not limited to, a mobile phone, a tablet computer, a notebook computer, a remote controller, and the like.
  • the mobile electronic device optionally includes an operator interface.
  • the second mobile electronic device is a mobile phone, and the operation interface is a mobile phone APP.
  • The signal transmission modes between the mobile electronic device 100 and the charging post 160 include, but are not limited to, Bluetooth, WIFI, ZigBee, infrared, ultrasound, ultra-wide bandwidth (UWB), and so on; in this embodiment, WIFI is taken as the example signal transmission mode.
  • The mission area represents the venue where the mobile electronic device 100 performs the task. For example, when the task of the mobile electronic device 100 is to clean the ground, the task area represents the area that the cleaning robot needs to clean. As another example, when the task of the mobile electronic device 100 is disaster relief, the task area represents the area where the disaster relief robot needs to carry out the rescue.
  • a mission site represents a venue that contains the entire mission area.
  • a mobile electronic device 100 for processing tasks of a task area includes a first wireless signal transceiver 102, a processor 104, a positioning module 106, a path planning module 108, and a motion module 110.
  • the first wireless signal transceiver 102 is communicably coupled to the second mobile electronic device 140 and configured to acquire instructions from the second mobile electronic device 140, the instructions including a name indicating a destination task area to be processed by the mobile electronic device 100, The name of the destination task area is associated with a picture subspace in the picture library in the mobile electronic device 100.
  • the second mobile electronic device 140 can be, for example, a mobile phone.
  • the second mobile electronic device 140 includes a second camera 144, a second processor 146, and a second wireless signal transceiver 142.
  • the user of the second mobile electronic device 140 photographs a plurality of pictures of the task area using the second camera 144.
  • the second processor 146 is communicably coupled to the second camera 144 to define at least one picture subspace for the plurality of pictures taken according to a user's instruction.
  • the second wireless signal transceiver 142 of the second mobile electronic device 140 transmits the photo to the mobile electronic device 100, thereby forming a photo library in the mobile electronic device 100.
  • the photo library may be stored at the charging station 180 of the mobile electronic device 100, or stored at the server of the mobile electronic device 100, or stored in the cloud of the mobile electronic device 100.
  • In the picture library, the processor 104 or the second processor 146 defines names for different types of picture subspaces according to the user's instructions. For example, the processor 104 or the second processor 146 defines six subspaces, such as bedroom, living room, hallway, study, the entire home, and so on. Note that some pictures can belong to different picture subspaces at the same time.
  • For example, the user can include a picture of the living room both in the picture subspace named 'living room' and in the picture subspace named 'entire home'.
  • The processor 104, more specifically the image processor 1040 in the processor 104, establishes a coordinate system for each image in the picture library and assigns a corresponding coordinate value to every point in the task area, thereby building an environment-space map. This coordinate system can, for example, take the charging post 180 as its coordinate origin.
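A minimal sketch of such an environment-space map, assuming a uniform grid with the charging post at world coordinates (0, 0). The cell size and map extent are invented values for illustration only:

```python
import numpy as np

# Assumed parameters: a 20 m x 20 m area mapped at 5 cm resolution,
# with the charging post at the origin of the world frame.
CELL = 0.05          # metres per grid cell
HALF_EXTENT = 10.0   # metres from the origin to each map edge

def world_to_cell(x: float, y: float) -> tuple[int, int]:
    """Map world coordinates (charging-post origin) to grid indices."""
    n = int(2 * HALF_EXTENT / CELL)
    col = int((x + HALF_EXTENT) / CELL)
    row = int((y + HALF_EXTENT) / CELL)
    if not (0 <= row < n and 0 <= col < n):
        raise ValueError("point outside the mapped area")
    return row, col

occupancy = np.zeros((400, 400), dtype=np.uint8)  # one byte per cell
print(world_to_cell(1.0, -0.5))                   # -> (190, 220)
```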
  • the second wireless signal transceiver 142 is communicably coupled to the second processor 146 and configured to transmit the name in the at least one picture subspace to the mobile electronic device 100 as the name of the destination task area to be processed.
  • the processor 104 or the second processor 146 of the mobile electronic device 100 is further configured to further subdivide the name of the at least one picture subspace according to a user's instruction.
  • For example, the user can also circle-select regions in the pictures taken and store them into the picture library or under a space name, thereby further subdividing a picture subspace.
  • For example, according to the user's circling and text input, the processor 104 or the second processor 146 of the mobile electronic device 100 may further define picture names such as 'bedroom bedside', 'living room coffee table', or 'living room dining table', stored in storage accessible to the processor 104, for example in the memory of the charging post 180, in a server, or in the cloud.
  • the user of the second mobile electronic device 140 desires the mobile electronic device 100 to clean the living room, and thus the user of the second mobile electronic device 140 sends an instruction "living room" to the mobile electronic device 100.
  • the user of the second mobile electronic device 140 may send an instruction in the form of voice, or may input the text "living room” by using the APP to indicate the instruction.
  • The first wireless signal transceiver 102 acquires the instruction from the second mobile electronic device 140. The instruction includes the name of the destination task area to be processed by the mobile electronic device 100; for example, the task area that the user wants cleaned is the living room, and the task-area name 'living room' is associated with a picture subspace in the picture library in the mobile electronic device 100, namely the picture subspace named 'living room'.
  • the processor 104 is communicably coupled to the first wireless signal transceiver 102 and configured to determine an environmental space corresponding to the name of the destination task area.
  • The environment space, that is, the environment-space map, may be established by the mobile electronic device 100 when it is first used, for example in any of the following ways:
  • Mode 1: the mobile electronic device 100 (for example, a robot) includes a camera, and the user of the second mobile electronic device 140 wears a positioning receiver.
  • The mobile electronic device 100 also includes a first camera 112, the second mobile electronic device 140 further includes a second wireless signal transceiver 142, and the mobile electronic device 100 is configured to operate in a map-building mode.
  • The first wireless signal transceiver 102 and the second wireless signal transceiver 142 are communicably coupled to a plurality of reference wireless signal sources, respectively, and configured to determine the locations of the mobile electronic device 100 and the second mobile electronic device 140 based on the signal strengths obtained from the plurality of reference wireless signal sources.
  • The signals received from the reference wireless signal sources can be converted to distance information by any method known in the art, including but not limited to Time of Flight (ToF), Angle of Arrival (AoA), Time Difference of Arrival (TDOA), and Received Signal Strength (RSS).
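As a hedged illustration of the RSS branch of this step, the sketch below inverts a log-distance path-loss model to turn signal strength into range and then trilaterates a position with linear least squares. The path-loss exponent, reference power, and beacon layout are assumed values; the patent does not specify a model:

```python
import numpy as np

# Assumed radio model: log-distance path loss with reference power P0
# measured at 1 m and path-loss exponent N_EXP. Beacon layout invented.
P0, N_EXP = -40.0, 2.2  # dBm at 1 m, path-loss exponent

def rss_to_distance(rss_dbm):
    """Invert the log-distance path-loss model to estimate range in metres."""
    return 10 ** ((P0 - rss_dbm) / (10.0 * N_EXP))

def trilaterate(beacons, distances):
    """Linear least-squares position fix from three or more beacon ranges."""
    (x0, y0), d0 = beacons[0], distances[0]
    A, b = [], []
    for (xi, yi), di in zip(beacons[1:], distances[1:]):
        A.append([2 * (xi - x0), 2 * (yi - y0)])
        b.append(d0**2 - di**2 + xi**2 - x0**2 + yi**2 - y0**2)
    sol, *_ = np.linalg.lstsq(np.array(A), np.array(b), rcond=None)
    return sol  # estimated (x, y)

beacons = np.array([[0.0, 0.0], [6.0, 0.0], [0.0, 6.0], [6.0, 6.0]])
rss = np.array([-50.0, -54.0, -54.0, -57.0])       # readings near (2, 2)
print(trilaterate(beacons, rss_to_distance(rss)))  # roughly [2. 2.]
```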
  • the motion module 110 is configured to follow the motion of the second mobile electronic device 140 in accordance with the location of the mobile electronic device 100 and the second mobile electronic device 140.
  • For example, the mobile electronic device 100 includes a monocular camera 112, and the user of the second mobile electronic device 140 wears a wireless-positioning-receiver wristband or holds a mobile phone equipped with a wireless-positioning-receiver peripheral.
  • Using the monocular camera 112 reduces hardware cost and computational cost while achieving the same effect as a depth camera: image depth information is not needed, because distance (depth) information is sensed by the ultrasonic sensor and the laser sensor.
  • In this embodiment, a monocular camera is taken as the example for description.
  • The mobile electronic device 100 follows the user by means of its own wireless positioning receiver. For example, on first use, the user of the second mobile electronic device 140 interacts with the mobile electronic device 100 through the mobile phone APP to complete the indoor map building.
  • Using a group of wireless signal transmitters placed at fixed indoor positions as reference points, for example UWB, the mobile phone APP of the second mobile electronic device 140 and the wireless signal module in the mobile electronic device 100 read the signal strength (RSS) of each signal source to determine the indoor locations of the user of the second mobile electronic device 140 and of the mobile electronic device 100.
  • the motion module 110 of the mobile electronic device 100 completes user tracking according to real-time location information (mobile phone and robot location) transmitted by the smart charging station.
  • the first camera 112 is configured to capture a plurality of images when the motion module 110 is in motion, the plurality of images including feature information and corresponding photographing position information.
  • For example, during following, the map building is completed by the robot's monocular camera.
  • While following, the mobile electronic device 100 photographs the entire indoor layout with the first camera 112, such as a monocular camera, and transmits the captured feature-rich images, their corresponding shooting-position information, and the following-path coordinates of the mobile electronic device 100 to the memory 116 in real time via a local wireless communication network (WIFI, Bluetooth, ZigBee, etc.).
  • In FIG. 1, the memory 116 is shown as included in the mobile electronic device 100.
  • Optionally, the memory 116 may also be included in the smart charging post 180, i.e., the cloud.
  • The processor 104 is communicably coupled to the first camera 112 and configured to generate an image map by stitching the plurality of images and extracting their feature information and shooting-position information. For example, based on the height and the intrinsic and extrinsic parameters of the first camera 112 of the mobile electronic device 100, the image processor 1040 in the processor 104 performs map stitching over the large number of images captured by the first camera 112, selects and extracts features (for example with the SIFT or SURF algorithms), adds feature-point position information, generates indoor image-map information (containing a large number of image feature points), and stores the processed image-map information in the memory 116.
  • The intrinsic parameters of the camera are parameters related to the camera's own characteristics, such as the lens focal length and pixel size; the extrinsic parameters of the camera are its parameters in the world coordinate system (the actual indoor coordinate system of the charging post), such as the camera's position, rotation direction, and angle.
  • The photos taken by the camera have their own camera coordinate system, so the intrinsic and extrinsic parameters of the camera are required to convert between the coordinate systems.
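The following sketch shows the conversion the text describes, projecting a point in the world (charging-post) coordinate system into a photo's pixel coordinate system with the pinhole model K[R|t]. The focal length, principal point, and camera pose are made-up illustrative values:

```python
import numpy as np

K = np.array([[800.0, 0.0, 320.0],   # fx, 0, cx  (intrinsic parameters)
              [0.0, 800.0, 240.0],   # 0, fy, cy
              [0.0, 0.0, 1.0]])
R = np.eye(3)                        # extrinsic rotation (identity here)
t = np.array([0.0, 0.0, 2.0])        # extrinsic translation, 2 m back

def project(world_point: np.ndarray) -> np.ndarray:
    """World coordinates -> pixel coordinates via K [R | t]."""
    cam = R @ world_point + t        # world frame -> camera frame
    if cam[2] <= 0:
        raise ValueError("point is behind the camera")
    uvw = K @ cam                    # camera frame -> homogeneous pixels
    return uvw[:2] / uvw[2]

print(project(np.array([0.5, -0.2, 3.0])))  # -> [400. 208.]
```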
  • Mode 2: the mobile electronic device 100 (robot) includes a camera and a display that can show a black-and-white camera-calibration checkerboard, and the user of the second mobile electronic device 140 does not need to wear a positioning receiver.
  • Optionally, in another embodiment, the mobile electronic device 100 further includes a display screen 118 and is configured to operate in a map-building mode, and the second mobile electronic device 140 includes a second camera 144. The first wireless signal transceiver 102 is communicably coupled to a plurality of reference wireless signal sources and configured to determine the location of the mobile electronic device 100 based on the signal strengths obtained from the plurality of reference wireless signal sources.
  • the first camera 112 is configured to detect the location of the second mobile electronic device 140.
  • the mobile electronic device 100 further includes an ultrasonic sensor and a laser sensor, and the distance between the mobile electronic device 100 and the second mobile electronic device 140 can be detected.
  • the motion module 110 is configured to follow the motion of the second mobile electronic device 140 in accordance with the location of the mobile electronic device 100 and the second mobile electronic device 140.
  • For example, on first use, the user of the second mobile electronic device 140 interacts with the mobile electronic device 100 through the mobile phone APP to complete the indoor map building.
  • Using a group of wireless signal transmitters (UWB or the like) placed at fixed indoor positions as reference points, the first wireless signal transceiver 102 in the mobile electronic device 100 reads the signal strength (RSS) of each signal source to determine the indoor location of the mobile electronic device 100.
  • Target positioning and following of the user of the second mobile electronic device 140 is achieved by the first camera 112 of the mobile electronic device 100, for example a monocular camera, together with the ultrasonic sensor and the laser sensor 114.
  • For example, the user of the second mobile electronic device 140 can set the following distance through the mobile phone APP, so that the mobile electronic device 100 adjusts its distance and angle to the second mobile electronic device 140 according to that following distance and the angle to the second mobile electronic device 140 measured in real time.
  • During following, the mobile electronic device 100 transmits its following-path coordinates to the smart charging post 180 in real time.
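One plausible realization of this follow behaviour is a simple proportional controller on the measured distance and bearing, sketched below. The gains, speed limits, and default set distance are assumptions; the patent does not specify a control law:

```python
# Proportional follow controller: keep a user-set distance and face the
# target. KP_LIN, KP_ANG, and the clamps are invented tuning values.
KP_LIN, KP_ANG = 0.8, 1.5

def follow_step(measured_dist: float, measured_angle: float,
                set_dist: float = 1.2):
    """Return (linear_velocity, angular_velocity) commands.

    measured_angle is the bearing to the user in radians, 0 = straight ahead.
    """
    v = KP_LIN * (measured_dist - set_dist)   # close the range error
    w = KP_ANG * measured_angle               # turn toward the user
    return max(min(v, 0.5), -0.5), max(min(w, 1.0), -1.0)

print(follow_step(2.0, 0.3))  # too far and to the left: advance and turn
```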
  • display 118 of mobile electronic device 100 is configured to display, for example, a black and white checkerboard.
  • Image processor 1040 in processor 104 is communicably coupled to second camera 144 and is configured to receive a plurality of images taken from second camera 144 as motion module 110 moves.
  • image processor 1040 in processor 104 can receive a plurality of images taken from second camera 144 via first wireless signal transceiver 102 and second wireless signal transceiver 142.
  • the plurality of images includes an image of the display 118 of the mobile electronic device 100 that is displayed as a black and white checkerboard.
  • The image processor 1040 in the processor 104 is further configured to generate an image map by stitching the plurality of images and extracting their feature information and shooting-position information. In this manner, the user of the second mobile electronic device 140 does not need to wear a positioning receiver, so the extrinsic parameters of the camera of the second mobile device 140, for example the mobile phone camera, must be obtained by camera calibration against a calibration picture.
  • the calibration picture is a checkerboard composed of black and white rectangles, as shown in Figure 5.
  • For example, the mobile electronic device 100, i.e., the robot, includes a first camera 112, such as a monocular camera, and a display 118 that can show the black-and-white camera-calibration checkerboard.
  • the user does not need to wear the wireless positioning receiver bracelet, and the user does not need to hold the mobile phone equipped with the wireless positioning receiver peripheral.
  • The mobile electronic device 100 follows the user visually, and the user of the second mobile electronic device 140 uses the mobile phone APP to complete the map building. For example, on reaching each room, the user of the second mobile electronic device 140 launches the room-mapping application via the mobile phone APP, at which time the liquid crystal display 118 of the mobile electronic device 100 shows a classic black-and-white checkerboard for correcting the camera.
  • the mobile electronic device 100 simultaneously transmits its own coordinate and direction information to the positioning module 106.
  • the user of the second mobile electronic device 140 photographs the room environment using the mobile phone APP, and the photograph taken needs to include a black and white checkerboard in the liquid crystal display of the mobile electronic device 100.
  • The user of the second mobile electronic device 140 takes a plurality of photos according to the layout of the room (each photo must capture the black-and-white checkerboard on the robot's LCD screen) and, through the mobile phone APP, transmits the captured images containing the room environment and the mobile electronic device 100, for example the robot 100, to the memory 116 via a local wireless communication network (WIFI, Bluetooth, ZigBee, etc.).
  • Based on the position and orientation of the mobile electronic device 100, for example the robot, at that time, and on the height and the intrinsic and extrinsic parameters of the camera 112, the image processor 1040 in the processor 104 performs map stitching on the large number of images taken by the user of the second mobile electronic device 140, selects and extracts features, adds feature-point position information, generates indoor image-feature-point map information, and stores the processed image-map information in the memory 116.
  • Mode 3: the mobile electronic device 100 (robot) does not include a camera, and the user of the second mobile electronic device 140 wears a positioning receiver.
  • the second mobile electronic device 140 further includes a second wireless signal transceiver 142 and a second camera 144.
  • a second wireless signal transceiver 142 is communicably coupled to the plurality of reference wireless signal sources, configured to determine a location of the second mobile electronic device 140 based on signal strengths obtained from the plurality of reference wireless signal sources.
  • the second camera 144 is configured to capture a plurality of images of the mission location.
  • The image processor 1040 in the processor 104 is communicably coupled to the second camera 144 and configured to generate an image map by stitching the plurality of images and extracting their feature information and shooting-position information.
  • For example, the mobile electronic device 100, such as a robot, does not include a monocular camera, and the robot does not follow the user of the second mobile electronic device 140.
  • The user of the second mobile electronic device 140 wears a wireless-positioning-receiver wristband, or holds a mobile phone equipped with a wireless-positioning-receiver peripheral, and uses the mobile phone APP to complete the indoor map building.
  • For example, on first use, the user of the second mobile electronic device 140 establishes the indoor map through the mobile phone APP together with the worn wireless-positioning-receiver wristband or the phone's wireless-positioning-receiver peripheral.
  • Using fixed-position reference wireless signal sources (UWB or the like) placed indoors as reference points, the wireless signal transceiver 142 in the second mobile electronic device 140 reads the Received Signal Strength (RSS) of each reference wireless signal source to determine the indoor location of the user of the second mobile electronic device 140. On reaching each room, the user of the second mobile electronic device 140 launches the room-mapping program via the mobile APP. The user of the second mobile electronic device 140 then photographs the room environment with the mobile phone APP, for example taking multiple photos according to the layout of the room.
  • The mobile APP of the second mobile electronic device 140 records the pose information of the second camera 144 for each shot, together with the height of the second mobile electronic device 140 (for example, the phone) above the ground and its indoor position as recorded by the second wireless signal transceiver 142, and transmits them to the memory 116 via a local wireless communication network (WIFI, Bluetooth, ZigBee, etc.).
  • Based on the intrinsic and extrinsic parameters of the second camera 144 and on the pose, height, and position information at shooting time, the image processor in the processor 104 performs map stitching on the large number of captured images, selects and extracts features, adds feature-point position information, generates indoor image-feature-point map information, and stores the processed image-map information in the memory 116.
  • The image processor 1040 in the processor 104 is communicably coupled to the first wireless signal transceiver 102 and configured to extract the feature information of a photo containing the selected region and, by comparing the extracted feature information with the feature information of the stored image map containing position information, determine the actual coordinate range corresponding to the selected region in the photo.
  • The position information refers to the positioning information of the image feature points in the image map obtained during map building, that is, their actual coordinate positions.
  • the location information includes, for example, the location of the charging post 180 and/or the location of the mobile electronic device 100 itself.
  • image processor 1040 in processor 104 can use the location of charging post 180 as a coordinate origin.
  • Mode 4: the user may arrange at least one camera indoors, for example on the ceiling, and collect through the at least one camera a plurality of pictures that include the mobile electronic device 100.
  • the at least one camera transmits the picture information to the image processor 1040 of the mobile electronic device 100 via the first wireless signal transceiver 102 of the mobile electronic device 100.
  • the image processor 1040 then identifies the feature information of the mobile electronic device 100 in the image of the task area, and establishes a coordinate system for the image, and assigns a corresponding coordinate value to each point in the task area to establish an environmental space map.
  • Mode 5: while the mobile electronic device 100 is moving, it uses the first camera 112, for example a depth camera, to acquire planar graphic information and the distance information of the objects in the graphics, and sends the plurality of three-dimensional information items comprising the graphic information and the distance information to the image processor 1040. The image processor 1040 is communicably coupled to the first wireless signal transceiver 102 and configured to process the received three-dimensional information; a map module communicatively coupled to the image processor 1040 then obtains the environment-space map of the task area by drawing a three-dimensional image of the task area from the processed information.
  • the processor 104 in the mobile electronic device 100 is communicably coupled to the first wireless signal transceiver 102 and configured to determine an environmental space corresponding to the name of the destination task area. For example, the mobile electronic device 100 may first determine a picture subspace corresponding to the name of the destination task area; and then determine an environment space corresponding to the picture subspace according to the picture subspace.
  • For example, the memory 116 of the mobile electronic device 100 stores the environment map established during first use, such as the indoor image-map information, including the image feature points and their location information.
  • the memory 116 of the mobile electronic device 100 further includes a correspondence between a name of the picture subspace and at least one representative picture representing the subspace.
  • For example, a representative picture of the living room may be stored in the memory 116, and the picture is named 'living room'.
  • the following is an example of a representative picture of the living room. Those skilled in the art will appreciate that this embodiment is also applicable to other types of rooms.
  • The processor 104 first determines, by techniques such as voice recognition, the picture subspace corresponding to the received instruction 'living room', for example a representative picture. For example, the processor 104 searches the names in the picture library stored in the mobile electronic device 100 and finds the representative picture named 'living room'.
  • The processor 104 includes an image processor 1040. The image processor 1040 extracts, for example, the feature information and location information in the representative picture of the living room, and performs a fast comparison analysis against the indoor environment map (including location information) in the memory 116 using an image-feature-point matching algorithm (such as SIFT or SURF).
  • the image feature points may be identified by a Scale Invariant Feature Transform (SIFT) algorithm or a Speeded Up Robust Features (SURF) algorithm. With the SIFT algorithm, a reference image needs to be stored in the memory 116.
  • The image processor 1040 first identifies the key points of the objects in the reference image stored in the memory 116 and extracts their SIFT features; it then compares the SIFT features of those key points with the SIFT features of the newly acquired image, and recognizes objects in the new image by matching features with the K-Nearest Neighbor (KNN) algorithm.
  • The SURF algorithm is based on approximate 2D Haar wavelet responses, uses integral images for image convolution, constructs its detector with a Hessian-matrix-based measure, and uses a distribution-based descriptor.
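A minimal sketch of the described matching step using OpenCV (the opencv-python package, version 4.4 or later, which ships SIFT in the main module): SIFT keypoints are matched with a k-nearest-neighbour search and filtered with Lowe's ratio test. The image file names are placeholders:

```python
import cv2

# Placeholder file names; substitute real images before running.
query = cv2.imread("living_room_representative.jpg", cv2.IMREAD_GRAYSCALE)
reference = cv2.imread("indoor_map_tile.jpg", cv2.IMREAD_GRAYSCALE)

sift = cv2.SIFT_create()
kp_q, des_q = sift.detectAndCompute(query, None)
kp_r, des_r = sift.detectAndCompute(reference, None)

matcher = cv2.BFMatcher()                    # brute force, L2 norm for SIFT
pairs = matcher.knnMatch(des_q, des_r, k=2)  # two nearest neighbours each

# Lowe's ratio test: keep a match only when it is clearly better than
# the second-best candidate.
good = [m for m, n in pairs if m.distance < 0.75 * n.distance]
print(f"{len(good)} matches survive the ratio test")
```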
  • Alternatively or additionally, the coordinate range of the actual room area corresponding to the representative picture of the living room may be determined by coordinate-mapping conversion, which yields the actual coordinate range of the task area.
  • The feature points in the representative picture of the living room stored in the mobile electronic device 100 are matched against the image feature points in the image map, which fixes the actual coordinate positions of the feature points in the picture.
  • After matching, the coordinate-system transformation between the camera coordinate system of the representative picture of the living room and the actual world coordinate system of the charging post can also be calculated.
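Assuming matched point pairs between the representative picture and the map are already available, the sketch below estimates the picture-to-world transform as a homography and carries a selected pixel region into map coordinates with OpenCV. The matched coordinates are synthetic placeholders:

```python
import numpy as np
import cv2

# Synthetic matched points: pixel coordinates in the room photo and the
# corresponding world coordinates (metres) recovered from the image map.
pts_photo = np.float32([[100, 80], [420, 90], [410, 300], [120, 310]])
pts_world = np.float32([[0.5, 0.4], [3.1, 0.5], [3.0, 2.2], [0.6, 2.3]])

H, _ = cv2.findHomography(pts_photo, pts_world)  # picture -> world

# Carry a user-circled pixel region into the world coordinate frame.
region_px = np.float32([[150, 120], [380, 120], [380, 260], [150, 260]])
region_world = cv2.perspectiveTransform(region_px.reshape(-1, 1, 2), H)
print(region_world.reshape(-1, 2))  # corner coordinates in the map frame
```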
  • a representative picture of a living room in a photo library stored in a memory of the mobile electronic device 100 includes feature points such as a sofa, a coffee table, and a television cabinet, and a coordinate range of each of the furniture.
  • the environment map stored in the memory of the mobile electronic device 100 also includes a sofa, a coffee table, and a TV cabinet in the living room.
  • The image processor 1040 compares the picture of the living room in the picture library with the environment map, extracts the feature information, compares the respective coordinate values, and performs the coordinate transformation, thereby obtaining the actual world coordinate range of the living room that needs to be cleaned.
  • The positioning module 106 is communicably coupled to the processor 104 and configured to record the distance range between the current location of the mobile electronic device 100 and the environment space, for example the living room. For example, the positioning module 106 sets the location of the charging post 180 as the coordinate origin, and each point in the image corresponds to a coordinate value (X, Y). The positioning module 106 and the encoder let the mobile electronic device 100 know its current location.
  • The positioning module 106 is the module that calculates the indoor location of the mobile electronic device 100. The mobile electronic device 100 needs to know its own indoor location at all times during operation, and this is achieved through the positioning module 106.
  • the path planning module 108 is communicably coupled to the processor 104 and configured to generate a path planning scheme based on the name of the task area.
  • the path planning module 108 is further configured to perform path planning on the selected area by using a grid-based spanning tree path planning algorithm.
  • For example, the path planning module 108 uses Grid-based Spanning Tree Path Planning to plan the cleaning path for the selected target cleaning region. The method grids the corresponding coordinate region, establishes tree nodes for the grid cells and generates a spanning tree, and then uses the Hamiltonian circuit that goes around the spanning tree as the optimized cleaning path covering the region.
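The sketch below shows a simplified relative of this algorithm: free grid cells are linked into a spanning tree by depth-first search, and an Euler tour of that tree visits every cell (full spanning-tree coverage refines such a tour into the Hamiltonian circuit around the tree that the text mentions). The grid is an invented example:

```python
# Simplified spanning-tree coverage sketch over an invented 3x3 grid
# with one blocked cell in the middle.
FREE, WALL = 0, 1
grid = [[0, 0, 0],
        [0, 1, 0],
        [0, 0, 0]]

def coverage_tour(start=(0, 0)):
    """DFS spanning tree over free cells, returned as an Euler tour."""
    visited, tour = {start}, [start]

    def dfs(cell):
        r, c = cell
        for nr, nc in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
            nxt = (nr, nc)
            if (0 <= nr < len(grid) and 0 <= nc < len(grid[0])
                    and grid[nr][nc] == FREE and nxt not in visited):
                visited.add(nxt)
                tour.append(nxt)      # walk down the tree edge
                dfs(nxt)
                tour.append(cell)     # walk back up (each edge twice)

    dfs(start)
    return tour

print(coverage_tour())  # visits every free cell of the grid
```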
  • Initially, the mobile electronic device 100 is located at the smart charging post 180.
  • To travel from the smart charging post 180 to the coordinate range of the selected region, the path planning module 108 reads the path along which the mobile electronic device 100 followed the user to that region on first use (if the mobile electronic device 100 used the following mode), or adopts the walking path of the user of the second mobile electronic device 140 during map building as the path to the region (if the mobile electronic device 100 did not follow the user on first use), and combines that path with the optimized cleaning path of the selected region into the cleaning-task path.
  • The synthesis can connect the two path segments in simple sequence: the first segment reaches the target cleaning region, and the second segment optimally covers the circled cleaning region to complete the cleaning task.
  • the motion module 110 can be communicatively coupled to the path planning module 108, configured to perform motion in accordance with a path planning scheme.
  • the mobile electronic device 100 further includes a first camera 112 and a memory 116 for taking a picture of the target task area while the task is being performed.
  • The first wireless signal transceiver 102 is further communicably connected to the first camera 112 and used to acquire the pictures of the task area captured by the first camera 112, store them in the memory 116, and associate them with the picture subspace.
  • For example, after the first camera 112 of the mobile electronic device 100 takes an image of the living room, the image is stored in the memory 116, for instance in the picture subspace named 'living room'.
  • As another example, when the mobile electronic device 100 cleans the bedroom, it takes photos of the bedroom and stores the current bedroom layout under the bedroom subspace name, thereby adding pictures to the corresponding picture library of the mobile electronic device 100 by self-learning.
  • the mobile electronic device 100 for example, the robot 100 further includes an encoder and an inertial measurement module (IMU) to assist the first camera 112 in acquiring the position and attitude of the mobile electronic device 100, such as a robot.
  • For example, when the robot is occluded and out of the line of sight of the first camera 112, the encoder and the IMU can still provide the position and attitude of the robot.
  • The encoder can be used as an odometer, computing the trajectory the robot has travelled by recording the rotation information of the robot's wheels.
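A standard way to realize this encoder-as-odometer idea for a differential-drive robot is sketched below; the wheel radius, ticks per revolution, and track width are assumed values:

```python
import math

# Assumed hardware constants for the sketch.
TICKS_PER_REV, WHEEL_R, TRACK = 2048, 0.035, 0.23  # ticks, metres, metres

def odometry_step(pose, left_ticks, right_ticks):
    """Advance (x, y, theta) from one sampling period of encoder ticks."""
    x, y, theta = pose
    dl = 2 * math.pi * WHEEL_R * left_ticks / TICKS_PER_REV
    dr = 2 * math.pi * WHEEL_R * right_ticks / TICKS_PER_REV
    d = (dl + dr) / 2.0              # distance travelled by the centre
    dtheta = (dr - dl) / TRACK       # heading change
    x += d * math.cos(theta + dtheta / 2.0)
    y += d * math.sin(theta + dtheta / 2.0)
    return x, y, theta + dtheta

pose = (0.0, 0.0, 0.0)
pose = odometry_step(pose, 512, 540)  # right wheel slightly faster
print(pose)                           # small forward arc, turning left
```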
  • the mobile electronic device 100 may further include a sensor 114 that transmits obstacle information around the mobile electronic device 100 to the motion module 110.
  • The motion module 110 is also configured to adjust the motion orientation of the mobile electronic device 100 to avoid obstacles. It can be understood that, because they are mounted at different heights, the first camera 112 on the mobile electronic device 100 and the sensor 114 on the mobile electronic device 100 differ in height, so the obstacle information captured by the first camera 112 may differ from the obstacles detected by the sensor, since occlusion may occur.
  • the first camera 112 can change the visual direction by means of rotation, pitch, etc. to obtain a wider visual range.
  • The sensor 114, moreover, can be mounted at a relatively low horizontal position, which may lie in a blind spot of the first camera 112; objects there do not appear in the viewing angle of the first camera 112, so these conventional sensors 114 must be relied upon to avoid obstacles.
  • Optionally, the camera 112 may acquire obstacle information and combine it with the information from the ultrasonic and laser sensors 114: the images obtained by the monocular camera 112 are used for object recognition, and the ultrasonic and laser sensors 114 perform ranging.
  • the sensor 114 includes an ultrasonic sensor and/or a laser sensor.
  • the first camera 112 and the sensor 114 can assist each other. For example, if there is shielding, the mobile electronic device 100 needs to rely on its own laser sensor, ultrasonic sensor 114, etc. to avoid obstacles in the shaded portion.
  • the laser sensor and the ultrasonic sensor mounted on the mobile electronic device 100 detect static and dynamic environments around the mobile electronic device 100, and assist in avoiding static and dynamic obstacles and adjusting the optimal path.
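The division of labour described in the last few paragraphs can be sketched as a small fusion check: the camera supplies object labels and rough ranges, the ultrasonic and laser sensors supply reliable short ranges (including in the camera's blind spot), and the motion module steers away when either source reports an obstacle too close. The threshold and data shapes are assumptions:

```python
# Assumed safety threshold for the sketch.
SAFE_RANGE = 0.30  # metres

def should_avoid(camera_detections, ultrasonic_m, laser_m) -> bool:
    """Return True when the motion module should steer away.

    camera_detections: list of (label, estimated_range_m) pairs from the
    camera; ultrasonic_m and laser_m: ranges from the low-mounted sensors.
    """
    nearest_ranged = min(ultrasonic_m, laser_m)
    if nearest_ranged < SAFE_RANGE:
        return True                  # blind-spot or occluded obstacle
    return any(rng < SAFE_RANGE for _, rng in camera_detections)

print(should_avoid([("chair", 0.8)], ultrasonic_m=0.22, laser_m=0.9))  # True
```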
  • FIG. 2 shows a flow diagram of a method 200 in a mobile electronic device in accordance with one embodiment of the present invention.
  • FIG. 2 illustrates a method 200 for processing a task of a task area in a mobile electronic device.
  • The mobile electronic device includes a first wireless signal transceiver, a processor, a positioning module, a path planning module, and a motion module. The method 200 includes: in block 210, acquiring, by the first wireless signal transceiver communicably coupled to a second mobile electronic device, an instruction from the second mobile electronic device, the instruction including the name of a destination task area to be processed by the mobile electronic device, the name of the destination task area being associated with a picture subspace in a picture library in the mobile electronic device; in block 220, determining, by the processor communicably coupled to the first wireless signal transceiver, the environment space corresponding to the name of the destination task area; in block 230, recording, by the positioning module communicatively coupled to the processor, the distance range between the current location of the mobile electronic device and the environment space; in block 240, generating, by the path planning module communicatively connected to the processor, a path planning scheme according to the name of the task area; and in block 250, performing the task, by the motion module communicably connected to the path planning module and the positioning module, according to the path planning scheme and the distance range recorded by the positioning module.
  • the method 200 further comprises determining a picture subspace corresponding to the name of the destination task area; determining the environmental space corresponding to the picture subspace according to the picture subspace.
  • the mobile electronic device further comprises a camera and a memory
  • The method 200 further comprises capturing a picture of the destination task area while performing the task; and, by the first wireless signal transceiver communicably connected to the camera, acquiring the picture of the task area captured by the camera, storing the picture in the memory, and associating it with the picture subspace.
  • the mobile electronic device further comprises an encoder and an inertial measurement module communicably coupled to the processor
  • The method 200 further comprises assisting the camera, by the encoder and the inertial measurement module, in acquiring the position and attitude of the mobile electronic device.
  • the mobile electronic device further comprises a charging post, wherein the charging post comprises the processor, the path planning module and the positioning module.
  • The method 200 further comprises sending, by the sensor, obstacle information around the mobile electronic device to the motion module, and adjusting, by the motion module, the motion orientation of the mobile electronic device to avoid the obstacle.
  • The sensor comprises an ultrasonic sensor and/or a laser sensor.

Landscapes

  • Engineering & Computer Science (AREA)
  • Aviation & Aerospace Engineering (AREA)
  • Radar, Positioning & Navigation (AREA)
  • Remote Sensing (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Automation & Control Theory (AREA)
  • Control Of Position, Course, Altitude, Or Attitude Of Moving Bodies (AREA)

Abstract

A mobile electronic device (100) for processing a task of a task area includes a first wireless signal transceiver (102), a processor (104), a positioning module (106), a path planning module (108), and a motion module (110). The first wireless signal transceiver (102) is communicably connected to a second mobile electronic device (140) and obtains an instruction from the second mobile electronic device (140); the instruction includes the name of the destination task area to be processed by the mobile electronic device (100). The processor (104) is communicably connected to the first wireless signal transceiver (102) and determines the environment space corresponding to the name of the destination task area. The positioning module (106) is communicably connected to the processor (104) and records the distance range between the current location of the mobile electronic device (100) and the environment space. The path planning module (108) is communicably connected to the processor (104) and generates a path planning scheme according to the name of the task area. The motion module (110) is communicably connected to the path planning module (108) and the positioning module (106) and performs the task according to the path planning scheme and the distance range recorded by the positioning module (106).

Description

Mobile electronic device and method for processing a task of a task area
Technical field
The present invention relates to the field of electronic devices. Specifically, the present invention relates to the field of intelligent robot systems.
Background
A traditional sweeping robot either localizes and moves autonomously on a scanned map or wanders randomly, changing direction on collision rebound, sweeping the floor as it goes. Because its mapping and localization technology is immature or imprecise, a traditional sweeping robot cannot fully judge complex floor conditions while working and easily loses track of its position and heading. In addition, some models, lacking localization capability, can change direction only by the physical principle of collision rebound, which can damage household items or the robot itself, even cause personal injury, and disturb the user.
Summary of the invention
Embodiments of the present invention propose taking pictures with a mobile phone terminal and, at the phone, defining and naming picture subspaces for the photos or for selected specific target regions; through the APP or the robot's microphone, the robot uses speech recognition to associate the voice instruction with the named region and completes the task in the region indicated by the instruction. Embodiments of the present invention send instructions to the robot by voice or App, so that the robot automatically reaches the named picture subspace and completes the task there, facilitating automatic cleaning by the robot.
According to an embodiment of one aspect of the present invention, a mobile electronic device for processing a task of a task area is provided, including a first wireless signal transceiver, a processor, a positioning module, a path planning module, and a motion module. The first wireless signal transceiver is communicably connected to a second mobile electronic device and configured to obtain an instruction from the second mobile electronic device; the instruction includes the name of a destination task area to be processed by the mobile electronic device, and the name of the task area is associated with a picture subspace of a picture library in the mobile electronic device. The processor is communicably connected to the first wireless signal transceiver and configured to determine the environment space corresponding to the name of the destination task area. The positioning module is communicably connected to the processor and configured to record the distance range between the current location of the mobile electronic device and the environment space. The path planning module is communicably connected to the processor and configured to generate a path planning scheme according to the name of the task area. The motion module is communicably connected to the path planning module and the positioning module and configured to perform the task according to the path planning scheme and the distance range recorded by the positioning module.
According to an embodiment of another aspect of the present invention, a method for processing a task of a task area in a mobile electronic device is provided. The mobile electronic device includes a first wireless signal transceiver, a processor, a positioning module, a path planning module, and a motion module. An instruction from a second mobile electronic device is obtained through the first wireless signal transceiver, which is communicably connected to the second mobile electronic device; the instruction includes the name of a destination task area to be processed by the mobile electronic device, and the name of the destination task area is associated with a picture subspace in a picture library in the mobile electronic device. The environment space corresponding to the name of the destination task area is determined by the processor, which is communicably connected to the first wireless signal transceiver. The distance range between the current location of the mobile electronic device and the environment space is recorded by the positioning module, which is communicably connected to the processor. A path planning scheme is generated, according to the name of the task area, by the path planning module, which is communicably connected to the processor. The task is performed by the motion module, which is communicably connected to the path planning module and the positioning module, according to the path planning scheme and the distance range recorded by the positioning module.
Brief description of the drawings
A more complete understanding of the present invention is obtained from the detailed description given with reference to the accompanying drawings, in which like reference numerals refer to like parts.
FIG. 1 shows a schematic diagram of the system in which a mobile electronic device is located, according to one embodiment of the present invention.
FIG. 2 shows a method flowchart according to one embodiment of the present invention.
Detailed description
FIG. 1 shows a schematic diagram of the system in which a mobile electronic device is located, according to one embodiment of the present invention.
Referring to FIG. 1, the mobile electronic device 100 includes, but is not limited to, a cleaning robot, an industrial automation robot, a service robot, a disaster relief robot, an underwater robot, a space robot, a drone, and the like. It will be understood that, to distinguish it from the second mobile electronic device 140 below, the mobile electronic device 100 may also be called the first mobile electronic device 100.
The second mobile electronic device 140 includes, but is not limited to, a mobile phone, a tablet computer, a notebook computer, a remote controller, and the like. The mobile electronic device optionally includes an operation interface. In an optional embodiment, the second mobile electronic device is a mobile phone, and the operation interface is a mobile phone APP.
The signal transmission modes between the mobile electronic device 100 and the charging post 160 include, but are not limited to, Bluetooth, WIFI, ZigBee, infrared, ultrasound, ultra-wide bandwidth (UWB), and so on; in this embodiment, WIFI is taken as the example signal transmission mode.
The task area represents the venue where the mobile electronic device 100 performs its task. For example, when the task of the mobile electronic device 100 is to clean the floor, the task area represents the area the cleaning robot needs to clean. As another example, when the task of the mobile electronic device 100 is disaster relief, the task area represents the area where the disaster relief robot needs to carry out the rescue. The task site represents the venue containing the entire task area.
As shown in FIG. 1, the mobile electronic device 100 for processing a task of a task area includes a first wireless signal transceiver 102, a processor 104, a positioning module 106, a path planning module 108, and a motion module 110. The first wireless signal transceiver 102 is communicably connected to the second mobile electronic device 140 and configured to obtain an instruction from the second mobile electronic device 140; the instruction includes the name of the destination task area to be processed by the mobile electronic device 100, and that name is associated with a picture subspace in the picture library in the mobile electronic device 100.
The second mobile electronic device 140 may be, for example, a mobile phone. The second mobile electronic device 140 includes a second camera 144, a second processor 146, and a second wireless signal transceiver 142. The user of the second mobile electronic device 140 uses the second camera 144 to take multiple pictures of the task area. The second processor 146 is communicably connected to the second camera 144 and, according to the user's instruction, defines at least one picture subspace for the pictures taken.
For example, after the mobile phone takes photos, the second wireless signal transceiver 142 of the second mobile electronic device 140 transmits the photos to the mobile electronic device 100, forming a picture library in the mobile electronic device 100. The picture library may be stored, for example, at the charging post 180 of the mobile electronic device 100, at a server of the mobile electronic device 100, or in the cloud of the mobile electronic device 100. In the picture library, the processor 104 or the second processor 146 defines names for different types of picture subspaces according to the user's instructions. For example, the processor 104 or the second processor 146 defines six subspaces, such as bedroom, living room, hallway, study, the entire home, and so on. Note that some pictures can belong to different picture subspaces at the same time. For example, the user can include a picture of the living room both in the picture subspace named 'living room' and in the picture subspace named 'entire home'.
In addition, the processor 104, more specifically the image processor 1040 in the processor 104, establishes a coordinate system for each image in the picture library and assigns a corresponding coordinate value to every point in the task area, thereby building an environment-space map. The coordinate system may, for example, take the charging post 180 as its coordinate origin.
The second wireless signal transceiver 142 is communicably connected to the second processor 146 and configured to send a name in the at least one picture subspace to the mobile electronic device 100 as the name of the destination task area to be processed.
Optionally, the processor 104 of the mobile electronic device 100 or the second processor 146 is further configured to subdivide the name of the at least one picture subspace further according to the user's instruction. For example, the user may also circle-select regions in the pictures taken and store them into the picture library or under a space name, thereby further subdividing a picture subspace. For example, according to the user's circling and text input, the processor 104 of the mobile electronic device 100 or the second processor 146 may further define picture names such as 'bedroom bedside', 'living room coffee table', or 'living room dining table', and store them in storage accessible to the processor 104, for example in the memory of the charging post 180, in a server, or in the cloud.
Further, for example, the user of the second mobile electronic device 140 wants the mobile electronic device 100 to clean the living room, and therefore sends the instruction 'living room' to the mobile electronic device 100. The user of the second mobile electronic device 140 may send the instruction by voice, or may type the text 'living room' in the APP to express the instruction.
The first wireless signal transceiver 102 obtains the instruction from the second mobile electronic device 140. The instruction contains the name of the destination task area to be processed by the mobile electronic device 100; for example, the task area the user wants cleaned is the living room, and the task-area name 'living room' is associated with a picture subspace in the picture library in the mobile electronic device 100, namely the picture subspace named 'living room'.
The processor 104 is communicably connected to the first wireless signal transceiver 102 and configured to determine the environment space corresponding to the name of the destination task area. The environment space, that is, the environment-space map, may be established by the mobile electronic device 100 when it is first used, for example in any of the following ways.
The several ways in which the mobile device 100 may establish the indoor environment map on first use are described below.
Mode 1: the mobile electronic device 100 (for example, a robot) includes a camera, and the user of the second mobile electronic device 140 wears a positioning receiver.
The mobile electronic device 100 further includes a first camera 112; the second mobile electronic device 140 further includes a second wireless signal transceiver 142; and the mobile electronic device 100 is configured to operate in map-building mode. The first wireless signal transceiver 102 and the second wireless signal transceiver 142 are each communicably connected to a plurality of reference wireless signal sources and configured to determine the locations of the mobile electronic device 100 and of the second mobile electronic device 140 from the signal strengths obtained from those sources. For example, the signals received from a reference wireless signal source may be converted into distance information by any method known in the art, including but not limited to Time of Flight (ToF), Angle of Arrival (AoA), Time Difference of Arrival (TDOA), and Received Signal Strength (RSS).
The motion module 110 is configured to follow the motion of the second mobile electronic device 140 according to the locations of the mobile electronic device 100 and of the second mobile electronic device 140. For example, the mobile electronic device 100 contains a monocular camera 112, and the user of the second mobile electronic device 140 wears a wireless-positioning-receiver wristband or holds a mobile phone equipped with a wireless-positioning-receiver peripheral. Using the monocular camera 112 reduces hardware cost and computational cost while achieving the same effect as a depth camera: image depth information is not required, because distance (depth) information is sensed by the ultrasonic sensor and the laser sensor. In this embodiment a monocular camera is taken as the example; those skilled in the art will understand that a depth camera or the like may also serve as the camera of the mobile electronic device 100. The mobile electronic device 100 follows the user by means of its own wireless positioning receiver. For example, on first use, the user of the second mobile electronic device 140 interacts with the mobile electronic device 100 through the mobile phone APP to complete the indoor map building. Using a group of wireless signal transmitters placed at fixed indoor positions as reference points, for example UWB, the phone APP of the second mobile electronic device 140 and the wireless signal module in the mobile electronic device 100 read the signal strength (RSS) of each signal source to determine the indoor locations of the user of the second mobile electronic device 140 and of the mobile electronic device 100. The motion module 110 of the mobile electronic device 100 then completes the user-following according to the real-time location information (phone and robot locations) sent by the smart charging post.
The first camera 112 is configured to capture a plurality of images while the motion module 110 is moving; the images contain feature information and the corresponding shooting-position information. For example, during following, the map building is completed by the robot's monocular camera: the mobile electronic device 100 photographs the entire indoor layout with the first camera 112, such as a monocular camera, and transmits the captured feature-rich images, their corresponding shooting-position information, and the following-path coordinates of the mobile electronic device 100 in real time to the memory 116 via a local wireless communication network (WIFI, Bluetooth, ZigBee, etc.). In FIG. 1 the memory 116 is shown as contained in the mobile electronic device 100; optionally, the memory 116 may instead be contained in the smart charging post 180, i.e., the cloud.
The processor 104 is communicably connected to the first camera 112 and configured to generate an image map by stitching the plurality of images and extracting their feature information and shooting-position information. For example, based on the height and the intrinsic and extrinsic parameters of the first camera 112 of the mobile electronic device 100, the image processor 1040 in the processor 104 performs map stitching over the large number of images captured by the first camera 112, selects and extracts features (for example with the SIFT or SURF algorithms), adds feature-point position information, generates indoor image-map information (containing a large number of image feature points), and stores the processed image-map information in the memory 116. The intrinsic parameters of a camera are parameters related to the camera's own characteristics, such as the lens focal length and pixel size; the extrinsic parameters are the camera's parameters in the world coordinate system (the actual indoor coordinate system of the charging post), such as the camera's position, rotation direction, and angle. Photos taken by the camera have their own camera coordinate system, so the intrinsic and extrinsic parameters are required to convert between the coordinate systems.
Mode 2: the mobile electronic device 100 (robot) includes a camera and can display a black-and-white camera-calibration checkerboard; the user of the second mobile electronic device 140 does not need to wear a positioning receiver.
Optionally, in another embodiment, the mobile electronic device 100 further includes a display screen 118 and is configured to operate in map-building mode; the second mobile electronic device 140 includes a second camera 144; and the first wireless signal transceiver 102 is communicably connected to a plurality of reference wireless signal sources and configured to determine the location of the mobile electronic device 100 from the signal strengths obtained from those sources.
The first camera 112 is configured to detect the location of the second mobile electronic device 140. Optionally, the mobile electronic device 100 further includes an ultrasonic sensor and a laser sensor, which can measure the distance between the mobile electronic device 100 and the second mobile electronic device 140.
The motion module 110 is configured to follow the motion of the second mobile electronic device 140 according to the locations of the mobile electronic device 100 and of the second mobile electronic device 140. For example, on first use, the user of the second mobile electronic device 140 interacts with the mobile electronic device 100 through the phone APP to complete the indoor map building. Using a group of wireless signal transmitters (UWB, etc.) placed at fixed indoor positions as reference points, the first wireless signal transceiver 102 in the mobile electronic device 100 reads the signal strength (RSS) of each signal source to determine the indoor location of the mobile electronic device 100. Target positioning and following of the user of the second mobile electronic device 140 is achieved through the first camera 112 of the mobile electronic device 100, for example a monocular camera, together with the ultrasonic sensor and the laser sensor 114. For example, the user of the second mobile electronic device 140 can set the following distance in the phone APP, so that the mobile electronic device 100 adjusts its distance and angle to the second mobile electronic device 140 according to that following distance and the angle to the second mobile electronic device 140 measured in real time. During following, the mobile electronic device 100 sends its following-path coordinates to the smart charging post 180 in real time.
In addition, the display screen 118 of the mobile electronic device 100 is configured to display, for example, a black-and-white checkerboard. The image processor 1040 in the processor 104 is communicably connected to the second camera 144 and configured to receive a plurality of images captured by the second camera 144 while the motion module 110 moves. For example, the image processor 1040 in the processor 104 may receive the images captured by the second camera 144 through the first wireless signal transceiver 102 and the second wireless signal transceiver 142. The images include images of the display screen 118 of the mobile electronic device 100 showing the black-and-white checkerboard. The image processor 1040 in the processor 104 is further configured to generate an image map by stitching the images and extracting their feature information and shooting-position information. In this mode the user of the second mobile electronic device 140 does not need to wear a positioning receiver, so the extrinsic parameters of the camera of the second mobile device 140, for example the phone camera, must be obtained by camera calibration against a calibration picture. The calibration picture is a checkerboard of alternating black and white rectangles, as shown in FIG. 5.
For example, the mobile electronic device 100, i.e., the robot, contains a first camera 112, for example a monocular camera, and a display screen 118 that can show the black-and-white camera-calibration checkerboard. The user need not wear a wireless-positioning-receiver wristband nor hold a phone equipped with a wireless-positioning-receiver peripheral; the mobile electronic device 100 follows the user visually, and the user of the second mobile electronic device 140 completes the map building by taking photos with the phone APP. For example, on reaching each room, the user of the second mobile electronic device 140 launches the room-mapping application through the phone APP, whereupon the liquid crystal display 118 of the mobile electronic device 100 shows the classic black-and-white checkerboard used for camera calibration. At the same time, the mobile electronic device 100 sends its current coordinates and orientation to the positioning module 106. The user of the second mobile electronic device 140 then photographs the room environment with the phone APP; each photo needs to include the black-and-white checkerboard on the liquid crystal display of the mobile electronic device 100. The user of the second mobile electronic device 140 takes several photos according to the room layout (each photo must capture the checkerboard on the robot's LCD) and, through the phone APP, transmits the images containing the room environment and the mobile electronic device 100, for example the robot 100, to the memory 116 via a local wireless communication network (WIFI, Bluetooth, ZigBee, etc.). Based on the position and orientation of the mobile electronic device 100, for example the robot, at that time, and on the height and the intrinsic and extrinsic parameters of the camera 112, the image processor 1040 in the processor 104 performs map stitching over the many images taken by the user, selects and extracts features, adds feature-point position information, generates indoor image-feature-point map information, and stores the processed image-map information in the memory 116.
Mode 3: the mobile electronic device 100 (robot) does not include a camera, and the user of the second mobile electronic device 140 wears a positioning receiver.
Optionally, in another embodiment, the second mobile electronic device 140 further includes a second wireless signal transceiver 142 and a second camera 144. The second wireless signal transceiver 142 is communicably connected to a plurality of reference wireless signal sources and configured to determine the location of the second mobile electronic device 140 from the signal strengths obtained from those sources. The second camera 144 is configured to capture a plurality of images of the task site. The image processor 1040 in the processor 104 is communicably connected to the second camera 144 and configured to generate an image map by stitching the images and extracting their feature information and shooting-position information.
For example, in this embodiment the mobile electronic device 100, for example a robot, does not contain a monocular camera and does not follow the user of the second mobile electronic device 140. The user of the second mobile electronic device 140 wears a wireless-positioning-receiver wristband, or holds a phone equipped with a wireless-positioning-receiver peripheral, and completes the indoor map building with the phone APP. For example, on first use, the user of the second mobile electronic device 140 establishes the indoor map through the phone APP together with the worn wireless-positioning-receiver wristband or the phone's wireless-positioning-receiver peripheral. Using fixed-position reference wireless signal sources (UWB, etc.) placed indoors as reference points, the wireless signal transceiver 142 in the second mobile electronic device 140 reads the Received Signal Strength (RSS) of each reference wireless signal source to determine the indoor location of the user of the second mobile electronic device 140. On reaching each room, the user of the second mobile electronic device 140 launches the room-mapping program through the phone APP. The user of the second mobile electronic device 140 photographs the room environment with the phone APP, for example taking several photos according to the room layout. The phone APP of the second mobile electronic device 140 records the pose of the second camera 144 at each shot, together with the height of the second mobile electronic device 140, for example the phone, above the ground and its indoor position as recorded by the second wireless signal transceiver 142, and transmits them to the memory 116 via a local wireless communication network (WIFI, Bluetooth, ZigBee, etc.). Based on the intrinsic and extrinsic parameters of the second camera 144 and on the pose, height, and position information at shooting time, the image processor in the processor 104 performs map stitching over the many captured images, selects and extracts features, adds feature-point position information, generates indoor image-feature-point map information, and stores the processed image-map information in the memory 116.
The image processor 1040 in the processor 104 is communicably connected to the first wireless signal transceiver 102 and configured to extract the feature information of a photo containing a selected region and, by comparing the extracted feature information with the feature information of the stored image map containing position information, determine the actual coordinate range corresponding to the selected region in the photo. The position information refers to the positioning information of the image feature points in the image map obtained during map building, that is, their actual coordinate positions. It includes, for example, the position of the charging post 180 and/or the position of the mobile electronic device 100 itself. For example, the image processor 1040 in the processor 104 may take the position of the charging post 180 as the coordinate origin.
Mode 4: the user may arrange at least one camera indoors, for example on the ceiling, and collect through the at least one camera a number of pictures that include the mobile electronic device 100. The at least one camera transmits the picture information to the image processor 1040 of the mobile electronic device 100 via the first wireless signal transceiver 102 of the mobile electronic device 100. The image processor 1040 then identifies the feature information of the mobile electronic device 100 in the images of the task area, establishes a coordinate system for the images, and assigns a corresponding coordinate value to every point in the task area, thereby building the environment-space map.
Mode 5: while the mobile electronic device 100 is moving, it uses the first camera 112, for example a depth camera, to collect planar graphic information and the distance information of the objects in the graphics, and sends the pieces of three-dimensional information comprising the planar graphic information and the distance information to the image processor 1040. The image processor 1040 is communicably connected to the first wireless signal transceiver 102 and configured to process the received three-dimensional information; a map module communicably connected to the image processor 1040 then draws a three-dimensional image of the task area from the processed three-dimensional information and thereby obtains the environment-space map of the task area.
The processor 104 in the mobile electronic device 100 is communicably connected to the first wireless signal transceiver 102 and configured to determine the environment space corresponding to the name of the destination task area. For example, the mobile electronic device 100 may first determine the picture subspace corresponding to the name of the destination task area, and then determine, from that picture subspace, the environment space corresponding to it.
For example, the memory 116 of the mobile electronic device 100 stores the environment map established during the first-use indoor mapping, such as the indoor image-map information, including the image feature points and their position information. In addition, the memory 116 of the mobile electronic device 100 holds the correspondence between the name of a picture subspace and at least one representative picture of that subspace. For example, the memory 116 may store a representative picture of the living room, and the picture is named 'living room'. The following takes a representative picture of the living room as the example; those skilled in the art will understand that the embodiment also applies to other types of rooms.
The processor 104 first determines, by techniques such as speech recognition, the picture subspace, for example a representative picture, corresponding to the received instruction 'living room'. For example, the processor 104 searches the names in the picture library saved in the mobile electronic device 100 and finds the representative picture named 'living room'.
The processor 104 includes an image processor 1040. The image processor 1040 then extracts, for example, the feature information and position information in the representative picture of the living room and performs a fast comparison analysis against the indoor environment map (including position information) in the memory 116 using an image-feature-point matching algorithm (such as SIFT or SURF). The image feature points may be identified with the Scale-Invariant Feature Transform (SIFT) algorithm or the Speeded-Up Robust Features (SURF) algorithm. With the SIFT algorithm, a reference image needs to be stored in the memory 116. The image processor 1040 first identifies the key points of the objects in the reference image stored in the memory 116 and extracts their SIFT features; it then compares the SIFT features of those key points with the SIFT features of the newly acquired image and recognizes objects in the new image by matching features with the K-Nearest Neighbor (KNN) algorithm. The SURF algorithm is based on approximate 2D Haar wavelet responses, uses integral images for image convolution, constructs its detector with a Hessian-matrix-based measure, and uses a distribution-based descriptor.
Alternatively or additionally, the coordinate range of the actual indoor region corresponding to the representative picture of the living room may be determined by coordinate-mapping conversion, which yields the actual coordinate range of the task area. The feature points in the representative living-room picture stored in the mobile electronic device 100 are matched against the image feature points in the image map, which fixes the actual coordinate positions of the feature points in the picture. After matching, the coordinate-system transformation between the camera coordinate system of the representative living-room picture and the actual world coordinate system of the charging post can also be calculated. For example, the representative living-room picture in the picture library stored in the memory of the mobile electronic device 100 contains feature points such as the sofa, the coffee table, and the TV cabinet, together with the coordinate range of each piece of furniture; the environment map stored in the memory of the mobile electronic device 100 likewise contains the sofa, coffee table, and TV cabinet of the living room. The image processor 1040 compares the living-room picture in the picture library with the environment map, extracts the feature information, compares the respective coordinate values, and performs the coordinate transformation, thereby obtaining the actual world coordinate range of the living room that needs to be cleaned.
The positioning module 106 is communicably connected to the processor 104 and configured to record the distance range between the current location of the mobile electronic device 100 and the environment space, for example the living room. For example, the positioning module 106 sets the location of the charging post 180 as the coordinate origin, and every point in the image corresponds to a coordinate value (X, Y). The positioning module 106 and the encoder let the mobile electronic device 100 know its current location. The positioning module 106 is the module that calculates the indoor location of the mobile electronic device 100; the mobile electronic device 100 must know its indoor location at all times while working, and this is achieved through the positioning module 106.
The path planning module 108 is communicably connected to the processor 104 and configured to generate a path planning scheme according to the name of the task area. Optionally, the path planning module 108 further plans a path over the selected region with a grid-based spanning-tree path planning algorithm. For example, the path planning module 108 uses Grid-based Spanning Tree Path Planning to plan the cleaning path for the selected target cleaning region: the corresponding coordinate region is gridded, tree nodes are created for the grid cells and a spanning tree is generated, and the Hamiltonian circuit that goes around the spanning tree is then used as the optimized cleaning path for the region.
In addition, initially the mobile electronic device 100 is located at the smart charging post 180. As to how the mobile electronic device 100 travels from the smart charging post 180 to the coordinate range of the selected region, the path planning module 108 reads the path along which the mobile electronic device 100 followed the user to that region on first use (if the mobile electronic device 100 used the following mode), or adopts the walking path of the user of the second mobile electronic device 140 during map building as the path to the region (if the mobile electronic device 100 did not follow the user on first use), and combines that path with the optimized cleaning path of the selected region into the cleaning-task path. The combination may simply join the two path segments in sequence: the first segment reaches the target cleaning region, and the second segment optimally covers the circled cleaning region to complete the cleaning task.
The task is then sent to the mobile electronic device 100 for automatic execution. For example, the motion module 110 is communicably connected to the path planning module 108 and configured to move according to the path planning scheme.
Optionally, the mobile electronic device 100 further includes a first camera 112 and a memory 116, used to photograph the destination task area while the task is performed. The first wireless signal transceiver 102 is further communicably connected to the first camera 112 and used to obtain the pictures of the task area taken by the first camera 112, store them in the memory 116, and associate them with the picture subspace. For example, after the first camera 112 of the mobile electronic device 100 takes an image of the living room, the image is stored in the memory 116, for instance in the picture subspace named 'living room'. As another example, when the mobile electronic device 100 cleans the bedroom, it photographs the bedroom and stores the current bedroom layout under the bedroom subspace name, thereby adding pictures to the corresponding picture library of the mobile electronic device 100 by self-learning.
Alternatively or additionally, the mobile electronic device 100, for example the robot 100, further includes an encoder and an inertial measurement unit (IMU) to assist the first camera 112 in obtaining the position and attitude of the mobile electronic device 100, for example of the robot. For instance, when the robot is occluded and out of the line of sight of the first camera 112, the encoder and the IMU can still provide the robot's position and attitude. For example, the encoder can serve as an odometer, computing the trajectory the robot has travelled by recording the rotation information of the robot's wheels.
Alternatively or additionally, the mobile electronic device 100 may further contain a sensor 114, which sends information about obstacles around the mobile electronic device 100 to the motion module 110. The motion module 110 is further configured to adjust the motion orientation of the mobile electronic device 100 to avoid obstacles. It will be appreciated that, because they are mounted at different heights, the first camera 112 on the mobile electronic device 100 and the sensor 114 on the mobile electronic device 100 differ in height, so the obstacle information captured by the first camera 112 may differ from the obstacles detected by the sensor, since occlusion may occur. The first camera 112 can change its viewing direction by rotation, pitching, and so on, to obtain a wider visual range. The sensor 114, moreover, may be mounted at a rather low horizontal position, which may lie in a blind spot of the first camera 112; objects there do not appear in the first camera 112's field of view, so these conventional sensors 114 must be relied on for obstacle avoidance. Optionally, the camera 112 may acquire obstacle information and combine it with the information of the ultrasonic and laser sensors 114: the images obtained by the monocular camera 112 are used for object recognition, and the ultrasonic and laser sensors 114 perform ranging.
Alternatively, the sensor 114 includes an ultrasonic sensor and/or a laser sensor. The first camera 112 and the sensor 114 can assist each other. For example, where there is occlusion, the mobile electronic device 100 needs to rely on its own laser sensor, ultrasonic sensor 114, and the like to avoid obstacles in the occluded part.
For example, the laser sensor and the ultrasonic sensor carried on the mobile electronic device 100 detect the static and dynamic environment around the mobile electronic device 100, helping it avoid static and dynamic obstacles and adjust to the optimal path.
FIG. 2 shows a flowchart of a method 200 in a mobile electronic device according to one embodiment of the present invention: a method 200 for processing a task of a task area in a mobile electronic device.
FIG. 2 shows the method 200 for processing a task of a task area in a mobile electronic device. The mobile electronic device includes a first wireless signal transceiver, a processor, a positioning module, a path planning module, and a motion module. The method 200 includes: in block 210, obtaining, through the first wireless signal transceiver communicably connected to a second mobile electronic device, an instruction from the second mobile electronic device, the instruction including the name of a destination task area to be processed by the mobile electronic device, the name of the destination task area being associated with a picture subspace in the picture library in the mobile electronic device; in block 220, determining, by the processor communicably connected to the first wireless signal transceiver, the environment space corresponding to the name of the destination task area; in block 230, recording, by the positioning module communicably connected to the processor, the distance range between the current location of the mobile electronic device and the environment space; in block 240, generating, by the path planning module communicably connected to the processor, a path planning scheme according to the name of the task area; and in block 250, performing the task, by the motion module communicably connected to the path planning module and the positioning module, according to the path planning scheme and the distance range recorded by the positioning module.
Alternatively or additionally, the method 200 further includes determining the picture subspace corresponding to the name of the destination task area, and determining, from the picture subspace, the environment space corresponding to the picture subspace.
Alternatively or additionally, where the mobile electronic device further includes a camera and a memory, the method 200 further includes photographing the destination task area while performing the task; and, through the first wireless signal transceiver communicably connected to the camera, obtaining the pictures of the task area taken by the camera, storing the pictures in the memory, and associating them with the picture subspace.
Alternatively or additionally, where the mobile electronic device further includes an encoder and an inertial measurement unit communicably connected to the processor, the method 200 further includes assisting the camera, through the encoder and the inertial measurement unit, in obtaining the position and attitude of the mobile electronic device.
Alternatively or additionally, the mobile electronic device further includes a charging post, where the charging post includes the processor, the path planning module, and the positioning module.
Alternatively or additionally, where the mobile electronic device further contains a sensor, the method 200 further includes sending, through the sensor, information about obstacles around the mobile electronic device to the motion module, and adjusting, through the motion module, the motion orientation of the mobile electronic device to avoid the obstacles.
Alternatively or additionally, the sensor includes an ultrasonic sensor and/or a laser sensor.
In the foregoing description, the present invention has been described with reference to specific exemplary embodiments; it will be understood, however, that various modifications and changes may be made without departing from the scope of the invention set forth herein. The specification and drawings are to be regarded in an illustrative rather than a restrictive sense, and all such modifications are intended to be included within the scope of the invention. Accordingly, the scope of the invention should be determined by the generic embodiments described herein and their legal equivalents, and not merely by the specific embodiments above. For example, the steps of any method or process embodiment may be executed in any order and are not limited to the explicit order presented in the specific embodiments. In addition, the components and/or elements of any apparatus embodiment may be assembled in various arrangements or otherwise operatively configured to produce substantially the same result as the present invention, and are therefore not limited to the specific configurations of the specific embodiments.
Benefits, other advantages, and solutions to problems have been described above with regard to specific embodiments; however, no benefit, advantage, or solution to a problem, nor any element that may cause any particular benefit, advantage, or solution to occur or become more pronounced, is to be construed as a critical, required, or essential feature or component.
As used herein, the terms 'comprising', 'including', or any variation thereof are intended to cover a non-exclusive inclusion, such that a process, method, article, composition, or apparatus that comprises a list of elements includes not only those elements recited but may also include other principal processes, methods, articles, compositions, or apparatus not expressly listed or inherent thereto. Other combinations and/or modifications of the above structures, arrangements, applications, proportions, elements, materials, or components used in the practice of the present invention, in addition to those not specifically recited, may be varied or otherwise particularly adapted to specific environments, manufacturing specifications, design parameters, or other operating requirements without departing from their general principles.
Although the present invention has been described herein with reference to certain preferred embodiments, those skilled in the art will readily appreciate that other applications may be substituted for those set forth herein without departing from the spirit and scope of the invention. Accordingly, the invention is limited only by the claims that follow.

Claims (14)

  1. A mobile electronic device for processing a task of a task area, comprising a first wireless signal transceiver, a processor, a positioning module, a path planning module, and a motion module, wherein:
    the first wireless signal transceiver is communicably connected to a second mobile electronic device and configured to obtain an instruction from the second mobile electronic device, the instruction comprising the name of a destination task area to be processed by the mobile electronic device, the name of the destination task area being associated with a picture subspace in a picture library in the mobile electronic device;
    the processor is communicably connected to the first wireless signal transceiver and configured to determine an environment space corresponding to the name of the destination task area;
    the positioning module is communicably connected to the processor and configured to record the distance range between the current location of the mobile electronic device and the environment space;
    the path planning module is communicably connected to the processor and configured to generate a path planning scheme according to the name of the task area;
    the motion module is communicably connected to the path planning module and the positioning module and configured to perform the task according to the path planning scheme and the distance range recorded by the positioning module.
  2. The mobile electronic device according to claim 1, wherein the processor is further configured to:
    determine a picture subspace corresponding to the name of the destination task area;
    determine, according to the picture subspace, the environment space corresponding to the picture subspace.
  3. The mobile electronic device according to claim 1, further comprising a camera and a memory for photographing the destination task area while the task is performed;
    the first wireless signal transceiver being further communicably connected to the camera for obtaining the picture of the task area taken by the camera, storing the picture in the memory, and associating it with the picture subspace.
  4. The mobile electronic device according to claim 3, further comprising an encoder and an inertial measurement module communicably connected to the processor, configured to assist the camera in obtaining the position and attitude of the mobile electronic device.
  5. The mobile electronic device according to any one of claims 1-4, further comprising a charging post, wherein the charging post comprises the processor, the path planning module, and the positioning module.
  6. The mobile electronic device according to any one of claims 1-4, further comprising a sensor that sends obstacle information around the mobile electronic device to the motion module, the motion module being further configured to adjust the motion orientation of the mobile electronic device to avoid the obstacle.
  7. The mobile electronic device according to claim 6, wherein the sensor comprises an ultrasonic sensor and/or a laser sensor.
  8. A method for processing a task of a task area, the method being performed in a mobile electronic device comprising a first wireless signal transceiver, a processor, a positioning module, a path planning module, and a motion module, wherein:
    an instruction from a second mobile electronic device is obtained through the first wireless signal transceiver communicably connected to the second mobile electronic device, the instruction comprising the name of a destination task area to be processed by the mobile electronic device, the name of the destination task area being associated with a picture subspace in a picture library in the mobile electronic device;
    an environment space corresponding to the name of the destination task area is determined by the processor communicably connected to the first wireless signal transceiver;
    the distance range between the current location of the mobile electronic device and the environment space is recorded by the positioning module communicably connected to the processor;
    a path planning scheme is generated, according to the name of the task area, by the path planning module communicably connected to the processor;
    the task is performed, according to the path planning scheme and the distance range recorded by the positioning module, by the motion module communicably connected to the path planning module and the positioning module.
  9. The method according to claim 8, further comprising:
    determining a picture subspace corresponding to the name of the destination task area;
    determining, according to the picture subspace, the environment space corresponding to the picture subspace.
  10. The method according to claim 8, wherein the mobile electronic device further comprises a camera and a memory, the method further comprising:
    photographing the destination task area while performing the task;
    through the first wireless signal transceiver communicably connected to the camera,
    obtaining the picture of the task area taken by the camera,
    storing the picture in the memory, and
    associating it with the picture subspace.
  11. The method according to claim 10, wherein the mobile electronic device further comprises an encoder and an inertial measurement module communicably connected to the processor, the method further comprising:
    assisting the camera, through the encoder and the inertial measurement module, in obtaining the position and attitude of the mobile electronic device.
  12. The method according to any one of claims 8-11, wherein the mobile electronic device further comprises a charging post, wherein the charging post comprises the processor, the path planning module, and the positioning module.
  13. The method according to any one of claims 8-11, wherein the mobile electronic device further comprises a sensor, the method further comprising:
    sending, through the sensor, obstacle information around the mobile electronic device to the motion module,
    adjusting, through the motion module, the motion orientation of the mobile electronic device to avoid the obstacle.
  14. The method according to claim 13, wherein the sensor comprises an ultrasonic sensor and/or a laser sensor.
PCT/CN2018/090585 2017-08-24 2018-06-11 Mobile electronic device and method for processing a task of a task area WO2019037517A1 (zh)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN201710735143.0A CN108459598B (zh) 2017-08-24 2017-08-24 Mobile electronic device and method for processing a task of a task area
CN201710735143.0 2017-08-24

Publications (1)

Publication Number Publication Date
WO2019037517A1 true WO2019037517A1 (zh) 2019-02-28

Family

ID=63220307

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2018/090585 WO2019037517A1 (zh) 2017-08-24 2018-06-11 Mobile electronic device and method for processing a task of a task area

Country Status (2)

Country Link
CN (1) CN108459598B (zh)
WO (1) WO2019037517A1 (zh)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
KR102302199B1 (ko) * 2018-11-21 2021-09-14 삼성전자주식회사 이동 장치 및 이동 장치의 객체 감지 방법

Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2007317112A (ja) * 2006-05-29 2007-12-06 Funai Electric Co Ltd 自走式装置及び自走式掃除機
CN105259898A (zh) * 2015-10-13 2016-01-20 江苏拓新天机器人科技有限公司 一种智能手机控制的扫地机器人
CN106444502A (zh) * 2016-09-28 2017-02-22 捷开通讯(深圳)有限公司 一种智能家具系统及其控制方法
CN106647766A (zh) * 2017-01-13 2017-05-10 广东工业大学 一种基于复杂环境的uwb‑视觉交互的机器人巡航方法及系统
CN106709937A (zh) * 2016-12-21 2017-05-24 四川以太原力科技有限公司 一种扫地机器人的控制方法
CN106725119A (zh) * 2016-12-02 2017-05-31 西安丰登农业科技有限公司 一种基于三维模型定位的扫地机器人导航系统
CN207067803U (zh) * 2017-08-24 2018-03-02 炬大科技有限公司 一种用于处理任务区域的任务的移动电子设备

Family Cites Families (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102866706B (zh) * 2012-09-13 2015-03-25 深圳市银星智能科技股份有限公司 一种采用智能手机导航的清扫机器人及其导航清扫方法
CN103884330B (zh) * 2012-12-21 2016-08-10 联想(北京)有限公司 信息处理方法、可移动电子设备、引导设备和服务器
CN103439973B (zh) * 2013-08-12 2016-06-29 桂林电子科技大学 自建地图家用清洁机器人及清洁方法
CN105629970A (zh) * 2014-11-03 2016-06-01 贵州亿丰升华科技机器人有限公司 一种基于超声波的机器人定位避障方法
CN105115498B (zh) * 2015-09-30 2019-01-01 长沙开山斧智能科技有限公司 一种机器人定位导航系统及其导航方法
CN105352508A (zh) * 2015-10-22 2016-02-24 深圳创想未来机器人有限公司 机器人定位导航方法及装置
CN106292697B (zh) * 2016-07-26 2019-06-14 北京工业大学 一种移动设备的室内路径规划与导航方法
CN106371446A (zh) * 2016-12-03 2017-02-01 河池学院 一种室内机器人导航定位系统
CN106774315B (zh) * 2016-12-12 2020-12-01 深圳市智美达科技股份有限公司 机器人自主导航方法和装置

Patent Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2007317112A (ja) * 2006-05-29 2007-12-06 Funai Electric Co Ltd 自走式装置及び自走式掃除機
CN105259898A (zh) * 2015-10-13 2016-01-20 江苏拓新天机器人科技有限公司 一种智能手机控制的扫地机器人
CN106444502A (zh) * 2016-09-28 2017-02-22 捷开通讯(深圳)有限公司 一种智能家具系统及其控制方法
CN106725119A (zh) * 2016-12-02 2017-05-31 西安丰登农业科技有限公司 一种基于三维模型定位的扫地机器人导航系统
CN106709937A (zh) * 2016-12-21 2017-05-24 四川以太原力科技有限公司 一种扫地机器人的控制方法
CN106647766A (zh) * 2017-01-13 2017-05-10 广东工业大学 一种基于复杂环境的uwb‑视觉交互的机器人巡航方法及系统
CN207067803U (zh) * 2017-08-24 2018-03-02 炬大科技有限公司 一种用于处理任务区域的任务的移动电子设备

Also Published As

Publication number Publication date
CN108459598A (zh) 2018-08-28
CN108459598B (zh) 2024-02-20

Similar Documents

Publication Publication Date Title
WO2019019819A1 (zh) 一种用于处理任务区域的任务的移动电子设备以及方法
JP7236565B2 (ja) 位置姿勢決定方法、装置、電子機器、記憶媒体及びコンピュータプログラム
US10060730B2 (en) System and method for measuring by laser sweeps
US9646384B2 (en) 3D feature descriptors with camera pose information
WO2019001237A1 (zh) 一种移动电子设备以及该移动电子设备中的方法
CN207115193U (zh) 一种用于处理任务区域的任务的移动电子设备
US10896327B1 (en) Device with a camera for locating hidden object
CN105096382A (zh) 一种在视频监控图像中关联真实物体信息的方法及装置
CN207067803U (zh) 一种用于处理任务区域的任务的移动电子设备
US10979695B2 (en) Generating 3D depth map using parallax
WO2018228258A1 (zh) 一种移动电子设备以及该移动电子设备中的方法
JP7379785B2 (ja) 3dツアーの比較表示システム及び方法
US10979687B2 (en) Using super imposition to render a 3D depth map
WO2019037517A1 (zh) 一种用于处理任务区域的任务的移动电子设备以及方法
JP2007011776A (ja) 監視システム及び設定装置
CN115830280A (zh) 数据处理方法、装置、电子设备及存储介质
US20160073087A1 (en) Augmenting a digital image with distance data derived based on acoustic range information
US20230177781A1 (en) Information processing apparatus, information processing method, and information processing program
WO2017057426A1 (ja) 投影装置、コンテンツ決定装置、投影方法、および、プログラム
KR102196683B1 (ko) 3d 투어 촬영 장치 및 방법
CN116685872A (zh) 用于移动设备的定位系统和方法
JP2021086268A (ja) 移動体、情報処理装置、及び撮像システム
CN117257170A (zh) 清洁方法、清洁展示方法、清洁设备和存储介质
CN111800732A (zh) 虚实信息整合空间定位系统

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 18847996

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 18847996

Country of ref document: EP

Kind code of ref document: A1