CN114531567A - Information transmission method, device and storage medium - Google Patents

Information transmission method, device and storage medium

Info

Publication number
CN114531567A
Authority
CN
China
Prior art keywords
target
environment
image
information
reference object
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202011233103.4A
Other languages
Chinese (zh)
Inventor
张亮
于坤
刘达
付雷
顾陈洁
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Ecovacs Robotics Suzhou Co Ltd
Original Assignee
Ecovacs Robotics Suzhou Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Ecovacs Robotics Suzhou Co Ltd filed Critical Ecovacs Robotics Suzhou Co Ltd
Priority to CN202011233103.4A
Publication of CN114531567A

Classifications

    • H04N 7/18: Closed-circuit television [CCTV] systems, i.e. systems in which the video signal is not broadcast
    • A47L 11/24: Floor-sweeping machines, motor-driven
    • A47L 11/40: Parts or details of machines not provided for in groups A47L 11/02 - A47L 11/38, or not restricted to one of these groups, e.g. handles, arrangements of switches, skirts, buffers, levers
    • A47L 11/4002: Installations of electric equipment
    • A47L 11/4011: Regulation of the cleaning machine by electric means; Control systems and remote control systems therefor
    • G06T 5/50: Image enhancement or restoration by the use of more than one image, e.g. averaging, subtraction
    • G06T 7/0002: Inspection of images, e.g. flaw detection
    • G06T 7/11: Region-based segmentation
    • G06T 7/70: Determining position or orientation of objects or cameras
    • A47L 2201/04: Automatic control of the travelling movement; Automatic obstacle detection
    • A47L 2201/06: Control of the cleaning action for autonomous devices; Automatic detection of the surface condition before, during or after cleaning

Abstract

The embodiments of the application provide an information transmission method, an information transmission device, and a storage medium. In the embodiments, the autonomous mobile device can acquire environment images during operation and transmit them to the terminal device. Before an environment image is transmitted, the device determines whether the image contains a target partition reference object; if it does, the device identifies the environment information of the target area that is divided off by the target partition reference object and contained in the image, filters the environment image according to this environment information, and provides the filtered target image to the terminal device. By transmitting target images from which some environment information has been filtered out, the data transmission amount can be reduced and/or user privacy can be protected while still meeting remote monitoring or intelligent management and control requirements, thereby improving service quality.

Description

Information transmission method, device and storage medium
Technical Field
The present application relates to the field of artificial intelligence technologies, and in particular, to an information transmission method, an information transmission device, and a storage medium.
Background
Mobile robots are gradually entering people's daily work and life and bring great convenience. Many mobile robots are equipped with cameras and can perform localization and obstacle avoidance based on the environment images captured by the cameras. With the help of the camera, many robots also support remote monitoring or intelligent management and control functions: the robot provides the images captured by its camera to a server, the server provides the images to a terminal device, and through the terminal device the user remotely monitors or intelligently manages the environment in which the robot is located. However, the way robots currently provide remote monitoring or intelligent management and control services to users still has shortcomings, and the service quality is not high.
Disclosure of Invention
Aspects of the present application provide an information transmission method, an information transmission device, and a storage medium, which are intended to improve the quality of the remote monitoring or intelligent management and control services that a robot provides to a user.
The embodiment of the application provides an information transmission method, which is suitable for autonomous mobile equipment and comprises the following steps: in the operation process, acquiring an environment image by using a visual sensor, and identifying environment information in a target area contained in the environment image under the condition that a target partition reference object is contained in the environment image; according to the environment information in the target area, performing information filtering on the environment image to obtain a first target image; and transmitting the first target image to a terminal device bound with the autonomous mobile device, wherein the target area is a functional area divided by the target partition reference object in the working environment.
An embodiment of the present application further provides an autonomous mobile device, including: the device comprises a device body, wherein a memory and a processor are arranged on the device body; a memory for storing a computer program; a processor coupled with the memory for executing the computer program for: in the operation process, acquiring an environment image by using a visual sensor, and identifying environment information in a target area contained in the environment image under the condition that a target partition reference object is contained in the environment image; according to the environmental information in the target area, performing information filtering on the environmental image to obtain a first target image; and transmitting the first target image to a terminal device bound with the autonomous mobile device, wherein the target area is a functional area divided by the target partition reference object in the working environment.
Embodiments of the present application further provide a computer-readable storage medium storing a computer program, which, when executed by a processor, causes the processor to implement the steps in the information transmission method provided in the embodiments of the present application.
In the embodiments of the application, the autonomous mobile device can acquire environment images during operation and transmit them to the terminal device. Before an environment image is transmitted, the device determines whether the image contains a target partition reference object; if it does, the environment information of the target area contained in the image is identified, the environment image is filtered accordingly, and the filtered target image is provided to the terminal device. By transmitting target images from which some environment information has been filtered out, the data transmission amount can be reduced and/or user privacy can be protected while still meeting remote monitoring or intelligent management and control requirements, thereby improving service quality.
Drawings
The accompanying drawings, which are included to provide a further understanding of the application and are incorporated in and constitute a part of this application, illustrate embodiment(s) of the application and together with the description serve to explain the application and not to limit the application. In the drawings:
fig. 1 is a schematic flowchart of an information transmission method according to an exemplary embodiment of the present application;
FIG. 2 is a schematic diagram of a 3D environment map provided by an exemplary embodiment of the present application;
fig. 3a is a schematic diagram of a local environment map from the perspective of an autonomous mobile device according to an exemplary embodiment of the present application;
fig. 3b is a schematic diagram of target area environment information from the perspective of an autonomous mobile device according to an exemplary embodiment of the present application;
fig. 3c is a schematic diagram of filtering target area environment information from the perspective of an autonomous mobile device according to an exemplary embodiment of the present application;
fig. 4 is a schematic structural diagram of an autonomous mobile device according to an exemplary embodiment of the present application;
fig. 5 is a schematic structural diagram of a sweeping robot according to an exemplary embodiment of the present application.
Detailed Description
In order to make the objects, technical solutions and advantages of the present application more apparent, the technical solutions of the present application will be described in detail and completely with reference to the following specific embodiments of the present application and the accompanying drawings. It should be apparent that the described embodiments are only some of the embodiments of the present application, and not all of the embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present application.
At present, a robot can achieve localization and obstacle avoidance based on environment images captured by its camera, and with the help of the camera many robots support remote monitoring or intelligent management and control functions; however, shortcomings remain and the service quality is not high. For example, the amount of data transmitted may be large and resources may be wasted. Alternatively, remote monitoring or intelligent management and control may involve user privacy; optionally, privacy could be protected by turning the camera off, but the camera also carries other functions, such as localization or mapping, and turning it off may affect the normal operation of the robot. In view of this technical problem, in the embodiments of the present application the autonomous mobile device collects environment images during operation and transmits them to the terminal device bound with it; before an environment image is transmitted to the terminal device, the device determines whether the image contains the target partition reference object, and, if it does, identifies the environment information of the target area divided off by the target partition reference object that is contained in the image, filters the environment image according to this environment information, and provides the filtered target image to the terminal device. In this way, environment images with some environment information filtered out are transmitted, which can reduce the data transmission amount and/or protect user privacy while still meeting remote monitoring or intelligent management and control requirements, thereby improving service quality.
The technical solutions provided by the embodiments of the present application are described in detail below with reference to the accompanying drawings.
Fig. 1 is a flowchart illustrating an information transmission method according to an exemplary embodiment of the present application. As shown in fig. 1, the method includes:
s101, in the operation process, acquiring an environment image by using a visual sensor;
and S102, when the target partition reference object is contained in the environment image, identifying the environment information in a target area contained in the environment image, wherein the target area is a functional area divided by the target partition reference object in the working environment.
S103, filtering the environment image according to the environment information in the target area contained in the environment image to obtain a first target image;
and S104, transmitting the first target image to the terminal equipment bound with the autonomous mobile equipment.
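To make the flow of steps S101-S104 concrete, the following is a minimal Python sketch of the per-frame decision; it is illustrative only, and the helper callables (detect_reference, locate_target_info, filter_image, transmit) are hypothetical placeholders for the operations described in this document, not functions disclosed by the patent.

    from typing import Callable, Optional
    import numpy as np

    def process_frame(
        frame: np.ndarray,
        detect_reference: Callable[[np.ndarray], Optional[tuple]],
        locate_target_info: Callable[[np.ndarray, tuple], np.ndarray],
        filter_image: Callable[[np.ndarray, np.ndarray], np.ndarray],
        transmit: Callable[[np.ndarray], None],
    ) -> Optional[np.ndarray]:
        """Illustrative per-frame handling corresponding to steps S101-S104."""
        reference = detect_reference(frame)                     # S102: is the target partition reference visible?
        if reference is None:
            return None                                         # falls back to the position-based branch described later
        target_mask = locate_target_info(frame, reference)      # S102: environment information of the target area
        first_target_image = filter_image(frame, target_mask)   # S103: occlude, blur, or replace
        transmit(first_target_image)                            # S104: send to the bound terminal device
        return first_target_image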
The autonomous mobile device in the embodiments of the present application may be any mechanical device capable of moving through space with a high degree of autonomy in a given environment, for example a robot, a cleaner, or an unmanned carrier vehicle. The robot may be a household robot, such as a sweeping robot or a family companion robot, or a commercial robot, such as a cruise robot used in places like malls, supermarkets, banks, hospitals, airports, or train stations, a guide robot or a service robot that assists users in handling various matters, or a transfer robot used in warehouses.
In the embodiments of the present application, the autonomous mobile device is provided with a traveling mechanism and can move autonomously in its current environment to execute work tasks. In addition, the autonomous mobile device has a vision sensor, which may be, for example, a monocular camera, a binocular camera, or a depth camera (RGB-D camera). During operation, the autonomous mobile device can use the vision sensor to acquire environment images of the environment in which it is located.
On the one hand, the autonomous mobile device can locate itself, i.e. determine its own position and posture, according to the acquired environment images and an existing environment map, so as to move autonomously and provide a basis for executing tasks. Optionally, the environment map may be constructed by the autonomous mobile device itself using simultaneous localization and mapping (SLAM) technology, or it may be generated in advance by another device and imported into the autonomous mobile device. The environment map may be 2D or 3D; fig. 2 shows an exemplary schematic diagram of a 3D environment map.
On the other hand, the autonomous mobile device can also provide remote monitoring or intelligent management and control services for the user based on the acquired environment images. For example, an acquired environment image is transmitted to a terminal device bound with the autonomous mobile device in advance, such as the user's mobile phone, tablet computer, or wearable device. By viewing environment images of the autonomous mobile device's working environment on the terminal device, the user obtains information about that environment and thereby realizes remote monitoring; or, based on the information obtained, the user can intelligently manage and control other devices in the working environment (such as an air conditioner, a water heater, or a television in a home environment) through the autonomous mobile device.
In this embodiment, in the process of implementing the remote monitoring or intelligent management and control function through the autonomous mobile device, some environment information in the environment image may be filtered at the granularity of functional areas in the working environment, so that other requirements can be satisfied on the basis of meeting the remote monitoring or intelligent management and control requirements, improving service quality. A functional area is an area in the working environment with a clear functionality, where the functionality is the main purpose of the area. Taking a home scene as an example, a bedroom is a functional area whose main function is to provide the user with a place to rest; a living room is a functional area whose main function is to provide the user with a place for leisure, entertainment, and receiving guests; a kitchen is a functional area whose main function is to provide the user with a place to cook food; a study is a functional area whose main function is to provide the user with a place to work and read; and so on.
In a remote monitoring or intelligent management and control scenario, a user may have different monitoring or control needs for different functional areas. For example, a user may want to monitor the conditions in areas such as the living room, kitchen, and balcony while keeping the bathroom or bedroom private; in view of this, the autonomous mobile device can filter out the bathroom or bedroom environment information contained in the environment image, so that user privacy is protected while the remote monitoring or intelligent management and control requirements are met, data security is guaranteed, and service quality is improved. For another example, if the user's study contains confidential documents, the user may not want information about the study area to be leaked while still wanting to monitor areas such as the kitchen, living room, bedroom, and balcony; the autonomous mobile device can then filter out the study-area environment information contained in the environment image, protecting the privacy of the study area while meeting the remote monitoring or intelligent management and control requirements. For another example, in some application scenarios the user only wants to pay attention to one particular area, for example the children's room, and cares little about other areas such as the living room, balcony, and kitchen; the autonomous mobile device can then filter out the environment information in the environment image other than that of the children's room, saving transmission bandwidth while meeting the remote monitoring or intelligent management and control requirements, improving data transmission speed and fluency, and improving service quality.
In this embodiment, the functional area is divided off by a partition reference object and is not an arbitrary area in the working environment. A partition reference object is an object with a partitioning function in the working environment, for example a wall surface, a door frame, a glass wall, a partition, or a screen. Based on this, for a target area in the working environment that involves user privacy or requires particular attention, the autonomous mobile device can determine whether a captured environment image contains environment information in that target area according to the target partition reference object. The target partition reference object is the partition reference object that divides off the target area and may be, for example, a door, a door frame, a glass wall, a partition, or a screen. That is, the autonomous mobile device determines whether the acquired environment image contains the target partition reference object; if it does, this indicates that the environment image contains environment information in the target area divided off by that reference object, so the environment information in the target area contained in the environment image can be identified. Furthermore, in combination with the user's remote monitoring or intelligent management and control requirements, the environment image is information-filtered according to the environment information in the target area contained in it to obtain a first target image, and the first target image is transmitted to the terminal device bound with the autonomous mobile device. In this way, the user can view environment images of the working environment of the autonomous mobile device on the terminal device and obtain information about that environment, thereby realizing remote monitoring or intelligent management and control.
In this embodiment, the autonomous mobile device can determine which area is the target area and which object is the corresponding target partition reference object; the manner in which it determines them is not limited, and examples are given below:
in an optional embodiment P1, the user may set a target area in advance through the terminal device, for example, taking a home environment as an example, in a privacy protection scenario, the user may set an area related to user privacy, such as a bathroom or a bedroom, as the target area; alternatively, in a scene where a certain area is focused, the user may set an area such as a child's house that the user wants to focus on as a target area. One way for the user to preset the target area through the terminal device includes: the terminal equipment can show an environment map to a user, wherein the environment map can be a 2D environment map or a 3D environment map; the user selects a target area based on the environment map displayed by the terminal equipment; the terminal equipment responds to the selection operation of a user and generates configuration information, the configuration information comprises identification information of a target area configured by the user, and the identification information of the target area is semantic information or position information of the target area in an environment map, and is specifically determined by the environment map; the terminal equipment sends the configuration information to the autonomous mobile equipment, the autonomous mobile equipment receives the configuration information sent by the terminal equipment, and the target area is marked in an environment map of the autonomous mobile equipment on the basis of the identification information of the target area carried in the configuration information; further, the identification information of the target partition reference object may be determined based on the identification information of the target area. In the present embodiment, the autonomous moving apparatus stores or maintains in advance identification information of various partition reference objects existing in the work environment and correspondence between each partition reference object and each functional partition. Alternatively, the identification information of each partition reference object and the correspondence between each partition reference object and each functional partition may be embodied in an environment map.
In yet another alternative embodiment P2, at least one of an area feature and a partition reference object feature may be preset on the autonomous mobile device, or configured on it by the user in advance; this information is used by the autonomous mobile device to identify the target area and the corresponding target partition reference object. The autonomous mobile device can identify the target area and the corresponding target partition reference object in the working environment based on at least one of the set area feature and partition reference object feature, and mark them in the environment map. An area feature is characteristic information that can describe an area, for example the items it contains, such as a toilet or a bathtub in a bathroom or a bed in a bedroom, or the semantic information of the area, such as "bedroom" or "kitchen". A partition reference object feature is a feature that can describe an object with a partitioning function in the working environment, for example the width, height, and position of a door. The following examples illustrate:
example a 1: the autonomous mobile device may identify a target zone and a target partitioned reference in the work environment in conjunction with the zone feature and the partitioned reference feature. For example, if the preset area features are a toilet and a bathtub, and the partition reference feature is the position of a door, the autonomous mobile device may define a toilet area where the toilet and the bathtub are located according to the position of the door as a target area, and mark the toilet as the target area in the environment map, and at the same time, mark the toilet door as a target partition reference corresponding to the toilet.
Example a 2: the autonomous mobile device can identify the target area and the target partition reference object in the working environment based on the preset partition reference object feature alone. For example, if the preset partition reference object features are the width, height, and position of a door, the autonomous mobile device can take the study area divided off by that door as the target area according to the door's width, height, and position, mark the study as the target area in the environment map, and mark the study door as the target partition reference object corresponding to the study.
In this embodiment, when the environment image contains the target partition reference object, the environment information in the target area contained in the image can be identified and the image can be information-filtered accordingly. Specifically, the filtering is performed in combination with the user's requirements for the target area during remote monitoring or intelligent management and control and the environment information in the target area contained in the image. For example, if the user wants to protect private information in the target area, the environment information in the target area contained in the environment image can be occluded, blurred, or replaced to obtain the first target image, protecting user privacy and ensuring the security of data transmission. For another example, if the user wants to focus on information in the target area, the information in the environment image other than the environment information in the target area can be occluded, blurred, or replaced to obtain the first target image, allowing the user to focus on the target area while reducing the data transmission amount, improving data transmission efficiency, and saving network traffic resources.
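The occlusion, blurring, and replacement operations mentioned above could, for example, be realized with standard OpenCV and NumPy primitives. The sketch below assumes the region to be filtered is already available as a binary mask; it is an illustration of the kinds of operations described, not the specific implementation used in the embodiments.

    import cv2
    import numpy as np

    def occlude(image: np.ndarray, mask: np.ndarray, color=(255, 255, 255)) -> np.ndarray:
        """Cover the masked region with a solid color (cf. the 'white board' of scenario example 2)."""
        out = image.copy()
        out[mask > 0] = color
        return out

    def blur(image: np.ndarray, mask: np.ndarray, ksize: int = 51) -> np.ndarray:
        """Blur only the masked region."""
        blurred = cv2.GaussianBlur(image, (ksize, ksize), 0)
        out = image.copy()
        out[mask > 0] = blurred[mask > 0]
        return out

    def replace(image: np.ndarray, mask: np.ndarray, substitute: np.ndarray) -> np.ndarray:
        """Replace the masked region with other content, e.g. a rendering of the local environment map."""
        out = image.copy()
        out[mask > 0] = substitute[mask > 0]
        return out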
Further, in some cases, the target zone reference may not be included in the environmental image captured by the autonomous mobile device. The environmental image does not include the target-partition reference object, but does not mean that the environmental image does not include the environmental information in the target area. For example, when the autonomous mobile device is outside the target area and moves to a position relatively far away from the target area, the field of view of the vision sensor cannot cover the target-partition reference object, and in this case, the environment image acquired by the vision sensor does not include the environment information in the target area. For another example, when the autonomous mobile apparatus is located within the target area, the visual field range of the visual sensor may not cover the target-section reference object, but in this case, the environmental image acquired by the visual sensor includes environmental information in the target area. In view of this, for the case that the environment image does not include the target partition reference object, the position relationship between the autonomous mobile device and the target area may be determined, and according to the position relationship, the environment image is subjected to information filtering to obtain a second target image, and the second target image is transmitted to the terminal device, so as to meet the requirements of remote monitoring or intelligent management and control of the user.
In the case where the environment image does not contain the target partition reference object, the information filtering performed on the environment image according to the positional relationship between the autonomous mobile device and the target area must also take into account the user's requirements for the target area during remote monitoring or intelligent management and control. For example, if the user wishes to protect private information in the target area, then when the positional relationship indicates that the autonomous mobile device is located inside the target area, most or all of the information in the environment image comes from the target area; occluding, blurring, or replacing that information would leave a second target image of little value, so instead a local environment map corresponding to the environment image can be acquired as the second target image and transmitted to the terminal device, meeting the user's remote monitoring or intelligent management and control requirements. For another example, if the user wants to focus on information in the target area, then when the positional relationship indicates that the autonomous mobile device is located inside the target area, most or all of the information in the environment image comes from the target area, so the environment image may be left unfiltered and provided directly to the terminal device as the second target image, allowing the user to focus on the information in the target area.
Further optionally, if the user wishes to protect private information in the target area, then when the positional relationship indicates that the autonomous mobile device is located outside the target area, the environment image contains no environment information in the target area, so it does not need to be filtered and can be transmitted directly to the terminal device, meeting the user's remote monitoring or intelligent management and control requirements. If the user wants to focus on information in the target area, then when the positional relationship indicates that the autonomous mobile device is located outside the target area, the environment image contains no environment information in the target area; in that case the image is of little interest to the user and need not be transmitted to the terminal device.
In the embodiment of the application, firstly, whether the environment image contains the target partition reference object or not can be judged by combining the position and the posture of the autonomous mobile equipment; in the case where it is determined that the target-section reference object is not included in the environmental image, the position relationship between the autonomous mobile apparatus and the target area may be determined further in accordance with the position and posture of the autonomous mobile apparatus. Specifically, the position and the posture of the autonomous mobile equipment can be positioned according to an existing environment map and an acquired environment image; according to the position and the posture of the autonomous mobile equipment, the field range of a visual sensor on the autonomous mobile equipment can be determined; the pose of the autonomous mobile equipment mainly refers to the orientation of the autonomous mobile equipment, and further, the view field range of the vision sensor can be determined by combining the installation position relation of the vision sensor on the autonomous mobile equipment; if the target partition reference object is located in the field of view of the vision sensor, the vision sensor can shoot the environment information in the target area, so that the environment image can be determined to contain the environment information in the target area, information filtering is carried out on the environment image according to the environment information in the target area to obtain a first target image, and the first target image is transmitted to the terminal device.
If the target-partition reference object does not fall within the field of view of the vision sensor, it cannot be determined whether the vision sensor can capture the environmental information in the target area, which has a certain relationship with the positional relationship between the autonomous mobile device and the target area. In view of this, it may be identified whether the autonomous mobile device is located within or outside the target area based on the location of the autonomous mobile device. Under the condition that the environment image does not contain the target partition reference object, if the autonomous mobile equipment is positioned in the target area, the environment image is shown to contain environment information in the target area; if the autonomous mobile device is located outside the target area, it indicates that the environmental image does not contain environmental information in the target area.
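For the privacy-protection case, the branching just described can be summarized as below; this is a hedged sketch only, and the two boolean inputs are assumed to come from the field-of-view check and the localization result discussed in this section (the cases B1-B4 referred to in the comments are detailed later in the text).

    from enum import Enum, auto

    class Action(Enum):
        FILTER_TARGET_INFO = auto()   # first target image: occlude/blur/replace the target-area information
        SEND_LOCAL_MAP = auto()       # second target image: local environment map at the current view
        SEND_UNFILTERED = auto()      # the image contains no target-area information

    def choose_action(reference_in_view: bool, device_in_target_area: bool) -> Action:
        """Branching for the privacy-protection scenario (cases B1-B4)."""
        if reference_in_view:
            return Action.FILTER_TARGET_INFO     # B1 / B2: the target area is visible through the partition reference
        if device_in_target_area:
            return Action.SEND_LOCAL_MAP         # B3: nearly the whole image is target-area content
        return Action.SEND_UNFILTERED            # B4: no target-area information in the image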
Whether the target partition reference object falls within the field of view of the vision sensor can be determined in, but not limited to, the following ways:
mode D1: inputting the environment image into an image recognition model to detect a target partition reference object; if the environment image is detected to contain the target subarea reference object, the target subarea reference object is in the field of view of the visual sensor; and otherwise, determining that the target partition reference object does not fall within the field of view of the vision sensor.
Mode D2: the autonomous mobile equipment knows the world coordinates of the target partition reference object, namely the coordinates of the target partition reference object in the environment map, and knows the conversion relation between the world coordinate system and the sensor coordinate system, and according to the conversion relation, the world coordinates of the target partition reference object can be converted to be under the sensor coordinate system so as to obtain the coordinates of the target partition reference object under the sensor coordinate system; the visual field range of the visual sensor comprises a plurality of position coordinates under a sensor coordinate system, and if the position coordinates comprise coordinates of the target partition reference object under the sensor coordinate system, the target partition reference object is determined to fall in the visual field range of the visual sensor; and otherwise, determining that the target partition reference object does not fall within the field of view of the vision sensor. The sensor coordinate system refers to a coordinate system used by the vision sensor; the world coordinate system refers to a coordinate system used by the autonomous mobile device.
Mode D3: on the one hand, the environment image is input into an image recognition model to detect the target partition reference object, and, if the target partition reference object is detected in the environment image, its coordinate in the environment image is taken as a first coordinate; on the other hand, the known world coordinate of the target partition reference object is converted into the sensor coordinate system according to the conversion relationship between the world coordinate system and the sensor coordinate system, and the resulting coordinate in the sensor coordinate system is recorded as a second coordinate. A third coordinate is then calculated from the first and second coordinates; optionally, the third coordinate may be the average of the first and second coordinates, but is not limited thereto. If the position coordinates in the sensor coordinate system covered by the field of view of the vision sensor include the third coordinate, it is determined that the target partition reference object falls within the field of view of the vision sensor; otherwise, it is determined that it does not.
In the above modes D2 and D3, assuming that the environment map is a three-dimensional map, the world coordinate of the target partition reference object obtained from the three-dimensional map is P_w = (x, y, z)^T. With the known intrinsic parameter matrix K and extrinsic parameter matrix T of the vision sensor, the coordinate of the target partition reference object in the sensor coordinate system, P_uv = (u, v, 1)^T, can be calculated from the formula P_uv = K · T · P_w.
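As an illustration of the formula P_uv = K · T · P_w used in modes D2 and D3, the sketch below projects a known world point of the target partition reference object into the image and checks whether it falls inside the sensor's field of view; the camera parameters here are placeholder values, not those of any actual device.

    from typing import Optional
    import numpy as np

    # Placeholder intrinsic matrix K and extrinsic matrix T = [R | t] (world frame to camera frame).
    K = np.array([[600.0,   0.0, 320.0],
                  [  0.0, 600.0, 240.0],
                  [  0.0,   0.0,   1.0]])
    T = np.hstack([np.eye(3), np.zeros((3, 1))])   # 3x4, identity pose for illustration

    def project_to_pixel(p_world: np.ndarray) -> Optional[np.ndarray]:
        """P_uv = K * T * P_w, normalized by depth; returns None if the point is behind the camera."""
        p_h = np.append(p_world, 1.0)        # homogeneous world coordinate (x, y, z, 1)
        p_cam = T @ p_h                      # coordinate in the camera frame
        if p_cam[2] <= 0:
            return None                      # behind the image plane, cannot be seen
        p_uv = K @ p_cam
        return p_uv[:2] / p_uv[2]            # pixel coordinate (u, v)

    def in_field_of_view(p_world: np.ndarray, width: int = 640, height: int = 480) -> bool:
        """Mode D2: the reference falls in the field of view if its pixel lies inside the image bounds."""
        uv = project_to_pixel(p_world)
        return uv is not None and 0 <= uv[0] < width and 0 <= uv[1] < height

    print(in_field_of_view(np.array([0.2, 0.1, 2.0])))   # True for this placeholder camera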
In the embodiment of the present application, the manner of filtering the information of the environment image may be different according to different application scenarios and requirements. The following is an exemplary illustration of two application scenarios, each in four cases:
in an application scenario W1, a user desires to remotely monitor or intelligently manage conditions in a work environment through an autonomous mobile device, and at the same time desires to protect privacy of certain functional areas (e.g., target areas) in the work environment. In view of such application needs, autonomous mobile devices specifically perform the following operations to meet such application needs of users:
case B1: the autonomous mobile device utilizes an environment image acquired by a visual sensor to comprise a target partition reference object, and is located outside a target area, which means that a field of view range of the visual sensor at least partially covers the target area, that is, the environment image comprises environment information in the target area, and the environment information in the target area contained in the environment image is subjected to shielding, blurring or replacing processing to obtain a first target image; and transmitting the first target image to the terminal equipment bound with the autonomous mobile equipment. The first target image includes environmental information in the non-target region.
Case B2: the environment image acquired by the vision sensor contains the target partition reference object and the autonomous mobile device is located inside the target area, so the environment image contains environment information in the target area. The environment information in the target area contained in the environment image is occluded, blurred, or replaced to obtain the first target image, and the first target image is transmitted to the terminal device bound with the autonomous mobile device. The first target image contains the environment information of the non-target area.
In cases B1 and B2, the autonomous mobile device can identify the environment information in the target area contained in the environment image from the target partition reference object contained in the image. Specifically, since the world coordinate of the target partition reference object in the world coordinate system is known, the coordinate of the target partition reference object in the environment image can be determined from that world coordinate, and the environment information in the target area contained in the environment image can then be determined from the coordinate of the target partition reference object in the image; for example, it may be the environment information behind, in front of, or to the side of the target partition reference object.
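One possible way to turn the partition reference object's image coordinates into a filtered first target image, consistent with the white-board occlusion of scenario example 2 below, is sketched here; the door-corner pixels are assumed to have been obtained either from the detection model or by projecting the door's known world coordinates as in the previous sketch.

    import cv2
    import numpy as np

    def occlude_through_door(image: np.ndarray, door_corners_px: np.ndarray) -> np.ndarray:
        """Cover everything visible through the door frame, i.e. the target-area information behind it."""
        mask = np.zeros(image.shape[:2], dtype=np.uint8)
        cv2.fillPoly(mask, [door_corners_px.astype(np.int32)], 255)   # polygon of the door opening
        first_target_image = image.copy()
        first_target_image[mask > 0] = (255, 255, 255)                # the 'white board' occlusion
        return first_target_image

    # Example: a 480x640 frame in which the door opening occupies a quadrilateral on the right.
    frame = np.full((480, 640, 3), 90, dtype=np.uint8)
    door = np.array([[420, 60], [600, 80], [600, 430], [420, 460]])
    filtered = occlude_through_door(frame, door)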
Case B3: the environment image acquired by the vision sensor does not contain the target partition reference object and the autonomous mobile device is located inside the target area. In this case, no matter which direction the vision sensor faces, most or all of the environment information in the acquired image comes from the target area, so occluding, blurring, or replacing all of the target-area environment information would leave the image largely or entirely meaningless. Instead, a local environment map corresponding to the environment image, i.e. the local environment map at the current viewing angle of the autonomous mobile device, can be acquired and sent to the terminal device as the target image. The user cannot learn the detailed conditions inside the target area from this local environment map, but can at least obtain some basic information about the area and viewing angle in which the autonomous mobile device is currently located, so the image received by the user still has some value; this basic information belongs to the environment map and is generally unrelated to user privacy.
In this embodiment, the embodiment of obtaining the local environment map corresponding to the environment image is not limited, for example, the environment image may be directly subjected to image recognition, the environment information in the environment image is recognized, the recognized environment information is used for matching in the existing environment map, and the local environment map matched by the environment information is used as the target image. For another example, the autonomous mobile device may acquire an environment image while positioning the autonomous mobile device, and then the autonomous mobile device may acquire the position information and the posture information, and based on the position information, the posture information, and the field angle of the visual sensor, may determine a field range of the autonomous mobile device, and acquire a local environment map corresponding to the environment image using the field range.
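A rough sketch of the second approach in the preceding paragraph (selecting the part of an existing map that lies in the current field of view) is given below. The 2D occupancy-grid representation, the parameter names, and the omission of occlusion by obstacles are simplifying assumptions made for illustration; the patent does not fix a map format.

    import numpy as np

    def local_map_in_view(grid: np.ndarray, pose_xy: tuple, yaw: float,
                          fov_deg: float = 90.0, max_range_cells: int = 80) -> np.ndarray:
        """Return the cells of a 2D occupancy grid that fall inside the sensor's field of view."""
        h, w = grid.shape
        ys, xs = np.mgrid[0:h, 0:w]
        dx, dy = xs - pose_xy[0], ys - pose_xy[1]
        dist = np.hypot(dx, dy)
        bearing = (np.arctan2(dy, dx) - yaw + np.pi) % (2 * np.pi) - np.pi   # wrap to [-pi, pi)
        visible = (dist <= max_range_cells) & (np.abs(bearing) <= np.radians(fov_deg) / 2)
        local = np.full_like(grid, -1)          # -1 marks cells outside the current view
        local[visible] = grid[visible]
        return local

    # Example usage on a small synthetic grid
    grid = (np.random.rand(120, 160) > 0.9).astype(np.int8)
    view = local_map_in_view(grid, pose_xy=(80, 60), yaw=0.0)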
Case B4: the environment image acquired by the vision sensor does not contain the target partition reference object and the autonomous mobile device is located outside the target area. In this case, no matter which direction the vision sensor faces, the acquired environment image contains no environment information in the target area and therefore cannot involve user privacy, so the environment image does not need to be filtered and can be transmitted to the terminal device directly.
In another application scenario W2, a user may wish to remotely monitor or intelligently manage a particular area (i.e., a target area) in a work environment via an autonomous mobile device and may not wish to consume excessive traffic resources and storage resources on the end device. In view of such application needs, autonomous mobile devices specifically perform the following operations to meet such application needs of users:
case C1: the autonomous mobile device utilizes the vision sensor to acquire an environment image which comprises a target partition reference object and is located in a target area, so that the environment image comprises environment information in the target area and possibly also comprises environment information in a non-target area, and the target area is an area which a user wants to pay important attention to, therefore, the environment information in the non-target area contained in the environment image can be subjected to shielding, blurring or replacing processing to obtain a first target image; and transmitting the first target image to the terminal equipment bound with the autonomous mobile equipment. The non-target area refers to other areas except the target area in the visual field range of the visual sensor.
Case C2: the environment image acquired by the vision sensor contains the target partition reference object and the autonomous mobile device is located outside the target area. This means that the field of view of the vision sensor at least partially covers the target area, i.e. the environment image contains environment information in the target area and also environment information of non-target areas. In view of this, the environment information of the non-target areas contained in the environment image can be occluded, blurred, or replaced to obtain the first target image, which is then transmitted to the terminal device bound with the autonomous mobile device. A non-target area is any area other than the target area within the field of view of the vision sensor.
Case C3: the environment image acquired by the vision sensor does not contain the target partition reference object and the autonomous mobile device is located outside the target area. In this case, no matter which direction the vision sensor faces, the acquired environment image contains no environment information in the target area; optionally, if the whole environment image were occluded, blurred, or replaced, it would become meaningless. Instead, the local environment map corresponding to the environment image, i.e. the local environment map at the current viewing angle of the autonomous mobile device, can be acquired and sent to the terminal device. Although the user cannot learn the details inside the target area from it, at least some basic information about the area and viewing angle in which the autonomous mobile device is currently located can be obtained, so the image received by the user still has some value. For how the autonomous mobile device obtains the local environment map corresponding to the environment image, reference may be made to the foregoing embodiments, which are not repeated here. Alternatively, nothing may be transmitted to the terminal device for such images, thereby saving traffic resources.
Case C4: the environment image acquired by the vision sensor does not contain the target partition reference object and the autonomous mobile device is located inside the target area. In this case, no matter which direction the vision sensor faces, most or all of the environment information in the acquired image comes from the target area, which is exactly the area the user wants to focus on, so the environment image does not need to be filtered and can be transmitted directly to the terminal device bound with the autonomous mobile device.
In this embodiment, after the first target image or the second target image is generated in the above manner, it can be transmitted to the terminal device bound with the autonomous mobile device. The way in which the autonomous mobile device provides the first or second target image to the terminal device is not limited. In one optional embodiment, the user and the autonomous mobile device are in the same environment, the user's terminal device establishes a connection with the autonomous mobile device through a wireless technology such as Bluetooth or Wi-Fi, and the autonomous mobile device transmits the first or second target image to the terminal device directly. In another optional embodiment, the autonomous mobile device and the terminal device are not in the same environment; the autonomous mobile device has a binding relationship with the terminal device, provides the first or second target image to a server, and the server provides it to the terminal device based on the binding relationship between the terminal device and the autonomous mobile device.
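A hedged sketch of the second, server-relayed transmission option follows; the endpoint URL, the form fields, and the use of HTTP are assumptions made purely for illustration, since the patent does not specify a transport protocol.

    import cv2
    import numpy as np
    import requests

    def upload_target_image(image: np.ndarray, device_id: str,
                            server_url: str = "https://example.com/api/frames") -> int:
        """Send the filtered target image to a relay server, which forwards it to the bound terminal."""
        ok, jpeg = cv2.imencode(".jpg", image)   # compress before transmission to save bandwidth
        if not ok:
            raise ValueError("JPEG encoding failed")
        resp = requests.post(
            server_url,
            files={"frame": ("frame.jpg", jpeg.tobytes(), "image/jpeg")},
            data={"device_id": device_id},       # lets the server resolve the device-terminal binding
            timeout=5,
        )
        return resp.status_code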
In the embodiments of the present application, the autonomous mobile device can acquire environment images during operation and transmit them to the terminal device. Before an environment image is transmitted, the device determines whether it contains the target partition reference object; if it does, the environment information of the target area contained in the image is identified, the environment image is filtered according to this environment information, and the filtered target image is provided to the terminal device. By transmitting target images from which some environment information has been filtered out, the data transmission amount can be reduced and/or user privacy can be protected while still meeting remote monitoring or intelligent management and control requirements, thereby improving service quality.
Scenario example 1:
in a home environment, at least a living room and a child's room are included. The autonomous mobile device is realized as an accompanying robot, a user wants to remotely monitor the condition of a baby in a child room through the accompanying robot and save network flow, and then the user establishes a connection relation with the accompanying robot through a terminal device (such as a mobile phone) and sets an attention area on the terminal device as the child room.
When going out, the user can send the accompanying robot an instruction to look after the children's room through the terminal device. After receiving the instruction, the accompanying robot locates itself, finds that it is currently in the living room, plans a travel path from the current position (the living room) to the children's room, and moves to the children's room along that path. During the movement, before entering the children's room, the accompanying robot collects environment images as it moves and determines whether the door of the children's room appears in each environment image. If it does, the environment information other than the children's room in the environment image is blurred, which reduces the amount of data to be transmitted and increases the transmission speed; the locally blurred environment image is then transmitted to the user's terminal device, and through it the user can remotely monitor the children's room in a targeted manner, for example to judge whether the child is in danger or behaving abnormally, which improves the user experience.
After the accompanying robot enters the children's room, the environment images it collects contain only environment information of the children's room, so the environment images are not filtered but transmitted directly to the user's terminal device for the user to monitor the conditions in the children's room, for example to judge whether the child is in danger or behaving abnormally. It should be noted that in this scenario the accompanying robot can continuously collect environment images to form a video stream and transmit the video stream to the user's terminal device.
Scenario example 2:
in a home environment, the autonomous mobile device is realized as a sweeping robot, a user can remotely monitor the home environment in the sweeping task process executed by the sweeping robot, and the home environment of the user at least comprises a bedroom, a living room, a bathroom and a kitchen. The user establishes a binding relationship with the sweeping robot through the terminal device, and sets privacy areas as bedrooms and toilets through the terminal device.
The sweeping robot starts the cleaning task according to a cleaning instruction sent remotely by the user, or when the cleaning time of a scheduled cleaning task arrives. While executing the cleaning task, the sweeping robot collects images of its surroundings for positioning and navigation. If, during this process, the user wants to check the environment information from the sweeping robot's point of view, the user can send a remote monitoring instruction to the sweeping robot through the terminal device. After receiving the remote monitoring instruction, the sweeping robot locates its current position. Assuming the current position is the bedroom, the robot judges that the bedroom is a privacy area, replaces the environment image collected in the bedroom with a local environment map under the current viewing angle, and transmits the local environment map to the server; the server provides it to the terminal device, and the user views the local map of the bedroom from the sweeping robot's viewing angle through the terminal device, as shown in fig. 3 a.
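Replacing the camera image with a local map could look roughly like the sketch below; the occupancy grid, resolution and window size are all assumptions made for illustration, not structures defined by the application.

```python
import numpy as np

def local_map_patch(occupancy_grid, pose_xy, resolution=0.05, window_m=4.0):
    # Crop a square window of the stored environment map around the robot's
    # current position; this crop is sent in place of the camera image.
    grid = np.asarray(occupancy_grid)
    half = int(window_m / (2 * resolution))
    cx, cy = int(pose_xy[0] / resolution), int(pose_xy[1] / resolution)
    x0, y0 = max(cx - half, 0), max(cy - half, 0)
    return grid[y0:cy + half, x0:cx + half]
```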
After a period of time, the sweeping robot moves to the living room to continue sweeping. It now locates itself in the living room, which is a non-privacy area, and determines the field of view of its camera according to its posture. It finds that the field of view includes the door of the bedroom, which means the field of view can cover the bedroom area, as shown in fig. 3 b. In this case, although the sweeping robot is in the living room, it could still collect environment information inside the bedroom through the bedroom door. To protect the user's privacy, the sweeping robot recognizes that the environment image contains the bedroom door and covers the door in the environment image with a white board, as shown in fig. 3c, so that the environment information inside the bedroom is shielded and the user's privacy is protected. The sweeping robot then sends the environment image with the white-board shielding to the server, and the server sends it to the terminal device bound to the sweeping robot so that the user can monitor the home environment.
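The white-board shielding can be sketched in a few lines, again assuming OpenCV and a hypothetical (x, y, w, h) bounding box for the detected bedroom door:

```python
import cv2

def shield_with_white_board(frame, door_box):
    # Fill the detected door region with solid white so that nothing inside
    # the privacy area behind the door remains visible in the sent image.
    x, y, w, h = door_box
    out = frame.copy()
    cv2.rectangle(out, (x, y), (x + w, y + h), (255, 255, 255), thickness=-1)
    return out
```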
As the cleaning task continues, the sweeping robot keeps repeating the above actions until the cleaning task finishes or a remote-monitoring end instruction sent by the user is received. In this way the user's remote monitoring requirement is met while the user's privacy is protected.
Scenario example 3:
In a home environment, the user can treat the sweeping robot as the intelligent housekeeper of a smart-home Internet of Things and intelligently control the smart home devices in the home environment through it. The sweeping robot can both execute cleaning tasks and patrol in the home environment, so the user can intelligently control the smart home devices in the household through the sweeping robot. While executing a cleaning task or patrolling, the sweeping robot collects environment images of the home environment through its camera; the user can observe the usage state of the smart home devices through these environment images and thereby control the devices. To enable this intelligent control, the user binds with the sweeping robot through the terminal device and, through the terminal device, sets the privacy areas to the bedroom and the bathroom.
The sweeping robot starts moving to the living room to execute a cleaning task according to a cleaning instruction sent remotely by the user, or when the cleaning time of a scheduled cleaning task arrives. While executing the cleaning task, the sweeping robot collects images of its surroundings for positioning and navigation. During this process the user sends the sweeping robot an instruction to obtain the working state of the air conditioner. After receiving the instruction, the sweeping robot locates its current position as the living room, determines the field of view of its camera by combining its current posture, and judges whether the air conditioner installed in the living room lies within the current field of view. If the air conditioner is not within the current field of view, the sweeping robot adjusts its position and posture so that the air conditioner falls within the field of view. With the air conditioner within the current field of view, the sweeping robot further judges whether the bedroom door is also within the field of view. If it is, the environment image collected at the current position and posture contains the bedroom door and therefore contains environment information of the bedroom; because the bedroom is a privacy area, the sweeping robot recognizes the door in the environment image and covers it with a white board. It then provides the shielded environment image to the server, and the server sends the image to the terminal device. The shielded environment image contains the working-state information of the air conditioner, such as its current gear, wind speed and the current indoor temperature, so the user can check the working state of the air conditioner.
Further, if the user thinks the current indoor temperature is suitable, the user can send the sweeping robot an instruction to turn the air conditioner off through the terminal device, and the sweeping robot sends a turn-off instruction to the air conditioner. If the user thinks the wind speed of the air conditioner is too high, the user can send an instruction to reduce the wind speed, and the sweeping robot instructs the air conditioner to reduce its wind speed. If the user thinks the current gear of the air conditioner is too low, the user can send an instruction to increase the gear, and the sweeping robot instructs the air conditioner to increase its gear. Intelligent control of the air conditioner is thereby achieved.
It should be noted that the execution subjects of the steps of the methods provided in the above embodiments may be the same device, or different devices may be used as the execution subjects of the methods. For example, the execution subjects of steps S101 to S104 may be device a; for another example, the execution subjects of steps S101-S103 may be device a, and the execution subject of step S104 may be device B; and so on.
In addition, in some of the flows described in the above embodiments and the drawings, a plurality of operations appearing in a specific order are included, but it should be clearly understood that these operations may be executed out of the order they appear herein or in parallel, and the order of the operations such as S101, S102, etc. is merely used to distinguish various operations, and the order itself does not represent any execution order. Additionally, the flows may include more or fewer operations, and the operations may be performed sequentially or in parallel.
Fig. 4 is a schematic structural diagram of an autonomous mobile device according to an exemplary embodiment of the present application. As shown in fig. 4, the autonomous mobile apparatus includes: the device body 40 is provided with a memory 44 and a processor 45 on the device body 40.
The memory 44 is used for storing a computer program and may also be configured to store various other data to support operations on the autonomous mobile device. Examples of such data include instructions, messages, pictures, videos, etc. for any application or method operating on the autonomous mobile device.
The memory 44 may be implemented by any type or combination of volatile or non-volatile memory devices such as Static Random Access Memory (SRAM), electrically erasable programmable read-only memory (EEPROM), erasable programmable read-only memory (EPROM), programmable read-only memory (PROM), read-only memory (ROM), magnetic memory, flash memory, magnetic or optical disks.
The processor 45 is coupled to the memory 44 and executes the computer program in the memory 44 to: acquire an environment image with a vision sensor during operation; identify, when a target partition reference object is included in the environment image, environment information in a target area included in the environment image; perform information filtering on the environment image according to the environment information in the target area to obtain a first target image; and transmit the first target image to a terminal device bound with the autonomous mobile device, wherein the target area is a functional area divided by the target partition reference object in the working environment.
In an alternative embodiment, processor 45 is further configured to: determining the position relation between the autonomous mobile equipment and the target area under the condition that the environment image does not contain the target partition reference object; and if the position relation indicates that the autonomous mobile equipment is positioned in the target area, acquiring a local environment map corresponding to the environment image as a second target image, and transmitting the second target image to the terminal equipment.
In an alternative embodiment, the processor 45, when obtaining the first target image, is specifically configured to: carrying out shielding, blurring or replacing processing on the environment information in the target area contained in the environment image to obtain a first target image; or carrying out shielding, blurring or replacing processing on other information except the environmental information in the target area in the environmental image to obtain a first target image.
In an alternative embodiment, processor 45 is further configured to: positioning the position and the posture of the autonomous mobile equipment according to the existing environment map and the environment image; determining a field of view range of the vision sensor according to the position and the posture of the autonomous mobile device; and if the target partition reference object is within the field of view of the visual sensor, determining that the environment image contains the target partition reference object.
Further optionally, when determining whether the target-partition reference object falls within the field of view of the vision sensor, the processor 45 is specifically configured to: inputting the environment image into an image recognition model to detect a target partition reference object, and acquiring a coordinate of the target partition reference object in the environment image as a first coordinate when the environment image is detected to contain the target partition reference object; converting the world coordinates of the known target partition reference object into a sensor coordinate system to obtain second coordinates, and calculating third coordinates according to the first coordinates and the second coordinates; and if the third coordinate is located in the field of view of the visual sensor, determining that the target partition reference object is located in the field of view of the visual sensor.
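The application does not spell out how the first, second and third coordinates are combined, so the following is only one plausible reading of the field-of-view test: transform the reference object's known world coordinate into the camera frame using the localized pose, project it with an intrinsic matrix K, and check that the resulting pixel lies inside the image.

```python
import numpy as np

def reference_in_fov(p_world, T_world_to_cam, K, image_size):
    # p_world: 3-vector in world coordinates; T_world_to_cam: 4x4 pose from
    # localization; K: 3x3 camera intrinsics; image_size: (width, height).
    p_cam = T_world_to_cam[:3, :3] @ p_world + T_world_to_cam[:3, 3]
    if p_cam[2] <= 0:                      # behind the camera, not visible
        return False
    u, v, w = K @ p_cam
    u, v = u / w, v / w
    width, height = image_size
    return 0 <= u < width and 0 <= v < height
```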
In an alternative embodiment, processor 45 is further configured to: receiving configuration information sent by terminal equipment, wherein the configuration information comprises identification information of a target area configured by a user, and determining the identification information of a target partition reference object according to the identification information of the target area; the identification information of the target area or the target-section reference object is semantic information or position information of the target area or the target-section reference object in the environment map.
In an alternative embodiment, processor 45 is further configured to: a target area and a target partitioned reference object in the work environment are identified based on at least one of the set area feature and the partitioned reference object feature, and the target area and the target partitioned reference object are marked in the environment map.
Further, as shown in fig. 4, the autonomous mobile apparatus also includes: a communication component 46, a display 47, a power component 48, an audio component 49, and the like. Only some components are shown schematically in fig. 4, which does not mean the autonomous mobile device includes only those components. It should be noted that the components within the dashed box in fig. 4 are optional rather than necessary, and their presence may depend on the product form of the autonomous mobile apparatus.
Accordingly, the present application further provides a computer-readable storage medium storing a computer program, where the computer program is capable of implementing the steps that can be executed by the autonomous mobile device in the above-mentioned information transmission method embodiment.
The autonomous mobile equipment can be a sweeping robot, a purifier, a cleaning machine and the like. The autonomous mobile device may also contain other components or modules depending on the particular implementation. Taking the autonomous mobile device as a sweeping robot as an example, as shown in fig. 5, the sweeping robot 500 includes: the machine body 501 is provided with one or more processors 502, one or more memories 503 for storing computer instructions, and a communication component 504, on the machine body 501. The communication component 504 may be a Wifi module, an infrared module, or a bluetooth module, etc.
Besides the one or more processors 502, the communication component 504 and the one or more memories 503, some basic components of the sweeping robot 500 are arranged on the machine body 501, such as a distance measuring sensor 509 (a single-point or multi-point laser sensor) for measuring distance upwards, a vision sensor 506, a power supply component 507, a driving component 508 and the like. The vision sensor may be a camera or the like. Optionally, the driving component 508 may include drive wheels, a drive motor, universal wheels and the like. Further, as shown in fig. 5, the sweeping robot 500 may also include a sweeping assembly 505, which may include a sweeping motor, a sweeping brush, a dust collection fan and the like. Different sweeping robots 500 contain different basic components and different configurations of those components; the embodiments described here are only some examples. It should be noted that the components shown in the dashed box in fig. 5 are optional rather than essential.
It is noted that the one or more processors 502 and the one or more memories 503 may be disposed inside the machine body 501, or may be disposed on the surface of the machine body 501.
The machine body 501 is the execution mechanism by which the sweeping robot 500 performs a task and can execute operations designated by the processor 502 in a given environment. The machine body 501 reflects the appearance of the sweeping robot 500 to a certain extent. In the present embodiment the appearance of the sweeping robot 500 is not limited and may be, for example, circular, elliptical, triangular, a convex polygon, or the like.
The one or more memories 503 are used primarily to store computer instructions that are executable by the one or more processors 502 to cause the one or more processors 502 to control the robot 500 to perform corresponding tasks. In addition to storing computer instructions, the one or more memories 503 may also be configured to store other various data to support operations on the robot 500. Examples of such data include instructions for any application or method operating on the robot 500, an environment map of the environment/scene in which the robot 500 is located, a signal strength map, and so forth.
The one or more processors 502, which may be regarded as the control system of the sweeping robot 500, may be configured to execute the computer instructions stored in the one or more memories 503 to control the sweeping robot 500 to perform corresponding tasks. For example, the one or more processors 502 may execute those computer instructions to implement the steps of the information transmission method in the foregoing method embodiments, so that the execution of job tasks better matches the scene requirements and the user's expectations, which improves the service quality of the sweeping robot and the user experience.
In addition to the operations described above, the one or more processors 502 executing the computer instructions stored in the one or more memories 503 may also be configured to: in the operation process, a vision sensor is used for acquiring an environment image, and under the condition that a target partition reference object is contained in the environment image, environment information in a target area contained in the environment image is identified; according to the environment information in the target area, information filtering is carried out on the environment image to obtain a first target image; and transmitting the first target image to a terminal device bound with the autonomous mobile device, wherein the target area is a functional area divided by the target partition reference object in the working environment. For details, reference may be made to the foregoing embodiments, which are not described herein again.
An embodiment of the present application further provides an information transmission apparatus, which may be applied to an autonomous mobile device, and the apparatus includes: the device comprises an acquisition module, an identification module, a filtering module and a transmission module. The acquisition module is used for acquiring an environment image by using a visual sensor on the autonomous mobile equipment in the operation process of the autonomous mobile equipment. And the identification module is used for identifying the environment information in the target area contained in the environment image under the condition that the target partition reference object is contained in the environment image. And the filtering module is used for filtering the environment image according to the environment information in the target area to obtain a first target image. And the transmission module is used for transmitting the first target image to the terminal equipment bound with the autonomous mobile equipment, and the target area is a functional area divided by the target partition reference object in the working environment.
In an optional embodiment, the apparatus further comprises: the device comprises a determining module and an obtaining module. The determining module is used for determining the position relation between the autonomous mobile equipment and the target area under the condition that the environment image does not contain the target subarea reference object. And the acquisition module is used for acquiring a local environment map corresponding to the environment image as a second target image when the position relation indicates that the autonomous mobile equipment is positioned in the target area. The transmission module is further configured to: and transmitting the second target image to the terminal equipment.
In an optional embodiment, the filtering module is specifically configured to: perform shielding, blurring or replacement processing on the environment information in the target area contained in the environment image to obtain the first target image; or perform shielding, blurring or replacement processing on information other than the environment information in the target area in the environment image to obtain the first target image.
In an optional embodiment, the apparatus further comprises: the device comprises a positioning module, a determining module and a judging module. The positioning module is used for positioning the position and the posture of the autonomous mobile equipment according to the existing environment map and the environment image; the determining module is used for determining the field range of the vision sensor according to the position and the posture of the autonomous mobile equipment; and the judging module is used for determining that the environment image contains the target subarea reference object if the target subarea reference object is within the field range of the visual sensor.
Further optionally, the determining module is specifically configured to: inputting the environment image into an image recognition model to detect a target partition reference object, and acquiring a coordinate of the target partition reference object in the environment image as a first coordinate when the environment image is detected to contain the target partition reference object; converting the world coordinates of the known target partition reference object into a sensor coordinate system to obtain second coordinates, and calculating third coordinates according to the first coordinates and the second coordinates; and if the third coordinate is positioned in the field of view range of the vision sensor, determining that the target partition reference object is positioned in the field of view range of the vision sensor.
During the operation of the autonomous mobile device, the information transmission apparatus provided in this embodiment can acquire an environment image and transmit it to the terminal device. Before transmission, it judges whether the environment image contains the target partition reference object; if it does, it identifies the environment information of the target area divided by the target partition reference object that is contained in the environment image, filters the environment image accordingly, and provides the filtered target image to the terminal device. Because the target image has had some environment information filtered out, the amount of transmitted data can be reduced and/or user privacy can be protected while the requirements of remote monitoring or intelligent control are still met, which improves service quality.
The communication components of fig. 4-5 described above are configured to facilitate wired or wireless communication between the device in which the communication component is located and other devices. The device where the communication component is located can access a wireless network based on a communication standard, such as a WiFi, a 2G, 3G, 4G/LTE, 5G and other mobile communication networks, or a combination thereof. In an exemplary embodiment, the communication component receives a broadcast signal or broadcast related information from an external broadcast management system via a broadcast channel. In one exemplary embodiment, the communication component further includes a Near Field Communication (NFC) module to facilitate short-range communications. For example, the NFC module may be implemented based on Radio Frequency Identification (RFID) technology, infrared data association (IrDA) technology, Ultra Wideband (UWB) technology, Bluetooth (BT) technology, and other technologies.
The displays of fig. 4-5 described above include screens, which may include a Liquid Crystal Display (LCD) and a Touch Panel (TP). If the screen includes a touch panel, the screen may be implemented as a touch screen to receive an input signal from a user. The touch panel includes one or more touch sensors to sense touch, slide, and gestures on the touch panel. The touch sensor may not only sense the boundary of a touch or slide action, but also detect the duration and pressure associated with the touch or slide operation.
The power supply components of fig. 4-5 described above provide power to the various components of the device in which the power supply components are located. The power components may include a power management system, one or more power supplies, and other components associated with generating, managing, and distributing power for the device in which the power component is located.
The audio components of fig. 4-5 described above may be configured to output and/or input audio signals. For example, the audio component includes a Microphone (MIC) configured to receive an external audio signal when the device in which the audio component is located is in an operational mode, such as a call mode, a recording mode, and a voice recognition mode. The received audio signal may further be stored in a memory or transmitted via a communication component. In some embodiments, the audio assembly further comprises a speaker for outputting audio signals.
As will be appreciated by one skilled in the art, embodiments of the present invention may be provided as a method, system, or computer program product. Accordingly, the present invention may take the form of an entirely hardware embodiment, an entirely software embodiment or an embodiment combining software and hardware aspects. Furthermore, the present invention may take the form of a computer program product embodied on one or more computer-usable storage media (including, but not limited to, disk storage, CD-ROM, optical storage, and the like) having computer-usable program code embodied therein.
The present invention is described with reference to flowchart illustrations and/or block diagrams of methods, apparatus (systems), and computer program products according to embodiments of the invention. It will be understood that each flow and/or block of the flow diagrams and/or block diagrams, and combinations of flows and/or blocks in the flow diagrams and/or block diagrams, can be implemented by computer program instructions. These computer program instructions may be provided to a processor of a general purpose computer, special purpose computer, embedded processor, or other programmable data processing apparatus to produce a machine, such that the instructions, which execute via the processor of the computer or other programmable data processing apparatus, create means for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be stored in a computer-readable memory that can direct a computer or other programmable data processing apparatus to function in a particular manner, such that the instructions stored in the computer-readable memory produce an article of manufacture including instruction means which implement the function specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be loaded onto a computer or other programmable data processing apparatus to cause a series of operational steps to be performed on the computer or other programmable apparatus to produce a computer implemented process such that the instructions which execute on the computer or other programmable apparatus provide steps for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
In a typical configuration, a computing device includes one or more processors (CPUs), input/output interfaces, network interfaces, and memory.
The memory may include forms of volatile memory in a computer readable medium, Random Access Memory (RAM) and/or non-volatile memory, such as Read Only Memory (ROM) or flash memory (flash RAM). Memory is an example of a computer-readable medium.
Computer-readable media, including both permanent and non-permanent, removable and non-removable media, may implement information storage by any method or technology. The information may be computer readable instructions, data structures, modules of a program, or other data. Examples of computer storage media include, but are not limited to, phase change memory (PRAM), Static Random Access Memory (SRAM), Dynamic Random Access Memory (DRAM), other types of Random Access Memory (RAM), Read Only Memory (ROM), Electrically Erasable Programmable Read Only Memory (EEPROM), flash memory or other memory technology, compact disc read only memory (CD-ROM), Digital Versatile Discs (DVD) or other optical storage, magnetic cassettes, magnetic tape, magnetic disk storage or other magnetic storage devices, or any other non-transmission medium that can be used to store information accessible by a computing device. As defined herein, computer readable media do not include transitory computer readable media such as modulated data signals and carrier waves.
It should also be noted that the terms "comprises," "comprising," or any other variation thereof, are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements does not include only those elements but may include other elements not expressly listed or inherent to such process, method, article, or apparatus. Without further limitation, an element defined by the phrase "comprising an … …" does not exclude the presence of other like elements in a process, method, article, or apparatus that comprises the element.
The above description is only an example of the present application and is not intended to limit the present application. Various modifications and changes may occur to those skilled in the art. Any modification, equivalent replacement, improvement, etc. made within the spirit and principle of the present application should be included in the scope of the claims of the present application.

Claims (10)

1. An information transmission method applicable to an autonomous mobile device, the method comprising:
in the operation process, a vision sensor is used for acquiring an environment image;
identifying environmental information in a target area included in the environmental image when a target partition reference object is included in the environmental image;
according to the environment information in the target area, performing information filtering on the environment image to obtain a first target image;
and transmitting the first target image to a terminal device bound with the autonomous mobile device, wherein the target area is a functional area divided by the target partition reference object in the working environment.
2. The method of claim 1, further comprising:
determining a positional relationship of the autonomous mobile device with the target area in a case where a target-partition reference object is not included in the environment image;
and if the position relation indicates that the autonomous mobile equipment is located in the target area, acquiring a local environment map corresponding to the environment image as a second target image, and transmitting the second target image to the terminal equipment.
3. The method of claim 1, wherein performing information filtering on the environment image to obtain a first target image according to environment information in the target area comprises:
and carrying out shielding, blurring or replacement processing on the environment information in the target area contained in the environment image to obtain a first target image.
4. The method of claim 1, wherein performing information filtering on the environment image to obtain a first target image according to environment information in the target area comprises:
and carrying out shielding, blurring or replacing processing on other information except the environmental information in the target area in the environmental image to obtain a first target image.
5. The method of any of claims 1-4, further comprising:
positioning the position and the posture of the autonomous mobile equipment according to the existing environment map and the environment image;
determining a field of view range of the vision sensor according to the position and the posture of the autonomous mobile device;
and if the target partition reference object is within the field of view of the visual sensor, determining that the environment image contains the target partition reference object.
6. The method of claim 5, wherein determining whether the target-zone reference falls within a field of view of the vision sensor comprises:
inputting the environment image into an image recognition model to detect a target partition reference object, and acquiring a coordinate of the target partition reference object in the environment image as a first coordinate when the environment image is detected to contain the target partition reference object;
converting the world coordinates of the known target partition reference object into a sensor coordinate system to obtain second coordinates, and calculating third coordinates according to the first coordinates and the second coordinates;
and if the third coordinate is located in the field of view of the visual sensor, determining that the target partition reference object is located in the field of view of the visual sensor.
7. The method according to any one of claims 1-4, further comprising:
receiving configuration information sent by terminal equipment, wherein the configuration information comprises identification information of a target area configured by a user, and determining the identification information of a target partition reference object according to the identification information of the target area;
wherein the identification information of the target area or the target partition reference object is semantic information or position information of the target area or the target partition reference object in an environment map.
8. The method of any of claims 1-4, further comprising:
a target area and a target partitioned reference object in the work environment are identified based on at least one of the set area feature and the partitioned reference object feature, and the target area and the target partitioned reference object are marked in the environment map.
9. An autonomous mobile device, comprising: the device comprises a device body, wherein a memory and a processor are arranged on the device body;
the memory for storing a computer program;
the processor, coupled with the memory, to execute the computer program to:
in the operation process, a vision sensor is used for acquiring an environment image;
identifying environmental information in a target area included in the environmental image when a target partition reference object is included in the environmental image;
according to the environment information in the target area, performing information filtering on the environment image to obtain a first target image;
and transmitting the first target image to a terminal device bound with the autonomous mobile device, wherein the target area is a functional area divided by the target partition reference object in the working environment.
10. A computer-readable storage medium, in which a computer program is stored which, when being executed by a processor, causes the processor to carry out the steps of the method according to any one of claims 1 to 8.
CN202011233103.4A 2020-11-06 2020-11-06 Information transmission method, device and storage medium Pending CN114531567A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202011233103.4A CN114531567A (en) 2020-11-06 2020-11-06 Information transmission method, device and storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202011233103.4A CN114531567A (en) 2020-11-06 2020-11-06 Information transmission method, device and storage medium

Publications (1)

Publication Number Publication Date
CN114531567A true CN114531567A (en) 2022-05-24

Family

ID=81619377

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202011233103.4A Pending CN114531567A (en) 2020-11-06 2020-11-06 Information transmission method, device and storage medium

Country Status (1)

Country Link
CN (1) CN114531567A (en)

Cited By (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN115268471A (en) * 2022-09-27 2022-11-01 深圳市智绘科技有限公司 Route generation method and device of sweeping robot and storage medium
CN115268471B (en) * 2022-09-27 2023-01-03 深圳市智绘科技有限公司 Route generation method and device of sweeping robot and storage medium
CN116091607A (en) * 2023-04-07 2023-05-09 科大讯飞股份有限公司 Method, device, equipment and readable storage medium for assisting user in searching object
CN116091607B (en) * 2023-04-07 2023-09-26 科大讯飞股份有限公司 Method, device, equipment and readable storage medium for assisting user in searching object

Similar Documents

Publication Publication Date Title
CN110338708B (en) Sweeping control method and device of sweeping robot
CN106395198B (en) The control method and device of intelligent garbage bin
EP3424395B1 (en) Method and apparatus for performing cleaning operation by cleaning device
JP6186517B2 (en) Task setting method, apparatus, program, and recording medium
US20170263091A1 (en) Virtual enhancement of security monitoring
CN106737709B (en) Cleaning method and device
CA3050098A1 (en) Dynamic drone navigation
KR20140126539A (en) Moving robot, user terminal apparatus, and control method thereof
CN114531567A (en) Information transmission method, device and storage medium
CN104769962A (en) Environmental management systems including mobile robots and methods using same
EP3738724B1 (en) Mobile robot and method for controlling mobile robot
KR20180039437A (en) Cleaning robot for airport and method thereof
US11457183B2 (en) Dynamic video exclusion zones for privacy
EP3398029B1 (en) Intelligent smart room control system
KR101162982B1 (en) Remote control system for Robot
KR20130060952A (en) Remote control system for service robot
KR20180039436A (en) Cleaning robot for airport and method thereof
CN114158980A (en) Job method, job mode configuration method, device, and storage medium
CA3110072A1 (en) Assisted creation of video rules via scene analysis
CN111343696A (en) Communication method of self-moving equipment, self-moving equipment and storage medium
KR102623190B1 (en) Artificial intelligence device and artificial intelligence system for caring air state of indoor
CN109976177B (en) Intelligent household security robot with face recognition function
CN111338330A (en) Job position determination method, self-moving device, and storage medium
CN111830998B (en) Operation method, virtual wall adding method, equipment and storage medium
CN114332289A (en) Environment map construction method, equipment and storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination