WO2023279697A1 - Pet care method and apparatus, electronic device, and storage medium - Google Patents

Pet care method and apparatus, electronic device, and storage medium

Info

Publication number
WO2023279697A1
Authority
WO
WIPO (PCT)
Prior art keywords
pet
target
image
care
mobile device
Application number
PCT/CN2022/071840
Other languages
English (en)
French (fr)
Inventor
刘家铭
张展鹏
吴华栋
谭龙欢
成慧
Original Assignee
上海商汤智能科技有限公司
Application filed by 上海商汤智能科技有限公司
Publication of WO2023279697A1

Classifications

    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06F - ELECTRIC DIGITAL DATA PROCESSING
    • G06F16/00 - Information retrieval; Database structures therefor; File system structures therefor
    • G06F16/50 - Information retrieval; Database structures therefor; File system structures therefor of still image data
    • G06F16/51 - Indexing; Data structures therefor; Storage structures
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06F - ELECTRIC DIGITAL DATA PROCESSING
    • G06F16/00 - Information retrieval; Database structures therefor; File system structures therefor
    • G06F16/70 - Information retrieval; Database structures therefor; File system structures therefor of video data
    • G06F16/71 - Indexing; Data structures therefor; Storage structures
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06F - ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00 - Pattern recognition
    • G06F18/20 - Analysing
    • G06F18/24 - Classification techniques
    • G06F18/241 - Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06N - COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 - Computing arrangements based on biological models
    • G06N3/02 - Neural networks
    • G06N3/04 - Architecture, e.g. interconnection topology
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06N - COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 - Computing arrangements based on biological models
    • G06N3/02 - Neural networks
    • G06N3/08 - Learning methods

Definitions

  • The present disclosure relates to the technical field of robots, and in particular to a pet care method and apparatus, an electronic device, and a storage medium.
  • Embodiments of the present disclosure provide at least a pet care method and apparatus, an electronic device, and a storage medium.
  • In a first aspect, an embodiment of the present disclosure provides a pet care method, including: acquiring at least one environment image captured of the current environment by an autonomous mobile device; performing target detection on the environment image to obtain a target detection result, the target detection result including a target object contained in the environment image and object attribute information corresponding to the target object; determining, according to the target detection result, a target image that includes a pet object among the target objects; and performing a pet care task according to the target image.
  • In the embodiments of the present disclosure, at least one environment image captured of the current environment by the autonomous mobile device may be acquired, target detection may be performed on the environment image to obtain a target detection result, a target image including a pet object among the target objects may then be determined according to the target detection result, and a pet care task may be performed according to the target image.
  • In this way, pet care is carried out by the autonomous mobile device, which can reduce accidents caused by the owner being away from home for a long time.
  • According to the first aspect, in a possible implementation, the autonomous mobile device includes a cleaning robot, and the method further includes: determining, according to the target detection result, a target image that includes an item object among the target objects, and controlling the autonomous mobile device to move around the item object according to the object attribute information corresponding to the item object.
  • In the embodiments of the present disclosure, a cleaning robot is used to perform the pet care task, which reduces cost compared with a dedicated pet companion robot.
  • According to the first aspect, in a possible implementation, the performing a pet care task according to the target image includes: identifying state information of the pet object according to the object attribute information corresponding to the pet object in the target image; and performing the pet care task based on the state information of the pet object.
  • In the embodiments of the present disclosure, the pet care task is performed according to the state information of the pet object, so that the executed care task matches the current state of the pet object, improving the effectiveness of the care task.
  • According to the first aspect, in a possible implementation, the performing a pet care task based on the state information of the pet object includes: screening, according to the state information of the pet object, the target images corresponding to pet objects whose state information satisfies a first preset active condition; and generating a pet photo album based on the screened target images.
  • In the embodiments of the present disclosure, generating a pet photo album from target images whose state information satisfies the first preset active condition not only records the pet object's best moments but also improves the quality of the pet photo album.
  • According to the first aspect, in a possible implementation, the generating a pet photo album based on the screened target images includes: generating the pet photo album after performing at least one beautification operation on the screened target images.
  • In the embodiments of the present disclosure, because at least one beautification operation is also performed on the screened target images during album generation, the generated pet photo album is more engaging.
  • According to the first aspect, in a possible implementation, the performing a pet care task based on the state information of the pet object includes: acquiring positioning position information of the pet object when the state information indicates that the pet object satisfies a second preset active condition; and moving to the position of the pet object based on the positioning position information, to perform an accompanying task.
  • In the embodiments of the present disclosure, when the state information indicates that the pet object satisfies the second preset active condition, the device can move to the position of the pet object and interact with the pet, achieving the effect of keeping the pet company.
  • According to the first aspect, in a possible implementation, the performing a pet care task based on the state information of the pet object includes: sending reminder information when the state information indicates that the pet object satisfies a preset depressed condition.
  • In the embodiments of the present disclosure, a reminder can thus be sent to alert the owner, enabling effective monitoring of the pet object and reducing accidents caused by the pet object remaining in a depressed state for a long time.
  • According to the first aspect, in a possible implementation, the object attribute information includes position information of the pet object in the target image, and the identifying the state information of the pet object according to the object attribute information corresponding to the pet object in the target image includes: determining a first evaluation score based on the position information of the pet object in the target image and the quality of the target image; determining a second evaluation score based on the target detection results respectively corresponding to at least one target image, the second evaluation score being used to measure the emotional state of the pet object; and determining the state information of the pet object according to the first evaluation score and the second evaluation score corresponding to the at least one target image.
  • In the embodiments of the present disclosure, determining the state information of the pet object through the first evaluation score and the second evaluation score can improve the accuracy of determining the pet's state information.
  • According to the first aspect, in a possible implementation, the determining the second evaluation score based on the target detection results respectively corresponding to at least one target image includes: determining the second evaluation score based on the position change, posture change, and expression change of the pet object in the target detection results respectively corresponding to the at least one target image.
  • According to the first aspect, in a possible implementation, when the target image includes multiple images, the method further includes: aggregating associated target images among the multiple target images based on the first evaluation score and the second evaluation score, to obtain at least one aggregated image group; and the determining the state information of the pet object according to the first evaluation score and the second evaluation score corresponding to the at least one target image includes: determining the state information of the pet object according to the first evaluation score and the second evaluation score corresponding to the at least one aggregated image group.
  • In the embodiments of the present disclosure, aggregating the associated target images among the multiple target images into at least one aggregated image group realizes segment-by-segment determination of the pet object's state information and further improves the accuracy of the determination.
  • According to the first aspect, in a possible implementation, the determining the state information of the pet object according to the first evaluation score and the second evaluation score corresponding to the at least one aggregated image group includes:
  • determining that the state information of the pet object satisfies the first preset active condition when the first evaluation score corresponding to an aggregated image group is greater than a first score threshold and the second evaluation score is greater than a second score threshold; or determining that the state information of the pet object satisfies the second preset active condition when the second evaluation scores corresponding to multiple consecutive aggregated image groups are all greater than a third score threshold; or determining that the state information of the pet object satisfies the preset depressed condition when the second evaluation scores corresponding to multiple consecutive aggregated image groups are all less than a fourth score threshold.
  • According to the first aspect, in a possible implementation, the acquiring positioning position information of the pet object includes: determining the positioning position information of the pet object based on the pet object's coordinate information in the image coordinate system corresponding to the target image and the parameter information corresponding to the autonomous mobile device.
  • The moving to the position of the pet object based on the positioning position information to perform an accompanying task includes: controlling the autonomous mobile device to move toward the pet object based on the pose information corresponding to the autonomous mobile device and the positioning position information of the pet object, to perform the accompanying task.
  • In the implementation of the present disclosure, the pose information and the coordinate-system transformations enable the autonomous mobile device to move toward the pet object with high precision.
  • According to the first aspect, in a possible implementation, the method further includes: controlling the autonomous mobile device to perform a corresponding action in response to a teasing instruction issued by the user, to interact with the pet object; and/or receiving a multimedia file delivered by the user and playing the multimedia file through the autonomous mobile device, to interact with the pet object.
  • The multimedia file includes at least one of the following: an audio file, a video file.
  • In the embodiments of the present disclosure, interaction with the pet object can also be realized in response to the user's instructions, providing more effective care and companionship and improving the quality of care.
  • In a second aspect, an embodiment of the present disclosure provides a pet care method, including: displaying, in response to a user's viewing instruction for a target image, at least one environment image captured of the current environment by an autonomous mobile device, the target image being an image that includes a pet object among the at least one environment image; and receiving a target care instruction issued by the user and sending the target care instruction to the autonomous mobile device, so as to control the autonomous mobile device to perform a pet care task.
  • In the embodiments of the present disclosure, the user can issue a target care instruction to the autonomous mobile device to achieve pet care, and can issue targeted care instructions based on the pet-object images collected by the device, which improves the pertinence of the care instructions and reduces accidents caused by the owner being away from home for a long time.
  • In a third aspect, an embodiment of the present disclosure provides a pet care device, including:
  • an image acquisition part, configured to acquire multiple environment images captured of the current environment by an autonomous mobile device;
  • a target detection part, configured to perform target detection on the environment images to obtain a target detection result, the target detection result including a target object contained in the environment images and object attribute information corresponding to the target object, where the target object includes a pet object;
  • an image determination part, configured to determine, according to the target detection result, a target image that includes a pet object among the target objects; and
  • a task execution part, configured to perform a pet care task according to the target image.
  • According to the third aspect, in a possible implementation, the task execution part is further configured to: determine, according to the target detection result, a target image that includes an item object among the target objects, and control the cleaning robot to move around the item object according to the object attribute information corresponding to the item object.
  • According to the third aspect, in a possible implementation, the task execution part is further configured to: identify state information of the pet object according to the object attribute information corresponding to the pet object in the target image; and perform the pet care task based on the state information of the pet object.
  • According to the third aspect, in a possible implementation, the task execution part is further configured to: screen, according to the state information of the pet object, the target images corresponding to pet objects whose state information satisfies the first preset active condition; and generate a pet photo album based on the screened target images.
  • According to the third aspect, in a possible implementation, the task execution part is further configured to: generate the pet photo album after performing at least one beautification operation on the screened target images.
  • According to the third aspect, in a possible implementation, the task execution part is further configured to: acquire positioning position information of the pet object when the state information indicates that the pet object satisfies the second preset active condition; and move to the position of the pet object based on the positioning position information, to perform an accompanying task.
  • According to the third aspect, in a possible implementation, the task execution part is further configured to: send reminder information when the state information indicates that the pet object satisfies the preset depressed condition.
  • According to the third aspect, in a possible implementation, the object attribute information includes position information of the pet object in the target image, and the task execution part is further configured to: determine a first evaluation score based on the position information of the pet object in the target image and the quality of the target image; determine a second evaluation score based on the target detection results respectively corresponding to at least one target image, the second evaluation score being used to measure the emotional state of the pet object; and determine the state information of the pet object according to the first evaluation score and the second evaluation score corresponding to the at least one target image.
  • According to the third aspect, in a possible implementation, the task execution part is further configured to: determine the second evaluation score based on the position change, posture change, and expression change of the pet object in the target detection results respectively corresponding to the at least one target image.
  • According to the third aspect, in a possible implementation, when the target image includes multiple images, the task execution part is further configured to: aggregate associated target images among the multiple target images based on the first evaluation score and the second evaluation score, to obtain at least one aggregated image group; and determine the state information of the pet object according to the first evaluation score and the second evaluation score corresponding to the at least one aggregated image group.
  • According to the third aspect, in a possible implementation, the task execution part is further configured to: determine that the state information of the pet object satisfies the first preset active condition when the first evaluation score corresponding to an aggregated image group is greater than a first score threshold and the second evaluation score is greater than a second score threshold; or determine that the state information of the pet object satisfies the second preset active condition when the second evaluation scores corresponding to multiple consecutive aggregated image groups are all greater than a third score threshold; or determine that the state information of the pet object satisfies the preset depressed condition when the second evaluation scores corresponding to multiple consecutive aggregated image groups are all less than a fourth score threshold.
  • According to the third aspect, in a possible implementation, the task execution part is further configured to: determine the positioning position information of the pet object based on the pet object's coordinate information in the image coordinate system corresponding to the target image and the parameter information corresponding to the autonomous mobile device; and control the autonomous mobile device to move toward the pet object based on the pose information corresponding to the autonomous mobile device and the positioning position information of the pet object, to perform the accompanying task.
  • According to the third aspect, in a possible implementation, the task execution part is further configured to: control the autonomous mobile device to perform a corresponding action in response to a teasing instruction issued by the user, to interact with the pet object; and/or receive a multimedia file delivered by the user and play the multimedia file through the autonomous mobile device, to interact with the pet object, where the multimedia file includes at least one of the following: an audio file, a video file.
  • In a fourth aspect, an embodiment of the present disclosure provides a pet care device, including:
  • an image display part, configured to display, in response to a user's viewing instruction for a target image, at least one environment image captured of the current environment by an autonomous mobile device;
  • the target image being an image that includes a pet object among the at least one environment image; and
  • an instruction issuing part, configured to receive the target care instruction issued by the user and send the target care instruction to the autonomous mobile device, so as to control the autonomous mobile device to perform pet care tasks.
  • In a fifth aspect, an embodiment of the present disclosure provides an electronic device, including a processor, a memory, and a bus; the memory stores machine-readable instructions executable by the processor, and when the electronic device runs, the processor communicates with the memory through the bus; when the machine-readable instructions are executed by the processor, the steps of the pet care method described in the first aspect or the second aspect are performed.
  • In a sixth aspect, an embodiment of the present disclosure provides a computer-readable storage medium on which a computer program is stored; when the computer program is run by a processor, the steps of the pet care method described in the first aspect or the second aspect are performed.
  • In a seventh aspect, an embodiment of the present disclosure provides a computer program including computer-readable code; when the computer-readable code runs in an electronic device, a processor in the electronic device performs the steps of the pet care method described in the first aspect or the second aspect.
  • Fig. 1 shows a flowchart of a pet care method provided by an embodiment of the present disclosure;
  • Fig. 2 shows a schematic diagram of an implementation environment of a pet care method provided by an embodiment of the present disclosure;
  • Fig. 3 shows a schematic diagram of an environment image marked with detection frames provided by an embodiment of the present disclosure;
  • Fig. 4 shows a flowchart of a method for performing a pet care task according to a target image provided by an embodiment of the present disclosure;
  • Fig. 5 shows a flowchart of a method for determining state information of a pet object provided by an embodiment of the present disclosure;
  • Fig. 6 shows a flowchart of a method for performing a pet care task based on the state information of a pet object provided by an embodiment of the present disclosure;
  • Fig. 7 shows a flowchart of another method for performing a pet care task based on the state information of a pet object provided by an embodiment of the present disclosure;
  • Fig. 8 shows a flowchart of yet another method for performing a pet care task based on the state information of a pet object provided by an embodiment of the present disclosure;
  • Fig. 9 shows a flowchart of another pet care method provided by an embodiment of the present disclosure;
  • Fig. 10 shows a schematic structural diagram of a pet care device provided by an embodiment of the present disclosure;
  • Fig. 11 shows a schematic structural diagram of another pet care device provided by an embodiment of the present disclosure;
  • Fig. 12 shows a schematic diagram of an electronic device provided by an embodiment of the present disclosure.
  • Based on the above research, the present disclosure provides a pet care method that can acquire at least one environment image captured of the current environment by an autonomous mobile device, perform target detection on the environment image to obtain a target detection result, determine from the target detection result a target image that includes a pet object among the target objects, and then perform a pet care task according to the target image.
  • In this way, pet care is carried out by the autonomous mobile device, which can reduce accidents caused by the owner being away from home for a long time.
  • Referring to Fig. 1, the pet care method includes the following S101 to S104:
  • S101 Acquire at least one environment image captured of the current environment by an autonomous mobile device.
  • Fig. 2 is a schematic diagram of an implementation environment of the pet care method provided by an embodiment of the present disclosure.
  • The execution subject of the pet care method provided by the embodiments of the present disclosure may be the autonomous mobile device 100, or a server 200 capable of communicating with the autonomous mobile device 100.
  • In general, the server 200 may maintain a communication connection with the autonomous mobile device 100 or establish one when data transmission is required, which is not limited here.
  • The pet care method provided by the embodiments of the present disclosure may also be implemented by a processor executing a computer program.
  • Exemplarily, the autonomous mobile device 100 may include a cleaning robot, a mobile robot, and the like.
  • In the embodiments of the present disclosure, the autonomous mobile device 100 is described taking a cleaning robot as an example; the cleaning robot is a kind of smart household appliance that can automatically clean the floor of a room by virtue of certain artificial intelligence.
  • In other embodiments, the autonomous mobile device 100 may also be another type of mobile robot, such as a lawn-mowing robot, as long as it can move autonomously and perform the corresponding work.
  • The server 200 may be an independent physical server, a server cluster or distributed system composed of multiple physical servers, or a cloud server providing basic cloud computing services such as cloud services, cloud databases, cloud computing, cloud storage, big data, and artificial-intelligence platforms.
  • The autonomous mobile device 100 may be equipped with a camera (not shown in the figure) for capturing images of the current environment of the autonomous mobile device 100 during its autonomous movement.
  • For example, the camera may be an RGB camera.
  • In addition, the autonomous mobile device 100 may also be equipped with a lidar, a depth sensor, an ultrasonic sensor, and the like.
  • In addition, the autonomous mobile device 100 can also communicate with the mobile terminal 300 and accept instructions issued by the mobile terminal 300 to perform the corresponding work.
  • The mobile terminal 300 includes, but is not limited to, a mobile phone, a tablet computer, and a wearable device.
  • For example, the autonomous mobile device 100 may accept a cleaning instruction issued by the mobile terminal 300 and complete the cleaning task; that is, the user can interact with the autonomous mobile device 100 by remote control.
  • S102 Perform target detection on the environment image to obtain a target detection result, where the target detection result includes a target object contained in the environment image and object attribute information corresponding to the target object.
  • Exemplarily, a multi-object detection neural network may be used to perform target detection on the environment image to obtain the target detection result.
  • In the embodiments of the present disclosure, the multi-object detection neural network can perform target detection on objects of different categories in the environment.
  • For example, the multi-object detection neural network can not only detect item objects in the environment but also detect pet objects in the environment.
  • Here, item objects are common household articles, i.e. static objects such as vases on the floor, tables, chairs, and stools; pet objects are small animals kept at home, i.e. dynamic objects such as kittens, puppies, and birds.
  • Exemplarily, referring to Fig. 3, a schematic diagram of a target detection result of an environment image provided by an embodiment of the present disclosure: in the actual detection process, once an acquired environment image is input to the multi-object detection neural network, the corresponding target detection result can be output.
  • As shown in Fig. 3, the target detection result includes the detected target object dog 11 and target object water bottle 12, shown with corresponding detection frames.
  • In addition, to provide more detailed detection information for subsequent judgment, the target detection result includes not only the target object but also the attribute information corresponding to the target object.
  • The corresponding attribute information includes, but is not limited to, the category of the target object, the detection frame, the tracking target, the confidence, and the like.
  • The category of the target object is used to indicate whether the target object is an item object or a pet object.
  • The detection frame is used to indicate the position of the target object in the current environment.
  • The tracking target is used to uniformly identify target objects belonging to the same physical object, so as to distinguish it from other objects.
  • The confidence is used to measure the reliability of the current detection result.
  • Exemplarily, the multi-object detection neural network can be obtained through self-supervised training. For example, a number of images or videos of household and pet scenes can be collected as training data, and the labeled training data can then be input to the target network for multiple rounds of training to obtain a multi-object detection neural network that meets the requirements.
  • In the embodiments of the present disclosure, because a deep neural network performing statistical learning is used rather than the traditional method of object recognition based on hand-designed feature points, the approach has a degree of adaptability and generalization and can solve the problem of item and pet recognition in unknown household scenes.
  • At the same time, the multi-object detection neural network combines item recognition and pet recognition in a single network structure, reusing parameters and computation results, which increases computational efficiency and reduces performance overhead.
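  • The patent does not specify a concrete data layout for the detection result; purely as a rough illustration, the following Python sketch models one detected target object with the four attribute fields named above (category, detection frame, tracking target, confidence) and shows how target images containing a pet object could be selected. All names and the confidence cutoff are hypothetical.

```python
from dataclasses import dataclass
from typing import List, Tuple

@dataclass
class TargetDetection:
    """One target object from the detection result, with its attribute information."""
    category: str                    # "pet" or "item"
    box: Tuple[int, int, int, int]   # detection frame: (x_min, y_min, x_max, y_max) in pixels
    track_id: int                    # tracking target: stable ID for the same physical object
    confidence: float                # reliability of this detection, in [0, 1]

def select_pet_images(detections_per_image: List[List[TargetDetection]],
                      min_confidence: float = 0.5) -> List[int]:
    """Return the indices of environment images that contain a pet object."""
    return [
        i for i, detections in enumerate(detections_per_image)
        if any(d.category == "pet" and d.confidence >= min_confidence for d in detections)
    ]
```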
  • S103 Determine, according to the target detection result, a target image that includes a pet object among the target objects.
  • Exemplarily, the target image including a pet object among the target objects can be determined according to the category of the target object in the target detection result.
  • In some implementations, a target image including an item object among the target objects can also be determined according to the category of the target object, and the autonomous mobile device can be controlled to move around the item object according to the object attribute information corresponding to the item object, enabling obstacle avoidance while the device is traveling.
  • S104 Perform a pet care task according to the target image.
  • In a possible implementation, performing the pet care task according to the pet's target image may mean tracking and photographing the pet, or raising an alarm when the pet is in an abnormal state.
  • In some embodiments, the pet's position can be determined from the target image, the autonomous mobile device can be controlled to track and photograph the pet based on that position, and whether the pet's state is abnormal can also be determined from the target image.
  • Exemplarily, referring to Fig. 4, in some implementations, performing the pet care task according to the target image may include the following S1041 to S1042:
  • S1041 Identify state information of the pet object according to the object attribute information corresponding to the pet object in the target image.
  • S1042 Perform the pet care task based on the state information of the pet object.
  • It can be understood that the state information of the pet object, for example whether the pet is currently in an active state or a depressed state, can be identified from the object attribute information corresponding to the pet object in the target image, and the corresponding pet care task can then be performed based on that state information. Different state information can correspond to different care tasks, so that the task currently performed matches the pet's current state and the care task is carried out better.
  • In some implementations, the object attribute information includes the position information of the pet object in the target image, that is, the position information of the detection frame in the target image. Therefore, for S1041 above, identifying the state information of the pet object according to the object attribute information corresponding to the pet object in the target image may, as shown in Fig. 5, include the following S10411 to S10413:
  • S10411 Determine a first evaluation score based on the position information of the pet object in the target image and the quality of the target image.
  • Here, the quality of the target image refers to its sharpness, and the position information of the pet object in the target image can be determined from the position of the detection frame in the target image.
  • In the embodiments of the present disclosure, the clearer the target image, the more completely the detection frame falls within the image, and the closer the detection frame is to the center of the image, the higher the first evaluation score.
  • In some embodiments, a first base score can be determined from the quality of the target image, and a second base score can be determined from the position of the detection frame; the first evaluation score is then obtained by multiplying the first and second base scores by their corresponding weights and summing them.
  • In other implementations, to improve the accuracy of the first evaluation score, a decay term and a base constant can also be preset; the first evaluation score is then obtained by summing the weighted first and second base scores with the decay term and the base constant.
  • The decay term is used to measure the degree of attenuation between the current target image and the previously associated target image.
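  • As a minimal sketch of the weighted-sum scheme just described, the function below combines the two base scores with illustrative weights, a decay term, and a base constant; none of the numeric values come from the patent.

```python
def first_evaluation_score(quality_score: float,
                           position_score: float,
                           w_quality: float = 0.6,
                           w_position: float = 0.4,
                           decay: float = 0.0,
                           base_constant: float = 0.0) -> float:
    # First base score (image sharpness) and second base score (detection-frame
    # position) are weighted and summed; the decay term measures attenuation
    # relative to the previously associated target image. Weights are assumptions.
    return w_quality * quality_score + w_position * position_score + decay + base_constant
```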
  • S10412 Determine a second evaluation score based on the target detection results respectively corresponding to at least one target image, the second evaluation score being used to measure the emotional state of the pet object.
  • Exemplarily, the second evaluation score may be determined based on the position change, posture change, and expression change of the pet object in the target detection results respectively corresponding to the at least one target image.
  • In some embodiments, a third base score can be determined from the pet object's emotion in the target image; for example, if the pet object's eyes are wide open in the target image, the third base score can be higher. A fourth base score is determined from the pet object's position relative to the environment, and the second evaluation score is obtained by multiplying the third and fourth base scores by their corresponding weights and summing them. Likewise, a decay term and a base constant can be set to improve the accuracy of the second evaluation score.
  • In the embodiments of the present application, a deep learning network can score the images according to the pet object's attribute information to obtain the first evaluation score and the second evaluation score.
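  • Analogously, a sketch of the second evaluation score under the same assumptions (illustrative weights, optional decay term and base constant):

```python
def second_evaluation_score(emotion_score: float,
                            location_score: float,
                            w_emotion: float = 0.7,
                            w_location: float = 0.3,
                            decay: float = 0.0,
                            base_constant: float = 0.0) -> float:
    # Third base score (emotion, e.g. higher when the pet's eyes are wide open)
    # and fourth base score (position relative to the environment), weighted
    # and summed as above. All numeric values are illustrative assumptions.
    return w_emotion * emotion_score + w_location * location_score + decay + base_constant
```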
  • S10413 Determine the state information of the pet object according to the first evaluation score and the second evaluation score corresponding to the at least one target image.
  • In some implementations, to facilitate subsequent pet care tasks, when the target image includes multiple images, associated target images among them may also be aggregated based on the first evaluation score and the second evaluation score to obtain at least one aggregated image group. That is, according to the first and second evaluation scores of the time-ordered target images, the associated target images are further aggregated into corresponding videos, integrating the multiple target images into segmented, structured data represented in multiple forms such as pictures and video. Here, it can be judged in time order whether the evaluation scores of each target image satisfy an association condition; when the evaluation scores of multiple consecutive target images satisfy the association condition, those consecutive images are treated as associated target images and form one aggregated image group.
  • In this way, the state information of the pet object can be determined according to the first and second evaluation scores corresponding to the at least one aggregated image group, realizing segment-by-segment identification of the pet object's state information and improving the accuracy of state recognition.
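  • The patent leaves the exact association condition open; the sketch below assumes one plausible condition (both scores of consecutive time-ordered images staying within a fixed gap) purely for illustration.

```python
from typing import List, Tuple

def aggregate_image_groups(scores: List[Tuple[float, float]],
                           max_gap: float = 0.2) -> List[List[int]]:
    """Group time-ordered target images into aggregated image groups.

    scores[i] is (first_evaluation_score, second_evaluation_score) of image i.
    Consecutive images join the current group while both of their scores stay
    within max_gap of the previous image's scores (an assumed condition).
    """
    groups: List[List[int]] = []
    for i, (s1, s2) in enumerate(scores):
        if groups:
            prev_s1, prev_s2 = scores[groups[-1][-1]]
            if abs(s1 - prev_s1) <= max_gap and abs(s2 - prev_s2) <= max_gap:
                groups[-1].append(i)
                continue
        groups.append([i])
    return groups
```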
  • Exemplarily, when the first evaluation score corresponding to an aggregated image group is greater than a first score threshold and the second evaluation score is greater than a second score threshold, it is determined that the state information of the pet object satisfies the first preset active condition.
  • Here, the first score threshold and the second score threshold may be set according to the actual situation.
  • In some embodiments, the first score threshold can be set according to the pet object's size; for example, if the pet object is large, viewing is not affected even when the quality of the target image is slightly worse, whereas a smaller pet object requires a higher first score threshold.
  • In addition, different second score thresholds can be set according to the pet object's species or habits; for example, for a cat, which is relatively quiet, the second score threshold can be set slightly lower, while for a dog, which is more active, it can be set slightly higher.
  • When the first evaluation score is greater than the first score threshold and the second evaluation score is greater than the second score threshold, the target images in the aggregated image group are of high quality and the pet object's emotional state is good; the target images in the group can therefore be added to the pet photo album for the user to view.
  • Therefore, in the embodiments of the present disclosure, referring to Fig. 6, performing the pet care task based on the state information of the pet object may include: S1042a, screening, according to the state information of the pet object, the target images corresponding to pet objects whose state information satisfies the first preset active condition; and S1042b, generating a pet photo album based on the screened target images.
  • Exemplarily, to make the generated pet photo album more engaging, the pet photo album may be generated after at least one beautification operation is performed on the screened target images.
  • For example, templates in multiple styles can be provided, and corresponding cartoon stickers can be generated according to the pet object's position and posture.
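  • A minimal sketch of the album step, assuming the state screening has already been done and treating the beautification operation (template or sticker) as a caller-supplied function; the patent does not prescribe an implementation for either.

```python
from typing import Callable, List, Optional, TypeVar

Image = TypeVar("Image")

def generate_pet_album(images: List[Image],
                       satisfies_first_active: List[bool],
                       beautify: Optional[Callable[[Image], Image]] = None) -> List[Image]:
    # Keep target images whose state information satisfies the first preset
    # active condition, apply the optional beautification operation (e.g. a
    # style template or cartoon sticker), and return the resulting album.
    return [
        beautify(image) if beautify else image
        for image, keep in zip(images, satisfies_first_active) if keep
    ]
```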
  • Exemplarily, when the second evaluation scores corresponding to multiple consecutive aggregated image groups are all greater than a third score threshold, it is determined that the state information of the pet object satisfies the second preset active condition. The third score threshold is similar to the second score threshold and can likewise be set according to the actual situation, which is not repeated here.
  • It can be understood that in this case the pet object's mood during that period is lively and relatively stable, so the device can interact with the pet object, for example by controlling the autonomous mobile device to play sounds and to move and chase.
  • Therefore, in the embodiments of the present disclosure, referring to Fig. 7, performing the pet care task based on the state information of the pet object may include: S1042c, acquiring positioning position information of the pet object when the state information indicates that the pet object satisfies the second preset active condition; and S1042d, moving to the position of the pet object based on the positioning position information to perform an accompanying task.
  • In some embodiments, the positioning position information of the pet object may be determined based on the pet object's coordinate information in the image coordinate system corresponding to the target image and the parameter information corresponding to the autonomous mobile device; based on the pose information corresponding to the autonomous mobile device and the positioning position information of the pet object, the autonomous mobile device is controlled to move toward the pet object to perform the accompanying task.
  • Here, the parameter information corresponding to the autonomous mobile device may include intrinsic parameters for converting the image coordinate system to the camera coordinate system, and extrinsic parameters for converting the camera coordinate system to the world coordinate system.
  • Exemplarily, the pose information of the autonomous mobile device in the world coordinate system corresponding to the real scene can be determined based on the environment images captured by the device and a pre-built three-dimensional scene map representing the real scene.
  • In the world coordinate system, the three-dimensional scene map representing the real scene can coincide completely with the real scene, so the device's pose in the world coordinate system can be determined from the captured environment images and the three-dimensional scene map.
  • In some embodiments, the pose information of the autonomous mobile device in the world coordinate system may include the device's position coordinates in the world coordinate system and its heading angle, where the heading angle can be expressed as the angle with a coordinate axis of the world coordinate system.
  • In addition, based on the pixel coordinates of the pet object's detection frame in the image coordinate system, the pixel coordinates of the frame's center point can be taken as the pet object's two-dimensional detection information in the image coordinate system; combined with the intrinsic parameters corresponding to the autonomous mobile device (stored in advance), the pet object's coordinate values along the X and Y axes of the camera coordinate system corresponding to the device are determined; further combined with the extrinsic parameters corresponding to the device (which can be determined from the device's pose in the world coordinate system corresponding to the real scene), the pet object's position in the world coordinate system is determined. In this way, the autonomous mobile device can be controlled to move toward the pet object to perform the accompanying task, emitting corresponding sounds while it approaches the pet object.
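  • The coordinate chain above (image frame to camera frame via intrinsics, camera frame to world frame via extrinsics) can be sketched as follows. The patent only derives the X and Y camera coordinates of the detection-frame center, so the depth value used here to complete the back-projection is an assumption, e.g. taken from the depth sensor the device may carry.

```python
import numpy as np

def pet_world_position(box_center_px: tuple, depth_m: float,
                       K: np.ndarray, T_world_from_camera: np.ndarray) -> np.ndarray:
    """Back-project the detection-frame center into the world coordinate system.

    box_center_px: (u, v) pixel coordinates of the detection-frame center.
    depth_m: assumed depth of the pet along the camera Z axis (not specified
             by the patent; see the note above).
    K: 3x3 intrinsic matrix; T_world_from_camera: 4x4 extrinsic pose of the
       camera in the world frame, derived from the device pose in the scene map.
    """
    u, v = box_center_px
    ray = np.linalg.inv(K) @ np.array([u, v, 1.0])   # image frame -> camera-frame ray
    p_camera = ray * depth_m                         # point in camera coordinates
    p_world = T_world_from_camera @ np.append(p_camera, 1.0)
    return p_world[:3]

def heading_toward(pet_xy: np.ndarray, device_xy: np.ndarray) -> float:
    # Heading angle (radians from the world X axis) for moving toward the pet,
    # matching the heading-angle convention described above.
    return float(np.arctan2(pet_xy[1] - device_xy[1], pet_xy[0] - device_xy[0]))
```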
  • Exemplarily, when the second evaluation scores corresponding to multiple consecutive aggregated image groups are all less than a fourth score threshold, it is determined that the state information of the pet object satisfies the preset depressed condition. The fourth score threshold is smaller than the third score threshold and, like it, can be set according to the actual situation, which is not limited here.
  • It can be understood that if the second evaluation scores corresponding to multiple consecutive aggregated image groups are all less than the fourth score threshold, the scores are low and the pet object can be determined to be in a quiet state during that period; target pictures with higher first evaluation scores from that period can then be selected.
  • The selected image-and-text information is pushed remotely to the pet's owner, so that the owner can learn the pet object's current state in time.
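  • Pulling the three conditions together, a sketch of the state decision over aggregated image groups; the thresholds and the length assumed for "multiple consecutive" groups are placeholders, not values from the patent.

```python
from typing import List, Optional, Tuple

def pet_state(group_scores: List[Tuple[float, float]],
              t1: float, t2: float, t3: float, t4: float,
              window: int = 3) -> Optional[str]:
    """Map aggregated-image-group scores to the pet object's state information.

    group_scores: time-ordered (first_score, second_score) per aggregated group.
    Requires t4 < t3; window is an assumed count for "multiple consecutive".
    """
    s1, s2 = group_scores[-1]
    if s1 > t1 and s2 > t2:
        return "first_active"        # album-worthy moment (S1042a/S1042b)
    recent = [s for _, s in group_scores[-window:]]
    if len(recent) == window and all(s > t3 for s in recent):
        return "second_active"       # playful: move to the pet and accompany it
    if len(recent) == window and all(s < t4 for s in recent):
        return "depressed"           # quiet for too long: remind the owner
    return None
```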
  • In addition, the owner can remotely view real-time images and videos of the pet object through the mobile terminal 300 in Fig. 2, and can further issue corresponding instructions or multimedia files through the terminal to interact with the pet object. Therefore, referring to Fig. 8, performing the pet care task based on the state information of the pet object may include the following S1042e to S1042g:
  • S1042e Send reminder information when the state information indicates that the pet object satisfies the preset depressed condition.
  • S1042f In response to a teasing instruction issued by the user, control the autonomous mobile device to perform a corresponding action to interact with the pet object; and/or,
  • S1042g Receive a multimedia file delivered by the user and play the multimedia file through the autonomous mobile device to interact with the pet object, where the multimedia file includes at least one of the following: an audio file, a video file.
  • In the embodiments of the present disclosure, the above S1042f and S1042g may also be executed at the same time, so as to perform diversified interactions with the pet object.
  • Referring to Fig. 9, a flowchart of another pet care method provided by an embodiment of the present disclosure: as shown in Fig. 9, the pet care method includes the following S201 to S202:
  • S201 In response to a user's viewing instruction for a target image, display at least one environment image captured of the current environment by the autonomous mobile device; the target image is an image that includes a pet object among the at least one environment image.
  • S202 Receive a target care instruction issued by the user and send the target care instruction to the autonomous mobile device, so as to control the autonomous mobile device to perform a pet care task.
  • Exemplarily, referring again to Fig. 2, the user can view, through the mobile terminal 300, at least one environment image captured of the current environment by the autonomous mobile device, observe the state of the pet object through the environment images, and issue a corresponding target care instruction on the mobile terminal 300, thereby interacting with the pet object remotely.
  • In the embodiments of the present disclosure, the autonomous mobile device can be controlled to perform pet care tasks through the target care instruction issued by the user, reducing accidents caused by the owner being away from home for a long time.
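  • The patent does not define a message format for the target care instruction; purely as an assumed illustration, the terminal side could package an instruction for the device like this (the JSON-header-plus-payload layout and the instruction kinds are hypothetical):

```python
import json

def build_care_instruction(kind: str, payload: bytes = b"") -> bytes:
    # Serialize one target care instruction (S202) for transmission to the
    # autonomous mobile device. "tease" would trigger a teasing action (S1042f);
    # "play_media" would carry an audio or video file to play (S1042g).
    header = json.dumps({"type": "care_instruction",
                         "kind": kind,
                         "payload_len": len(payload)}).encode("utf-8")
    return header + b"\n" + payload

# Example: ask the device to play an audio clip for the pet.
message = build_care_instruction("play_media", payload=b"<audio bytes>")
```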
  • Based on the same inventive concept, an embodiment of the present disclosure further provides a pet care device corresponding to the pet care method described above; since the principle by which the device solves the problem is similar to that of the pet care method in the embodiments of the present disclosure, the implementation of the device can refer to the implementation of the method, and repeated description is omitted.
  • Referring to Fig. 10, the pet care device 500 includes:
  • the image acquisition part 501, configured to acquire multiple environment images captured of the current environment by an autonomous mobile device;
  • the target detection part 502, configured to perform target detection on the environment images to obtain a target detection result, the target detection result including a target object contained in the environment images and object attribute information corresponding to the target object, where the target object includes a pet object;
  • the image determination part 503, configured to determine, according to the target detection result, a target image that includes a pet object among the target objects; and
  • the task execution part 504, configured to perform a pet care task according to the target image.
  • In a possible implementation, the task execution part 504 is further configured to: determine, according to the target detection result, a target image that includes an item object among the target objects, and control the cleaning robot to move around the item object according to the object attribute information corresponding to the item object.
  • In a possible implementation, the task execution part 504 is further configured to: identify state information of the pet object according to the object attribute information corresponding to the pet object in the target image; and perform the pet care task based on the state information of the pet object.
  • In a possible implementation, the task execution part 504 is further configured to: screen, according to the state information of the pet object, the target images corresponding to pet objects whose state information satisfies the first preset active condition; and generate a pet photo album based on the screened target images.
  • In a possible implementation, the task execution part 504 is further configured to: generate the pet photo album after performing at least one beautification operation on the screened target images.
  • In a possible implementation, the task execution part 504 is further configured to: acquire positioning position information of the pet object when the state information indicates that the pet object satisfies the second preset active condition; and move to the position of the pet object based on the positioning position information, to perform an accompanying task.
  • In a possible implementation, the task execution part 504 is further configured to: send reminder information when the state information indicates that the pet object satisfies the preset depressed condition.
  • In a possible implementation, the object attribute information includes position information of the pet object in the target image, and the task execution part 504 is further configured to: determine a first evaluation score based on the position information of the pet object in the target image and the quality of the target image; determine a second evaluation score based on the target detection results respectively corresponding to at least one target image, the second evaluation score being used to measure the emotional state of the pet object; and determine the state information of the pet object according to the first evaluation score and the second evaluation score corresponding to the at least one target image.
  • In a possible implementation, the task execution part 504 is further configured to: determine the second evaluation score based on the position change, posture change, and expression change of the pet object in the target detection results respectively corresponding to the at least one target image.
  • In a possible implementation, when the target image includes multiple images, the task execution part 504 is further configured to: aggregate associated target images among the multiple target images based on the first evaluation score and the second evaluation score, to obtain at least one aggregated image group; and determine the state information of the pet object according to the first evaluation score and the second evaluation score corresponding to the at least one aggregated image group.
  • In a possible implementation, the task execution part 504 is further configured to: determine that the state information of the pet object satisfies the first preset active condition when the first evaluation score corresponding to an aggregated image group is greater than a first score threshold and the second evaluation score is greater than a second score threshold; or determine that the state information of the pet object satisfies the second preset active condition when the second evaluation scores corresponding to multiple consecutive aggregated image groups are all greater than a third score threshold; or determine that the state information of the pet object satisfies the preset depressed condition when the second evaluation scores corresponding to multiple consecutive aggregated image groups are all less than a fourth score threshold.
  • In a possible implementation, the task execution part 504 is further configured to: determine the positioning position information of the pet object based on the pet object's coordinate information in the image coordinate system corresponding to the target image and the parameter information corresponding to the autonomous mobile device; and control the autonomous mobile device to move toward the pet object based on the pose information corresponding to the autonomous mobile device and the positioning position information of the pet object, to perform the accompanying task.
  • In a possible implementation, the task execution part 504 is further configured to: control the autonomous mobile device to perform a corresponding action in response to a teasing instruction issued by the user, to interact with the pet object; and/or receive a multimedia file delivered by the user and play the multimedia file through the autonomous mobile device, to interact with the pet object, where the multimedia file includes at least one of the following: an audio file, a video file.
  • Referring to Fig. 11, the pet care device 600 includes:
  • the image display part 601, configured to display, in response to a user's viewing instruction for a target image, at least one environment image captured of the current environment by the autonomous mobile device;
  • the target image being an image that includes a pet object among the at least one environment image; and
  • the instruction issuing part 602, configured to receive the target care instruction issued by the user and send the target care instruction to the autonomous mobile device, so as to control the autonomous mobile device to perform pet care tasks.
  • An embodiment of the present disclosure also provides an electronic device.
  • The electronic device can be an autonomous mobile device or a smart terminal.
  • Referring to Fig. 12, a schematic structural diagram of an electronic device 700 provided by an embodiment of the present disclosure: the electronic device includes a processor 701, a memory 702, and a bus 703.
  • The memory 702 is used to store execution instructions and includes an internal memory 7021 and an external memory 7022; the internal memory 7021 is used to temporarily store operation data for the processor 701 and data exchanged with the external memory 7022, such as a hard disk.
  • The processor 701 exchanges data with the external memory 7022 through the internal memory 7021.
  • The memory 702 is specifically used to store the application program code that executes the solution of the present application, and execution is controlled by the processor 701; that is, when the electronic device 700 runs, the processor 701 communicates with the memory 702 through the bus 703, so that the processor 701 executes the application program code stored in the memory 702 and thereby performs the method described in any of the foregoing embodiments.
  • The memory 702 may be, but is not limited to, random access memory (RAM), read-only memory (ROM), programmable read-only memory (PROM), erasable programmable read-only memory (EPROM), electrically erasable programmable read-only memory (EEPROM), or the like.
  • The processor 701 may be an integrated circuit chip with signal-processing capability.
  • The above processor may be a general-purpose processor, including a central processing unit (CPU) and a network processor (NP); it may also be a digital signal processor (DSP), an application-specific integrated circuit (ASIC), a field-programmable gate array (FPGA), another programmable logic device, a discrete gate or transistor logic device, or a discrete hardware component.
  • A general-purpose processor may be a microprocessor, or the processor may be any conventional processor or the like.
  • It can be understood that the structure shown in this embodiment of the present application does not constitute a limitation on the electronic device 700.
  • In other embodiments, the electronic device 700 may include more or fewer components than shown in the figure, combine certain components, split certain components, or use a different arrangement of components.
  • The illustrated components can be implemented in hardware, software, or a combination of software and hardware.
  • Embodiments of the present disclosure also provide a computer-readable storage medium, on which a computer program is stored. When the computer program is run by a processor, the steps of the pet care method in the foregoing method embodiments are executed.
  • The storage medium may be a volatile or non-volatile computer-readable storage medium.
  • An embodiment of the present disclosure also provides a computer program product that carries program code; the instructions included in the program code can be used to execute the steps of the pet care method in the above method embodiments. Reference may be made to the above method embodiments, and details are not repeated here.
  • The above computer program product may be implemented by hardware, software, or a combination of the two.
  • In an optional embodiment, the computer program product is embodied as a computer storage medium; in another optional embodiment, it is embodied as a software product, such as a software development kit (SDK).
  • For the working process of the system and device described above, reference can be made to the corresponding process in the foregoing method embodiments, which is not repeated here.
  • In the several embodiments provided in the present disclosure, it should be understood that the disclosed system, device, and method may be implemented in other ways.
  • The device embodiments described above are merely illustrative.
  • The division of the units is only a logical functional division; in actual implementation, there may be other divisions.
  • For example, multiple units or components may be combined or integrated into another system, or some features may be ignored or not implemented.
  • In addition, the mutual coupling, direct coupling, or communication connection shown or discussed may be implemented through certain communication interfaces, and the indirect coupling or communication connection between devices or units may be electrical, mechanical, or in other forms.
  • The units described as separate components may or may not be physically separate, and the components shown as units may or may not be physical units; that is, they may be located in one place or distributed across multiple network units. Some or all of the units may be selected according to actual needs to achieve the purpose of the solution of this embodiment.
  • In addition, the functional units in the embodiments of the present disclosure may be integrated into one processing unit, each unit may exist physically alone, or two or more units may be integrated into one unit.
  • If the functions are realized in the form of software functional units and sold or used as independent products, they may be stored in a non-volatile computer-readable storage medium executable by a processor.
  • Based on this understanding, the technical solution of the present disclosure, in essence, the part contributing to the prior art, or a part of the technical solution may be embodied in the form of a software product; the computer software product is stored in a storage medium and includes several instructions for causing a computer device (which may be a personal computer, a server, a network device, or the like) to execute all or some of the steps of the methods described in the various embodiments of the present disclosure.
  • The aforementioned storage medium includes various media capable of storing program code, such as a USB flash drive, a removable hard disk, a ROM, a RAM, a magnetic disk, or an optical disc.
  • The present disclosure provides a pet care method and apparatus, an electronic device, and a storage medium.
  • The pet care method includes: acquiring at least one environment image captured of the current environment by an autonomous mobile device; performing target detection on the environment image to obtain a target detection result, the target detection result including the target object contained in the environment image and object attribute information corresponding to the target object; determining, according to the target detection result, a target image that includes a pet object among the target objects; and performing a pet care task according to the target image.
  • In this way, pet care is achieved through the autonomous mobile device, solving the problem of pets being left unaccompanied when the owner leaves home.

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Data Mining & Analysis (AREA)
  • General Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • Software Systems (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Artificial Intelligence (AREA)
  • Evolutionary Computation (AREA)
  • Computing Systems (AREA)
  • Health & Medical Sciences (AREA)
  • General Health & Medical Sciences (AREA)
  • Computational Linguistics (AREA)
  • Biophysics (AREA)
  • Mathematical Physics (AREA)
  • Biomedical Technology (AREA)
  • Molecular Biology (AREA)
  • Databases & Information Systems (AREA)
  • Multimedia (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Evolutionary Biology (AREA)
  • Management, Administration, Business Operations System, And Electronic Commerce (AREA)

Abstract

A pet care method and apparatus, an electronic device, and a storage medium. The pet care method includes: acquiring at least one environment image captured of the current environment by an autonomous mobile device (S101); performing target detection on the environment image to obtain a target detection result, the target detection result including a target object contained in the environment image and object attribute information corresponding to the target object (S102); determining, according to the target detection result, a target image that includes a pet object among the target objects (S103); and performing a pet care task according to the target image (S104).

Description

Pet care method and apparatus, electronic device, and storage medium
Cross-Reference to Related Application
The present disclosure is based on, and claims priority to, the Chinese patent application with application No. 202110763715.2, filed on 6 July 2021 and entitled "宠物看护方法、装置、电子设备及存储介质" ("Pet care method and apparatus, electronic device, and storage medium"), the entire contents of which are incorporated herein by reference.
Technical Field
The present disclosure relates to the technical field of robots, and in particular to a pet care method and apparatus, an electronic device, and a storage medium.
Background
As living standards rise, more and more people keep pets, but for various reasons (such as busy work) owners cannot stay with their pets at all times. While the owner is away from home, a pet may exhibit all kinds of behavior, such as active or depressed behavior, and a prolonged absence of the owner may lead to accidents involving the pet. How to provide pets with effective care and companionship while the owner is away from home has therefore become a problem to be solved urgently.
Summary
Embodiments of the present disclosure provide at least a pet care method and apparatus, an electronic device, and a storage medium.
In a first aspect, an embodiment of the present disclosure provides a pet care method, including:
acquiring at least one environment image captured of the current environment by an autonomous mobile device;
performing target detection on the environment image to obtain a target detection result, the target detection result including a target object contained in the environment image and object attribute information corresponding to the target object;
determining, according to the target detection result, a target image that includes a pet object among the target objects; and
performing a pet care task according to the target image.
In the embodiments of the present disclosure, at least one environment image captured of the current environment by the autonomous mobile device may be acquired, target detection may be performed on the environment image to obtain a target detection result, a target image including a pet object among the target objects may then be determined according to the target detection result, and a pet care task may be performed according to the target image. In this way, pet care is carried out by the autonomous mobile device, which can reduce accidents caused by the owner being away from home for a long time.
According to the first aspect, in a possible implementation, the autonomous mobile device includes a cleaning robot, and the method further includes:
determining, according to the target detection result, a target image that includes an item object among the target objects, and controlling the autonomous mobile device to move around the item object according to the object attribute information corresponding to the item object.
In the embodiments of the present disclosure, a cleaning robot is used to perform the pet care task, which reduces cost compared with a dedicated pet companion robot.
According to the first aspect, in a possible implementation, the performing a pet care task according to the target image includes:
identifying state information of the pet object according to the object attribute information corresponding to the pet object in the target image; and
performing the pet care task based on the state information of the pet object.
In the embodiments of the present disclosure, performing the pet care task according to the state information of the pet object allows the executed care task to match the current state of the pet object, improving the effectiveness of the care task.
According to the first aspect, in a possible implementation, the performing a pet care task based on the state information of the pet object includes:
screening, according to the state information of the pet object, the target images corresponding to pet objects whose state information satisfies a first preset active condition; and
generating a pet photo album based on the screened target images.
In the embodiments of the present disclosure, generating a pet photo album from target images whose state information satisfies the first preset active condition not only records the pet object's best moments but also improves the quality of the pet photo album.
According to the first aspect, in a possible implementation, the generating a pet photo album based on the screened target images includes:
generating the pet photo album after performing at least one beautification operation on the screened target images.
In the embodiments of the present disclosure, because at least one beautification operation is also performed on the screened target images during album generation, the generated pet photo album is more engaging.
According to the first aspect, in a possible implementation, the performing a pet care task based on the state information of the pet object includes:
acquiring positioning position information of the pet object when the state information indicates that the pet object satisfies a second preset active condition; and
moving to the position of the pet object based on the positioning position information, to perform an accompanying task.
In the embodiments of the present disclosure, when the state information indicates that the pet object satisfies the second preset active condition, the device can move to the position of the pet object and interact with the pet, achieving the effect of keeping the pet company.
According to the first aspect, in a possible implementation, the performing a pet care task based on the state information of the pet object includes:
sending reminder information when the state information indicates that the pet object satisfies a preset depressed condition.
In the embodiments of the present disclosure, when the state information indicates that the pet object satisfies the preset depressed condition, reminder information can be sent to alert the owner, enabling effective monitoring of the pet object and reducing accidents caused by the pet object remaining in a depressed state for a long time.
According to the first aspect, in a possible implementation, the object attribute information includes position information of the pet object in the target image, and the identifying the state information of the pet object according to the object attribute information corresponding to the pet object in the target image includes:
determining a first evaluation score based on the position information of the pet object in the target image and the quality of the target image;
determining a second evaluation score based on the target detection results respectively corresponding to at least one target image, the second evaluation score being used to measure the emotional state of the pet object; and
determining the state information of the pet object according to the first evaluation score and the second evaluation score corresponding to the at least one target image.
In the embodiments of the present disclosure, determining the state information of the pet object through the first evaluation score and the second evaluation score can improve the accuracy of determining the pet's state information.
According to the first aspect, in a possible implementation, the determining a second evaluation score based on the target detection results respectively corresponding to at least one target image includes:
determining the second evaluation score based on the position change, posture change, and expression change of the pet object in the target detection results respectively corresponding to the at least one target image.
According to the first aspect, in a possible implementation, when the target image includes multiple images, the method further includes:
aggregating associated target images among the multiple target images based on the first evaluation score and the second evaluation score, to obtain at least one aggregated image group;
and the determining the state information of the pet object according to the first evaluation score and the second evaluation score corresponding to the at least one target image includes:
determining the state information of the pet object according to the first evaluation score and the second evaluation score corresponding to the at least one aggregated image group.
In the embodiments of the present disclosure, aggregating the associated target images among the multiple target images into at least one aggregated image group realizes segment-by-segment determination of the pet object's state information and further improves the accuracy of the determination.
According to the first aspect, in a possible implementation, the determining the state information of the pet object according to the first evaluation score and the second evaluation score corresponding to the at least one aggregated image group includes:
determining that the state information of the pet object satisfies the first preset active condition when the first evaluation score corresponding to an aggregated image group is greater than a first score threshold and the second evaluation score is greater than a second score threshold; or
determining that the state information of the pet object satisfies the second preset active condition when the second evaluation scores corresponding to multiple consecutive aggregated image groups are all greater than a third score threshold; or
determining that the state information of the pet object satisfies the preset depressed condition when the second evaluation scores corresponding to multiple consecutive aggregated image groups are all less than a fourth score threshold.
According to the first aspect, in a possible implementation, the acquiring positioning position information of the pet object includes:
determining the positioning position information of the pet object based on the pet object's coordinate information in the image coordinate system corresponding to the target image and the parameter information corresponding to the autonomous mobile device;
and the moving to the position of the pet object based on the positioning position information, to perform an accompanying task, includes:
controlling the autonomous mobile device to move toward the pet object based on the pose information corresponding to the autonomous mobile device and the positioning position information of the pet object, to perform the accompanying task.
In the implementation of the present disclosure, the pose information and the coordinate-system transformations enable the autonomous mobile device to move toward the pet object with high precision.
According to the first aspect, in a possible implementation, the method further includes:
controlling the autonomous mobile device to perform a corresponding action in response to a teasing instruction issued by the user, to interact with the pet object; and/or,
receiving a multimedia file delivered by the user and playing the multimedia file through the autonomous mobile device, to interact with the pet object, where the multimedia file includes at least one of the following: an audio file, a video file.
In the embodiments of the present disclosure, interaction with the pet object can also be realized in response to the user's instructions, providing more effective care and companionship and improving the quality of care.
In a second aspect, an embodiment of the present disclosure provides a pet care method, including:
displaying, in response to a user's viewing instruction for a target image, at least one environment image captured of the current environment by an autonomous mobile device, the target image being an image that includes a pet object among the at least one environment image; and
receiving a target care instruction issued by the user and sending the target care instruction to the autonomous mobile device, so as to control the autonomous mobile device to perform a pet care task.
In the embodiments of the present disclosure, the user can issue a target care instruction to the autonomous mobile device to achieve pet care, and can issue targeted care instructions based on the pet-object images collected by the device, which improves the pertinence of the care instructions and reduces accidents caused by the owner being away from home for a long time.
In a third aspect, an embodiment of the present disclosure provides a pet care device, including:
an image acquisition part, configured to acquire multiple environment images captured of the current environment by an autonomous mobile device;
a target detection part, configured to perform target detection on the environment images to obtain a target detection result, the target detection result including a target object contained in the environment images and object attribute information corresponding to the target object, where the target object includes a pet object;
an image determination part, configured to determine, according to the target detection result, a target image that includes a pet object among the target objects; and
a task execution part, configured to perform a pet care task according to the target image.
According to the third aspect, in a possible implementation, the task execution part is further configured to: determine, according to the target detection result, a target image that includes an item object among the target objects, and control the cleaning robot to move around the item object according to the object attribute information corresponding to the item object.
According to the third aspect, in a possible implementation, the task execution part is further configured to: identify state information of the pet object according to the object attribute information corresponding to the pet object in the target image; and perform the pet care task based on the state information of the pet object.
According to the third aspect, in a possible implementation, the task execution part is further configured to: screen, according to the state information of the pet object, the target images corresponding to pet objects whose state information satisfies the first preset active condition; and generate a pet photo album based on the screened target images.
According to the third aspect, in a possible implementation, the task execution part is further configured to: generate the pet photo album after performing at least one beautification operation on the screened target images.
According to the third aspect, in a possible implementation, the task execution part is further configured to: acquire positioning position information of the pet object when the state information indicates that the pet object satisfies the second preset active condition; and move to the position of the pet object based on the positioning position information, to perform an accompanying task.
According to the third aspect, in a possible implementation, the task execution part is further configured to: send reminder information when the state information indicates that the pet object satisfies the preset depressed condition.
According to the third aspect, in a possible implementation, the object attribute information includes position information of the pet object in the target image, and the task execution part is further configured to: determine a first evaluation score based on the position information of the pet object in the target image and the quality of the target image; determine a second evaluation score based on the target detection results respectively corresponding to at least one target image, the second evaluation score being used to measure the emotional state of the pet object; and determine the state information of the pet object according to the first evaluation score and the second evaluation score corresponding to the at least one target image.
According to the third aspect, in a possible implementation, the task execution part is further configured to: determine the second evaluation score based on the position change, posture change, and expression change of the pet object in the target detection results respectively corresponding to the at least one target image.
According to the third aspect, in a possible implementation, when the target image includes multiple images, the task execution part is further configured to: aggregate associated target images among the multiple target images based on the first evaluation score and the second evaluation score, to obtain at least one aggregated image group; and determine the state information of the pet object according to the first evaluation score and the second evaluation score corresponding to the at least one aggregated image group.
According to the third aspect, in a possible implementation, the task execution part is further configured to: determine that the state information of the pet object satisfies the first preset active condition when the first evaluation score corresponding to an aggregated image group is greater than a first score threshold and the second evaluation score is greater than a second score threshold; or determine that the state information of the pet object satisfies the second preset active condition when the second evaluation scores corresponding to multiple consecutive aggregated image groups are all greater than a third score threshold; or determine that the state information of the pet object satisfies the preset depressed condition when the second evaluation scores corresponding to multiple consecutive aggregated image groups are all less than a fourth score threshold.
According to the third aspect, in a possible implementation, the task execution part is further configured to: determine the positioning position information of the pet object based on the pet object's coordinate information in the image coordinate system corresponding to the target image and the parameter information corresponding to the autonomous mobile device; and control the autonomous mobile device to move toward the pet object based on the pose information corresponding to the autonomous mobile device and the positioning position information of the pet object, to perform the accompanying task.
According to the third aspect, in a possible implementation, the task execution part is further configured to: control the autonomous mobile device to perform a corresponding action in response to a teasing instruction issued by the user, to interact with the pet object; and/or receive a multimedia file delivered by the user and play the multimedia file through the autonomous mobile device, to interact with the pet object, where the multimedia file includes at least one of the following: an audio file, a video file.
In a fourth aspect, an embodiment of the present disclosure provides a pet care device, including:
an image display part, configured to display, in response to a user's viewing instruction for a target image, at least one environment image captured of the current environment by an autonomous mobile device, the target image being an image that includes a pet object among the at least one environment image; and
an instruction issuing part, configured to receive the target care instruction issued by the user and send the target care instruction to the autonomous mobile device, so as to control the autonomous mobile device to perform pet care tasks.
In a fifth aspect, an embodiment of the present disclosure provides an electronic device, including a processor, a memory, and a bus; the memory stores machine-readable instructions executable by the processor, and when the electronic device runs, the processor communicates with the memory through the bus; when the machine-readable instructions are executed by the processor, the steps of the pet care method described in the first aspect or the second aspect are performed.
In a sixth aspect, an embodiment of the present disclosure provides a computer-readable storage medium on which a computer program is stored; when the computer program is run by a processor, the steps of the pet care method described in the first aspect or the second aspect are performed.
In a seventh aspect, an embodiment of the present disclosure provides a computer program including computer-readable code; when the computer-readable code runs in an electronic device, a processor in the electronic device performs the steps of the pet care method described in the first aspect or the second aspect.
To make the above objects, features, and advantages of the present disclosure more comprehensible, embodiments are described in detail below with reference to the accompanying drawings.
Brief Description of the Drawings
To describe the technical solutions of the embodiments of the present disclosure more clearly, the drawings required by the embodiments are briefly introduced below. The drawings here are incorporated into and form part of this specification; they illustrate embodiments consistent with the present disclosure and, together with the specification, serve to explain its technical solutions. It should be understood that the following drawings show only some embodiments of the present disclosure and should therefore not be regarded as limiting its scope; a person of ordinary skill in the art may derive other related drawings from them without creative effort.
Fig. 1 shows a flowchart of a pet care method provided by an embodiment of the present disclosure;
Fig. 2 shows a schematic diagram of an implementation environment of a pet care method provided by an embodiment of the present disclosure;
Fig. 3 shows a schematic diagram of an environment image marked with detection frames provided by an embodiment of the present disclosure;
Fig. 4 shows a flowchart of a method for performing a pet care task according to a target image provided by an embodiment of the present disclosure;
Fig. 5 shows a flowchart of a method for determining state information of a pet object provided by an embodiment of the present disclosure;
Fig. 6 shows a flowchart of a method for performing a pet care task based on the state information of a pet object provided by an embodiment of the present disclosure;
Fig. 7 shows a flowchart of another method for performing a pet care task based on the state information of a pet object provided by an embodiment of the present disclosure;
Fig. 8 shows a flowchart of yet another method for performing a pet care task based on the state information of a pet object provided by an embodiment of the present disclosure;
Fig. 9 shows a flowchart of another pet care method provided by an embodiment of the present disclosure;
Fig. 10 shows a schematic structural diagram of a pet care device provided by an embodiment of the present disclosure;
Fig. 11 shows a schematic structural diagram of another pet care device provided by an embodiment of the present disclosure;
Fig. 12 shows a schematic diagram of an electronic device provided by an embodiment of the present disclosure.
Detailed Description
To make the objects, technical solutions, and advantages of the embodiments of the present disclosure clearer, the technical solutions in the embodiments are described below clearly and completely with reference to the accompanying drawings. Obviously, the described embodiments are only some, not all, of the embodiments of the present disclosure. The components of the embodiments of the present disclosure, as generally described and illustrated in the drawings here, may be arranged and designed in a variety of different configurations. The following detailed description of the embodiments provided in the drawings is therefore not intended to limit the claimed scope of the present disclosure, but merely represents selected embodiments of the present disclosure. All other embodiments obtained by a person skilled in the art based on the embodiments of the present disclosure without creative effort fall within the protection scope of the present disclosure.
It should be noted that similar reference signs and letters denote similar items in the following drawings; once an item is defined in one drawing, it need not be further defined or explained in subsequent drawings.
The term "and/or" herein merely describes an association relationship and indicates that three relationships may exist; for example, A and/or B may mean: A alone, both A and B, or B alone. In addition, the term "at least one" herein means any one of multiple items or any combination of at least two of multiple items; for example, including at least one of A, B, and C may mean including any one or more elements selected from the set formed by A, B, and C.
As living standards rise, more and more people keep pets, but for various reasons (such as busy work) owners cannot stay with their pets at all times. While the owner is away from home, a pet may exhibit all kinds of behavior, such as active or depressed behavior, and a prolonged absence of the owner may lead to accidents involving the pet.
For example, if a pet is left alone at home for a long time and remains in a lonely state, it may become low-spirited and anxious and may even display destructive behavior, leading to unexpected situations. How to provide pets with effective care and companionship while the owner is away from home has therefore become a problem to be solved urgently.
Based on the above research, the present disclosure provides a pet care method that can acquire at least one environment image captured of the current environment by an autonomous mobile device, perform target detection on the environment image to obtain a target detection result, determine from the target detection result a target image that includes a pet object among the target objects, and then perform a pet care task according to the target image. In this way, pet care is carried out by the autonomous mobile device, which can reduce accidents caused by the owner being away from home for a long time.
To facilitate understanding of this embodiment, the pet care method provided by the embodiments of the present disclosure is described in detail below with reference to the drawings. Referring to Fig. 1, a flowchart of the pet care method provided by an embodiment of the present disclosure, the pet care method includes the following S101 to S104:
S101 Acquire at least one environment image captured of the current environment by an autonomous mobile device.
Referring to Fig. 2, a schematic diagram of the implementation environment of the pet care method provided by an embodiment of the present disclosure: as shown in Fig. 2, the execution subject of the pet care method provided by the embodiments of the present disclosure may be the autonomous mobile device 100, or a server 200 capable of communicating with the autonomous mobile device 100. In general, the server 200 may maintain a communication connection with the autonomous mobile device 100 or establish one when data transmission is required, which is not limited here. The pet care method provided by the embodiments of the present disclosure may also be implemented by a processor executing a computer program.
Exemplarily, the autonomous mobile device 100 may include a cleaning robot, a mobile robot, and the like. In the embodiments of the present disclosure, the autonomous mobile device 100 is described taking a cleaning robot as an example; the cleaning robot is a kind of smart household appliance that can automatically clean the floor of a room by virtue of certain artificial intelligence.
In other embodiments, the autonomous mobile device 100 may also be another type of mobile robot, such as a lawn-mowing robot, as long as it can move autonomously and perform the corresponding work. The server 200 may be an independent physical server, a server cluster or distributed system composed of multiple physical servers, or a cloud server providing basic cloud computing services such as cloud services, cloud databases, cloud computing, cloud storage, big data, and artificial-intelligence platforms.
The autonomous mobile device 100 may be equipped with a camera (not shown) for capturing images of the current environment of the autonomous mobile device 100 during its autonomous movement. For example, the camera may be an RGB camera; in addition, the autonomous mobile device 100 may also be equipped with a lidar, a depth sensor, an ultrasonic sensor, and the like.
Furthermore, the autonomous mobile device 100 can also communicate with a mobile terminal 300 and accept instructions issued by the mobile terminal 300 to perform the corresponding work. The mobile terminal 300 includes, but is not limited to, a mobile phone, a tablet computer, and a wearable device. For example, the autonomous mobile device 100 can accept a cleaning instruction issued by the mobile terminal 300 and complete the cleaning task; that is, the user can interact with the autonomous mobile device 100 by remote control.
S102 Perform target detection on the environment image to obtain a target detection result, the target detection result including a target object contained in the environment image and object attribute information corresponding to the target object.
Exemplarily, a multi-object detection neural network may be used to perform target detection on the environment image to obtain the target detection result. In the embodiments of the present disclosure, the multi-object detection neural network can perform target detection on objects of different categories in the environment; for example, it can not only detect item objects in the environment but also detect pet objects in the environment. Here, item objects are common household articles, i.e. static objects such as vases on the floor, tables, chairs, and stools; pet objects are small animals kept at home, i.e. dynamic objects such as kittens, puppies, and birds.
Exemplarily, referring to Fig. 3, a schematic diagram of a target detection result of an environment image provided by an embodiment of the present disclosure: in the actual detection process, once an acquired environment image is input to the multi-object detection neural network, the corresponding target detection result can be output. As shown in Fig. 3, the target detection result includes the detected target object dog 11 and target object water bottle 12, shown with corresponding detection frames.
In addition, to provide more detailed detection information for subsequent judgment, the target detection result includes not only the target object but also the attribute information corresponding to the target object, including but not limited to the target object's category, detection frame, tracking target, and confidence. The category of the target object indicates whether the target object is an item object or a pet object; the detection frame indicates the position of the target object in the current environment; the tracking target uniformly identifies target objects belonging to the same physical object, so as to distinguish it from other objects; and the confidence measures the reliability of the current detection result.
Exemplarily, the multi-object detection neural network can be obtained through self-supervised training. For example, a number of images or videos of household and pet scenes can be collected as training data, and the labeled training data can then be input to the target network for multiple rounds of training to obtain a multi-object detection neural network that meets the requirements.
In the embodiments of the present disclosure, because a deep neural network performing statistical learning is used rather than the traditional method of object recognition based on hand-designed feature points, the approach has a degree of adaptability and generalization and solves the problem of item and pet recognition in unknown household scenes. At the same time, the multi-object detection neural network combines item recognition and pet recognition in a single network structure, reusing parameters and computation results, which increases computational efficiency and reduces performance overhead.
S103 Determine, according to the target detection result, a target image that includes a pet object among the target objects.
Exemplarily, the target image including a pet object among the target objects can be determined according to the category of the target object in the target detection result.
In some implementations, a target image including an item object among the target objects can also be determined according to the category of the target object in the target detection result, and the autonomous mobile device can be controlled to move around the item object according to the object attribute information corresponding to the item object. In this way, the autonomous mobile device can avoid obstacles while traveling.
S104 Perform a pet care task according to the target image.
In a possible implementation, performing the pet care task according to the pet's target image may mean tracking and photographing the pet, or raising an alarm when the pet is in an abnormal state. In some embodiments, the pet's position can be determined from the target image, the autonomous mobile device can be controlled to track and photograph the pet based on that position, and whether the pet's state is abnormal can also be determined from the target image.
Exemplarily, referring to Fig. 4, in some implementations, performing the pet care task according to the target image may include the following S1041 to S1042:
S1041 Identify state information of the pet object according to the object attribute information corresponding to the pet object in the target image.
S1042 Perform the pet care task based on the state information of the pet object.
It can be understood that the state information of the pet object, for example whether the pet is currently in an active state or a depressed state, can be identified from the object attribute information corresponding to the pet object in the target image, and the corresponding pet care task can then be performed based on that state information. Different state information can correspond to different care tasks, so that the task currently performed matches the pet's current state and the care task is carried out better.
在一些实施方式中,所述对象属性信息包括所述宠物对象在所述目标图像中的位置信息,也即检测框在目标图像中的位置信息。因此,针对上述步骤S1041,在根据所述目标图像中所述宠物对象所对应的对象属性信息,识别所述宠物对象的状态信息时,如图5所示,可以包括以下S10411至10414:
S10411,基于所述宠物对象在所述目标图像中的位置信息以及所述目标图像的质量,确定第一评价分数。
其中,目标图像的质量是指目标图像的清晰度,宠物对象在所述目标图像中的位置信息可以通过检测框在目标图像中的位置确定。本公开实施方式中,目标图像越清晰,且检测框在目标图像中越完整,检测框位置越接近所述目标图像的中央,第一评价分数越高。在一些实施例中,根据目标图像质量可以确定第一基础分,再根据检测框的位置可以确定第二基础分,然后将第一基础分和第二基础分分别乘以对应的权重后相加,即可得到第一评价分数。
在另一些实施方式中,为了提高第一评价分数的准确性,还可以预先设定衰减项以及基础常数,将第一基础分和第二基础分分别乘以对应的权重后再与衰减项以及基础常数求和后,得到第一评价分数。其中,衰减项用于衡量当前目标图像与之前相关联的目标图像之间的衰减度。
S10412: determining a second evaluation score based on the target detection results respectively corresponding to at least one target image, the second evaluation score being used to measure the emotional state of the pet object.
Exemplarily, the second evaluation score may be determined based on the position changes, posture changes, and expression changes of the pet object in the target detection results respectively corresponding to at least one target image. In some embodiments, a third base score may be determined according to the emotion of the pet object in the target image (for example, if the pet object's eyes are wide open in the target image, the third base score may be higher), a fourth base score may be determined according to the position of the pet object relative to the environment, and the third base score and the fourth base score are then multiplied by their corresponding weights and summed to obtain the second evaluation score.
Similarly, to improve the accuracy of the second evaluation score, a decay term and a base constant may also be set; the third base score and the fourth base score are multiplied by their corresponding weights and then summed with the decay term and the base constant to obtain the second evaluation score.
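Both evaluation scores thus share the same form: a weighted sum of two base scores plus a decay term and a base constant. A minimal sketch follows, with illustrative weights and constants, since the embodiments leave these values to be set according to the actual situation:

```python
def evaluation_score(base_a: float, base_b: float,
                     w_a: float = 0.6, w_b: float = 0.4,
                     decay: float = 0.0, base_const: float = 0.1) -> float:
    """Weighted sum of two base scores plus a decay term and a base constant.

    For the first evaluation score: base_a = image-quality (sharpness) score,
    base_b = detection-box completeness/centering score. For the second
    evaluation score: base_a = emotion score, base_b = score of the pet's
    position relative to the environment. `decay` measures degradation
    relative to previously associated target images. All defaults here are
    assumed values, not values fixed by the embodiments.
    """
    return w_a * base_a + w_b * base_b + decay + base_const

# Illustrative usage for one target image:
first_score = evaluation_score(base_a=0.9, base_b=0.8)                # quality + box position
second_score = evaluation_score(base_a=0.7, base_b=0.6, decay=-0.05)  # emotion + position
```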
In the embodiments of the present application, the images may be scored by a deep learning network according to the attribute information of the pet object, to obtain the first evaluation score and the second evaluation score.
S10413: determining the state information of the pet object according to the first evaluation score and the second evaluation score corresponding to the at least one target image.
In some implementations, to facilitate the subsequent execution of the pet care task, in the case where there are multiple target images, associated target images among the at least one target image may also be aggregated based on the first evaluation score and the second evaluation score to obtain at least one aggregated image group. That is, according to the first evaluation scores and second evaluation scores of multiple target images in time sequence, the associated target images are further aggregated into a corresponding video, thereby integrating the multiple target images into segmented structured data represented in multiple forms such as pictures and videos. Here, it may be judged in time sequence whether the evaluation scores corresponding to each target image satisfy an association condition; in the case where the evaluation scores corresponding to multiple consecutive target images satisfy the association condition, the multiple consecutive target images are taken as associated target images and form one aggregated image group. In this way, the state information of the pet object can be determined according to the first evaluation score and the second evaluation score corresponding to the at least one aggregated image group, thereby achieving segmented identification of the state information of the pet object and improving the accuracy of pet object state identification.
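A hedged sketch of this aggregation step follows. The association condition used here (both scores of consecutive images staying within a fixed gap of each other) is an assumption, since the embodiments do not prescribe a particular condition:

```python
from typing import List, Tuple

def aggregate_groups(scores: List[Tuple[float, float]],
                     max_gap: float = 0.2) -> List[List[int]]:
    """Group consecutive target images whose evaluation scores satisfy an
    association condition, returning index groups in time order.

    `scores[i]` is the (first_score, second_score) pair of the i-th target
    image in time sequence. Two consecutive images are treated as associated
    when both scores differ by no more than `max_gap` (an assumed condition).
    """
    groups: List[List[int]] = []
    current = [0] if scores else []
    for i in range(1, len(scores)):
        prev, cur = scores[i - 1], scores[i]
        if abs(cur[0] - prev[0]) <= max_gap and abs(cur[1] - prev[1]) <= max_gap:
            current.append(i)       # still associated: extend the current group
        else:
            groups.append(current)  # association broken: close the group
            current = [i]
    if current:
        groups.append(current)
    return groups
```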
Exemplarily, in the case where the first evaluation score corresponding to the aggregated image group is greater than a first score threshold and the second evaluation score is greater than a second score threshold, it is determined that the state information of the pet object satisfies a first preset active condition.
The first score threshold and the second score threshold may be set according to the actual situation. In some embodiments, the first score threshold may be set according to the size of the pet object. For example, if the pet object is large, viewing is not affected even if the quality of the target image is slightly poorer, whereas if the pet object is small, a higher first score threshold should be required. In addition, different second score thresholds may be set according to the type or habits of the pet object. For example, in the case where the pet object is a cat, since cats are relatively quiet, the second score threshold may be set slightly lower, whereas if the pet object is a dog, since dogs are relatively active, the second score threshold may be set slightly higher.
In the case where the first evaluation score is greater than the first score threshold and the second evaluation score is greater than the second score threshold, this indicates that the target images in the aggregated image group are of high quality and the emotional state of the pet object is good. Therefore, the target images in the aggregated image group may be added to a pet photo album for the user to view.
Therefore, in the embodiments of the present disclosure, referring to FIG. 6, for the above S1042, performing the pet care task based on the state information of the pet object may include the following steps S1042a to S1042b:
S1042a: filtering, according to the state information of the pet object, the target images corresponding to pet objects whose state information satisfies the first preset active condition.
S1042b: generating a pet photo album based on the filtered target images.
Exemplarily, to make the generated pet photo album more interesting, at least one beautification processing operation may also be performed on the filtered target images before generating the pet photo album. For example, templates of multiple styles may be provided, and corresponding cartoon stickers may be generated according to the position and posture of the pet object.
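As one hedged illustration of such a beautification operation (Pillow is an assumed library choice; the sticker file and the placement rule are illustrative), a cartoon sticker could be pasted just above the detected pet:

```python
from PIL import Image

def add_sticker(photo_path: str, sticker_path: str,
                box: tuple, out_path: str) -> None:
    """Paste a cartoon sticker just above the pet's detection box.

    `box` is the (x1, y1, x2, y2) detection box of the pet object; the
    sticker is scaled to half the box width (an assumed styling rule).
    """
    photo = Image.open(photo_path).convert("RGBA")
    sticker = Image.open(sticker_path).convert("RGBA")
    x1, y1, x2, _ = map(int, box)
    width = max(1, (x2 - x1) // 2)
    sticker = sticker.resize((width, width))
    pos = (x1 + (x2 - x1 - width) // 2, max(0, y1 - width))  # centered above the box
    photo.paste(sticker, pos, sticker)  # third argument uses the alpha channel as mask
    photo.convert("RGB").save(out_path)
```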
Exemplarily, in the case where the second evaluation scores corresponding to multiple consecutive aggregated image groups are all greater than a third score threshold, it is determined that the state information of the pet object satisfies a second preset active condition. The third score threshold is similar to the above second score threshold and may also be set according to the actual situation, which is not repeated here.
It can be understood that if the second evaluation scores corresponding to multiple consecutive aggregated image groups are all greater than the third score threshold, this indicates that the pet object's mood during this period is lively and relatively stable. At this time, interaction with the pet object may be carried out, for example, by controlling the autonomous mobile device to play sounds, move and chase, and interact with the pet object in other such forms.
Therefore, in the embodiments of the present disclosure, referring to FIG. 7, for the above S1042, performing the pet care task based on the state information of the pet object may include the following steps S1042c to S1042d:
S1042c: obtaining positioning position information of the pet object in the case where the state information indicates that the pet object satisfies the second preset active condition.
S1042d: moving to the position of the pet object based on the positioning position information, to perform a companion task.
In some embodiments, the positioning position information of the pet object may be determined based on coordinate information of the pet object in an image coordinate system corresponding to the target image and parameter information corresponding to the autonomous mobile device; based on pose information corresponding to the autonomous mobile device and the positioning position information of the pet object, the autonomous mobile device is controlled to move in a direction approaching the pet object, to perform the companion task.
Here, the parameter information corresponding to the autonomous mobile device may include intrinsic parameters for transforming the image coordinate system into a camera coordinate system, and extrinsic parameters for transforming the camera coordinate system into a world coordinate system.
Exemplarily, the pose information of the autonomous mobile device in the world coordinate system corresponding to the real scene may be determined based on the environmental images captured by the autonomous mobile device and a pre-built three-dimensional scene map representing the real scene. In the world coordinate system, the three-dimensional scene map representing the real scene can completely coincide with the real scene; therefore, the pose information of the autonomous mobile device in the world coordinate system corresponding to the real scene can be determined based on the environmental images captured by the autonomous mobile device and the three-dimensional scene map.
In some embodiments, the pose information of the autonomous mobile device in the world coordinate system corresponding to the real scene may include position coordinate values of the autonomous mobile device in the world coordinate system, and may also include an orientation angle of the autonomous mobile device in the world coordinate system, where the orientation angle may be expressed by the included angle with a coordinate axis of the world coordinate system.
In addition, according to the pixel coordinates of the detection box corresponding to the pet object in the image coordinate system, the pixel coordinates of the center point of the detection box may be taken as the two-dimensional detection information of the pet object in the image coordinate system. Then, combined with the intrinsic parameters corresponding to the autonomous mobile device (pre-stored intrinsic parameters), the coordinate values of the pet object along the X-axis and Y-axis of the camera coordinate system corresponding to the autonomous mobile device are determined. Further combined with the extrinsic parameters corresponding to the autonomous mobile device (which can be determined from the pose information of the autonomous mobile device in the world coordinate system corresponding to the real scene), the pose information of the pet object in the world coordinate system is determined. In this way, the autonomous mobile device can be controlled to move in a direction approaching the pet object to perform the companion task, and a corresponding sound can be emitted while the autonomous mobile device is being controlled to move closer to the pet object.
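A hedged sketch of this pixel-to-world lifting under a pinhole camera model follows. The depth value needed to turn the pixel ray into a 3D point is an assumption (it might come from the depth sensor mentioned earlier), and the matrix names are illustrative:

```python
import numpy as np

def pixel_to_world(box: tuple, depth: float,
                   K: np.ndarray, T_world_cam: np.ndarray) -> np.ndarray:
    """Lift the center of a detection box into world coordinates.

    box          -- detection box (x1, y1, x2, y2) in image pixels
    depth        -- distance of the pet along the camera Z-axis, in meters
                    (assumed available, e.g. from a depth sensor)
    K            -- 3x3 camera intrinsic matrix (image to camera)
    T_world_cam  -- 4x4 camera-to-world extrinsic matrix, derived from the
                    device pose in the world coordinate system
    """
    x1, y1, x2, y2 = box
    center = np.array([(x1 + x2) / 2.0, (y1 + y2) / 2.0, 1.0])  # homogeneous pixel
    ray_cam = np.linalg.inv(K) @ center     # viewing direction in camera coordinates
    p_cam = ray_cam * depth                 # X, Y, Z in the camera coordinate system
    p_cam_h = np.append(p_cam, 1.0)         # homogeneous camera-frame point
    p_world = T_world_cam @ p_cam_h         # transform into the world coordinate system
    return p_world[:3]
```

Given its own world pose, the device can then steer toward the returned point to approach the pet.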
In some implementations, in the case where the second evaluation scores corresponding to multiple consecutive aggregated image groups are all less than a fourth score threshold, it is determined that the state information of the pet object satisfies a preset depressed condition. The fourth score threshold is less than the third score threshold and, like the third score threshold, may also be set according to the actual situation, which is not limited here.
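Taken together with the first and second preset active conditions above, the state determination can be sketched as follows; the thresholds and the length of "multiple consecutive" groups are assumed values, since the embodiments set them according to the actual situation:

```python
from typing import List, Optional, Tuple

def classify_pet_state(group_scores: List[Tuple[float, float]],
                       t1: float = 0.7, t2: float = 0.6,
                       t3: float = 0.65, t4: float = 0.3,
                       run_len: int = 3) -> Optional[str]:
    """Map aggregated-group scores to the pet's state information.

    group_scores[i] is the (first score, second score) pair of the i-th
    aggregated image group in time order; t1..t4 stand for the first to
    fourth score thresholds (t4 < t3), and run_len is the assumed count of
    "multiple consecutive" groups.
    """
    if not group_scores:
        return None
    recent = group_scores[-run_len:]
    if len(recent) == run_len:
        if all(s2 > t3 for _, s2 in recent):
            return "second_active"  # lively and stable: interact (S1042c/S1042d)
        if all(s2 < t4 for _, s2 in recent):
            return "depressed"      # quiet spell: notify the owner (S1042e)
    s1, s2 = group_scores[-1]
    if s1 > t1 and s2 > t2:
        return "first_active"       # high-quality, happy frames: album (S1042a/S1042b)
    return None
```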
It can be understood that if the second evaluation scores corresponding to multiple consecutive aggregated image groups are all less than the fourth score threshold, the second evaluation scores are low, and it can be determined that the pet object is in a quiet state during this time period. At this time, target images with higher first evaluation scores within this time period may be selected, and image-and-text information may be remotely pushed to the pet's owner, so that the owner can learn the current state of the pet object in time. In addition, the owner may also remotely view real-time images and videos of the pet object through the mobile terminal 300 in FIG. 2, and may further issue corresponding instructions or multimedia files through the mobile terminal 300 to interact with the pet object.
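A minimal sketch of selecting the image to push during such a quiet spell; the pair format and payload keys are illustrative assumptions, not a fixed push-service API:

```python
def build_push_payload(period: list, owner_id: str) -> dict:
    """Select the target image with the highest first evaluation score in the
    quiet time period and wrap it as an image-and-text push message.

    `period` is assumed to be a list of (image_path, first_score) pairs.
    """
    image_path, score = max(period, key=lambda item: item[1])
    return {
        "to": owner_id,
        "image": image_path,
        "text": "Your pet has been quiet for a while - tap to view the live feed.",
        "score": score,
    }
```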
Therefore, in the embodiments of the present disclosure, referring to FIG. 8, for the above S1042, performing the pet care task based on the state information of the pet object may include the following steps S1042e to S1042g:
S1042e: sending reminder information in the case where the state information indicates that the pet object satisfies the preset depressed condition.
S1042f: in response to a teasing instruction issued by the user, controlling the autonomous mobile device to perform a corresponding action, to interact with the pet object; and/or
S1042g: receiving a multimedia file issued by the user, and playing the multimedia file through the autonomous mobile device, to interact with the pet object, where the multimedia file includes at least one of the following: an audio file, a video file.
In the embodiments of the present disclosure, the above S1042f and S1042g may also be performed simultaneously, to carry out diversified interactions with the pet object.
Referring to FIG. 9, which is a flowchart of another pet care method provided by an embodiment of the present disclosure. As shown in FIG. 9, the pet care method includes the following steps S201 to S202:
S201: in response to a user's viewing instruction for a target image, displaying at least one environmental image captured by an autonomous mobile device of its current environment, the target image being an image that includes a pet object among the at least one environmental image.
S202: receiving a target care instruction issued by the user, and sending the target care instruction to the autonomous mobile device, to control the autonomous mobile device to perform a pet care task.
Exemplarily, referring again to FIG. 2, the user may view, through the mobile terminal 300, at least one environmental image captured by the autonomous mobile device of its current environment, observe the state of the pet object through the environmental image, and issue a corresponding target care instruction through the mobile terminal 300, thereby achieving remote interaction with the pet object.
In the embodiments of the present disclosure, through the target care instruction issued by the user, the autonomous mobile device can be controlled to perform the pet care task, which can reduce accidents caused by the owner's prolonged absence from home.
Those skilled in the art can understand that, in the above methods of the specific implementations, the order in which the steps are written does not imply a strict execution order or constitute any limitation on the implementation process; the execution order of the steps should be determined by their functions and possible internal logic.
Based on the same technical concept, the embodiments of the present disclosure further provide a pet care device corresponding to the pet care method. Since the principle by which the device in the embodiments of the present disclosure solves the problem is similar to that of the above pet care method of the embodiments of the present disclosure, the implementation of the device may refer to the implementation of the method, and repeated descriptions are omitted.
Referring to FIG. 10, which is a schematic diagram of a pet care device 500 provided by an embodiment of the present disclosure, the pet care device 500 includes:
an image obtaining part 501, configured to obtain multiple environmental images captured by an autonomous mobile device of its current environment;
a target detection part 502, configured to perform target detection on the environmental images to obtain a target detection result, the target detection result including target objects contained in the environmental images and object attribute information corresponding to the target objects, where the target objects include a pet object;
an image determination part 503, configured to determine, according to the target detection result, a target image in which the target objects include a pet object; and
a task execution part 504, configured to perform a pet care task according to the target image.
In one possible implementation, the task execution part 504 is further configured to:
determine, according to the target detection result, a target image in which the target objects include an item object, and control the cleaning robot to move around the item object according to the object attribute information corresponding to the item object.
In one possible implementation, the task execution part 504 is further configured to:
identify state information of the pet object according to the object attribute information corresponding to the pet object in the target image;
perform a pet care task based on the state information of the pet object.
In one possible implementation, the task execution part 504 is further configured to:
filter, according to the state information of the pet object, the target images corresponding to pet objects whose state information satisfies a first preset active condition;
generate a pet photo album based on the filtered target images.
In one possible implementation, the task execution part 504 is further configured to:
perform at least one beautification processing operation on the filtered target images before generating the pet photo album.
In one possible implementation, the task execution part 504 is further configured to:
obtain positioning position information of the pet object in the case where the state information indicates that the pet object satisfies a second preset active condition;
move to the position of the pet object based on the positioning position information, to perform a companion task.
In one possible implementation, the task execution part 504 is further configured to:
send reminder information in the case where the state information indicates that the pet object satisfies a preset depressed condition.
In one possible implementation, the object attribute information includes position information of the pet object in the target image, and the task execution part 504 is further configured to:
determine a first evaluation score based on the position information of the pet object in the target image and the quality of the target image;
determine a second evaluation score based on the target detection results respectively corresponding to at least one target image, the second evaluation score being used to measure the emotional state of the pet object;
determine the state information of the pet object according to the first evaluation score and the second evaluation score corresponding to the at least one target image.
In one possible implementation, the task execution part 504 is further configured to:
determine the second evaluation score based on the position changes, posture changes, and expression changes of the pet object in the target detection results respectively corresponding to at least one target image.
In one possible implementation, in the case where there are multiple target images, the task execution part 504 is further configured to:
aggregate associated target images among the multiple target images based on the first evaluation score and the second evaluation score, to obtain at least one aggregated image group;
determine the state information of the pet object according to the first evaluation score and the second evaluation score corresponding to the at least one aggregated image group.
In one possible implementation, the task execution part 504 is further configured to:
determine that the state information of the pet object satisfies the first preset active condition in the case where the first evaluation score corresponding to the aggregated image group is greater than a first score threshold and the second evaluation score is greater than a second score threshold; or,
determine that the state information of the pet object satisfies the second preset active condition in the case where the second evaluation scores corresponding to multiple consecutive aggregated image groups are all greater than a third score threshold; or,
determine that the state information of the pet object satisfies the preset depressed condition in the case where the second evaluation scores corresponding to multiple consecutive aggregated image groups are all less than a fourth score threshold.
In one possible implementation, the task execution part 504 is further configured to:
determine the positioning position information of the pet object based on coordinate information of the pet object in an image coordinate system corresponding to the target image and parameter information corresponding to the autonomous mobile device;
control, based on pose information corresponding to the autonomous mobile device and the positioning position information of the pet object, the autonomous mobile device to move in a direction approaching the pet object, to perform the companion task.
In one possible implementation, the task execution part 504 is further configured to:
in response to a teasing instruction issued by the user, control the autonomous mobile device to perform a corresponding action, to interact with the pet object; and/or,
receive a multimedia file issued by the user, and play the multimedia file through the autonomous mobile device, to interact with the pet object, where the multimedia file includes at least one of the following: an audio file, a video file.
Referring to FIG. 11, which is a schematic diagram of another pet care device 600 provided by an embodiment of the present disclosure, the pet care device 600 includes:
an image display part 601, configured to display, in response to a user's viewing instruction for a target image, at least one environmental image captured by an autonomous mobile device of its current environment, the target image being an image that includes a pet object among the at least one environmental image; and
an instruction issuing part 602, configured to receive a target care instruction issued by the user and send the target care instruction to the autonomous mobile device, to control the autonomous mobile device to perform a pet care task.
For descriptions of the processing flow of each part in the devices and the interaction flows between the parts, reference may be made to the relevant descriptions in the above method embodiments, and details are not repeated here.
Based on the same technical concept, an embodiment of the present disclosure further provides an electronic device, where the electronic device may be an autonomous mobile device or a smart terminal. Referring to FIG. 12, which is a schematic structural diagram of an electronic device 700 provided by an embodiment of the present disclosure, the electronic device 700 includes a processor 701, a memory 702, and a bus 703. The memory 702 is used to store execution instructions and includes an internal memory 7021 and an external memory 7022. The internal memory 7021, also called internal storage, is used to temporarily store operational data in the processor 701 as well as data exchanged with the external memory 7022 such as a hard disk; the processor 701 exchanges data with the external memory 7022 through the internal memory 7021.
In the embodiments of the present application, the memory 702 is specifically used to store application program code for executing the solution of the present application, with execution controlled by the processor 701. That is, when the electronic device 700 runs, the processor 701 communicates with the memory 702 through the bus 703, so that the processor 701 executes the application program code stored in the memory 702 and thereby performs the method described in any of the foregoing embodiments.
The memory 702 may be, but is not limited to, a random access memory (RAM), a read-only memory (ROM), a programmable read-only memory (PROM), an erasable programmable read-only memory (EPROM), an electrically erasable programmable read-only memory (EEPROM), or the like.
The processor 701 may be an integrated circuit chip with signal processing capability. The above processor may be a general-purpose processor, including a central processing unit (CPU), a network processor (NP), and the like; it may also be a digital signal processor (DSP), an application-specific integrated circuit (ASIC), a field-programmable gate array (FPGA) or other programmable logic device, a discrete gate or transistor logic device, or a discrete hardware component, and can implement or execute the methods, steps, and logical block diagrams disclosed in the embodiments of the present invention. The general-purpose processor may be a microprocessor, or the processor may be any conventional processor or the like.
It can be understood that the structure illustrated in the embodiments of the present application does not constitute a limitation on the electronic device 700. In other embodiments of the present application, the electronic device 700 may include more or fewer components than illustrated, combine certain components, split certain components, or have a different component arrangement. The illustrated components may be implemented in hardware, software, or a combination of software and hardware.
An embodiment of the present disclosure further provides a computer-readable storage medium on which a computer program is stored; when the computer program is run by a processor, the steps of the pet care method in the above method embodiments are executed. The storage medium may be a volatile or non-volatile computer-readable storage medium.
An embodiment of the present disclosure further provides a computer program product that carries program code; the instructions included in the program code may be used to execute the steps of the pet care method in the above method embodiments, for which reference may be made to the above method embodiments; details are not repeated here.
The above computer program product may be implemented in hardware, software, or a combination thereof. In an optional embodiment, the computer program product is embodied as a computer storage medium; in another optional embodiment, the computer program product is embodied as a software product, such as a software development kit (SDK).
Those skilled in the art can clearly understand that, for convenience and brevity of description, for the working processes of the systems and devices described above, reference may be made to the corresponding processes in the foregoing method embodiments, and details are not repeated here. In the several embodiments provided in the present disclosure, it should be understood that the disclosed systems, devices, and methods may be implemented in other ways. The device embodiments described above are merely illustrative. For example, the division of the units is only a logical function division, and there may be other division manners in actual implementation; for another example, multiple units or components may be combined or integrated into another system, or some features may be ignored or not executed. In addition, the mutual coupling, direct coupling, or communication connection shown or discussed may be through some communication interfaces, and the indirect coupling or communication connection between devices or units may be in electrical, mechanical, or other forms.
The units described as separate components may or may not be physically separated, and the components displayed as units may or may not be physical units; that is, they may be located in one place or distributed across multiple network units. Some or all of the units may be selected according to actual needs to achieve the objectives of the solutions of the embodiments.
In addition, the functional units in the embodiments of the present disclosure may be integrated into one processing unit, each unit may exist physically separately, or two or more units may be integrated into one unit.
If the functions are implemented in the form of software functional units and sold or used as independent products, they may be stored in a processor-executable non-volatile computer-readable storage medium. Based on this understanding, the technical solution of the present disclosure, in essence, or the part that contributes to the prior art, or a part of the technical solution, may be embodied in the form of a software product. The computer software product is stored in a storage medium and includes several instructions for causing a computer device (which may be a personal computer, a server, a network device, or the like) to execute all or part of the steps of the methods described in the embodiments of the present disclosure. The aforementioned storage medium includes various media that can store program code, such as a USB flash drive, a removable hard disk, a ROM, a RAM, a magnetic disk, or an optical disc.
Finally, it should be noted that the above embodiments are merely specific implementations of the present disclosure, used to illustrate the technical solutions of the present disclosure rather than to limit them, and the protection scope of the present disclosure is not limited thereto. Although the present disclosure has been described in detail with reference to the foregoing embodiments, those of ordinary skill in the art should understand that any person skilled in the art can still, within the technical scope disclosed by the present disclosure, modify the technical solutions recorded in the foregoing embodiments, readily conceive of changes, or make equivalent replacements of some of the technical features therein; such modifications, changes, or replacements do not cause the essence of the corresponding technical solutions to depart from the spirit and scope of the technical solutions of the embodiments of the present disclosure, and shall all be covered within the protection scope of the present disclosure. Therefore, the protection scope of the present disclosure shall be subject to the protection scope of the claims.
Industrial Applicability
The present disclosure provides a pet care method, device, electronic device, and storage medium. The pet care method includes: obtaining at least one environmental image captured by an autonomous mobile device of its current environment; performing target detection on the environmental image to obtain a target detection result, the target detection result including target objects contained in the environmental image and object attribute information corresponding to the target objects; determining, according to the target detection result, a target image in which the target objects include a pet object; and performing a pet care task according to the target image. In the embodiments of the present disclosure, pet care is achieved through the autonomous mobile device, solving the problem of pets being left unaccompanied when the owner is away from home.

Claims (19)

  1. A pet care method, comprising:
    obtaining at least one environmental image captured by an autonomous mobile device of its current environment;
    performing target detection on the environmental image to obtain a target detection result, the target detection result comprising target objects contained in the environmental image and object attribute information corresponding to the target objects;
    determining, according to the target detection result, a target image in which the target objects comprise a pet object; and
    performing a pet care task according to the target image.
  2. The method according to claim 1, wherein the autonomous mobile device comprises a cleaning robot, and the method further comprises:
    determining, according to the target detection result, a target image in which the target objects comprise an item object, and controlling the cleaning robot to move around the item object according to the object attribute information corresponding to the item object.
  3. The method according to claim 1 or 2, wherein the performing a pet care task according to the target image comprises:
    identifying state information of the pet object according to the object attribute information corresponding to the pet object in the target image; and
    performing a pet care task based on the state information of the pet object.
  4. The method according to claim 3, wherein the performing a pet care task based on the state information of the pet object comprises:
    filtering, according to the state information of the pet object, the target images corresponding to pet objects whose state information satisfies a first preset active condition; and
    generating a pet photo album based on the filtered target images.
  5. The method according to claim 4, wherein the generating a pet photo album based on the filtered target images comprises:
    performing at least one beautification processing operation on the filtered target images before generating the pet photo album.
  6. The method according to claim 3, wherein the performing a pet care task based on the state information of the pet object comprises:
    obtaining positioning position information of the pet object in the case where the state information indicates that the pet object satisfies a second preset active condition; and
    moving to the position of the pet object based on the positioning position information, to perform a companion task.
  7. The method according to claim 3, wherein the performing a pet care task based on the state information of the pet object comprises:
    sending reminder information in the case where the state information indicates that the pet object satisfies a preset depressed condition.
  8. The method according to any one of claims 3 to 7, wherein the object attribute information comprises position information of the pet object in the target image, and the identifying state information of the pet object according to the object attribute information corresponding to the pet object in the target image comprises:
    determining a first evaluation score based on the position information of the pet object in the target image and the quality of the target image;
    determining a second evaluation score based on target detection results respectively corresponding to at least one target image, the second evaluation score being used to measure an emotional state of the pet object; and
    determining the state information of the pet object according to the first evaluation score and the second evaluation score corresponding to the at least one target image.
  9. The method according to claim 8, wherein the determining a second evaluation score based on target detection results respectively corresponding to at least one target image comprises:
    determining the second evaluation score based on position changes, posture changes, and expression changes of the pet object in the target detection results respectively corresponding to the at least one target image.
  10. The method according to claim 8 or 9, wherein in the case where there are multiple target images, the method further comprises:
    aggregating associated target images among the multiple target images based on the first evaluation score and the second evaluation score, to obtain at least one aggregated image group;
    wherein the determining the state information of the pet object according to the first evaluation score and the second evaluation score corresponding to the at least one target image comprises:
    determining the state information of the pet object according to the first evaluation score and the second evaluation score corresponding to the at least one aggregated image group.
  11. The method according to claim 10, wherein the determining the state information of the pet object according to the first evaluation score and the second evaluation score corresponding to the at least one aggregated image group comprises:
    determining that the state information of the pet object satisfies the first preset active condition in the case where the first evaluation score corresponding to the aggregated image group is greater than a first score threshold and the second evaluation score is greater than a second score threshold; or,
    determining that the state information of the pet object satisfies the second preset active condition in the case where the second evaluation scores corresponding to multiple consecutive aggregated image groups are all greater than a third score threshold; or,
    determining that the state information of the pet object satisfies the preset depressed condition in the case where the second evaluation scores corresponding to multiple consecutive aggregated image groups are all less than a fourth score threshold.
  12. The method according to claim 6, wherein the obtaining positioning position information of the pet object comprises:
    determining the positioning position information of the pet object based on coordinate information of the pet object in an image coordinate system corresponding to the target image and parameter information corresponding to the autonomous mobile device;
    wherein the moving to the position of the pet object based on the positioning position information, to perform a companion task, comprises:
    controlling, based on pose information corresponding to the autonomous mobile device and the positioning position information of the pet object, the autonomous mobile device to move in a direction approaching the pet object, to perform the companion task.
  13. The method according to any one of claims 7 to 11, wherein the method further comprises:
    in response to a teasing instruction issued by a user, controlling the autonomous mobile device to perform a corresponding action, to interact with the pet object; and/or,
    receiving a multimedia file issued by a user, and playing the multimedia file through the autonomous mobile device, to interact with the pet object, wherein the multimedia file comprises at least one of the following: an audio file, a video file.
  14. A pet care method, comprising:
    in response to a user's viewing instruction for a target image, displaying at least one environmental image captured by an autonomous mobile device of its current environment, the target image being an image that includes a pet object among the at least one environmental image; and
    receiving a target care instruction issued by the user, and sending the target care instruction to the autonomous mobile device, to control the autonomous mobile device to perform a pet care task.
  15. A pet care device, comprising:
    an image obtaining part, configured to obtain multiple environmental images captured by an autonomous mobile device of its current environment;
    a target detection part, configured to perform target detection on the environmental images to obtain a target detection result, the target detection result comprising target objects contained in the environmental images and object attribute information corresponding to the target objects, wherein the target objects comprise a pet object;
    an image determination part, configured to determine, according to the target detection result, a target image in which the target objects comprise a pet object; and
    a task execution part, configured to perform a pet care task according to the target image.
  16. A pet care device, comprising:
    an image display part, configured to display, in response to a user's viewing instruction for a target image, at least one environmental image captured by an autonomous mobile device of its current environment, the target image being an image that includes a pet object among the at least one environmental image; and
    an instruction issuing part, configured to receive a target care instruction issued by the user and send the target care instruction to the autonomous mobile device, to control the autonomous mobile device to perform a pet care task.
  17. An electronic device, comprising: a processor, a memory, and a bus, wherein the memory stores machine-readable instructions executable by the processor; when the electronic device runs, the processor communicates with the memory through the bus; and when the machine-readable instructions are executed by the processor, the steps of the pet care method according to any one of claims 1 to 13, or the steps of the pet care method according to claim 14, are performed.
  18. A computer-readable storage medium having a computer program stored thereon, wherein when the computer program is run by a processor, the steps of the pet care method according to any one of claims 1 to 13, or the steps of the pet care method according to claim 14, are performed.
  19. A computer program, comprising computer-readable code, wherein when the computer-readable code runs in an electronic device, a processor in the electronic device performs the steps of the pet care method according to any one of claims 1 to 13, or the steps of the pet care method according to claim 14.
PCT/CN2022/071840 2021-07-06 2022-01-13 Pet care method, device, electronic equipment, and storage medium WO2023279697A1 (zh)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN202110763715.2 2021-07-06
CN202110763715.2A CN113420708A (zh) 2021-07-06 2021-07-06 Pet care method, device, electronic equipment, and storage medium

Publications (1)

Publication Number Publication Date
WO2023279697A1 (zh)

Family

ID=77720370

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2022/071840 WO2023279697A1 (zh) 2021-07-06 2022-01-13 Pet care method, device, electronic equipment, and storage medium

Country Status (2)

Country Link
CN (1) CN113420708A (zh)
WO (1) WO2023279697A1 (zh)


Families Citing this family (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113420708A (zh) * 2021-07-06 2021-09-21 深圳市商汤科技有限公司 宠物看护方法、装置、电子设备及存储介质
CN114543302A (zh) * 2022-01-24 2022-05-27 青岛海尔空调器有限总公司 智能家居的控制方法及其控制系统、电子设备和储存介质
CN117877070A (zh) * 2023-05-24 2024-04-12 武汉星巡智能科技有限公司 婴幼儿与宠物互动内容评估方法、装置、设备及存储介质

Patent Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2008282073A (ja) 2007-05-08 2008-11-20 Matsushita Electric Ind Co Ltd Pet guiding robot and pet guiding method
CN103766228A (zh) 2014-02-17 2014-05-07 深圳维帷光电科技有限公司 Pet care system and care method
CN111401215A (zh) 2020-03-12 2020-07-10 杭州涂鸦信息技术有限公司 Method and system for multi-category target detection
CN111914657A (zh) 2020-07-06 2020-11-10 浙江大华技术股份有限公司 Pet behavior detection method and device, electronic device, and storage medium
CN112167093A (zh) 2020-09-07 2021-01-05 珠海格力电器股份有限公司 Pet care method, pet care device, and sweeping robot
CN113420708A (zh) 2021-07-06 2021-09-21 深圳市商汤科技有限公司 Pet care method, device, electronic equipment, and storage medium

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN116439155A (zh) * 2023-06-08 2023-07-18 北京积加科技有限公司 Pet companionship method and device
CN116439155B (zh) * 2023-06-08 2024-01-02 北京积加科技有限公司 Pet companionship method and device

Also Published As

Publication number Publication date
CN113420708A (zh) 2021-09-21


Legal Events

NENP: Non-entry into the national phase (Ref country code: DE)