CN113870390A - Target marking processing method and device, electronic equipment and readable storage medium - Google Patents

Target marking processing method and device, electronic equipment and readable storage medium

Info

Publication number
CN113870390A
Authority
CN
China
Prior art keywords
target object
labeling
detection area
layout diagram
subspace
Prior art date
Legal status
Pending
Application number
CN202111139731.0A
Other languages
Chinese (zh)
Inventor
Ye Feng (叶峰)
Current Assignee
Shenzhen Lumi United Technology Co Ltd
Lumi United Technology Co Ltd
Original Assignee
Lumi United Technology Co Ltd
Priority date
Filing date
Publication date
Application filed by Lumi United Technology Co Ltd
Priority to CN202111139731.0A
Publication of CN113870390A
Legal status: Pending (current)

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T11/00 2D [Two Dimensional] image generation
    • G06T11/60 Editing figures and text; Combining figures or text

Abstract

The embodiment of the application provides a target labeling processing method and device, an electronic device and a readable storage medium, relating to the technical field of smart home. The method includes: displaying a scene labeling interface, where the scene labeling interface includes a spatial layout diagram of a detection area; in response to a labeling operation for a selected target object, acquiring a labeling position of the target object in the spatial layout diagram based on the real position of the target object in the detection area; and displaying an identifier of the labeled target object at the labeling position in the spatial layout diagram. In this way, the identifier of the target object is automatically labeled on the scene labeling interface, based on the real position of the target object in the detection area, in response to the labeling operation for the selected target object, so that labeling of the target object can be completed quickly and accurately.

Description

Target marking processing method and device, electronic equipment and readable storage medium
Technical Field
The application relates to the technical field of smart home, in particular to a target labeling processing method and device, electronic equipment and a readable storage medium.
Background
At present, a home environment image is generally obtained through manual labeling. However, because each object in the image is labeled, that is, drawn, by hand, the position of an object in the image is very likely not to correspond to its real position, which results in poor image quality and low labeling efficiency. On this basis, how to complete labeling quickly and accurately has become a technical problem that urgently needs to be solved by those skilled in the art.
Disclosure of Invention
The embodiment of the application provides a target labeling processing method and device, an electronic device and a readable storage medium. In response to a labeling operation for a selected target object, the identifier of the target object is automatically labeled on a scene labeling interface based on the real position of the target object in a detection area, so that labeling of the target object can be completed quickly and accurately.
The embodiment of the application can be realized as follows:
in a first aspect, an embodiment of the present application provides a target annotation processing method, where the method includes:
displaying a scene labeling interface, wherein the scene labeling interface comprises a spatial layout diagram of a detection area;
responding to the labeling operation aiming at the selected target object, and acquiring a labeling position of the target object in the spatial layout diagram based on the real position of the target object in the detection area;
and displaying the mark of the marked target object at the marked position in the spatial layout diagram.
In a second aspect, an embodiment of the present application provides a target annotation processing apparatus, where the apparatus includes:
the display module is used for displaying a scene labeling interface, wherein the scene labeling interface comprises a spatial layout diagram of a detection area;
the processing module is used for responding to the labeling operation aiming at the selected target object, and acquiring the labeling position of the target object in the spatial layout diagram based on the real position of the target object in the detection area;
the display module is further configured to display an identifier of the labeled target object at the labeled position in the spatial layout diagram.
In an optional embodiment, the processing module is specifically configured to:
and responding to the labeling operation aiming at the selected target object, determining that the reference object is positioned at the target object, acquiring the real position information of the detected reference object in the detection area, and determining the labeling position of the target object according to the real position information.
In an optional embodiment, the processing module is specifically configured to:
in response to a selection operation for the target object, presenting prompt information for indicating that the reference object moves to the target object;
and responding to the labeling operation aiming at the target object, determining that the reference object is positioned at the target object, and determining the labeling position of the target object according to the real position information of the reference object.
In an alternative embodiment, the real position information is obtained by millimeter wave radar.
In an alternative embodiment, the target object comprises a first target object, the first target object being a controllable device having communication capabilities;
the processing module is specifically configured to:
for the first target object, determining the equipment identification of the selected first target object in response to the equipment selection operation for the first target object;
and responding to the labeling operation aiming at the selected first target object, and acquiring the labeling position of the first target object in the spatial layout diagram based on the equipment identification and the real position of the first target object in the detection area.
In an alternative embodiment, the target object comprises a first target object, the first target object being a controllable device having communication capabilities;
the processing module is specifically configured to:
receiving an equipment identifier reported by the selected first target object;
and responding to the labeling operation aiming at the selected first target object, and acquiring the labeling position of the first target object in the spatial layout diagram based on the equipment identification and the real position of the first target object in the detection area.
In an alternative embodiment, the target object comprises a second target object comprising an uncontrollable item and an empty object;
the processing module is specifically configured to:
for the second target object, determining the identifier of the selected second target object in response to the identifier selection operation for the second target object;
and responding to the labeling operation aiming at the selected second target object, and acquiring the labeling position of the second target object in the spatial layout diagram based on the identification of the second target object and the real position of the second target object in the detection area.
In an alternative embodiment, the detection area comprises at least one subspace,
the processing module is further configured to: for the subspace, responding to the labeling operation for the selected subspace, and acquiring the labeling position of the subspace in the scene labeling interface based on the size of the subspace in the detection area;
the display module is further used for displaying the marked mark of the subspace at the marked position corresponding to the subspace.
In an alternative embodiment, the size of the subspace is determined by millimeter wave radar.
In a third aspect, an embodiment of the present application provides an electronic device, which includes a processor and a memory, where the memory stores machine executable instructions that can be executed by the processor, and the processor can execute the machine executable instructions to implement the target annotation processing method described in any one of the foregoing embodiments.
In a fourth aspect, an embodiment of the present application provides a readable storage medium, on which a computer program is stored, where the computer program, when executed by a processor, implements a target annotation processing method as described in any one of the foregoing embodiments.
According to the target labeling processing method and device, the electronic device and the readable storage medium provided by the embodiment of the application, a scene labeling interface including a spatial layout diagram of a detection area is displayed; when a labeling operation for a selected target object is received, the labeling position of the target object in the spatial layout diagram is acquired, in response to the labeling operation, based on the real position of the target object in the detection area; and the identifier of the labeled target object is then displayed at the labeling position. In this way, the identifier of the target object can be automatically displayed at the labeling position corresponding to its real position in the spatial layout diagram of the detection area, which avoids the situation where the position of an object in the image does not correspond to its real position and improves labeling quality and efficiency.
Drawings
In order to more clearly illustrate the technical solutions of the embodiments of the present application, the drawings needed in the embodiments are briefly described below. It should be understood that the following drawings illustrate only some embodiments of the present application and therefore should not be regarded as limiting the scope; for those skilled in the art, other related drawings can be obtained from these drawings without inventive effort.
Fig. 1 is a schematic diagram of an application environment provided in an embodiment of the present application;
FIG. 2 is a schematic diagram of another application environment provided by an embodiment of the present application;
fig. 3 is a flowchart illustrating a target annotation processing method according to an embodiment of the present application;
FIG. 4 is a spatial layout diagram provided by an embodiment of the present application;
FIG. 5 is a first schematic diagram of labeling provided by an embodiment of the present application;
FIG. 6 is a second schematic diagram of labeling provided by an embodiment of the present application;
fig. 7 is a schematic flowchart of labeling different types of objects according to an embodiment of the present application;
fig. 8 is a second flowchart illustrating a target annotation processing method according to the embodiment of the present application;
fig. 9 is a schematic flow chart illustrating target labeling processing performed by the smart home system according to the embodiment of the present application;
FIG. 10 is a block diagram of a target annotation processing apparatus according to an embodiment of the present application;
fig. 11 is a block diagram of an electronic device provided in the present application.
Reference numerals: 10 - smart home system; 100 - electronic device; 110 - processor; 120 - memory; 200 - position detection device; 300 - household device; 400 - gateway; 500 - router; 600 - server; 700 - target annotation processing apparatus; 710 - display module; 720 - processing module.
Detailed Description
The technical solutions in the embodiments of the present application will be clearly and completely described below with reference to the drawings in the embodiments of the present application, and it is obvious that the described embodiments are only a part of the embodiments of the present application, and not all of the embodiments. The components of the embodiments of the present application, generally described and illustrated in the figures herein, can be arranged and designed in a wide variety of different configurations.
Thus, the following detailed description of the embodiments of the present application, presented in the accompanying drawings, is not intended to limit the scope of the claimed application, but is merely representative of selected embodiments of the application. All other embodiments, which can be derived by a person skilled in the art from the embodiments of the present application without making any creative effort, shall fall within the protection scope of the present application.
It is noted that relational terms such as "first" and "second" may be used solely to distinguish one entity or action from another, without necessarily requiring or implying any actual such relationship or order between those entities or actions. Moreover, the terms "comprises", "comprising" or any other variation thereof are intended to cover a non-exclusive inclusion, such that a process, method, article or apparatus that comprises a list of elements includes not only those elements but may also include other elements not expressly listed or inherent to such a process, method, article or apparatus. Without further limitation, an element preceded by "comprising a ..." does not exclude the presence of other identical elements in the process, method, article or apparatus that comprises the element.
An application environment to which the present application relates will be described below.
Referring to fig. 1, fig. 1 is a schematic diagram of an application environment suitable for the embodiment of the present application. Fig. 1 provides an intelligent home system 10, where the intelligent home system 10 includes an electronic device 100, a location detection device 200 communicatively connected to the electronic device 100, and a home device 300. The number of the position detection devices 200 may be at least one, and the number of the home devices 300 may be at least one.
The position detection device 200 may be a millimeter-wave radar, an image acquisition device, or other devices. The position detection device 200 is used to obtain real position information of the target object by detection and transmit the real position information to the electronic device 100. The target object is any object which needs to be labeled and is selected by a user.
The electronic device 100 may include an intelligent interactive terminal such as a smart mirror, a smart phone, a large screen, a television, a wall-controlled small screen, a Personal Computer (PC), a tablet computer, a Personal Digital Assistant (PDA), and the like, which is not limited herein. The electronic device 100 can mark the target object in the space through certain interaction, so as to perform corresponding management. That is, the electronic device 100 may determine, according to the real position information of the target object sent by the position detection device 200, a labeled position of the target object in the spatial layout diagram of the detection area, and then label and display an identifier of the target object at the labeled position. Therefore, the labeling of the target object can be completed quickly, and the situation that the position of the object in the image does not correspond to the real position can be avoided.
Alternatively, the target object may comprise a device involved in smart home control, such as a curtain motor. Thus, the electronic device 100 can provide an intuitive visual home interface, so that a user can conveniently check the indoor object in the interface, and simultaneously, the identifier in the interface corresponds to the real object, so that the user can conveniently manage and control the electronic device. For example, the user quickly and accurately selects a certain home device 300 to be controlled in the interface, and the electronic device 100 controls the home device 300 through the communication connection with the home device 300 based on the selection and the specific control operation of the user.
In this embodiment, the smart home system 10 may further include a gateway 400 communicatively connected to the electronic device 100, the home device 300 and the position detection device 200. The number of gateways 400 may be at least one. The gateway 400 may be an intelligent gateway for smart home control and may implement functions such as system information acquisition, information input, information output, centralized control, remote control and coordinated control. The gateway 400 may be responsible for specific tasks such as security alarms, appliance control and power consumption information acquisition. The gateway 400 can also exchange information wirelessly with products such as intelligent interactive terminals. In addition, the gateway 400 provides a wireless routing function, with good wireless performance, network security and coverage.
In this embodiment, the household device 300 may include various intelligent household devices, sensing devices, detection devices and the like disposed in an indoor space, for example, a smart television, a smart refrigerator, a smart air conditioner, a temperature and humidity sensor, a pressure sensor, a smoke sensor, a socket, an electric lamp, an infrared emitting device, and the like. The home devices 300 and the position detection devices 200 connected to the gateway 400 may exchange information and instructions with the gateway 400. The gateway 400, the home device 300 and the position detection device 200 may be connected through communication methods such as Bluetooth, Wi-Fi (Wireless Fidelity) and ZigBee; of course, the connection method of the gateway 400, the home device 300 and the position detection device 200 is not specifically limited in this embodiment of the application.
Optionally, in this embodiment of the application, the smart home system 10 may further include a server 600 communicatively connected to the gateway 400. The server 600 may be a local server, a cloud server, or the like; the specific server type is not limited in this embodiment of the application. The server 600 connected to the gateway 400 may wirelessly exchange information with the gateway 400. The gateways 400 disposed in different indoor spaces may be communicatively connected to the same server 600 through a network, so that information interaction is performed between the server 600 and the gateways 400.
The electronic device 100 can exchange information with the server 600 in a wireless manner such as 2G/3G/4G/5G/Wi-Fi. Of course, the connection manner between the electronic device 100 and the server 600 is not limited in the embodiment of the present application. In some embodiments, the electronic device 100 may also be used for interaction with a user, so that the user may communicate with the gateway 400 wirelessly via the electronic device 100 through the router 500. In addition, the user can add account information to both the gateway 400 and the electronic device 100, and information synchronization between the gateway 400 and the electronic device 100 is realized through the account information.
In some embodiments, a user may set different trigger scenarios or automated linkages through an Application (APP) of electronic device 100. As one way, the electronic device 100 may upload the scenario configuration information or the automation scheme to the server 600, so that when the trigger condition of the trigger scenario or the automation is reached, the server 600 may find a device corresponding to the execution action in the scenario configuration information or the automation scheme according to the stored scenario configuration information or the automation scheme, so as to notify the device to perform the execution action to meet the execution result of the trigger scenario or the automation. Alternatively, the server 600 may also send the scenario configuration information or the automation scheme to the gateway 400, and the gateway 400 finds a device corresponding to an execution action in the scenario configuration information or the automation scheme according to the stored scenario configuration information or the automation scheme. Meanwhile, the gateway 400 may feed back the performance of the device to the server 600.
Referring to fig. 2, fig. 2 is a schematic view of another application environment provided in the embodiment of the present application. In the present embodiment, a millimeter wave radar may be employed as the position detection apparatus 200 in fig. 1. The target object can be divided into a controllable labeled object and an uncontrollable labeled object, wherein the controllable labeled object can be in communication connection with a gateway or a router so as to realize corresponding data communication.
In the environment shown in fig. 2, an APP for target annotation processing, a web page for target annotation processing, or the like may be installed on the electronic device. The user can operate the electronic device; in response to the operation, the electronic device obtains, through the cloud server, the gateway or the router, the labeling position in the spatial layout diagram corresponding to the real position information of the uncontrollable or controllable labeling object detected by the millimeter wave radar, marks the identifier of the selected target object based on the labeling position, and displays the identifier.
The embodiments of the present application will be described in detail below with reference to the accompanying drawings.
Referring to fig. 3, fig. 3 is a flowchart illustrating a target annotation processing method according to an embodiment of the present application. The object labeling processing method can be applied to the electronic device 100. The following describes a specific flow of the target labeling processing method in detail. The method may include steps S130 to S150.
And step S130, displaying a scene labeling interface.
In this embodiment, the electronic device 100 can display a scene labeling interface. The user may input, in the scene labeling interface, a labeling operation for a selected target object, so that the electronic device 100 labels the selected target object according to the received labeling operation. The labeling operation for the selected target object is an operation, input by the user, that instructs the electronic device 100 to label a certain object (i.e., the target object). It may be an operation of selecting the target object input in the scene labeling interface, an operation of confirming the labeling of a target object already selected on the electronic device 100, or a combination of a selection operation and a labeling confirmation operation, and may be determined according to actual requirements.
The scene labeling interface includes a spatial layout diagram of the detection area. The detection area is the real space area where the target object to be labeled is located. For example, when the scene labeling interface is used to obtain a home environment image in a control interface of a smart home, the detection area is the home space, and the labeled scene labeling interface is the home environment image.
The spatial layout diagram is used to represent the distribution of the real space, which may include various rooms, hallways, outdoor locations, commercial spaces, public areas, and the like. The spatial layout diagram may be established on the basis of a floor plan as shown in fig. 4, on the basis of a regular or irregular two-dimensional plane as shown in fig. 5, or on the basis of a three-dimensional space, and may be determined according to actual requirements, as long as the size and distribution of the space in the spatial layout diagram correspond to the size and distribution of the real space, so that the user can identify the corresponding real object based on an identifier and its position in the spatial layout diagram.
Step S140, in response to the labeling operation on the selected target object, acquiring a labeling position of the target object in the spatial layout diagram based on the real position of the target object in the detection area.
In this embodiment, the selected target object is an object to be labeled in the detection area, and may be specifically determined by actual requirements. The user may input a labeling operation on the scene labeling interface to indicate that the electronic device 100 is required to label the selected target object in the spatial layout diagram. When the electronic device 100 receives the labeling operation, the real position of the target object in the detection area may be obtained in any manner, and then the labeling position of the target object in the spatial layout diagram is determined.
The label position represents a position when the label is labeled in the spatial layout diagram, that is, a certain position of the spatial layout diagram. Correspondingly, the labeling position of the target object in the spatial layout diagram indicates the position of the target object in the spatial layout diagram after the target object is labeled in the spatial layout diagram.
Step S150, displaying the mark of the marked target object at the marked position in the spatial layout diagram.
In the case of determining the labeling position of the selected target object in the spatial layout diagram, the identifier of the target object may be labeled at the labeling position in the spatial layout diagram, and after labeling, the identifier of the target object is displayed in a scene labeling interface, thereby completing labeling of the selected target object.
The identifier of the target object may be any identifier used to indicate the target object. The identifier may be an icon (e.g., a circular icon or another graphic) and/or text. In the case that the identifier includes text, the text may be a text field input by the user in the scene labeling interface, or a text field selected by the user in the scene labeling interface. In the case that the identifier includes an icon, the icon of the target object may be an icon selected by the user in an icon selection area for the target object, or may be automatically determined by the electronic device 100 based on information related to the target object.
For example, as shown in 3b in fig. 3, a circular icon and a text field of "sleep band" are used as the identifier of the sleep band, and are marked at the marked position of the sleep band in the spatial layout diagram.
It can be understood that, after the labeling of one target object is completed through steps S130 to S150, the scene labeling interface labeled with the current target object obtained through step S150 may be used as the scene labeling interface for the next labeling. For example, as shown in fig. 3, if the target object during the first labeling is a smart panel, the scene labeling interface may include the identifier of the smart panel after the first labeling is completed; when the target object during the second labeling is a sleep band, the scene labeling interface at that moment includes the identifier of the smart panel, and the identifier of the sleep band needs to be labeled in the spatial layout diagram of the scene labeling interface.
In this way, by responding to the labeling operation for the selected target object, the identifier of the target object can be automatically displayed at the labeling position corresponding to its real position in the spatial layout diagram of the detection area. Automatic labeling of the target object is thus completed, labeling efficiency and quality are improved, and the situation where the position of an object in the image does not correspond to its real position is avoided.
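For illustration only, the flow of steps S130 to S150 may be sketched as the following simplified Python logic. The class names, the fixed detector reading and the linear pixel mapping are assumptions made for this sketch and are not part of the claimed embodiments.

```python
from dataclasses import dataclass


@dataclass
class LayoutDiagram:
    """Spatial layout diagram whose size corresponds to the real detection area."""
    width_px: int
    height_px: int
    real_width_m: float   # real width of the detection area
    real_length_m: float  # real length of the detection area

    def to_label_position(self, real_x_m: float, real_y_m: float) -> tuple[int, int]:
        # Map a real position in the detection area to a labeling position
        # in the layout diagram (step S140).
        px = round(real_x_m / self.real_width_m * self.width_px)
        py = round(real_y_m / self.real_length_m * self.height_px)
        return px, py


class SceneAnnotationUI:
    def __init__(self, layout: LayoutDiagram, detector):
        self.layout = layout        # shown in the scene labeling interface (step S130)
        self.detector = detector    # stand-in for the position detection device 200
        self.labels: dict[str, tuple[int, int]] = {}

    def on_labeling_operation(self, identifier: str) -> None:
        # Step S140: read the real position of the selected target object
        # (here via the detected reference object) and convert it.
        real_x, real_y = self.detector.current_position()
        label_pos = self.layout.to_label_position(real_x, real_y)
        # Step S150: display the identifier at the labeling position.
        self.labels[identifier] = label_pos
        print(f"labeled {identifier!r} at {label_pos}")


class FakeDetector:
    """Hypothetical detector returning a fixed reading, in metres."""
    def current_position(self) -> tuple[float, float]:
        return 1.2, 3.5


if __name__ == "__main__":
    ui = SceneAnnotationUI(LayoutDiagram(400, 700, 4.0, 7.0), FakeDetector())
    ui.on_labeling_operation("sleep band")   # -> labeled 'sleep band' at (120, 350)
```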
Under the condition that the home environment image is obtained by the method, an intuitive visual home interface can be provided. The home environment image is displayed for the user, which is equivalent to providing a space-based interaction mode, in the home environment image, the user can quickly find out the identification corresponding to the object to be controlled, namely, the user can quickly correspond the identification in the home environment image with the object in the real home space, and the user can conveniently manage.
Optionally, as a possible implementation manner, in a case that the electronic device 100 has a position detection function, the electronic device 100 may obtain real position information of the selected target object in the detection area through position detection, and further determine the labeling position of the target object according to the real position information of the target object.
For example, the electronic device 100 obtains an environment image by shooting, then obtains real position information of the target object in the detection area by recognition in combination with the self position and the environment image, and then determines the annotation position based on the real position information of the target object.
Alternatively, as another possible implementation manner, in a case that the electronic device 100 does not have a position detection function, the position detection device 200 may directly obtain real position information of the target object in the detection area through detection, and send the real position information to the electronic device 100, so that the electronic device 100 obtains the real position information of the target object, and then determines the annotation position according to the real position information.
As shown in fig. 1, the location detecting device 200 may implement data communication with the electronic device 100 sequentially through the gateway 400 and the router 500, or sequentially through the gateway 400, the router 500, and the server 600, so as to transmit the real location information to the electronic device 100. Alternatively, the electronic apparatus 100 may transmit a position request to the position detecting apparatus 200 in a case where it is necessary to obtain real position information of the target object, and the position detecting apparatus 200 may transmit the detected real position information to the electronic apparatus 100 based on the position request.
Optionally, as another possible implementation manner, the reference object may be used as a carrier, and when it is determined that the reference object is located at the target object, the real position information of the reference object in the detection area obtained through the detection is obtained, and then the labeling position of the target object is determined according to the real position information.
The reference object is an object used as a position detection carrier, specifically an object that can move within the environment corresponding to the spatial layout diagram (i.e., within the detection area). Optionally, the reference object may be a living or non-living object. For example, a living reference object may be a person, a pet, or the like, and a non-living reference object may be a mobile robot, a sweeping robot, or the like.
In this embodiment, the reference object is detected, that is, the target object is detected, and the real position information of the reference object may be used as the real position information of the target object, so as to determine the labeling position of the target object. Therefore, the user can conveniently and freely select the target object to be marked and accurately determine the real position information of the target object.
In the case of using the reference object, the reference object may be moved to the target object within a preset time period after position detection starts, and when the preset time period ends, it can be directly determined that the reference object is located at the target object at that time. In this way, whether the reference object has moved to the target object can be determined automatically, without human operation.
The user may also input a labeling operation to the electronic apparatus 100 in a case where it is determined that the reference object is located at the target object. When the annotation operation is received, in response to the annotation operation for the selected target object, it may be determined that the reference object is located at the target object at the time, and then the annotation position of the target object is determined according to the real position information of the reference object at the time. Thereby, the flexibility is stronger while obtaining the true position information in case the reference object is actually moved to the target object.
Alternatively, to increase the flexibility of the target labeling process, the user may input a selection operation for the target object to the electronic device 100. In the case that the selection operation is received, the electronic device 100 may, in response to the selection operation for the target object, present prompt information indicating that the reference object should move to the target object. While the electronic device 100 displays the prompt information, the reference object may move toward the target object. When the reference object reaches the target object, the user may input a labeling operation for the target object to the electronic device 100. In the case that the electronic device 100 receives the labeling operation for the target object, the electronic device 100 may determine that the reference object is located at the target object, obtain the real position information of the reference object at that time, and determine the labeling position of the target object according to the real position information of the reference object.
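A minimal sketch of this select-prompt-confirm sequence follows; the handler, its method names and the detector stub are invented for illustration and do not come from the application.

```python
class ReferenceObjectLabeler:
    """Sketch of the select -> prompt -> confirm labeling flow using a reference object."""

    def __init__(self, read_position):
        self.read_position = read_position   # callable returning the reference object's position
        self.pending_target = None           # target selected but not yet labeled

    def on_selection_operation(self, target_name: str) -> str:
        # In response to the selection operation, present prompt information
        # indicating that the reference object should move to the target object.
        self.pending_target = target_name
        return f"Please move to the {target_name} and then confirm labeling."

    def on_labeling_operation(self) -> tuple[str, tuple[float, float]]:
        # Receiving the labeling operation is taken to mean the reference object
        # is now located at the target object; its real position is read out.
        if self.pending_target is None:
            raise RuntimeError("no target object selected")
        real_position = self.read_position()
        target, self.pending_target = self.pending_target, None
        return target, real_position


def radar_position_stub() -> tuple[float, float]:
    # Placeholder for a reading from the position detection device.
    return 2.0, 1.5


labeler = ReferenceObjectLabeler(radar_position_stub)
print(labeler.on_selection_operation("smart panel"))
print(labeler.on_labeling_operation())   # -> ('smart panel', (2.0, 1.5))
```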
Wherein the real position information of the reference object can be obtained by at least one of the following methods: real position information is obtained through millimeter wave radar monitoring; acquiring real position information through an Ultra Wide Band (UWB) positioning technology; the real position information is obtained by performing image analysis on an environment image including a reference object. The millimeter wave radar is a positioning device, and the relative position relation between the millimeter wave radar and the target is obtained by transmitting a millimeter wave signal and analyzing the returned signal.
In one implementation, the real position information of the reference object is obtained through millimeter wave radar monitoring, and the labeling position of the target object can be determined under the condition that the real position information of the reference object is used as the real position information of the target object at the reference object. Therefore, the marking position can be determined based on the real position information, and meanwhile, the privacy of the user cannot be influenced.
In this embodiment, the reference object has no relation to the target object itself, but in order to realize labeling of the target object, the reference object is detected by the millimeter wave radar, and when the reference object moves to the target object, a signal is fed back to the millimeter wave radar, which obtains real position information of the reference object based on the signal. The reference object is simply a bridge between the target object and the millimeter wave radar.
A single millimeter wave radar can detect position information of the reference object only within the area it covers. During detection, the user can manually select the space to be detected (for example, a living room or a bedroom), and this space serves as the detection area corresponding to the spatial layout diagram. The manually selected space cannot be larger than the coverage area of one millimeter wave radar; otherwise, the real position information of the reference object may not be obtained. When the space to be detected is larger than the coverage of one millimeter wave radar, multiple millimeter wave radars can be used for detection.
The millimeter wave radar can be placed in a default space, and the detectable area of the millimeter wave radar is then obtained from information such as its installation height and pitch angle in that space. In one possible example, the detection range of a millimeter wave radar is 4 m (wide) × 7 m (long), with an installation height of 1.4 m; of course, the installation height can be adjusted as needed. The 1.4 m height refers to a millimeter wave radar that is mounted horizontally. The millimeter wave radar can be mounted horizontally or tilted downward; the detection range is largest when it is mounted horizontally.
In the case where the real position information is obtained by the millimeter wave radar, the real position information may be mapped with the spatial coordinate system to obtain coordinates of the reference object in the spatial coordinate system, so that the coordinates of the reference object are displayed in a manner desired by the user. The specific mapping mode can be realized according to a coordinate conversion algorithm.
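One plausible sketch of such a coordinate conversion, assuming the radar reports targets in its own local x/y plane and that its mounting position and yaw angle in the room are known; the coverage check simply reuses the example 4 m x 7 m detection range quoted above.

```python
import math


def radar_to_room(local_x: float, local_y: float,
                  mount_x: float, mount_y: float,
                  yaw_deg: float) -> tuple[float, float]:
    """Rotate a radar-local detection into the room (spatial) coordinate system
    and translate it by the radar's mounting position."""
    yaw = math.radians(yaw_deg)
    room_x = mount_x + local_x * math.cos(yaw) - local_y * math.sin(yaw)
    room_y = mount_y + local_x * math.sin(yaw) + local_y * math.cos(yaw)
    return room_x, room_y


def within_coverage(local_x: float, local_y: float,
                    width_m: float = 4.0, length_m: float = 7.0) -> bool:
    # Rough check against the example detection range of 4 m (wide) x 7 m (long),
    # assuming the radar sits centred on the 4 m side and looks along +y.
    return abs(local_x) <= width_m / 2 and 0.0 <= local_y <= length_m


# A radar mounted at (2.0, 0.0) on the short wall, with its local y axis aligned
# with the room's y axis (yaw = 0), sees a target at local (1.0, 3.0):
print(within_coverage(1.0, 3.0))            # -> True
print(radar_to_room(1.0, 3.0, 2.0, 0.0, 0.0))  # -> (3.0, 3.0)
```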
The target object may include a first target object and a second target object. The first target object is a controllable device having communication capability, namely a controllable labeling object. The second target object includes any object that is not a controllable device, that is, an uncontrollable labeling object, for example, an uncontrollable item (e.g., an uncontrollable device, an uncontrollable household item, etc.) or an empty object. The criterion for distinguishing controllable from uncontrollable may be the protocol; for example, products that can access ecosystems such as Tmall Genie, Mijia or HomeKit, or that can be connected to the gateway 400 in the smart home system 10, such as smart home devices like smart desk lamps and electric curtains, are controllable devices. A controllable object is a product capable of interacting with the cloud, an object that can be queried and controlled through an interactive page, and is mostly an Internet of Things (IoT) product.
The second target object can be conventional furniture such as a traditional sofa, a television cabinet or a curtain; a household appliance that does not communicate via a supported protocol; a conventional non-IoT appliance such as a refrigerator, a washing machine, a router or a cleaner; or an empty object, that is, no product at all, only a specific coordinate position that does not point to any item. The second target object therefore represents an uncontrollable object, an object that cannot interact with the cloud.
Optionally, during the labeling, the labeling position of the target object can be determined in different ways according to whether the target object is controllable or not, and the labeling is performed.
The electronic device 100 may display a scene annotation interface. In the case that the target object is a first target object, the user may input a device selection operation for the first target object in the scene annotation interface. The device selection operation may be an operation for selecting a unique device identification (e.g., device number) of the first target object. For example, a user may first select a device type of the first target object in the scene labeling interface, for example, select the device type of a refrigerator from several device types such as a refrigerator, a color tv, a sensor, a curtain motor, and the like; next, a unique device identification of the first target object is selected in the selected device type. The electronic device 100 may determine, for the first target object, a device identification of the first target object selected by the user in response to the device selection operation for the first target object. The identity of the first target object may subsequently be determined based on the device identity of the first target object.
The corresponding relation between the identifier of the first target object and the device identifier can be stored, so that the real device needing to be controlled can be determined when the household environment image obtained based on the target labeling processing method is controlled subsequently. For example, if a user selects a certain identifier in the home environment image, the specific device to be controlled by the user may be determined based on the correspondence between the identifier and the device identifier, and the specific device may be controlled.
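As an illustrative sketch only (the store and function names are assumptions), the saved correspondence between an identifier in the home environment image and a device identifier might later be used like this:

```python
# Hypothetical in-memory store; the application only states that the correspondence
# between an identifier in the layout diagram and a device identifier is saved so
# that the real device can be found when the user taps the identifier later.
label_to_device: dict[str, str] = {}


def save_label(label_id: str, device_id: str) -> None:
    label_to_device[label_id] = device_id


def control_from_image(label_id: str, command: str) -> str:
    # Look up which real device the tapped identifier corresponds to,
    # then issue a (stubbed) control command to it.
    device_id = label_to_device[label_id]
    return f"send {command!r} to device {device_id}"


save_label("living-room curtain icon", "curtain-motor-01")
print(control_from_image("living-room curtain icon", "open"))
# -> send 'open' to device curtain-motor-01
```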
After the user inputs a device selection operation for the first target object, a real position of the first target object in the detection area may be obtained, and a labeling position of the first target object in the spatial layout diagram is obtained based on the real position and the device identifier of the first target object. The real position is used for determining a labeling position, and the equipment identifier is used for determining an identifier of the first target object labeled in the spatial layout diagram.
Alternatively, after the user inputs a device selection operation for the first target object, the electronic device 100 may display prompt information for prompting the reference object to move to the real position of the first target object. When the reference object moves to the real position of the first target object, the user may input a labeling operation for the first target object to the electronic device 100, for example, the user clicks an "appliance label" button on the scene labeling interface. When the annotation operation for the first target object is received, the electronic device 100 may obtain the real position information of the reference object, and further determine the annotation position of the first target object.
In the case that the identifier of the controllable device is stored in advance, the electronic device 100 may store a corresponding relationship between the device identifier and the identifier in advance, and after the device identifier of the first target object is determined, the identifier of the first target object may be determined based on the corresponding relationship, and then the identifier may be marked at a marking position corresponding to the first target object in the spatial layout diagram.
In the embodiment, the user manually selects the first target object to be labeled, and labeling is performed under the condition that the user manually confirms the labeling, so that the flexibility is good, and the labeling requirement of the user can be conveniently met.
In the case that the target object is a first target object, the electronic device 100 may also receive the device identifier reported by the selected first target object. The first target object may report its own device information under the control of the user, where the device information may include the device identifier. The electronic device 100 may further receive a labeling operation input by the user for the selected first target object, and then, in response to the labeling operation, acquire the labeling position of the first target object in the spatial layout diagram based on the received device identifier and the real position of the first target object in the detection area. The device identifier may be used to determine the identifier of the first target object. In this manner, the user is not required to manually select the device identifier.
In the above manner of reporting the device identifier by the first target object, the user may input a device type selection operation for the first target object in the scene annotation interface. The device type selection operation represents an operation of selecting a device type to which the first target object belongs. The electronic apparatus 100 may display prompt information indicating that the reference object is moved to the first target object in response to the device type selection operation after receiving the device type selection operation. When the reference object moves to the real position of the first target object, the user can control the first target object to report the electrical appliance information through the electronic device 100; or, the user operates a button of the first target object, for example, presses a reset key, so as to trigger the first target object to report the electrical appliance information. Therefore, the identification of the first target object can be quickly determined according to the equipment type and the equipment identification.
After receiving the electrical appliance information, the electronic device 100 may determine the device identifier of the first target object according to the electrical appliance information. The identity of the first target object may subsequently be determined from the device identity of the first target object. The corresponding relation between the identification of the first target object and the equipment identification can be stored, so that the real equipment needing to be controlled can be determined when the control is carried out on the basis of the home environment image obtained in the mode in the following.
When the reference object moves to the real position of the first target object, the user may also input a labeling operation for the first target object to the electronic device 100, for example, the user clicks an "appliance label" button on the scene labeling interface. When the electronic device 100 receives the electrical apparatus information and the labeling operation, it may determine that the reference object is located at the device corresponding to the electrical apparatus information, and determine the labeling position of the device corresponding to the electrical apparatus information according to the current real position information of the reference object.
Therefore, the equipment identification of the first target object to be marked can be automatically determined without manually selecting the equipment identification by a user.
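A minimal sketch of this "fast labeling" path, under the assumption that the reported electrical appliance information carries a device identifier and type and that the reference object's position can be read when the labeling operation arrives; all names here are hypothetical.

```python
from dataclasses import dataclass
from typing import Optional


@dataclass
class ApplianceReport:
    device_id: str      # unique device identifier reported by the first target object
    device_type: str    # e.g. "sleep band", "curtain motor"


class FastLabeler:
    """Sketch of fast labeling: the device reports its own identifier,
    so the user does not have to pick it manually."""

    def __init__(self, get_reference_position):
        self.get_reference_position = get_reference_position
        self.last_report: Optional[ApplianceReport] = None

    def on_appliance_report(self, report: ApplianceReport) -> None:
        # Triggered e.g. when the user presses the device's reset key.
        self.last_report = report

    def on_labeling_operation(self) -> tuple[str, tuple[float, float]]:
        if self.last_report is None:
            raise RuntimeError("no appliance information has been reported yet")
        # The reference object is taken to be at the reporting device, so its
        # current real position becomes the basis for the labeling position.
        position = self.get_reference_position()
        identifier = f"{self.last_report.device_type} ({self.last_report.device_id})"
        return identifier, position


labeler = FastLabeler(lambda: (1.2, 3.5))
labeler.on_appliance_report(ApplianceReport("dev-42", "sleep band"))
print(labeler.on_labeling_operation())   # -> ('sleep band (dev-42)', (1.2, 3.5))
```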
The following describes an example of a method for performing labeling processing based on automatic determination of device identifiers with reference to fig. 5.
As shown at 5a in fig. 5, the user has moved to a certain position within the detection area. The user can select the sleep band device in the interactive interface of the electronic device 100 and control it, so that the sleep band device actively reports its own electrical appliance information; alternatively, a person can press a button on the real sleep band device, so that the sleep band device actively reports its own electrical appliance information. The electronic device 100 may determine the device identifier of the sleep band device based on the received electrical appliance information, determine its identifier, and determine that the user has moved to the location of the sleep band device.
The user can also input a labeling operation on the scene labeling interface of the electronic device 100 after moving to the position of the sleep band device. When the electronic device 100 receives the labeling operation, it may determine, according to the current coordinates of the person, the labeling position at which the sleep band device is labeled in the spatial layout diagram shown in fig. 5, and then label the identifier of the sleep band device at the determined labeling position, obtaining the picture shown at 5b in fig. 5.
In the case that the target object is a second target object (i.e., an uncontrollable object), the user may input an identification selection operation for the second target object in the scene annotation interface. The electronic device 100 may determine the identity of the selected second target object in response to the identity selection operation. And further, under the condition that a labeling operation for the selected second target object is received, acquiring a labeling position of the second target object in the spatial layout diagram based on the identification of the second target object and the real position of the second target object in the detection area. Therefore, the user can conveniently define the object to be marked, namely the user can mark any object in the spatial layout drawing according to the actual requirement and determine the mark of the object in the spatial layout drawing.
Optionally, the user may select the type of the second target object in the scene labeling interface, such as refrigerator, sofa, key, fire extinguisher, etc. Then, among the identifiers included in the selected item type, an identifier of the second target object is selected. In this way, the electronic device 100 may determine the identifier of the second target object according to the received identifier selection operation input by the user.
Alternatively, after the user inputs the identification selection operation of the second target object, the electronic device 100 may present prompt information indicating that the reference object is moved to the second target object in response to the identification selection operation. When the reference object moves to the real position of the second target object, the user may also input a labeling operation for the second target object to the electronic device 100, for example, the user clicks an "item labeling" button on the scene labeling interface. When receiving the annotation operation, the electronic device 100 may determine that the reference object is located at the second target object, determine an annotation position of the second target object according to the current real position information of the reference object, and further annotate the identifier of the second target object at the annotation position.
It can be understood that, when the second target object is a null object, the user may directly select the identifier corresponding to the null object, and then, in the case that the real position corresponding to the null object is determined, the electronic device 100 marks the identifier corresponding to the null object at the marked position corresponding to the real position. The actual position corresponding to the empty object may be a position for marking an object or a problem, or a special scene trigger point.
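For illustration, labeling a second target object (including an empty object) could be sketched as follows; the item catalogue and identifier strings are invented for the example.

```python
# Hypothetical catalogue of identifiers for uncontrollable items; the application
# only says the user picks an item type and then an identifier within that type.
ITEM_CATALOGUE = {
    "sofa": ["sofa icon"],
    "refrigerator": ["refrigerator icon"],
    "empty": ["scene trigger point"],   # an empty object points to no real item
}


def label_second_target(item_type: str, identifier: str,
                        reference_position: tuple[float, float]) -> dict:
    if identifier not in ITEM_CATALOGUE.get(item_type, []):
        raise ValueError(f"unknown identifier {identifier!r} for type {item_type!r}")
    # The reference object stands at the item (or at the bare coordinate, for an
    # empty object), so its real position is used as the labeling basis.
    return {"identifier": identifier, "position": reference_position}


print(label_second_target("sofa", "sofa icon", (2.4, 5.1)))
print(label_second_target("empty", "scene trigger point", (0.8, 0.8)))
```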
How to label the uncontrollable objects is described in the following with reference to fig. 6.
As shown at 6a in fig. 6, when the user moves to the position of a second target object to be labeled in the detection area, the user may input a labeling operation on the scene labeling interface. After receiving the labeling operation, the electronic device 100 may obtain the current real position information of the user. If the user inputs an operation of selecting the identifier corresponding to a sofa in the scene labeling interface, the electronic device 100 may determine, according to the current real position information of the user, the labeling position at which the identifier corresponding to the sofa is labeled in the spatial layout diagram shown in fig. 6, and then label that identifier at the determined labeling position, obtaining the picture shown at 6b in fig. 6.
As can be seen from the above description, in the case that the reference object is a person and the millimeter wave device is used to detect the reference object, the process of the target annotation processing method provided in the embodiment of the present application may be as shown in fig. 7.
S1: the millimeter wave device checks whether human body information is detected. If so, S2 is executed; if not, S1 continues to be executed.
S2: human body information is detected, that is, the position information of the target object is detected.
S3: in the case that the target object is a controllable labeling object (i.e., a first target object), the controllable labeling object is labeled through fast labeling.
The fast labeling mode is a mode of performing labeling processing based on the automatic determination device identifier in the above description, and is not described herein again.
S4: in the case that the target object is a controllable labeling object, the controllable labeling object is labeled through common labeling.
The method of labeling the controllable labeling object through common labeling is a method of manually selecting an equipment identifier based on a user and then performing labeling processing in the above description, and is not described herein again.
S5: in the case that the target object is an uncontrollable labeling object (i.e., a second target object), the uncontrollable labeling object is labeled through common labeling.
The way of labeling the uncontrollable labeled object through common labeling is a way of labeling the second target object in the above description, and is not described herein again.
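The S3 to S5 branching can be summarized in a small sketch (illustrative only; the flag indicating whether a device can report its own appliance information is an assumption of the sketch):

```python
from enum import Enum, auto


class TargetKind(Enum):
    CONTROLLABLE = auto()     # first target object
    UNCONTROLLABLE = auto()   # second target object


def choose_labeling_mode(kind: TargetKind, device_reports_itself: bool) -> str:
    """Mirror of the S3-S5 branching: controllable objects may use fast labeling
    when they can report their own appliance information, otherwise common
    labeling is used; uncontrollable objects always use common labeling."""
    if kind is TargetKind.CONTROLLABLE and device_reports_itself:
        return "fast labeling"
    return "common labeling"


print(choose_labeling_mode(TargetKind.CONTROLLABLE, True))     # fast labeling
print(choose_labeling_mode(TargetKind.CONTROLLABLE, False))    # common labeling
print(choose_labeling_mode(TargetKind.UNCONTROLLABLE, False))  # common labeling
```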
The detection area corresponding to the spatial layout diagram in the scene labeling interface may include at least one subspace, and a target object exists in the subspace. The spatial layout diagram may include an identification of the at least one subspace. In this way, when the identification of the target object is marked in the subspace of the spatial layout diagram based on the above manner, the specific object included in each real subspace is conveniently and intuitively presented.
For example, the identification of the target object can be marked in the corresponding subspace through the method, so that a home environment image is obtained, and a user can uniformly manage and schedule the objects in a specific space based on the space according to the home environment image, so that the managed objects generate various linkage relations.
Referring to fig. 8, fig. 8 is a second flowchart illustrating a target annotation processing method according to the embodiment of the present application. In this embodiment, the target annotation processing method may further include step S110 and step S120.
Step S110, aiming at the subspace, responding to the marking operation aiming at the selected subspace, and acquiring the marking position of the subspace in the scene marking interface based on the size of the subspace in the detection area.
And step S120, displaying the mark of the marked subspace at the marked position corresponding to the subspace.
Optionally, the subspaces may be divided according to the spatial distribution of the detection area. For example, when the detection area is a home space, it may be divided into subspaces such as a study, a living room, a kitchen, a passageway, a bedroom 1, and a bedroom 2. When receiving a labeling operation for a subspace input by the user through the scene labeling interface, the electronic device 100 may obtain the spatial information of each subspace selected by the user in any manner. The spatial information may include the real spatial size of the subspace, that is, the size of the subspace in the detection area, and may also include the size of the subspace when it is labeled in the spatial layout diagram (the size in the image being determined by the real spatial size of the subspace).
Optionally, the size of the user-selected subspace in the detection area may be obtained through detection. For example, an image of the selected subspace may be acquired and analyzed to obtain the spatial size of the subspace; alternatively, the reference object may be moved to the edges of the subspace, and the size of the subspace may then be determined by detecting the reference object. For example, the user submits the coordinates detected at the edge positions of the subspace, so that the size of the subspace in the detection area is generated automatically.
Alternatively, a received space size may be directly taken as the size of the selected subspace. For example, when the user directly inputs a space size for a certain subspace, that space size can be used directly as the size of the subspace. The space size received here may be the size of the real space or the size in the spatial layout diagram.
Alternatively, the size of the subspace (here, its size in the spatial layout diagram) may also be determined from a drag operation received from the user. For example, the user drags a rectangular frame to change its size, and the size of the rectangular frame when the user stops dragging is the size, in the spatial layout diagram, of the subspace corresponding to the rectangular frame. For another example, the user may drag an edge line, an intersection, or the like of a subspace displayed in the display interface of the electronic device 100 to change the size of the subspace, so that the electronic device 100 obtains the size of the subspace.
As an alternative embodiment, the above manners may also be combined to obtain the size of the subspace. For example, the spatial size of the selected subspace in the detection area may first be obtained through detection, the user may then manually modify this size, and the electronic device 100 takes the modified size as the spatial size in the spatial information of the subspace, where this spatial size is the real spatial size.
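As a minimal sketch of the edge-detection approach described above, the following Python function derives a subspace size from reference-object positions detected at the subspace edges, with an optional manually entered size taking precedence. The function name and the assumption of an axis-aligned rectangular subspace are illustrative only.

```python
def subspace_size_from_edges(edge_points, manual_size=None):
    """Estimate the subspace size (width, depth) from reference-object positions
    detected at its edges; a manually entered size, if given, overrides detection.
    edge_points: list of (x, y) coordinates in the preset coordinate system, in meters.
    """
    if manual_size is not None:      # the user typed the real space size directly
        return manual_size
    xs = [p[0] for p in edge_points]
    ys = [p[1] for p in edge_points]
    return (max(xs) - min(xs), max(ys) - min(ys))

# positions detected while the reference person stood at two opposite corners of a study
print(subspace_size_from_edges([(0.0, 0.5), (3.0, 3.0)]))  # -> (3.0, 2.5)
```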
After the size of the selected subspace is obtained, the labeling position of the subspace in the scene labeling interface is determined according to the size, and the identifier of the subspace is then labeled at the corresponding labeling position in the spatial layout diagram. The identifier of the subspace may be a rectangular frame as shown in fig. 5 and fig. 6, or may take other forms, which can be set according to actual requirements.
Optionally, the spatial information may further include a space type. For example, when the detection area is a home space, the space type of a subspace may be a study area, an activity area, a kitchen area, and the like. The space type may be manually selected or entered by the user in the scene labeling interface. Optionally, the user may customize space types in the scene labeling interface of the server 600 or the electronic device 100, and when in use, set the space type of a specific subspace simply by selecting it. When a subspace is labeled, its space type can also be labeled at the corresponding subspace in the spatial layout diagram, as shown in fig. 4, so that the user can clearly understand the properties of the subspace.
Optionally, the spatial information may also include position information. When there are multiple subspaces, they can be labeled in the home environment image in combination with the position information of each subspace. The position information may be input by the user based on the real distribution of the subspaces. In this way, the subspace distribution in the home environment image can be kept consistent with the real subspace distribution, making it easy for the user to match the objects labeled in the home environment image with the real objects.
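For illustration, the following sketch (assuming a proportional, axis-aligned mapping that the disclosure does not prescribe) shows how the position information and size of each subspace could be turned into a rectangle in the layout diagram, so that the drawn distribution matches the real one.

```python
def subspace_rect(position, size, area_size, layout_px):
    """Compute the pixel rectangle (x, y, w, h) for a subspace in the layout diagram
    from its real position (top-left corner, meters) and real size (width, depth, meters).
    """
    sx = layout_px[0] / area_size[0]
    sy = layout_px[1] / area_size[1]
    return (round(position[0] * sx), round(position[1] * sy),
            round(size[0] * sx), round(size[1] * sy))

# two subspaces of a 6 m x 4 m home drawn on an 800 x 600 layout diagram
print(subspace_rect((0.0, 0.0), (3.0, 4.0), (6.0, 4.0), (800, 600)))  # living room
print(subspace_rect((3.0, 0.0), (3.0, 2.0), (6.0, 4.0), (800, 600)))  # bedroom 1
```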
Optionally, the user may also set a background, a texture, and the like of the subspace, and the electronic device 100 may further mark the above information in the scene marking interface.
The application also provides an application scenario; the use of the target labeling processing method in this application scenario is shown in fig. 9. In this application scenario, a home environment image is obtained by using the target labeling processing method.
First, the electronic device 100 displays a home scene labeling interface.
Firstly, space labeling is performed.
The user can select the type of the subspace in the home scene labeling interface, such as a sofa area, a study area, an activity area, a television area, or a kitchen area. The subspace may be at room level, for example a bedroom, or may be a small area such as a sofa area or a television area. Then, the size of the selected subspace may be defined.
Optionally, the size of the subspace may be set by the user by dragging in the home scene labeling interface, or may be a size directly entered manually by the user; both represent the size of the subspace as labeled in the home scene labeling interface.
The size of the subspace may also be obtained as follows. The user moves to an edge of the selected subspace in the detection area and then clicks the "space labeling" button in the home scene labeling interface to input a space labeling operation. After receiving the space labeling operation, the electronic device 100 sends a position request to the millimeter wave device through the cloud server and the gateway or the router in sequence; the millimeter wave device returns the position of the detected person to the gateway or the router. The position of the person returned by the millimeter wave device may be expressed with the millimeter wave device as the reference point. The gateway or the router may perform coordinate-system conversion on the position of the person returned by the millimeter wave device to determine, in the preset coordinate system, the size of the selected subspace (i.e., the spatial information in fig. 9) and the position of the subspace (i.e., the position coordinates at the edge of the subspace), and then return the position and size of the subspace to the electronic device 100 via the cloud server. The electronic device 100 may display the received position and size of the subspace, label the subspace in the home scene labeling interface based on them, and display the home scene labeling interface after the labeling of the subspace is completed.
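The coordinate-system conversion performed by the gateway or the router is not specified in detail; as a hedged sketch, it could be a plain two-dimensional rigid transform from the millimeter-wave-device reference frame into the preset coordinate system, as below. The device origin and yaw angle are assumed parameters introduced for illustration.

```python
import math

def to_preset_frame(point, device_origin, device_yaw_deg):
    """Convert a position reported relative to the millimeter wave device into the
    preset coordinate system, given the device's origin and yaw in that system.
    """
    x, y = point
    yaw = math.radians(device_yaw_deg)
    xr = x * math.cos(yaw) - y * math.sin(yaw)
    yr = x * math.sin(yaw) + y * math.cos(yaw)
    return (xr + device_origin[0], yr + device_origin[1])

# a person detected 2 m in front of a device mounted at (1.0, 0.5) and rotated 90 degrees
print(to_preset_frame((2.0, 0.0), (1.0, 0.5), 90.0))  # approximately (1.0, 2.5)
```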
Optionally, the user may also input other attributes of the selected subspace in the home scene labeling interface, and the input attributes are marked at the subspace in the home scene labeling interface.
It will be appreciated that when there are multiple subspaces, the above process may be repeated for the other subspaces that have not yet been labeled, thereby completing the labeling of each subspace. In this way, the layout diagram of the home space can be labeled.
Secondly, common labeling may be performed for controllable devices.
The user may first select an appliance type in the home scene labeling interface, for example, selecting the type of the controllable device to be labeled from appliance types such as refrigerator, color TV, sensor, and curtain motor. After the appliance type is selected, the electronic device 100 may display each unique device number under the selected appliance type in the home scene labeling interface; the user may select the unique device number of the controllable device to be labeled from the displayed unique device numbers.
After the user selects the unique device number, the electronic device 100 may present a prompt in the home scene labeling interface instructing the user to move to the real position of the selected controllable device. After the user moves into place, the user can click the "appliance label" button on the home scene labeling interface to input a labeling operation for the selected controllable device.
The electronic device 100 may, upon receiving the labeling operation, send a position request to the millimeter wave device through the cloud server and the gateway or the router in sequence. The millimeter wave device returns the position of the detected person to the gateway or the router. The position of the person returned by the millimeter wave device may be expressed with the millimeter wave device as the reference point. The gateway or the router may perform coordinate-system conversion on the position of the person returned by the millimeter wave device to obtain the position of the person in the preset coordinate system, store it as the appliance position, and then return the appliance position to the electronic device 100 through the cloud server. The electronic device 100 can display the appliance position, determine a labeling position in the home spatial layout diagram based on the appliance position, label the identifier corresponding to the unique device number selected by the user at that labeling position, and display the home scene labeling interface labeled with the identifier of the controllable device.
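How the electronic device 100 maps the appliance position to a labeling position in the layout diagram is likewise left open; a simple proportional mapping from the detection-area extents to the pixel size of the layout diagram, assumed here purely for illustration, would look as follows.

```python
def to_layout_position(real_pos, area_size, layout_size_px):
    """Map a real position (meters, preset coordinate system) to pixel coordinates
    in the spatial layout diagram, assuming a proportional mapping.
    """
    sx = layout_size_px[0] / area_size[0]
    sy = layout_size_px[1] / area_size[1]
    return (round(real_pos[0] * sx), round(real_pos[1] * sy))

# an appliance detected at (3.0, 2.0) in a 6 m x 4 m home, drawn on an 800 x 600 layout
print(to_layout_position((3.0, 2.0), (6.0, 4.0), (800, 600)))  # -> (400, 300)
```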
Thirdly, quick labeling may be performed for controllable devices.
The user may first select an appliance type in the home scene labeling interface, for example, selecting the type of the controllable device to be labeled from appliance types such as refrigerator, color TV, sensor, and curtain motor.
After the user selects the appliance type, the electronic device 100 may present a prompt in the home scene labeling interface instructing the user to move to the real position of the selected controllable device. Before, while, or after the user moves into place, the user may control the selected controllable device with the electronic device 100 through the cloud server and the gateway or the router to trigger the device to report its appliance information. After receiving the appliance information reported by the controllable device, the gateway or the router can report it to the cloud server. The appliance information includes the unique device number, so the user does not need to look up the unique device number manually.
After the user moves into place, the user can also click the "appliance label" button on the home scene labeling interface to input a labeling operation for the selected controllable device.
The electronic device 100 may, upon receiving the labeling operation, send a position request to the millimeter wave device through the cloud server and the gateway or the router in sequence. The millimeter wave device returns the position of the detected person to the gateway or the router. The position of the person returned by the millimeter wave device may be expressed with the millimeter wave device as the reference point. The gateway or the router can perform coordinate-system conversion on the position of the person returned by the millimeter wave device to obtain the position of the person in the preset coordinate system and, in combination with the appliance information reported through the gateway or the router, store it as the appliance position. The appliance information and the appliance position, or the description information of the identifier corresponding to the appliance information together with the appliance position, are then returned to the electronic device 100 through the cloud server. The electronic device 100 can display the information returned by the cloud server, determine a labeling position in the home spatial layout diagram based on the appliance position, label the identifier corresponding to the appliance information at that labeling position, and display the home scene labeling interface labeled with the identifier of the controllable device.
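The pairing of the reported appliance information with the detected person position in the quick-labeling flow can be sketched as below; the record fields and the key names of the report are assumptions made for this example only.

```python
def quick_label_record(appliance_report, person_position):
    """Combine the appliance information reported by the controlled device with the
    position of the person standing at it, as in the quick-labeling flow above.
    """
    return {
        "device_number": appliance_report["unique_device_number"],
        "appliance_type": appliance_report.get("type", "unknown"),
        "position": person_position,   # stored as the appliance position
    }

record = quick_label_record(
    {"unique_device_number": "AC-0042", "type": "curtain motor"},
    (3.0, 2.0),
)
print(record["device_number"], record["position"])  # AC-0042 (3.0, 2.0)
```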
Fourthly, uncontrollable objects are labeled.
The uncontrollable objects include uncontrollable appliances, uncontrollable household articles, empty objects, and the like.
For example, the user may first select an uncontrollable item type, such as refrigerator or sofa, in the home scene labeling interface. After the uncontrollable item type is selected, the electronic device 100 may display each identifier under the selected uncontrollable item type in the home scene labeling interface; the user may select the identifier of the uncontrollable object to be labeled from the displayed identifiers.
After the user selects the identifier, the electronic device 100 may present, in the home scene labeling interface, a prompt instructing the user to move to the real position corresponding to the selected identifier. After the user moves into place, the user can click the "item labeling" button on the home scene labeling interface to input a labeling operation for the selected uncontrollable object.
The electronic device 100 may, upon receiving the labeling operation, send a position request to the millimeter wave device through the cloud server and the gateway or the router in sequence. The millimeter wave device returns the position of the detected person to the gateway or the router. The position of the person returned by the millimeter wave device may be expressed with the millimeter wave device as the reference point. The gateway or the router may perform coordinate-system conversion on the position of the person returned by the millimeter wave device to obtain the position of the person in the preset coordinate system, store it as the article position, and then return the article position to the electronic device 100 through the cloud server. The electronic device 100 may display the received article position, determine a labeling position in the home spatial layout diagram based on the article position, label the identifier selected by the user at that labeling position, and display the home scene labeling interface labeled with the identifier of the uncontrollable object.
Therefore, any object in the home space can be labeled, and the home environment image can be obtained.
In order to execute the corresponding steps in the above embodiments and the various possible manners, an implementation of the target annotation processing apparatus 700 is given below. Referring to fig. 10, fig. 10 is a block diagram illustrating a target annotation processing apparatus 700 according to an embodiment of the present application. It should be noted that the basic principle and technical effect of the target annotation processing apparatus 700 provided in this embodiment are the same as those of the above embodiments; for the sake of brevity, parts not mentioned in this embodiment may refer to the corresponding contents in the above embodiments. The target annotation processing apparatus 700 may be applied to the electronic device 100 described above. The target annotation processing apparatus 700 may include: a display module 710 and a processing module 720.
The display module 710 is configured to display a scene labeling interface, wherein the scene labeling interface includes a spatial layout diagram of the detection area.
The processing module 720 is configured to, in response to a labeling operation for a selected target object, obtain a labeling position of the target object in the spatial layout diagram based on the real position of the target object in the detection area.
The display module 710 is further configured to display, at the labeling position in the spatial layout diagram, the identifier of the labeled target object.
Optionally, in this embodiment, the processing module 720 is specifically configured to: and responding to the labeling operation aiming at the selected target object, determining that the reference object is positioned at the target object, acquiring the real position information of the detected reference object in the detection area, and determining the labeling position of the target object according to the real position information.
Optionally, in this embodiment, the processing module 720 is specifically configured to: in response to the selection operation for the target object, presenting prompt information for indicating that the reference object moves to the target object; and in response to the labeling operation aiming at the target object, determining that the reference object is positioned at the target object, and determining the labeling position of the target object according to the real position information of the reference object.
Alternatively, in the present embodiment, the real position information is obtained by a millimeter wave radar.
Optionally, in this embodiment, the target object includes a first target object, and the first target object is a controllable device with a communication capability; the processing module 720 is specifically configured to: for a first target object, in response to a device selection operation for the first target object, determining a device identifier of the selected first target object; and responding to the labeling operation aiming at the selected first target object, and acquiring a labeling position of the first target object in the spatial layout diagram based on the equipment identification and the real position of the first target object in the detection area.
Optionally, in this embodiment, the target object includes a first target object, and the first target object is a controllable device with a communication capability; the processing module 720 is specifically configured to: receiving an equipment identifier reported by the selected first target object; and responding to the labeling operation aiming at the selected first target object, and acquiring a labeling position of the first target object in the spatial layout diagram based on the equipment identification and the real position of the first target object in the detection area.
Optionally, in this embodiment, the target object includes a second target object, and the second target object includes an uncontrollable article and an empty object; the processing module 720 is specifically configured to: for a second target object, determining the identifier of the selected second target object in response to the identifier selection operation for the second target object; and responding to the labeling operation aiming at the selected second target object, and acquiring the labeling position of the second target object in the spatial layout diagram based on the identification of the second target object and the real position of the second target object in the detection area.
Optionally, in this embodiment, the detection area includes at least one subspace, and the processing module 720 is further configured to: aiming at the subspace, responding to the marking operation aiming at the selected subspace, and acquiring the marking position of the subspace in the scene marking interface based on the size of the subspace in the detection area; the presentation module 710 is further configured to: and displaying the mark of the marked subspace at the marked position corresponding to the subspace.
Alternatively, in the present embodiment, the size of the subspace is obtained by the millimeter wave radar.
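Purely as an illustrative sketch of the division of labor between the display module 710 and the processing module 720 (the class and method names are assumptions, and the proportional position mapping is the same assumption as in the earlier sketches):

```python
class ProcessingModule:
    """Derives a labeling position in the layout diagram from a real position."""
    def annotation_position(self, real_position, area_size, layout_px):
        sx = layout_px[0] / area_size[0]
        sy = layout_px[1] / area_size[1]
        return (round(real_position[0] * sx), round(real_position[1] * sy))

class DisplayModule:
    """Keeps and shows the identifiers labeled in the layout diagram."""
    def __init__(self):
        self.labels = {}  # identifier -> labeling position in pixels

    def show_label(self, identifier, layout_position):
        self.labels[identifier] = layout_position
        print(f"label {identifier!r} at {layout_position}")

class TargetAnnotationApparatus:
    """Mirrors the display module 710 / processing module 720 split."""
    def __init__(self, area_size, layout_px):
        self.processing = ProcessingModule()
        self.display = DisplayModule()
        self.area_size, self.layout_px = area_size, layout_px

    def on_label_operation(self, identifier, real_position):
        pos = self.processing.annotation_position(real_position, self.area_size, self.layout_px)
        self.display.show_label(identifier, pos)

TargetAnnotationApparatus((6.0, 4.0), (800, 600)).on_label_operation("sofa", (3.0, 2.0))
```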
Referring to fig. 11, fig. 11 is a block diagram of an electronic device 100 according to an embodiment of the present application. As shown in fig. 11, the electronic device 100 may include one or more of the following components: a processor 110, a memory 120, and one or more applications, where the one or more applications may be stored in the memory 120 and configured to be executed by the one or more processors 110 to perform the target annotation processing method described in the foregoing method embodiments.
The processor 110 may include one or more processing cores. The processor 110 connects various parts within the electronic device 100 using various interfaces and lines, and performs various functions of the electronic device 100 and processes data by running or executing instructions, programs, code sets, or instruction sets stored in the memory 120 and calling data stored in the memory 120. Optionally, the processor 110 may be implemented in hardware using at least one of Digital Signal Processing (DSP), Field-Programmable Gate Array (FPGA), and Programmable Logic Array (PLA). The processor 110 may integrate one or more of a Central Processing Unit (CPU), a Graphics Processing Unit (GPU), a modem, and the like. The CPU mainly handles the operating system, the user interface, applications, and the like; the GPU is responsible for rendering and drawing display content; and the modem handles wireless communication. It is understood that the modem may also not be integrated into the processor 110 but be implemented by a separate communication chip.
The memory 120 may include a Random Access Memory (RAM) or a Read-Only Memory (ROM). The memory 120 may be used to store instructions, programs, code sets, or instruction sets. The memory 120 may include a program storage area and a data storage area, where the program storage area may store instructions for implementing an operating system, instructions for implementing at least one function (such as a touch function, a sound playing function, or an image playing function), instructions for implementing the various method embodiments described above, and the like. The data storage area may also store data created by the electronic device 100 in use, and the like. It will be understood by those skilled in the art that the structure shown in fig. 11 is only an illustration and is not intended to limit the structure of the electronic device 100. For example, the electronic device 100 may also include more or fewer components than shown in fig. 11, or have a different configuration than shown in fig. 11.
The embodiment of the present application further provides a readable storage medium, where a computer program is stored on the storage medium, and when the computer program is executed by a processor, the steps of the target annotation processing method are implemented.
To sum up, the embodiments of the present application provide a target annotation processing method and apparatus, an electronic device, and a readable storage medium, which display a scene annotation interface including a spatial layout diagram of a detection area; when an annotation operation for a selected target object is received, the annotation position of the target object in the spatial layout diagram is obtained, in response to the operation, based on the real position of the target object in the detection area, and the identifier of the annotated target object is then displayed at the annotation position. In this way, the identifier of the target object can be automatically displayed at the annotation position corresponding to its real position in the spatial layout diagram of the detection area, so that a mismatch between the position of the object in the image and its real position is avoided, and the annotation quality and efficiency are improved.
In the embodiments provided in the present application, it should be understood that the disclosed apparatus and method can be implemented in other ways. The apparatus embodiments described above are merely illustrative, and for example, the flowchart and block diagrams in the figures illustrate the architecture, functionality, and operation of possible implementations of apparatus, methods and computer program products according to various embodiments of the present application. In this regard, each block in the flowchart or block diagrams may represent a module, segment, or portion of code, which comprises one or more executable instructions for implementing the specified logical function(s). It should also be noted that, in some alternative implementations, the functions noted in the block may occur out of the order noted in the figures. For example, two blocks shown in succession may, in fact, be executed substantially concurrently, or the blocks may sometimes be executed in the reverse order, depending upon the functionality involved. It will also be noted that each block of the block diagrams and/or flowchart illustration, and combinations of blocks in the block diagrams and/or flowchart illustration, can be implemented by special purpose hardware-based systems which perform the specified functions or acts, or combinations of special purpose hardware and computer instructions.
In addition, functional modules in the embodiments of the present application may be integrated together to form an independent part, or each module may exist separately, or two or more modules may be integrated to form an independent part.
The functions, if implemented in the form of software functional modules and sold or used as a stand-alone product, may be stored in a computer readable storage medium. Based on such understanding, the technical solution of the present application or portions thereof that substantially contribute to the prior art may be embodied in the form of a software product stored in a storage medium and including instructions for causing a computer device (which may be a personal computer, a server, or a network device) to execute all or part of the steps of the method according to the embodiments of the present application. And the aforementioned storage medium includes: a U-disk, a removable hard disk, a Read-Only Memory (ROM), a Random Access Memory (RAM), a magnetic disk or an optical disk, and other various media capable of storing program codes.
The foregoing is illustrative of only alternative embodiments of the present application and is not intended to limit the present application, which may be modified or varied by those skilled in the art. Any modification, equivalent replacement, improvement and the like made within the spirit and principle of the present application shall be included in the protection scope of the present application.

Claims (12)

1. A target annotation processing method, characterized in that the method comprises:
displaying a scene labeling interface, wherein the scene labeling interface comprises a spatial layout diagram of a detection area;
responding to the labeling operation aiming at the selected target object, and acquiring a labeling position of the target object in the spatial layout diagram based on the real position of the target object in the detection area;
and displaying the mark of the marked target object at the marked position in the spatial layout diagram.
2. The method according to claim 1, wherein the obtaining of the labeled position of the target object in the spatial layout diagram based on the real position of the target object in the detection area in response to the labeling operation on the selected target object comprises:
and responding to the labeling operation aiming at the selected target object, determining that the reference object is positioned at the target object, acquiring the real position information of the detected reference object in the detection area, and determining the labeling position of the target object according to the real position information.
3. The method according to claim 2, wherein the determining that a reference object is located at the target object in response to a labeling operation for the selected target object, acquiring real position information of the detected reference object in the detection area, and determining a labeled position of the target object according to the real position information comprises:
in response to a selection operation for the target object, presenting prompt information for indicating that the reference object moves to the target object;
and responding to the labeling operation aiming at the target object, determining that the reference object is positioned at the target object, and determining the labeling position of the target object according to the real position information of the reference object.
4. A method according to claim 2 or 3, characterized in that the real position information is obtained by means of millimeter wave radar.
5. The method of claim 1, wherein the target object comprises a first target object, the first target object being a controllable device having communication capabilities;
the acquiring, in response to a labeling operation for the selected target object, a labeling position of the target object in the spatial layout diagram based on a real position of the target object in the detection area includes:
for the first target object, determining the equipment identification of the selected first target object in response to the equipment selection operation for the first target object;
and responding to the labeling operation aiming at the selected first target object, and acquiring the labeling position of the first target object in the spatial layout diagram based on the equipment identification and the real position of the first target object in the detection area.
6. The method of claim 1, wherein the target object comprises a first target object, the first target object being a controllable device having communication capabilities;
the acquiring, in response to a labeling operation for the selected target object, a labeling position of the target object in the spatial layout diagram based on a real position of the target object in the detection area includes:
receiving an equipment identifier reported by the selected first target object;
and responding to the labeling operation aiming at the selected first target object, and acquiring the labeling position of the first target object in the spatial layout diagram based on the equipment identification and the real position of the first target object in the detection area.
7. The method of claim 1, wherein the target object comprises a second target object, the second target object comprising an uncontrollable item and an empty object;
the acquiring, in response to a labeling operation for the selected target object, a labeling position of the target object in the spatial layout diagram based on a real position of the target object in the detection area includes:
for the second target object, determining the identifier of the selected second target object in response to the identifier selection operation for the second target object;
and responding to the labeling operation aiming at the selected second target object, and acquiring the labeling position of the second target object in the spatial layout diagram based on the identification of the second target object and the real position of the second target object in the detection area.
8. The method of any one of claims 1-3, wherein the detection region comprises at least one subspace, the method further comprising:
for the subspace, responding to the labeling operation for the selected subspace, and acquiring the labeling position of the subspace in the scene labeling interface based on the size of the subspace in the detection area;
and displaying the marked mark of the subspace at the marked position corresponding to the subspace.
9. The method of claim 8, wherein the subspace is dimensioned by millimeter wave radar.
10. An object labeling processing apparatus, characterized in that the apparatus comprises:
the display module is used for displaying a scene labeling interface, wherein the scene labeling interface comprises a spatial layout diagram of a detection area;
the processing module is used for responding to the labeling operation aiming at the selected target object, and acquiring the labeling position of the target object in the spatial layout diagram based on the real position of the target object in the detection area;
the display module is further configured to display an identifier of the labeled target object at the labeled position in the spatial layout diagram.
11. An electronic device comprising a processor and a memory, the memory storing machine executable instructions executable by the processor to implement the target annotation processing method of any one of claims 1 to 9.
12. A readable storage medium on which a computer program is stored, which, when being executed by a processor, carries out the object annotation processing method according to any one of claims 1 to 9.
CN202111139731.0A 2021-09-28 2021-09-28 Target marking processing method and device, electronic equipment and readable storage medium Pending CN113870390A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202111139731.0A CN113870390A (en) 2021-09-28 2021-09-28 Target marking processing method and device, electronic equipment and readable storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202111139731.0A CN113870390A (en) 2021-09-28 2021-09-28 Target marking processing method and device, electronic equipment and readable storage medium

Publications (1)

Publication Number Publication Date
CN113870390A true CN113870390A (en) 2021-12-31

Family

ID=78991447

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202111139731.0A Pending CN113870390A (en) 2021-09-28 2021-09-28 Target marking processing method and device, electronic equipment and readable storage medium

Country Status (1)

Country Link
CN (1) CN113870390A (en)

Cited By (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114356165A (en) * 2022-01-11 2022-04-15 瀚云科技有限公司 Monitoring area determining method and device of monitoring equipment and electronic equipment
CN114356165B (en) * 2022-01-11 2024-01-23 瀚云科技有限公司 Monitoring area determining method and device of monitoring equipment and electronic equipment
CN115189978A (en) * 2022-06-24 2022-10-14 海信集团控股股份有限公司 Household equipment control method and device, electronic equipment and storage medium
CN115189978B (en) * 2022-06-24 2023-10-24 海信集团控股股份有限公司 Household equipment control method and device, electronic equipment and storage medium
CN116319236A (en) * 2023-03-20 2023-06-23 深圳绿米联创科技有限公司 Space configuration method, device, terminal equipment and storage medium

Similar Documents

Publication Publication Date Title
CN113870390A (en) Target marking processing method and device, electronic equipment and readable storage medium
CN108931924B (en) Control method and device of intelligent household system, processor and storage medium
CN106155002B (en) Intelligent household system
CN111937051B (en) Smart home device placement and installation using augmented reality visualization
US10928979B2 (en) Information apparatus control method, computer-readable recording medium, and information providing method to control devices connected to network via device icons displayed on floor plan
US9983592B2 (en) Moving robot, user terminal apparatus and control method thereof
CN105490897B (en) Control method and control device of household appliance and mobile terminal
CN106462238A (en) Augmented reality based management of a representation of a smart environment
US9398413B1 (en) Mapping electronic devices within an area
CN110196557B (en) Equipment control method, device, mobile terminal and storage medium
KR102371409B1 (en) Method and apparatus for managing a sensor
CN108803371B (en) Control method and device for electrical equipment
CN113110095A (en) HomeMap construction method and system based on sweeping robot
US20230128740A1 (en) Method for providing interface by using virtual space interior design, and device therefor
CN113341737B (en) Control method, system, device, equipment and storage medium of intelligent household equipment
CN109507904B (en) Household equipment management method, server and management system
CN111258357A (en) Environment distribution establishing method, intelligent device, cleaning robot and storage medium
CN115562053A (en) Household equipment control method and device, computer equipment and storage medium
CN109240098B (en) Equipment configuration method and device, terminal equipment and storage medium
CN108600062B (en) Control method, device and system of household appliance
WO2022042751A1 (en) Movement trajectory generation method and apparatus
CN111240217B (en) State detection method and device, electronic equipment and storage medium
CN113253622B (en) HomeMap-based network environment visualization control method and system
CN111314398A (en) Equipment control method, network distribution method, system and equipment
CN110962132B (en) Robot system

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination