CN117635881A - Target display method, target detection device, electronic equipment and medium - Google Patents

Target display method, target detection device, electronic equipment and medium

Info

Publication number
CN117635881A
Authority
CN
China
Prior art keywords
target
gesture
sample
space
point cloud
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202311524125.XA
Other languages
Chinese (zh)
Inventor
余刚
孟钰婧
黄彬彬
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Lumi United Technology Co Ltd
Original Assignee
Lumi United Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Lumi United Technology Co Ltd filed Critical Lumi United Technology Co Ltd
Priority to CN202311524125.XA
Publication of CN117635881A

Landscapes

  • User Interface Of Digital Computer (AREA)

Abstract

The application provides a target display method, a target detection method, a target display device, a target detection device, electronic equipment and a medium, and relates to the technical field of gesture detection. The target display method comprises the following steps: displaying a space page, wherein the space page is used for displaying a target space; and displaying a gesture mark of a target in the target space displayed by the space page, wherein the gesture mark is used for indicating a target gesture of the target. The method and the device solve the problem in the related art that the gestures of targets in a target space are difficult to acquire intuitively.

Description

Target display method, target detection device, electronic equipment and medium
Technical Field
The application relates to the technical field of gesture detection, in particular to a target display method, a target detection device, electronic equipment and a medium.
Background
Gesture detection has many applications, such as human-machine interaction, motion analysis, medical diagnosis, security monitoring, and the like. Gesture detection technology is also developing rapidly with the progress of related technologies, but its results are still not visually displayed at the presentation layer, so it is difficult for a user to intuitively acquire the gesture of each target in a target space.
It can be seen from the above that the prior art has the technical problem that the gesture of each target in a target space is difficult to acquire intuitively.
Disclosure of Invention
The application provides a target display method, a target detection device, electronic equipment and a medium, which can solve the problem that the gesture of each target in a target space is difficult to intuitively acquire in the related technology. The technical scheme is as follows:
according to one aspect of the present application, a target display method includes: displaying a space page; the space page is used for displaying a target space; displaying the gesture mark of the target in the target space displayed by the space page; the gesture indicia is for indicating a target gesture of the target.
According to one aspect of the present application, a target display device includes: the space page display module is used for displaying space pages; the space page is used for displaying a target space; the gesture mark display module is used for displaying gesture marks of the targets in the target space displayed by the space page; the gesture indicia is for indicating a target gesture of the target.
In an exemplary embodiment, the target presentation device further comprises: the position acquisition module is used for acquiring the position of the target in the physical space; the position determining module is used for determining the position of the target in the target space according to the position of the target in the physical space based on the spatial mapping relation between the physical space and the target space, so as to display the gesture mark of the target in the space page based on the position of the target in the target space.
In an exemplary embodiment, the gesture flag display module includes: a target gesture determining unit, configured to determine a target gesture of the target, and find a gesture mark corresponding to the target gesture; and the gesture mark display unit is used for displaying the searched gesture mark in the target space displayed by the space page.
In an exemplary embodiment, the target presentation device further comprises: a time stamp display module for displaying, in the space page, a time stamp associated with the gesture mark, the time stamp being used for indicating the duration for which the target maintains the target gesture.
In an exemplary embodiment, the time stamp display module includes: a number determining unit configured to determine the number of the gesture marks in the space page; a trigger operation detection unit configured to detect a trigger operation for the gesture mark in a case where the number exceeds a set threshold; and a time stamp display unit configured to display, in the space page, the time stamp associated with the selected gesture mark if the trigger operation for the gesture mark is detected.
In an exemplary embodiment, the target presentation device further comprises: the region display module is used for displaying a first region and a second region in the space page, wherein the first region is used for displaying the target space and/or the gesture mark of the target in the target space, and the second region is used for displaying whether the target is detected or not.
In an exemplary embodiment, the target presentation device further comprises: the target feature determining module is used for determining target features of the target and searching sample features matched with the target features; the sample gesture acquisition module is used for acquiring a sample gesture corresponding to the found sample feature if the sample feature matched with the target feature is found, and taking the sample gesture as the target gesture of the target so as to determine a gesture mark of the target according to the target gesture.
In an exemplary embodiment, the target presentation device further comprises: the sample characteristic acquisition module is used for acquiring sample characteristics of the sample in different postures; and the corresponding relation establishing module is used for establishing the corresponding relation between the sample characteristics of the sample and the sample gesture according to the sample characteristics of the sample in different gestures.
In an exemplary embodiment, the correspondence includes at least one of: the corresponding relation between the first height distribution and the first gesture; the first height distribution is used to describe a first height range of the sample in the first pose; the corresponding relation between the second height distribution and the second gesture; the second height profile is used to describe a second range of heights of the sample in the second pose; the corresponding relation between the third height distribution and the third gesture; the third height profile is used to describe a third height range of the sample in the third pose; wherein, in the first height range, the second height range and the third height range, the average height of each height in the first height range is the largest, and the average height of each height in the third height range is the smallest.
In an exemplary embodiment, the target feature determination module includes: a radar signal receiving unit, configured to receive a radar signal of the target sent by a radar device; the point cloud operation unit is used for carrying out point cloud operation on the radar signal of the target to obtain point cloud data of the target, wherein the point cloud data comprises the height data of each point cloud; a point cloud height distribution determining unit, configured to determine a point cloud height distribution of the target based on height data of each point cloud in the point cloud data of the target; and the target feature acquisition unit is used for taking the point cloud height distribution of the target as the target feature of the target.
In an exemplary embodiment, the sample pose acquisition module includes: the gesture detection result acquisition unit is used for determining a sample gesture corresponding to the first height distribution as a first gesture if the sample feature matched with the target feature is found to be the first height distribution, and taking the first gesture as a gesture detection result of the target; or if the sample feature matched with the target feature is found to be the second height distribution, determining the sample gesture corresponding to the second height distribution as a second gesture, and taking the second gesture as a gesture detection result of the target; or if the sample feature matched with the target feature is found to be the third height distribution, determining the sample gesture corresponding to the third height distribution as a third gesture, and taking the third gesture as a gesture detection result of the target.
In an exemplary embodiment, the sample pose acquisition module further comprises: a point cloud height distribution determining unit, configured to determine, when the point cloud height distribution in the point cloud data of the current frame of the target is the third height distribution, the point cloud height distribution in the point cloud data of the previous frame of the target; the support detection unit is used for detecting whether supports are arranged around the target if the point cloud height distribution in the point cloud data of the previous frame is the first height distribution; the gesture detection result acquisition unit is used for taking the fourth gesture as a gesture detection result of the target if no supporting object is arranged around the target; and the gesture detection result acquisition unit is used for taking the third gesture as a gesture detection result of the target if a support is arranged around the target or the point cloud height distribution in the point cloud data of the previous frame is the second height distribution.
In an exemplary embodiment, the sample pose acquisition module further comprises: the historical position acquisition unit is used for detecting historical positions of the target obtained based on the previous frames of point cloud data under the condition that the point cloud height distribution of the current frames of point cloud data of the target is the first height distribution; the gesture detection result obtaining unit is used for taking a fifth gesture as a gesture detection result of the target if the current position of the target obtained based on the current frame point cloud data changes compared with the historical position; and the gesture detection result acquisition unit is used for taking the first gesture as a gesture detection result of the target if the current position is unchanged compared with the historical position.
In an exemplary embodiment, the sample pose acquisition module further comprises: the biological feature extraction unit is used for extracting biological features based on radar signals of the target under the condition that the point cloud height distribution in the current frame of point cloud data of the target is third height distribution, and acquiring physiological data of the target; and the gesture detection result acquisition unit is used for taking the sixth gesture as a gesture detection result of the target if the physiological data of the target is matched with the sleep physiological parameter.
According to one aspect of the present application, a target detection method includes: acquiring a radar signal of a target; performing feature analysis on the radar signal of the target to obtain target features of the target; according to the target characteristics, searching sample characteristics matched with the target characteristics; if the sample characteristics matched with the target characteristics are found, determining a sample gesture corresponding to the found sample characteristics, and taking the sample gesture as a gesture detection result of the target; the gesture detection result is used for indicating a target gesture of the target.
According to one aspect of the present application, an object detection apparatus includes: the radar signal acquisition module is used for acquiring radar signals of targets; the feature analysis module is used for carrying out feature analysis on the radar signal of the target to obtain target features of the target; the searching module is used for searching sample characteristics matched with the target characteristics according to the target characteristics; the result acquisition module is used for determining a sample gesture corresponding to the found sample feature if the sample feature matched with the target feature is found, and taking the sample gesture as a gesture detection result of the target; the gesture detection result is used for indicating a target gesture of the target.
According to one aspect of the application, an electronic device comprises at least one processor and at least one memory, wherein the memory has computer readable instructions stored thereon; the computer readable instructions are executed by one or more of the processors to cause the electronic device to implement the method as described above.
According to one aspect of the present application, a storage medium has stored thereon computer readable instructions that are executed by one or more processors to implement the method as described above.
The beneficial effects that this application provided technical scheme brought are:
by displaying the target space and the gesture mark of the target in the space page, the gesture information of the target in the target space is displayed visually, so that the gesture of the target in the target space can be known intuitively and in real time, the target space and the target can be managed and controlled in a timely manner, and a fall of the target can be discovered in time so that measures can be taken. The problem in the related art that the gesture of each target in a target space is difficult to acquire intuitively is thereby solved.
Drawings
In order to more clearly illustrate the technical solutions in the embodiments of the present application, the drawings that are required to be used in the description of the embodiments of the present application will be briefly described below. It is obvious that the drawings in the following description are only some embodiments of the present application, and that other drawings may be obtained from these drawings without inventive effort for a person skilled in the art.
FIG. 1 is a schematic illustration of an implementation environment in accordance with the teachings of the present application;
FIG. 2 is a flow chart illustrating a method of target presentation according to an exemplary embodiment;
FIG. 3 is a flow chart of an embodiment prior to step 210 in the corresponding embodiment of FIG. 2;
FIG. 4 is a flowchart illustrating a method of object detection, according to an example embodiment;
FIG. 5A shows gesture icons corresponding to different target gestures;
FIG. 5B is a schematic diagram of a space page;
FIG. 6A is a schematic diagram of point cloud data for a sample person in an upright, sitting, lying, etc. position;
FIG. 6B is a schematic diagram of the point cloud height distribution when the target switches between standing, sitting, and lying;
FIG. 6C is a flow chart for separating physiological data from radar signals;
FIG. 7 is a block diagram of a target presentation device, according to an example embodiment;
FIG. 8 is a block diagram of an object detection device according to an exemplary embodiment;
FIG. 9 is a hardware configuration diagram of a terminal according to an exemplary embodiment;
FIG. 10 is a block diagram of an electronic device according to an exemplary embodiment.
Detailed Description
Embodiments of the present application are described in detail below, examples of which are illustrated in the accompanying drawings, wherein the same or similar reference numerals refer to the same or similar elements or elements having the same or similar functions throughout. The embodiments described below by referring to the drawings are exemplary only for the purpose of illustrating the present application and are not to be construed as limiting the present application.
As used herein, the singular forms "a", "an" and "the" are intended to include the plural forms as well, unless expressly stated otherwise, as understood by those skilled in the art. It will be further understood that the terms "comprises" and/or "comprising," when used in this specification, specify the presence of stated features, integers, steps, operations, elements, and/or components, but do not preclude the presence or addition of one or more other features, integers, steps, operations, elements, components, and/or groups thereof. It will be understood that when an element is referred to as being "connected" or "coupled" to another element, it can be directly connected or coupled to the other element, or intervening elements may also be present. Further, "connected" or "coupled" as used herein may include being wirelessly connected or wirelessly coupled. The term "and/or" as used herein includes all or any element and all combinations of one or more of the associated listed items.
The target display method provided by the application can realize visual display of the target gesture and is correspondingly applicable to a target display device, which can be deployed on an electronic device. The electronic device may be computer equipment configured with a von Neumann architecture, including a desktop computer, a notebook computer, a server and the like; the electronic device may also be an electronic device having a central control function, for example a control panel; the electronic device may also be a portable mobile electronic device, for example a smart phone, a tablet computer and the like.
For the purpose of making the objects, technical solutions and advantages of the present application more apparent, the embodiments of the present application will be described in further detail below with reference to the accompanying drawings.
Fig. 1 is a schematic diagram of an implementation environment related to a target display method. The implementation environment includes at least a user terminal 110, an intelligent device 130, a server side 170, and a network device, which in fig. 1 includes a gateway 150 and a router 190, which are not limited in this regard.
The user terminal 110 may be regarded as a user terminal or simply a terminal, and may be configured with (i.e., have installed) a client associated with the smart device 130. The user terminal 110 may be an electronic device such as a smart phone, a tablet computer, a notebook computer, a desktop computer, an intelligent control panel, or another device having display and control functions, which is not limited herein.
Associating the client with the smart device 130 essentially means that the user registers an account in the client and configures the smart device 130 in the client, for example by adding a device identifier for the smart device 130, so that when the client runs on the user terminal 110 it can provide the user with functions such as device display and device control for the smart device 130. The client may be in the form of an application program or a web page, and correspondingly the interface in which the client performs device display may be a program window or a web page, which is not limited herein.
The intelligent device 130 is deployed under the gateway 150, communicates with the gateway 150 through its own communication module, and is thereby controlled by the gateway 150. It should be understood that the smart device 130 generally refers to one of a plurality of smart devices 130, and the embodiments of the present application are merely illustrated with one smart device 130; that is, the embodiments of the present application do not limit the number and type of smart devices deployed under the gateway 150. In one application scenario, the intelligent device 130 accesses the gateway 150 via a local area network and is thereby deployed under the gateway 150. The process of the intelligent device 130 accessing the gateway 150 through a local area network includes: the gateway 150 first establishes a local area network, and the intelligent device 130 joins this local area network by connecting to the gateway 150. Such local area networks include, but are not limited to, ZigBee and Bluetooth. The intelligent device 130 may be an intelligent printer, an intelligent fax machine, an intelligent camera, an intelligent air conditioner, an intelligent door lock, an intelligent lamp, or a human body sensor, a door and window sensor, a temperature and humidity sensor, a water immersion sensor, a natural gas alarm, a smoke alarm, a wall switch, a wall socket, a wireless switch, a wireless wall-mounted switch, a magic cube controller, a curtain motor, a millimeter-wave radar, or the like, each provided with a communication module.
Interaction between the user terminal 110 and the intelligent device 130 may be accomplished through a local area network or through a wide area network. In one application scenario, the user terminal 110 establishes a communication connection with the gateway 150 through the router 190 in a wired or wireless manner, for example including but not limited to WIFI, so that the user terminal 110 and the gateway 150 are deployed in the same local area network, and the user terminal 110 can then interact with the smart device 130 through a local area network path. In another application scenario, the user terminal 110 establishes a wired or wireless communication connection with the gateway 150 through the server side 170, for example including but not limited to 2G, 3G, 4G, 5G, WIFI and the like, so that the user terminal 110 and the gateway 150 are deployed in the same wide area network, and the user terminal 110 can then interact with the smart device 130 through a wide area network path.
The server 170 may be considered as a cloud, a cloud platform, a server, etc., where the server 170 may be a server, a server cluster formed by a plurality of servers, or a cloud computing center formed by a plurality of servers, so as to better provide background services to a large number of user terminals 110. For example, the background service includes a target detection service.
In an application scenario, the user terminal 110 displays a space page, where the space page is used for displaying a target space; displaying the gesture mark of the target in the target space displayed by the space page; the target pose of the target is indicated by the pose mark.
Referring to fig. 2, an embodiment of the present application provides a target exhibition method, which is applicable to an electronic device, and the electronic device may be the user terminal 110 in the implementation environment shown in fig. 1.
In the following method embodiments, for convenience of description, the execution subject of each step of the method is described as an electronic device, but this configuration is not particularly limited.
As shown in fig. 2, the method may include the steps of:
step 200, displaying the space page.
The space page is used for displaying the target space.
The space page may be in the form of a UI (user interface), web interface, mobile application interface, or embedded interface, among other implementations, without limitation.
The target space may be a home space, an office space, a factory space, and the like.
Step 210, displaying the gesture mark of the target in the target space displayed by the space page.
The pose mark is used for indicating the target pose of the target. The target poses may include sitting, standing, lying, falling, and the like.
The gesture markings may be text markings, color markings, etc., and may also be graphic/iconic markings, without specific limitation.
By displaying the target space and the gesture mark of the target in the space page, the gesture information of the target in the target space is displayed visually, so that the gesture of the target in the target space can be known intuitively and in real time, the target space and the target can be managed and controlled in a timely manner, and a fall of the target can be discovered in time so that measures can be taken. The problem in the related art that the gesture of each target in a target space is difficult to acquire intuitively is thereby solved.
In an exemplary embodiment, step 210 may be preceded by the steps of:
step 208, the location of the target in physical space is obtained.
Possibly, the positioning is performed by a radar device to obtain the position of the target in physical space.
Step 209, determining the position of the target in the target space according to the position of the target in the physical space based on the spatial mapping relationship between the physical space and the target space, so as to display the gesture mark of the target in the space page based on the position of the target in the target space.
For example, the physical space corresponding to the target space is a home space, and the target is a person sleeping in a bedroom; that is, the position of the target in the physical space is on the bed of the bedroom, and the position of the target in the target space is the position of the bedroom bed icon. In this case, the electronic device displays the sleeping gesture mark of the target at the position of the bedroom bed icon in the space page, for example by overlapping the bedroom bed icon and the gesture mark.
By the above embodiment, based on the position of the target in the target space, the gesture mark of the target is displayed in the space page, and the position of the space page where the gesture mark is located can reflect the position of the target in the target space.
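As an illustration only, the sketch below assumes that the spatial mapping relationship reduces to a simple scale-and-offset transform from radar coordinates (metres) to page coordinates (pixels); the function name, parameters and values are hypothetical and not taken from the disclosure.
```python
# Hypothetical sketch: map a radar-reported physical position (metres) to a
# position in the rendered target space (pixels), assuming the spatial mapping
# relationship is a simple scale-and-offset transform.
def physical_to_page(pos_m, origin_m=(0.0, 0.0), scale_px_per_m=40.0):
    """pos_m: (x, y) of the target in physical space, in metres."""
    x_px = (pos_m[0] - origin_m[0]) * scale_px_per_m
    y_px = (pos_m[1] - origin_m[1]) * scale_px_per_m
    return (x_px, y_px)

# Example: a target detected 2.5 m east and 1.0 m north of the room corner;
# the gesture mark would be drawn at the returned page coordinates.
page_pos = physical_to_page((2.5, 1.0))   # -> (100.0, 40.0)
```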
In an exemplary embodiment, step 210 may include the steps of:
in step 2101, a target pose of the target is determined, and a pose mark corresponding to the target pose is found.
In one possible implementation, target pose detection may be performed using computer vision and deep learning techniques to determine a target pose of a target.
In one possible implementation, the target gesture detection may also be performed by using a pre-established correspondence between features and gestures to determine a target gesture of the target.
The target poses may include sitting, standing, lying, falling, and the like. The pose mark is used for indicating the target pose of the target.
The gesture markings may be text markings, color markings, etc., and may also be graphic/iconic markings, without specific limitation.
Step 2102, displaying the searched gesture mark in a target space displayed by the space page.
Through the above embodiments, after the target gesture of the target is detected, the target gesture is displayed visually through the gesture icon, so that the user can better know the gesture of a person in the target space and take timely control; for example, when the person falls, the user can recognize it in time and take rescue measures.
In an exemplary embodiment, step 210 may be followed by the steps of:
in step 211, in the space page, a time stamp associated with the gesture stamp is displayed.
The time stamp is used to indicate the duration that the target maintains the target pose.
Possibly, the time stamp may be in the form of a text stamp.
In one possible implementation, step 211 may include the steps of:
step 2111, determining the number of gesture markers in the spatial page.
The gesture mark is used to indicate the target gesture of a target; it can be understood that when two or more targets are present, two or more gesture marks are displayed in the space page accordingly.
Step 2112, in the case where the number exceeds the set threshold, detects a trigger operation for the posture mark.
The threshold value may be set arbitrarily, or may be set for the purpose of simplifying the space page, which is not particularly limited herein.
Possibly, the set threshold may be 2 or 3.
Step 2113, if a triggering operation for the gesture mark is detected, displaying the time mark associated with the selected gesture mark in the space page.
Possibly, the triggering operation may be a click, a selection, etc.
In one possible implementation, step 211 may further include the steps of:
step 2114, in the event that the number does not exceed the set threshold, displaying a time stamp associated with the gesture stamp.
Through the above embodiments, displaying the time mark associated with the gesture mark shows how long the target has maintained the target gesture; when a plurality of gesture marks exist, the time mark is displayed only after a gesture mark is triggered, which simplifies the layout of the space page, avoids excessive visual interference, and keeps the space page concise.
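A minimal sketch of the display rule in steps 2111-2114 follows; the threshold value, the mark representation and the renderer are assumptions for illustration, not details from the disclosure.
```python
# Illustrative sketch: show the time mark for every gesture mark when few marks
# are on the page (step 2114), otherwise show it only for the gesture mark the
# user triggers (steps 2112-2113).
SET_THRESHOLD = 3  # assumed value

def render_time_mark(mark):
    # placeholder renderer: e.g. "lying held for 120 s" next to the icon
    print(f"{mark['pose']} held for {mark['duration_s']} s")

def show_time_marks(gesture_marks, triggered_mark=None):
    if len(gesture_marks) <= SET_THRESHOLD:
        for mark in gesture_marks:          # few marks: show every duration
            render_time_mark(mark)
    elif triggered_mark is not None:        # many marks: show only the selected one
        render_time_mark(triggered_mark)
```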
In an exemplary embodiment, step 210 may be followed by the steps of:
step 212, displaying the first region and the second region in the space page.
The first region is used for displaying the target space and/or the gesture mark of the target in the target space, and the second region is used for displaying whether the target is detected.
Through the steps, the space pages are further enriched, so that the space pages can realize more functions, and the use requirements are better met.
As shown in fig. 3, in an exemplary embodiment, step 210 may be preceded by the steps of:
step 300, determining target characteristics of the target, and searching sample characteristics matched with the target characteristics.
The target features may be extracted from image data or may be extracted from radar signals.
The target characteristics may include the point cloud height distribution of the target, a physiological range (respiration rate range, heartbeat range), the current position, and the like, which are not specifically limited herein.
The sample features may be extracted from image data or from radar signals.
Step 310, if a sample feature matching with the target feature is found, a sample gesture corresponding to the found sample feature is obtained, and the sample gesture is taken as a target gesture of the target, so as to determine a gesture mark of the target according to the target gesture.
In one possible implementation, step 310 may be preceded by the steps of:
at step 308, sample features of the sample at different poses are obtained.
Possibly, radar signals of the sample in different postures are obtained, and feature analysis is carried out on the radar signals of the sample in different postures, so that sample features of the sample in different postures are obtained.
The feature analysis may be any radar signal processing algorithm, for example wavelet transform, empirical mode decomposition, Fourier transform and the like, and may also be a maximum value detection algorithm, a phase difference measurement algorithm, a pulse compression algorithm, CFAR processing and the like.
Step 309, establishing a corresponding relationship between the sample characteristics of the sample and the sample gesture according to the sample characteristics of the sample in different gestures.
For example, if the sample feature includes a point cloud height distribution, a correspondence between the point cloud height distribution and the sample pose is established according to the point cloud height distribution of the sample corresponding to different poses, and specifically, the correspondence includes a correspondence between the first height range and the first pose, a correspondence between the second height range and the second pose, and a correspondence between the third height range and the third pose.
Through the above embodiments, the sample is placed in different gestures so that radar signals of the sample in the different gestures are obtained, the sample features of the sample in the different gestures are obtained through feature analysis, and the correspondence between the sample features and the sample gestures is established, so that the target gesture can later be obtained by matching against the target features.
In an exemplary embodiment, the sample features include a point cloud height distribution, i.e., step 309 may include the steps of:
step 3091, according to the point cloud height distribution of the sample under different postures, establishing the corresponding relation between the point cloud height distribution of the sample and the posture of the sample.
The correspondence includes at least one of: the corresponding relation between the first height distribution and the first gesture, the corresponding relation between the second height distribution and the second gesture and the corresponding relation between the third height distribution and the third gesture.
The first height distribution is used for describing a first height range of the sample in a first posture, the second height distribution is used for describing a second height range of the sample in a second posture, and the third height distribution is used for describing a third height range of the sample in a third posture;
wherein the first gesture is standing, the second gesture is sitting, and the third gesture is lying. Among the first height range, the second height range and the third height range, the average height of the first height range is the largest and the average height of the third height range is the smallest.
Through the above embodiments, the point cloud height distribution is used as the sample feature, so that the correspondence between the first height range and the first gesture, the correspondence between the second height range and the second gesture, and the correspondence between the third height range and the third gesture are established, which facilitates gesture detection based on the established correspondences.
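Under the assumption that a height distribution can be summarized by its observed (min, max) range, the correspondence of step 3091 could be stored as a simple lookup, as in the hypothetical sketch below; the numeric ranges are illustrative assumptions, not values from the disclosure.
```python
# Hypothetical lookup sketch of the sample-feature / sample-gesture correspondence,
# keyed by the point-cloud height range (in metres) observed for each gesture.
POSE_BY_HEIGHT_RANGE = [
    ((1.2, 1.9), "standing"),  # first height distribution -> first gesture
    ((0.6, 1.3), "sitting"),   # second height distribution -> second gesture
    ((0.0, 0.6), "lying"),     # third height distribution -> third gesture
]

def match_pose(height_range):
    """Return the sample gesture whose stored range best overlaps the target's."""
    lo, hi = height_range
    best_pose, best_overlap = None, 0.0
    for (s_lo, s_hi), pose in POSE_BY_HEIGHT_RANGE:
        overlap = max(0.0, min(hi, s_hi) - max(lo, s_lo))
        if overlap > best_overlap:
            best_pose, best_overlap = pose, overlap
    return best_pose
```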
In an exemplary embodiment, step 300 may include the steps of:
step 301, receiving a radar signal of a target transmitted by a radar device.
The target may be a person, a robot, or the like.
The radar signal is the signal returned after the electromagnetic waves emitted by the radar device are reflected by the target, and can be used to measure information such as the distance, angle and speed of the target relative to the radar. For example, the radar signal may be the radio-frequency ADC data received by the radar device.
Step 302, performing point cloud operation on the radar signal of the target to obtain point cloud data of the target.
Possibly, the point cloud operation includes processing steps such as pulse compression, pulse demodulation, target detection and Doppler processing, through which the radar signal is converted into point cloud data.
The point cloud data is made up of a large number of discrete points, each of which contains position information such as three-dimensional coordinates (X, Y and Z) and possibly other attribute information such as color, normal vector and class label.
Step 303, determining the point cloud height distribution of the target based on the heights of the point clouds in the point cloud data of the target.
The point cloud height distribution is used to represent the height range of each point cloud in the point cloud data.
Possibly, the height of each point cloud in the point cloud data may be represented by a Z coordinate.
For the point cloud data of each frame, the heights (Z coordinate values) of all the point clouds are calculated, and all the points can be traversed to find the minimum and maximum heights (Z coordinate values), so that the point cloud height distribution is obtained.
Step 304, taking the point cloud height distribution of the target as the target characteristic of the target.
For example, if the point cloud height distribution of the target is a first height range, the first height range is the target feature of the target.
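A short sketch of steps 303-304 follows, assuming each frame is an (N, 3) array of (X, Y, Z) coordinates in metres; the point cloud generation itself (step 302) is outside the sketch, and the names are illustrative only.
```python
import numpy as np

# Illustrative sketch: derive the point cloud height distribution of one frame
# from the Z coordinates of its points (step 303), then use it as the target
# feature (step 304).
def point_cloud_height_distribution(points):
    heights = points[:, 2]                       # Z coordinate of every point
    return float(heights.min()), float(heights.max())

frame = np.array([[0.1, 2.0, 0.05], [0.2, 2.1, 0.90], [0.0, 2.0, 1.65]])
height_range = point_cloud_height_distribution(frame)   # -> (0.05, 1.65)
# This range is then matched against the sample features.
```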
In one possible implementation, step 310 may include the steps of:
step 311, if the sample feature matching the target feature is found to be the first height distribution, determining the sample gesture corresponding to the first height distribution as the first gesture, and taking the first gesture as the gesture detection result of the target; or if the sample feature matched with the target feature is found to be the second height distribution, determining the sample gesture corresponding to the second height distribution as a second gesture, and taking the second gesture as a gesture detection result of the target; or if the sample feature matched with the target feature is found to be the third height distribution, determining the sample gesture corresponding to the third height distribution as the third gesture, and taking the third gesture as the gesture detection result of the target.
Through the steps, gesture detection can be realized based on radar signals, and privacy protection is good.
In one possible implementation, step 310 may include the steps of:
in step 312, in the case that the point cloud height distribution in the current frame of point cloud data of the target is the third height distribution, the point cloud height distribution in the point cloud data of the previous frame of the target is determined.
The point cloud height distribution in the current frame of point cloud data being the third height distribution indicates that the gesture of the target at the current moment is the third gesture (i.e., lying).
The point cloud data of the previous frame is the point cloud data corresponding to the previous time.
In step 313, if the point cloud height distribution in the point cloud data of the previous frame is the first height distribution, it is detected whether a support is disposed around the target.
The point cloud height distribution in the previous frame of point cloud data being the first height distribution indicates that the target was in the first gesture (i.e., standing) at the previous moment.
The support is an object on which the target can lie, for example a bed or a sofa.
In step 314, if no support is disposed around the target, the fourth gesture is taken as the gesture detection result of the target.
The fourth posture is a fall.
In this embodiment, the target was standing at the previous moment, is lying at the current moment, and no support is arranged around the target, so the target gesture is determined to be the fourth gesture (falling).
In step 315, if the support is disposed around the target or the point cloud height distribution in the point cloud data of the previous frame is the second height distribution, the third gesture is taken as the gesture detection result of the target.
In this embodiment, the target was standing at the previous moment, is lying at the current moment, and a support is arranged around the target, so the target gesture is determined to be the third gesture (lying).
Likewise, when the target was in the second gesture (sitting) at the previous moment and is lying at the current moment, the target gesture is determined to be the third gesture (lying).
In the process, the radar signals of the target are processed to obtain the point cloud height distribution of the target, so that different types of target characteristics are constructed, the target gesture of the target is detected based on the target characteristics, and detection and recognition can be performed on various different gestures.
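The decision in steps 312-315 can be sketched as a small rule, shown below under the assumption that the previous frame's gesture and a support check are available as inputs; the helper argument `has_support_nearby` is hypothetical, e.g. derived from the room layout.
```python
# Illustrative sketch: when the current frame indicates lying, look at the pose
# derived from the previous frame and whether a support (bed, sofa) is arranged
# around the target.
def classify_lying_or_fall(prev_pose, has_support_nearby):
    if prev_pose == "standing" and not has_support_nearby:
        return "falling"   # fourth gesture: standing -> lying with no support
    return "lying"         # third gesture: lying on a support, or sitting -> lying
```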
In one possible implementation, step 310 may include the steps of:
in step 316, in the case that the point cloud height distribution in the current frame of point cloud data of the target is the first height distribution, the historical position of the target obtained based on the previous frames of point cloud data is detected.
The current frame of point cloud data is the point cloud data corresponding to the current moment, and the height range of the point clouds in the current frame of point cloud data being the first height range indicates that the target is in the first gesture (i.e., standing) at the current moment.
The previous frames of point cloud data are point cloud data corresponding to the previous times, and the historical position of the target can be obtained based on the previous frames of point cloud data.
Clustering is carried out on the point cloud data of the target to obtain a point cloud block corresponding to the target, and the central coordinate of the point cloud block is taken as the position of the target.
In step 317, if the current position of the target obtained based on the current frame point cloud data changes compared with the historical position, the fifth gesture is taken as the gesture detection result of the target.
The fifth gesture is walking: the target is at the first-gesture (standing) height at the current moment and its position has changed, so the target gesture is determined to be walking.
In step 318, if the current position is unchanged from the historical position, the first gesture is taken as the gesture detection result of the target.
The first posture is standing.
The target is currently in the first posture (i.e., standing) and the position is unchanged, so that the target posture is determined to be standing.
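A hedged sketch of steps 316-318 follows, comparing the current cluster centre against the historical position; the motion threshold is an assumed value, not one given in the disclosure.
```python
import math

# Illustrative sketch: with a standing-height distribution in the current frame,
# decide between walking and standing from the change in position.
MOTION_THRESHOLD_M = 0.2  # assumed value

def classify_standing_or_walking(current_pos, history_pos):
    moved = math.hypot(current_pos[0] - history_pos[0],
                       current_pos[1] - history_pos[1]) > MOTION_THRESHOLD_M
    return "walking" if moved else "standing"   # fifth gesture vs first gesture
```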
In one possible implementation, step 310 may include the steps of:
and 319, under the condition that the point cloud height distribution in the current frame of point cloud data of the target is the third height distribution, extracting biological characteristics based on radar signals of the target, and acquiring physiological data of the target.
The biometric extraction may use peak detection based algorithms, autocorrelation function based algorithms, Fourier transform based algorithms, filter based algorithms, wavelet transforms, empirical mode decomposition, and so on.
The physiological data includes respiration rate and heart beat frequency.
It should be noted that radar signals can capture small movements of the human body, including the variations caused by the heartbeat and breathing. By performing signal processing on the radar signal, the frequency information of the heartbeat and respiration can be extracted.
And 320, if the physiological data of the target is matched with the sleep physiological parameter, taking the sixth gesture as a gesture detection result of the target.
Generally, a person's respiration rate during sleep is 16-20 times/minute and the heartbeat frequency is 40-60 times/minute, and these ranges can be used as the sleep physiological parameters. Of course, the sleep physiological parameters can be adjusted according to actual needs.
The sixth posture is sleep.
Through the above embodiments, physiological data, positions and the like are used to provide monitoring for further gestures (sleeping, walking and the like), so that more gestures can be covered and the gesture display is more accurate and complete.
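A minimal sketch of steps 319-320 is given below, comparing the extracted physiological data against the sleep physiological parameters quoted above (16-20 breaths/minute, 40-60 heartbeats/minute); the constant names and the exact ranges are treated as assumptions for the sketch.
```python
# Illustrative sketch: with a lying-height distribution, decide between sleeping
# and lying from the physiological data extracted from the radar signal.
SLEEP_RESPIRATION_RANGE = (16, 20)   # breaths per minute
SLEEP_HEARTBEAT_RANGE = (40, 60)     # beats per minute

def classify_sleep(respiration_rate, heartbeat_rate):
    in_resp = SLEEP_RESPIRATION_RANGE[0] <= respiration_rate <= SLEEP_RESPIRATION_RANGE[1]
    in_hb = SLEEP_HEARTBEAT_RANGE[0] <= heartbeat_rate <= SLEEP_HEARTBEAT_RANGE[1]
    return "sleeping" if (in_resp and in_hb) else "lying"   # sixth vs third gesture
```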
As shown in fig. 4, the embodiment of the present application further provides a target detection method, which is applicable to an electronic device, where the electronic device may be the user terminal 110 in the implementation environment shown in fig. 1, or may be the server 170 in the implementation environment shown in fig. 1.
As shown in fig. 4, the target detection method includes the steps of:
step 400, obtaining radar signals of a target.
The target may be a person, a robot, or the like.
The radar signal is the signal returned after the electromagnetic waves emitted by the radar system are reflected by the target, and can be used to measure information such as the distance, angle and speed of the target relative to the radar. For example, the radar signal may be the radio-frequency ADC data received by the radar device.
In step 410, a feature analysis is performed on the radar signal of the target to obtain a target feature of the target.
The feature analysis may be any radar signal processing algorithm, such as wavelet transform, empirical mode decomposition, Fourier transform and the like, and may also be a maximum detection algorithm, a phase difference measurement algorithm, a pulse compression algorithm, CFAR processing and the like.
The target characteristics may include the point cloud height distribution of the target, a physiological range (respiration rate range, heartbeat range), the current position, and the like, which are not specifically limited herein.
Step 420, searching for sample features matching the target features according to the target features.
In one possible implementation, finding a sample feature that matches the target feature includes finding a sample feature that is similar to the target feature.
In step 430, if a sample feature matching the target feature is found, determining a sample gesture corresponding to the found sample feature, and taking the sample gesture as a gesture detection result of the target.
The gesture detection result is used to indicate the target gesture of the target.
In one possible implementation, the correspondence between the sample characteristics and the sample pose is pre-established, and the sample may be a person, a robot, or the like, and the sample characteristics are obtained based on a characteristic analysis of the radar signal of the sample.
In the process of establishing the corresponding relation, the sample is in different postures, so that radar signals of the sample in the different postures are obtained, sample characteristics of the sample in the different postures are obtained through characteristic analysis, and the corresponding relation between the sample characteristics and the sample postures is established.
Through the above method, feature analysis is performed on the radar signal of the target to obtain the target feature of the target, and the gesture detection result of the target is obtained by combining the pre-built correspondence between sample features and sample gestures. By building sample features and establishing the correspondence between sample features and sample gestures, gesture detection based on this pre-built correspondence is simple and feasible and requires no machine learning model, so the problem in the related art that target gesture detection is not simple and convenient can be effectively solved.
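A hypothetical end-to-end sketch of the flow in FIG. 4 is given below: the point cloud heights are reduced to a (min, max) range as the target feature and matched against stored sample features; the sample ranges and gesture names are illustrative assumptions, not values from the disclosure.
```python
import numpy as np

# Hypothetical end-to-end sketch: feature analysis (step 410) followed by
# matching against pre-built sample features (steps 420-430).
SAMPLE_FEATURES = {"standing": (1.2, 1.9), "sitting": (0.6, 1.3), "lying": (0.0, 0.6)}

def detect_target_pose(points):
    """points: (N, 3) array of (X, Y, Z) point cloud coordinates in metres."""
    lo, hi = float(points[:, 2].min()), float(points[:, 2].max())   # target feature
    def overlap(rng):
        return max(0.0, min(hi, rng[1]) - max(lo, rng[0]))
    # pick the sample feature with the largest overlap as the matched gesture
    return max(SAMPLE_FEATURES, key=lambda pose: overlap(SAMPLE_FEATURES[pose]))
```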
Fig. 5A and fig. 5B are schematic diagrams illustrating a specific implementation of a target exhibition method in an application scenario.
FIG. 5A shows the gesture icons (gesture marks) corresponding to different target gestures.
By setting corresponding gesture icons for different target gestures, the target gestures are visually displayed.
FIG. 5B shows a schematic view of a space page. The space page comprises a first area 1 and a second area 2: the first area 1 shows the target space and the gesture icons of people in the target space, and the second area 2 shows whether people exist in the target space. In addition, by clicking on a gesture icon, the space page can also display the duration for which the person has remained in the current gesture.
In this application scenario, corresponding gesture icons are matched to different human gestures and illustrated graphically, which is more intuitive and makes it convenient to display the gesture icons of the personnel in the space page.
Fig. 6A to fig. 6C are schematic diagrams illustrating an implementation of a target gesture detection method in an application scenario.
FIG. 6A illustrates the point cloud data of a sample person in standing, sitting, lying and other gestures.
The point cloud data of the sample personnel in the standing, sitting, lying and other postures can be obtained by respectively carrying out point cloud operation on radar signals of the sample personnel in the standing, sitting, lying and other postures.
And determining the point cloud height distribution of the sample personnel corresponding to the standing, sitting, lying and other postures based on the heights of the point clouds in the point cloud data corresponding to the standing, sitting, lying and other postures, and taking the point cloud height distribution as the sample characteristics of the sample personnel in the standing, sitting, lying and other postures.
Possibly, in order to improve the accuracy of the point cloud height distribution, multiple frames (for example, 15 frames) of point cloud data are stitched together before the point cloud height distribution of the sample person in the standing, sitting, lying and other gestures is determined.
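The multi-frame refinement mentioned above can be sketched as follows, under the assumption that each frame is an (Ni, 3) array of points; the function and parameter names are assumptions for illustration.
```python
import numpy as np

# Illustrative sketch: stack a window of consecutive frames (e.g. 15) before
# reading off the height range, so a single sparse frame does not skew it.
def stitched_height_range(frames, window=15):
    """frames: list of (Ni, 3) arrays of (X, Y, Z) points; returns (min Z, max Z)."""
    stacked = np.concatenate(frames[-window:], axis=0)
    return float(stacked[:, 2].min()), float(stacked[:, 2].max())
```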
As can be seen from FIG. 6A, the first height range corresponds to the standing point cloud height distribution, the second height range corresponds to the sitting point cloud height distribution, and the third height range corresponds to the lying point cloud height distribution, with the first range > the second range > the third range.
A correspondence between the first height range and standing, a correspondence between the second height range and sitting, and a correspondence between the third height range and lying may be established.
FIG. 6B shows the point cloud height distribution when the target switches between standing, sitting and lying. It can be seen that when the target switches among the three gestures of standing, sitting and lying, the point cloud height distribution goes through a descending or ascending process.
Possibly, the target gesture of the target can be distinguished by detecting whether the point cloud height distribution of the target descends or ascends and by examining the point cloud height distribution of the target when it is stationary.
In conjunction with FIG. 6B, a fall can be further detected. The conditions for determining that the target gesture is a fall are: 1. the target gesture was standing at the previous moment and is lying at the current moment; 2. there is no support, such as a bed or a sofa, around the target.
A person generally transitions from standing to lying through sitting, so when the target gesture was sitting at the previous moment and is lying at the current moment, the target gesture is determined to be lying.
Fig. 6C shows a flow chart for separating physiological data from radar signals.
The respiration rate and heartbeat of the sample person in the sixth gesture (sleeping) are acquired, and the physiological data range of the sample person in the sixth gesture (sleeping) is determined from them, for example a respiration rate of 16-20 times/minute and a heartbeat frequency of 40-60 times/minute.
A correspondence is then established between this physiological range (respiration rate of 16-20 times/minute and heartbeat frequency of 40-60 times/minute) and the sixth gesture (sleeping).
Compared with the related art, the method has the following beneficial effects:
1. By displaying the target space and the gesture mark of the target in the space page, the gesture information of the target in the target space is displayed visually and can be known intuitively and in real time, so that the target space is managed and controlled in a timely manner and a fall of the target can be discovered in time so that measures can be taken. The problem in the related art that the gesture of each target in a target space is difficult to acquire intuitively is thereby solved.
2. The sample is placed in different gestures so that radar signals of the sample in the different gestures are obtained, the sample features in the different gestures are obtained through feature analysis, and the correspondence between the sample features and the sample gestures is established, so that the target gesture is obtained by matching against the target features.
3. The point cloud height distribution is used as the sample feature, thereby establishing the correspondence between the first height range and the first gesture, the correspondence between the second height range and the second gesture, and the correspondence between the third height range and the third gesture.
4. Physiological data (respiration rate and heartbeat) of the sample are separated from the radar signal, the physiological range of the sample during sleep is determined based on the physiological data of the sample during sleep, and a correspondence between the physiological range of the sample and sleep is established so as to detect the sleeping gesture of the target.
5. Feature analysis is performed on the radar signal of the target to obtain the target feature of the target, and the gesture detection result of the target is obtained by combining the pre-built correspondence between sample features and sample gestures. By building sample features and establishing the correspondence between sample features and sample gestures, gesture detection based on this pre-built correspondence is simple and easy to implement and requires no machine learning model, so the problem in the related art that the gesture of each target in a target space is difficult to acquire intuitively can be effectively solved.
6. The gesture detection method is implemented based on radar signals, which protects privacy well and does not expose excessive private details.
The following is an embodiment of the apparatus of the present application, which may be used to execute the target detection method and the target display method related to the present application. For details not disclosed in the device embodiments of the present application, please refer to method embodiments of the target detection method and the target display method related to the present application.
Referring to fig. 7, in an embodiment of the present application, a target display device 700 is provided, including but not limited to: a spatial page display module 710, and a gesture flag display module 720.
The space page display module 710 is configured to display a space page; the space page is used for displaying the target space.
The gesture mark display module 720 is configured to display a gesture mark of a target in a target space displayed on the space page; the pose mark is used for indicating the target pose of the target.
In an exemplary embodiment, the target presentation device 700 further comprises: a position acquisition module 730, configured to acquire a position of a target in a physical space; the position determining module 740 is configured to determine, based on a spatial mapping relationship between the physical space and the target space, a position of the target in the target space according to a position of the target in the physical space, so as to display a gesture mark of the target in the space page based on the position of the target in the target space.
In an exemplary embodiment, the gesture flag display module 720 includes: a target posture determining unit 721 for determining a target posture of the target and finding a posture mark corresponding to the target posture; and the gesture mark display unit 722 is configured to display the searched gesture mark in the target space displayed on the space page.
In an exemplary embodiment, the target presentation device 700 further comprises: a time stamp display module 750, configured to display, in the space page, a time stamp associated with the gesture mark, the time stamp indicating the duration for which the target maintains the target gesture.
In an exemplary embodiment, the time stamp display module 750 includes: a number determination unit 751, configured to determine the number of gesture marks in the space page; a trigger operation detection unit 752, configured to detect a trigger operation on a gesture mark when the number exceeds a set threshold; and a time stamp display unit 753, configured to display, in the space page, the time stamp associated with the selected gesture mark when a trigger operation on that gesture mark is detected.
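As a hedged illustration of this display rule, the following sketch assumes an invented threshold and mark identifiers.

    # Illustrative sketch only; the threshold value and mark identifiers are assumptions.
    def time_marks_to_display(gesture_marks, threshold, selected_mark=None):
        # If the number of gesture marks does not exceed the threshold, every
        # associated time mark may be displayed; otherwise only the time mark
        # of the gesture mark selected by a trigger operation is displayed.
        if len(gesture_marks) <= threshold:
            return list(gesture_marks)
        return [selected_mark] if selected_mark in gesture_marks else []

    print(time_marks_to_display(["mark_1", "mark_2"], threshold=3))            # both marks
    print(time_marks_to_display(["mark_1", "mark_2", "mark_3", "mark_4"],
                                threshold=3, selected_mark="mark_2"))          # only mark_2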
In an exemplary embodiment, the target presentation device 700 further comprises: the region display module 760 is configured to display a first region and a second region in the space page, where the first region is used to display the target space and/or a gesture mark of the target in the target space, and the second region is used to display whether the target is detected.
In an exemplary embodiment, the target presentation device 700 further comprises: a target feature determination module 770, configured to determine target features of the target and find sample features that match the target features; the sample gesture obtaining module 780 is configured to obtain a sample gesture corresponding to the found sample feature if the sample feature matching the target feature is found, and use the sample gesture as a target gesture of the target, so as to determine a gesture mark of the target according to the target gesture.
In an exemplary embodiment, the target presentation device 700 further comprises: a sample feature obtaining module 790, configured to obtain sample features of the sample in different poses; the correspondence establishing module 795 is configured to establish a correspondence between the sample characteristics of the sample and the sample pose according to the sample characteristics of the sample in different poses.
In an exemplary embodiment, the correspondence includes at least one of: the correspondence between the first height distribution and the first gesture, the first height distribution being used to describe a first height range of the sample in the first gesture; the correspondence between the second height distribution and the second gesture, the second height distribution being used to describe a second height range of the sample in the second gesture; and the correspondence between the third height distribution and the third gesture, the third height distribution being used to describe a third height range of the sample in the third gesture; wherein, among the first height range, the second height range and the third height range, the average of the heights in the first height range is the largest and the average of the heights in the third height range is the smallest.
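For illustration only, such a correspondence might be pictured as follows; the numeric ranges are invented, and only their ordering (first range with the largest average height, third with the smallest) reflects the description above.

    # Illustrative sketch only; the height ranges (in metres) are assumed values.
    HEIGHT_CORRESPONDENCE = {
        "first gesture":  (1.2, 1.9),  # largest average height
        "second gesture": (0.6, 1.2),
        "third gesture":  (0.0, 0.6),  # smallest average height
    }

    averages = {g: (low + high) / 2 for g, (low, high) in HEIGHT_CORRESPONDENCE.items()}
    assert averages["first gesture"] > averages["second gesture"] > averages["third gesture"]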
In an exemplary embodiment, the target feature determination module 770 includes: a radar signal receiving unit 771 for receiving a radar signal of the target transmitted by the radar apparatus; the point cloud computing unit 772 is configured to perform a point cloud operation on the radar signal of the target to obtain point cloud data of the target, where the point cloud data includes height data of each point; a point cloud height distribution determining unit, configured to determine a point cloud height distribution of the target based on height data of each point cloud in the point cloud data of the target; a target feature acquiring unit 773, configured to take the point cloud height distribution of the target as a target feature of the target.
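A minimal sketch of how the point cloud height distribution could be summarized from the per-point height data follows; the particular statistics chosen (minimum, mean, maximum) are an assumption of the sketch.

    # Illustrative sketch only; real point cloud data would come from the radar device.
    def point_cloud_height_distribution(point_cloud):
        # `point_cloud` is a list of (x, y, z) points, where z is the height of the point.
        # The resulting summary can then be matched against the first, second or
        # third height range.
        heights = [z for (_, _, z) in point_cloud]
        return min(heights), sum(heights) / len(heights), max(heights)

    cloud = [(0.1, 0.2, 1.5), (0.2, 0.1, 1.7), (0.0, 0.3, 1.6)]
    print(point_cloud_height_distribution(cloud))  # approximately (1.5, 1.6, 1.7)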
In an exemplary embodiment, the sample pose acquisition module 780 includes: a gesture detection result obtaining unit 781, configured to determine, if a sample feature that matches the target feature is found to be a first height distribution, that a sample gesture corresponding to the first height distribution is a first gesture, and use the first gesture as a gesture detection result of the target; or if the sample feature matched with the target feature is found to be the second height distribution, determining the sample gesture corresponding to the second height distribution as a second gesture, and taking the second gesture as a gesture detection result of the target; or if the sample feature matched with the target feature is found to be the third height distribution, determining the sample gesture corresponding to the third height distribution as a third gesture, and taking the third gesture as a gesture detection result of the target.
In an exemplary embodiment, the sample pose acquisition module 780 further comprises: a point cloud height distribution determining unit 782, configured to determine, when the point cloud height distribution in the current frame of point cloud data of the target is the third height distribution, the point cloud height distribution in the point cloud data of the previous frame of the target; a support detecting unit 783, configured to detect whether a support is arranged around the target if the point cloud height distribution in the point cloud data of the previous frame is the first height distribution; and a gesture detection result obtaining unit 784, configured to take the fourth gesture as the gesture detection result of the target if no support is arranged around the target, and to take the third gesture as the gesture detection result of the target if a support is arranged around the target or if the point cloud height distribution in the point cloud data of the previous frame is the second height distribution.
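Purely for illustration, this refinement could be sketched as follows; the distribution labels and the support check are placeholders assumed for the example.

    # Illustrative sketch only; distribution labels and the support check are assumptions.
    def refine_third_distribution(previous_distribution, support_around_target):
        # Current frame shows the third height distribution. If the previous frame
        # showed the first height distribution and no support is arranged around
        # the target, report the fourth gesture; otherwise report the third gesture.
        if previous_distribution == "first" and not support_around_target:
            return "fourth gesture"
        return "third gesture"

    print(refine_third_distribution("first", support_around_target=False))   # fourth gesture
    print(refine_third_distribution("second", support_around_target=False))  # third gesture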
In an exemplary embodiment, the sample pose acquisition module 780 further comprises: a historical position obtaining unit 785, configured to obtain, when the point cloud height distribution in the current frame of point cloud data of the target is the first height distribution, a historical position of the target determined from previous frames of point cloud data; and a gesture detection result obtaining unit 784, configured to take the fifth gesture as the gesture detection result of the target if the current position of the target obtained from the current frame of point cloud data has changed compared with the historical position, and to take the first gesture as the gesture detection result of the target if the current position is unchanged compared with the historical position.
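Similarly, a hedged sketch of the refinement for the first height distribution, where the position-change test is simplified to an assumed distance tolerance:

    # Illustrative sketch only; the movement tolerance is an assumed value.
    def refine_first_distribution(historical_position, current_position, tolerance=0.1):
        # Current frame shows the first height distribution. If the current position
        # has changed compared with the historical position, report the fifth gesture;
        # otherwise report the first gesture.
        dx = current_position[0] - historical_position[0]
        dy = current_position[1] - historical_position[1]
        moved = (dx * dx + dy * dy) ** 0.5 > tolerance
        return "fifth gesture" if moved else "first gesture"

    print(refine_first_distribution((1.0, 1.0), (2.0, 1.0)))  # fifth gesture
    print(refine_first_distribution((1.0, 1.0), (1.0, 1.0)))  # first gesture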
In an exemplary embodiment, the sample pose acquisition module 780 further comprises: a biological feature extraction unit 786, configured to perform biological feature extraction on the radar signal of the target to obtain physiological data of the target when the point cloud height distribution in the current frame of point cloud data of the target is the third height distribution; and a gesture detection result obtaining unit 784, configured to take the sixth gesture as the gesture detection result of the target if the physiological data of the target match the sleep physiological parameters.
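And a sketch of the sleep check; the physiological ranges below are assumed values, not parameters disclosed by the application.

    # Illustrative sketch only; the sleep physiological ranges are assumed values.
    SLEEP_RESPIRATION_RANGE = (10, 20)  # breaths per minute, assumed
    SLEEP_HEART_RATE_RANGE = (45, 75)   # beats per minute, assumed

    def sleep_gesture_result(respiration_rate, heart_rate):
        # Report the sixth gesture if the physiological data extracted from the
        # radar signal match the sleep physiological parameters; otherwise return
        # None so that other refinements may apply (this fallback is an assumption).
        asleep = (SLEEP_RESPIRATION_RANGE[0] <= respiration_rate <= SLEEP_RESPIRATION_RANGE[1]
                  and SLEEP_HEART_RATE_RANGE[0] <= heart_rate <= SLEEP_HEART_RATE_RANGE[1])
        return "sixth gesture" if asleep else None

    print(sleep_gesture_result(14, 60))  # sixth gesture
    print(sleep_gesture_result(22, 90))  # None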
Referring to fig. 8, an object detection apparatus 800 is provided in an embodiment of the present application, including but not limited to: radar signal acquisition module 810, feature analysis module 820, search module 830, and result acquisition module 840.
The radar signal acquisition module 810 is configured to acquire a radar signal of a target.
The feature analysis module 820 is configured to perform feature analysis on the radar signal of the target to obtain a target feature of the target.
The searching module 830 is configured to search for a sample feature matching the target feature according to the target feature.
The result obtaining module 840 is configured to determine a sample gesture corresponding to the found sample feature if the sample feature matching the target feature is found, and take the sample gesture as a gesture detection result of the target; the gesture detection result is used to indicate the target gesture of the target.
It should be noted that, when the target detection device and the target display device provided in the above embodiments detect and display a target gesture, the division into the above functional modules is merely illustrative; in practical applications, the above functions may be allocated to different functional modules as needed, that is, the internal structures of the target detection device and the target display device may be divided into different functional modules to complete all or part of the functions described above.
In addition, the target detection device and target display device embodiments provided above belong to the same concept as the corresponding method embodiments; the specific manner in which each module performs its operations has been described in detail in the method embodiments and is not repeated here.
Referring to fig. 9, fig. 9 is a schematic diagram illustrating a structure of a terminal according to an exemplary embodiment. The terminal is suitable for use in the user terminal 110 in the implementation environment shown in fig. 1.
It should be noted that the terminal is merely one example adapted to the present application and should not be construed as imposing any limitation on the scope of use of the present application. Nor should the terminal be construed as needing to rely on, or needing to include, one or more components of the exemplary terminal 1400 illustrated in fig. 9.
As shown in fig. 9, terminal 1400 includes memory 101, memory controller 103, one or more (only one is shown in fig. 9) processors 105, peripheral interface 107, radio frequency module 109, positioning module 111, camera module 113, audio module 115, touch screen 117, and key module 119. These components communicate with each other via one or more communication buses/signal lines 121.
The memory 101 may be configured to store computer readable instructions, for example computer readable instructions corresponding to the target gesture detection and display methods and devices in the exemplary embodiments of the present application. The processor 105 performs various functions and data processing by reading the computer readable instructions stored in the memory 101, thereby completing the target gesture detection method and the target gesture display method.
The memory 101, as a carrier for resource storage, may be random access memory, for example high-speed random access memory, or non-volatile memory, such as one or more magnetic storage devices, flash memory, or other solid-state memory. The storage may be temporary or permanent.
The peripheral interface 107 may include at least one wired or wireless network interface, at least one serial-to-parallel conversion interface, at least one input/output interface, at least one USB interface, etc. for coupling external various input/output devices to the memory 101 and the processor 105 to enable communication with the external various input/output devices.
The radio frequency module 109 is configured to receive and transmit electromagnetic waves, and to implement mutual conversion between the electromagnetic waves and the electrical signals, so as to communicate with other devices through a communication network. The communication network may include a cellular telephone network, a wireless local area network, or a metropolitan area network, and may employ various communication standards, protocols, and techniques.
The positioning module 111 is configured to obtain the current geographic location of the terminal 1400. Examples of the positioning module 111 include, but are not limited to, the Global Positioning System (GPS) and positioning technologies based on a wireless local area network or a mobile communication network.
The camera module 113 is coupled to a camera for taking pictures or videos. The captured pictures or videos may be stored in the memory 101 and may also be transmitted to a host computer through the radio frequency module 109.
The audio module 115 provides an audio interface to the user, which may include one or more microphone interfaces, one or more speaker interfaces, and one or more earphone interfaces. The interaction of the audio data with other devices is performed through the audio interface. The audio data may be stored in the memory 101 or may be transmitted via the radio frequency module 109.
The touch screen 117 provides an input-output interface between the terminal 1400 and the user. Specifically, the user may perform an input operation, such as a gesture operation of clicking, touching, sliding, etc., through the touch screen 117 to cause the terminal 1400 to respond to the input operation. The terminal 1400 displays and outputs the output content formed by any one or combination of text, picture or video to the user through the touch screen 117.
The key module 119 includes at least one key to provide an interface for a user to input to the terminal 1400, and the user can cause the terminal 1400 to perform different functions by pressing different keys. For example, the sound adjustment key may allow a user to adjust the volume of sound played by terminal 1400.
It is to be understood that the configuration shown in fig. 9 is merely illustrative and that terminal 1400 may also include more or fewer components than shown in fig. 9 or have different components than shown in fig. 9. The components shown in fig. 9 may be implemented in hardware, software, or a combination thereof.
Referring to fig. 10, in an embodiment of the present application, an electronic device 4000 is provided, where the electronic device 4000 may include: desktop computers, notebook computers, servers, smartphones, gateways, and the like.
In fig. 10, the electronic device 4000 includes at least one processor 4001 and at least one memory 4003.
Data interaction between the processor 4001 and the memory 4003 may be achieved through at least one communication bus 4002. The communication bus 4002 may include a path for transferring data between the processor 4001 and the memory 4003. The communication bus 4002 may be a PCI (Peripheral Component Interconnect) bus, an EISA (Extended Industry Standard Architecture) bus, or the like. The communication bus 4002 can be divided into an address bus, a data bus, a control bus, and the like. For ease of illustration, only one thick line is shown in fig. 10, but this does not mean that there is only one bus or only one type of bus.
Optionally, the electronic device 4000 may further comprise a transceiver 4004, the transceiver 4004 may be used for data interaction between the electronic device and other electronic devices, such as transmission of data and/or reception of data, etc. It should be noted that, in practical applications, the transceiver 4004 is not limited to one, and the structure of the electronic device 4000 is not limited to the embodiment of the present application.
The processor 4001 may be a CPU (Central Processing Unit), a general-purpose processor, a DSP (Digital Signal Processor), an ASIC (Application Specific Integrated Circuit), an FPGA (Field Programmable Gate Array) or another programmable logic device, a transistor logic device, a hardware component, or any combination thereof. It may implement or perform the various exemplary logic blocks, modules, and circuits described in connection with this disclosure. The processor 4001 may also be a combination that implements a computing function, for example a combination of one or more microprocessors, or a combination of a DSP and a microprocessor.
The memory 4003 may be, but is not limited to, ROM (Read-Only Memory) or another type of static storage device capable of storing static information and instructions, RAM (Random Access Memory) or another type of dynamic storage device capable of storing information and instructions, EEPROM (Electrically Erasable Programmable Read-Only Memory), CD-ROM (Compact Disc Read-Only Memory) or other optical disk storage, optical disc storage (including compact discs, laser discs, digital versatile discs, Blu-ray discs, etc.), magnetic disk storage media or other magnetic storage devices, or any other medium that can be used to carry or store desired program instructions or code in the form of instructions or data structures and that can be accessed by the electronic device 4000.
The memory 4003 has computer readable instructions stored thereon, and the processor 4001 can read the computer readable instructions stored in the memory 4003 through the communication bus 4002.
The computer readable instructions are executed by the one or more processors 4001 to implement the target detection method and the target display method in the embodiments described above.
Further, in an embodiment of the present application, a storage medium is provided, on which computer readable instructions are stored, the computer readable instructions being executed by one or more processors to implement the target detection method and the target display method as described above.
In an embodiment of the present application, a computer program product is provided, the computer program product comprising computer readable instructions stored in a storage medium; one or more processors of an electronic device read the computer readable instructions from the storage medium, and load and execute them, so that the electronic device implements the target detection method and the target display method as described above.
It should be understood that, although the steps in the flowcharts of the figures are shown in an order indicated by the arrows, these steps are not necessarily performed in that order. Unless explicitly stated herein, the order of the steps is not strictly limited, and they may be performed in other orders. Moreover, at least some of the steps in the flowcharts may include a plurality of sub-steps or stages that are not necessarily performed at the same time but may be performed at different times; their order of execution is not necessarily sequential, and they may be performed in turn or alternately with other steps, or with at least a portion of the sub-steps or stages of other steps.
The foregoing describes only some embodiments of the present application. It should be noted that a person skilled in the art can make several improvements and modifications without departing from the principles of the present application, and such improvements and modifications shall also fall within the protection scope of the present application.

Claims (20)

1. A method of displaying a target, comprising:
displaying a space page, wherein the space page is used for displaying a target space;
displaying the gesture mark of the target in the target space displayed by the space page;
the gesture mark is used for indicating a target gesture of the target.
2. The method of claim 1, wherein the method further comprises, prior to displaying the gesture marker of the target in the target space presented by the space page:
acquiring the position of the target in a physical space;
based on a spatial mapping relationship between the physical space and the target space, determining a position of the target in the target space according to the position of the target in the physical space, so as to display a gesture mark of the target in the space page based on the position of the target in the target space.
3. The method of claim 1, wherein the displaying the gesture marker of the target in the target space presented by the space page comprises:
determining the target gesture of the target, and searching for a gesture mark corresponding to the target gesture;
and displaying the found gesture mark in the target space displayed by the space page.
4. The method of claim 1, wherein the method further comprises:
in the space page, a time mark associated with the gesture mark is displayed, the time mark indicating a duration for which the target maintains the target gesture.
5. The method of claim 4, wherein displaying, in the space page, the time mark associated with the gesture mark comprises:
determining the number of the gesture marks in the space page;
detecting a trigger operation for the gesture mark if the number exceeds a set threshold;
and if the triggering operation for the gesture mark is detected, displaying the time mark associated with the selected gesture mark in the space page.
6. The method of claim 5, wherein the method further comprises:
in the event that the number does not exceed the set threshold, displaying the time mark associated with the gesture mark.
7. The method of claim 1, wherein the method further comprises:
and displaying a first area and a second area in the space page, wherein the first area is used for displaying the target space and/or the gesture mark of the target in the target space, and the second area is used for displaying whether the target is detected or not.
8. The method of claim 1, wherein the method further comprises, prior to displaying the gesture marker of the target in the target space presented by the space page:
determining target characteristics of the target, and searching sample characteristics matched with the target characteristics;
if the sample characteristics matched with the target characteristics are found, the sample gesture corresponding to the found sample characteristics is obtained, and the sample gesture is used as the target gesture of the target, so that the gesture mark of the target is determined according to the target gesture.
9. The method of claim 8, wherein prior to said finding a sample feature that matches the target feature, the method further comprises:
acquiring sample characteristics of the sample in different gestures;
and establishing a corresponding relation between the sample characteristics of the sample and the sample gestures according to the sample characteristics of the sample in different gestures.
10. The method of claim 9, wherein the correspondence comprises at least one of:
a correspondence of a first height distribution to a first pose, the first height distribution describing a first height range of the sample in the first pose;
a correspondence of a second height distribution to a second pose, the second height distribution describing a second height range of the sample in the second pose;
a third height distribution corresponding to a third pose, the third height distribution describing a third height range of the sample in the third pose;
wherein, among the first height range, the second height range and the third height range, the average of the heights in the first height range is the largest and the average of the heights in the third height range is the smallest.
11. The method of claim 8, wherein the determining the target characteristic of the target comprises:
receiving a radar signal of the target sent by a radar device;
performing point cloud operation on the radar signal of the target to obtain point cloud data of the target, wherein the point cloud data comprises height data of each point cloud;
determining the point cloud height distribution of the target based on the height data of each point cloud in the point cloud data of the target;
and taking the point cloud height distribution of the target as the target characteristic of the target.
12. The method of claim 11, wherein if a sample feature matching the target feature is found, obtaining a sample pose corresponding to the found sample feature and taking the sample pose as a target pose of the target comprises:
if the sample feature matched with the target feature is found to be the first height distribution, determining that the sample gesture corresponding to the first height distribution is a first gesture, and taking the first gesture as a gesture detection result of the target; or
if the sample feature matched with the target feature is found to be the second height distribution, determining that the sample gesture corresponding to the second height distribution is a second gesture, and taking the second gesture as a gesture detection result of the target; or
if the sample feature matched with the target feature is found to be the third height distribution, determining that the sample gesture corresponding to the third height distribution is a third gesture, and taking the third gesture as a gesture detection result of the target.
13. The method of claim 12, wherein if a sample feature matching the target feature is found, obtaining a sample pose corresponding to the found sample feature and taking the sample pose as a target pose of the target, further comprising:
under the condition that the point cloud height distribution in the current frame of point cloud data of the target is the third height distribution, determining the point cloud height distribution in the point cloud data of the previous frame of the target;
if the point cloud height distribution in the point cloud data of the previous frame is the first height distribution, detecting whether a support is arranged around the target;
if no supporting object is arranged around the target, taking the fourth gesture as a gesture detection result of the target;
and if a support is arranged around the target, or if the point cloud height distribution in the point cloud data of the previous frame is the second height distribution, taking the third gesture as the gesture detection result of the target.
14. The method of claim 12, wherein if a sample feature matching the target feature is found, obtaining a sample pose corresponding to the found sample feature and taking the sample pose as a target pose of the target, further comprising:
under the condition that the point cloud height distribution in the current frame of point cloud data of the target is the first height distribution, detecting a historical position of the target obtained based on previous frames of point cloud data;
if the current position of the target obtained based on the current frame point cloud data is changed compared with the historical position, taking a fifth gesture as a gesture detection result of the target;
and if the current position is unchanged compared with the historical position, taking the first gesture as a gesture detection result of the target.
15. The method of claim 12, wherein if a sample feature matching the target feature is found, obtaining a sample pose corresponding to the found sample feature and taking the sample pose as a target pose of the target, further comprising:
under the condition that the point cloud height distribution in the current frame of point cloud data of the target is third height distribution, extracting biological characteristics based on radar signals of the target to acquire physiological data of the target;
and if the physiological data of the target match the sleep physiological parameters, taking the sixth gesture as the gesture detection result of the target.
16. A method of detecting an object, comprising:
acquiring a radar signal of a target;
performing feature analysis on the radar signal of the target to obtain target features of the target;
according to the target characteristics, searching sample characteristics matched with the target characteristics;
if the sample characteristics matched with the target characteristics are found, determining a sample gesture corresponding to the found sample characteristics, and taking the sample gesture as a gesture detection result of the target;
the gesture detection result is used for indicating a target gesture of the target.
17. A target display device, comprising:
the space page display module is used for displaying space pages; the space page is used for displaying a target space;
the gesture mark display module is used for displaying gesture marks of the targets in the target space displayed by the space page;
the gesture mark is used for indicating the target gesture of the target.
18. An object detection apparatus, comprising:
the radar signal acquisition module is used for acquiring a radar signal of a target;
the feature analysis module is used for carrying out feature analysis on the radar signal of the target to obtain target features of the target;
the searching module is used for searching sample characteristics matched with the target characteristics according to the target characteristics;
the result acquisition module is used for determining a sample gesture corresponding to the found sample feature if the sample feature matched with the target feature is found, and taking the sample gesture as a gesture detection result of the target; the gesture detection result is used for indicating a target gesture of the target.
19. An electronic device, comprising: at least one processor, and at least one memory, wherein,
the memory has computer readable instructions stored thereon;
the computer readable instructions are executed by one or more of the processors to cause an electronic device to implement the method of any one of claims 1 to 16.
20. A storage medium having stored thereon computer readable instructions, the computer readable instructions being executable by one or more processors to implement the method of any of claims 1 to 16.

Legal Events

PB01: Publication
SE01: Entry into force of request for substantive examination