CN217007681U - Radar system and vehicle - Google Patents

Radar system and vehicle

Info

Publication number: CN217007681U
Application number: CN202122043889.XU
Authority: CN (China)
Original language: Chinese (zh)
Prior art keywords: gesture, radar, seat, adjustment, radar system
Inventors: 刘娴, 王慧, 江涵, 易志伟, 武俊杰
Original and current assignee: Huawei Technologies Co., Ltd. (the listed assignee may be inaccurate)
Application filed by: Huawei Technologies Co., Ltd.
Legal status: Active, granted (the status listed is an assumption, not a legal conclusion)

Classifications

  • User Interface Of Digital Computer (AREA)

Abstract

The embodiment of the application discloses a radar system deployed in the cabin of a vehicle. The cabin includes a steering wheel located in front of the primary driver's seat, and the radar system is deployed on the side of that steering wheel near the front passenger seat. This deployment position is mainly intended for operation by the primary driver, and chiefly reduces signal interference caused by movement of the driver's body and by the driver's arm operating the steering wheel. The application allows the user to complete gesture control without contact, without shifting the line of sight, and with a short arm travel distance, ensuring driving safety and convenience of operation.

Description

Radar system and vehicle
Technical Field
The present application relates to the field of radars, and more particularly, to a radar system and a vehicle.
Background
With the progress of society and the rapid development of material life, the demand for more intelligent and convenient modes of human-computer interaction grows ever stronger. Human-computer interaction is the study of the interaction between a system and its users; the system may be any of various machines, or a computerized system or software. Human-computer interaction technology has extremely high development potential and application value in scenarios such as future terminals and intelligent cabins.
Over their continuous development, human-computer interaction modes can be divided into contact and non-contact interaction. The earliest mode, keyboards or physical keys, offers high accuracy with no redundant operations, but it is not intuitive and requires complex device interfaces to cover all operations. The graphical user interface did away with abstract commands; its interaction device is typically a mouse, but control of the mouse is separated from the display domain of the interface, so the user must operate on the target indirectly, which increases the difficulty of interaction. The touch interface enables direct manipulation, further reducing the user's learning and cognitive cost while retaining some tactile feedback. However, when tapping a touch screen it is often difficult to control the contact point precisely, the granularity of the input signal is far below the response granularity of the interactive elements, and the interaction is still confined to a two-dimensional interface. Current non-contact interaction mainly includes voice-controlled interaction and motion-based interaction; the stringent requirements that voice control places on noisy environments limit its application scenarios.
Gesture recognition is one of the important human-computer interaction modes; it has become a research hotspot and is widely applied in many fields. For example, in a vehicle-mounted environment, because of excessive environmental noise during driving and scenes in which several people may speak in the vehicle at once, the accuracy of speech recognition is often unsatisfactory. With a touch screen, the driver must divert his or her gaze to operate it, which affects driving safety. Therefore, in a vehicle-mounted environment there is a strong demand for gesture recognition as an interaction mode that permits blind, non-contact operation.
Traditional gesture recognition is mainly performed with an optical camera. An optical image can clearly represent the shape and texture of a gesture, but its limitations are also considerable: first, an optical camera performs poorly in strong or dim light; second, it is strongly line-of-sight limited, requiring the user to perform the action within a certain unobstructed space; moreover, the storage and computation costs of optical images are relatively high; in addition, optical technology carries a large risk of privacy leakage and cannot guarantee security. By contrast, gesture recognition based on millimeter waves is unaffected by lighting conditions, which greatly widens its range of application; its low power consumption makes it easy to integrate; and it raises no user-privacy concerns.
Radar-based gesture recognition has the characteristics of high accuracy, good fluency, strong environmental adaptability, and privacy protection; it can be used for contactless fine adjustment at a distance, and has important application value in scenarios such as intelligent cabins. Therefore, a deployment scheme for radar systems within the intelligent vehicle cabin is needed.
SUMMARY OF THE UTILITY MODEL
In a first aspect, the present application provides a radar system deployed in a cabin of a vehicle, the cabin further comprising a primary driver's seat, a secondary driver's seat, and a steering wheel fixed in front of the primary driver's seat; wherein:
the radar system includes: a first radar system comprising a first radar integrated circuit, the first radar integrated circuit comprising: at least one first transmit antenna; at least one first receiving antenna; the first radar integrated circuit is located on the side of the steering wheel near the secondary driver's seat, the steering wheel being in the state of not having been rotated by a user.
Among other things, the radar system may be configured to: providing a radar field; sensing a reflection from a user in the radar field; analyzing reflections from the user in the radar field; and providing radar data based on the analysis of the reflections.
This deployment position is mainly intended for operation by the primary driver, and chiefly reduces signal interference caused by movement of the driver's body and by the driver's arm operating the steering wheel.
According to the embodiment of the application, the user is allowed to complete gesture control without contact, without shifting the line of sight, and with a short arm travel distance, ensuring driving safety and convenience of operation.
In one possible implementation, the at least one first transmit antenna is configured to provide a radar field to at least one of:
an area of the primary driver's seat near the secondary driver's seat; and
an area between the primary driver's seat and the secondary driver's seat.
In one possible implementation, the vehicle cabin further comprises a center console;
the radar system further includes:
a second radar system comprising a second radar integrated circuit, the second radar integrated circuit comprising:
at least one second transmit antenna;
at least one second receiving antenna;
the second radar integrated circuit is located on the side of the center console facing away from the vehicle head.
The radar beam of a radar system deployed near the center console (such as the second radar system in the embodiment of the present application) is directed toward the middle of the cabin, so it can serve the primary and secondary drivers simultaneously, with little interference from occupants' bodies.
In one possible implementation, the at least one second transmitting antenna is configured to provide a radar field to at least one of:
an area in the primary driver's seat near the secondary driver's seat;
an area in the secondary driver's seat near the primary driver's seat; and
an area between the primary driver's seat and the secondary driver's seat.
In a possible implementation, the cabin further comprises an armrest box fixed in the area between the primary driver's seat and the secondary driver's seat;
the radar system further includes:
a third radar system comprising a third radar integrated circuit, the third radar integrated circuit comprising:
at least one third transmit antenna;
at least one third receiving antenna;
the third radar integrated circuit is located on the side of the armrest box facing the center console.
The radar beam of a radar system located at the armrest box (such as the third radar system in the embodiment of the present application) is directed upward, so it can likewise serve the primary and secondary drivers simultaneously, with little interference from occupants' bodies.
In one possible implementation, the at least one third transmit antenna is configured to provide a radar field to at least one of:
an area of the primary driver's seat near the secondary driver's seat;
an area of the secondary driver's seat near the primary driver's seat; and
an area between the primary driver's seat and the secondary driver's seat.
In a second aspect, the present application provides a vehicle comprising: a radar system located within a cabin of the vehicle; the cabin further comprises a primary driver's seat, a secondary driver's seat, and a steering wheel fixed in front of the primary driver's seat; wherein:
the radar system includes:
a first radar system comprising a first radar integrated circuit, the first radar integrated circuit comprising:
at least one first transmit antenna;
at least one first receiving antenna;
the first radar integrated circuit is located on the side of the steering wheel near the secondary driver's seat, the steering wheel being in the state of not having been rotated by a user.
In one possible implementation, the at least one first transmit antenna is configured to provide a radar field to at least one of:
an area of the primary driver's seat near the secondary driver's seat; and
an area between the primary driver's seat and the secondary driver's seat.
In one possible implementation, the vehicle cabin further comprises a center console;
the radar system further includes:
a second radar system comprising a second radar integrated circuit, the second radar integrated circuit comprising:
at least one second transmit antenna;
at least one second receiving antenna;
the second radar integrated circuit is located on the side of the center console facing away from the vehicle head.
In one possible implementation, the at least one second transmitting antenna is configured to provide a radar field to at least one of:
an area in the primary driver's seat near the secondary driver's seat;
an area in the secondary driver's seat near the primary driver's seat; and
an area between the primary driver's seat and the secondary driver's seat.
In a possible implementation, the cabin further comprises an armrest box fixed in the area between the primary driver's seat and the secondary driver's seat;
the radar system further includes:
a third radar system comprising a third radar integrated circuit, the third radar integrated circuit comprising:
at least one third transmit antenna;
at least one third receive antenna;
the third radar integrated circuit is located on the side of the armrest box facing the center console.
In one possible implementation, the at least one third transmitting antenna is configured to provide a radar field to at least one of:
an area of the primary driver's seat near the secondary driver's seat;
an area of the secondary driver's seat near the primary driver's seat; and
an area between the primary driver's seat and the secondary driver's seat.
The embodiment of the application provides a radar system deployed in the cabin of a vehicle, the cabin further comprising a primary driver's seat, a secondary driver's seat, and a steering wheel fixed in front of the primary driver's seat; wherein the radar system comprises: a first radar system comprising a first radar integrated circuit, the first radar integrated circuit comprising: at least one first transmit antenna; at least one first receiving antenna; the first radar integrated circuit is located on the side of the steering wheel near the secondary driver's seat, the steering wheel being in the state of not having been rotated by a user. This deployment position is mainly intended for operation by the primary driver, and chiefly reduces signal interference caused by movement of the driver's body and by the driver's arm operating the steering wheel. The application thus allows the user to complete gesture control without contact, without shifting the line of sight, and with a short arm travel distance, ensuring driving safety and convenience of operation.
Drawings
Fig. 1a is a schematic diagram of a scenario provided in an embodiment of the present application;
Fig. 1b is a schematic diagram of a scenario provided in an embodiment of the present application;
Fig. 1c is a schematic diagram of a scenario provided in an embodiment of the present application;
Fig. 2 is a schematic diagram of a scenario provided in an embodiment of the present application;
Fig. 3 is a schematic diagram of a scenario provided in an embodiment of the present application;
Fig. 4 is a schematic diagram of a scenario provided in an embodiment of the present application;
Fig. 5 is a schematic diagram of a scenario provided in an embodiment of the present application;
Fig. 6 is a flowchart of a function adjustment method provided in an embodiment of the present application;
Fig. 7a is a schematic diagram of a scenario provided in an embodiment of the present application;
Fig. 7b is a schematic diagram of a radar signal provided in an embodiment of the present application;
Fig. 8 is a schematic diagram of radar data processing provided in an embodiment of the present application;
Fig. 9 is a schematic diagram of radar data processing provided in an embodiment of the present application;
Fig. 10 is a schematic diagram of gesture data provided in an embodiment of the present application;
Fig. 11a is a schematic diagram of gesture data provided in an embodiment of the present application;
Fig. 11b is a schematic diagram of gesture data provided in an embodiment of the present application;
Fig. 11c is a schematic diagram of radar data processing provided in an embodiment of the present application;
Fig. 12a is a schematic diagram of a gesture provided in an embodiment of the present application;
Fig. 12b is a flowchart of a function adjustment method provided in an embodiment of the present application;
Fig. 13 is a schematic diagram of radar data processing provided in an embodiment of the present application;
Fig. 14 is a schematic diagram of gesture data provided in an embodiment of the present application;
Fig. 15 is a schematic diagram of gesture data provided in an embodiment of the present application;
Fig. 16 is a schematic diagram of gesture data provided in an embodiment of the present application;
Fig. 17 is a schematic diagram of gesture data provided in an embodiment of the present application;
Fig. 18 is a schematic diagram of gesture data provided in an embodiment of the present application;
Fig. 19 is a schematic diagram of a radar antenna provided in an embodiment of the present application;
Fig. 20 is a schematic diagram of gesture data provided in an embodiment of the present application;
Fig. 21 is a schematic diagram of a radar angle provided in an embodiment of the present application;
Fig. 22 is a schematic diagram of gesture data provided in an embodiment of the present application;
Fig. 23 is a schematic diagram of gesture data provided in an embodiment of the present application;
Fig. 24a is a flowchart of a function adjustment method provided in an embodiment of the present application;
Fig. 24b is a flowchart of a function adjustment method provided in an embodiment of the present application;
Fig. 25 is a schematic structural diagram of a function adjustment apparatus provided in an embodiment of the present application;
Fig. 26 is a schematic structural diagram of a function adjustment apparatus provided in an embodiment of the present application;
Fig. 27 is a schematic structural diagram of a function adjustment apparatus provided in an embodiment of the present application;
Fig. 28 is a schematic structural diagram of a chip provided in an embodiment of the present application.
Detailed Description
The embodiments of the present application will be described below with reference to the drawings. The terminology used in the description of the embodiments section of the present application is for the purpose of describing particular embodiments of the present application only and is not intended to be limiting of the present application.
The terms "first," "second," and the like in the description and in the claims of the present application and in the above-described drawings are used for distinguishing between similar elements and not necessarily for describing a particular sequential or chronological order. It is to be understood that the terms so used are interchangeable under appropriate circumstances and are merely descriptive of the various embodiments of the application and how objects of the same nature can be distinguished. Furthermore, the terms "comprises," "comprising," and "having," and any variations thereof, are intended to cover a non-exclusive inclusion, such that a process, method, system, article, or apparatus that comprises a list of elements is not necessarily limited to those elements, but may include other elements not expressly listed or inherent to such process, method, article, or apparatus.
Embodiments of the present application are described below with reference to the accompanying drawings. As can be known to those skilled in the art, with the development of technology and the emergence of new scenarios, the technical solution provided in the embodiments of the present application is also applicable to similar technical problems. First, an application scenario of the embodiment of the present application is introduced:
the embodiment of the application can be applied to scenes such as intelligent home and intelligent cabins which need to be subjected to function adjustment.
Next, the architecture of the application scenario is described with reference to the product architecture included in the scenario.
Scenario one: smart home
referring to fig. 1a, fig. 1a shows a schematic structural diagram of an intelligent home system provided in an embodiment of the present application. As shown in fig. 1a, the smart home system may include: electronic device 100 (optional), one or more smart home devices 200, cloud server 300 (optional).
With respect to the electronic device 100:
the electronic device 100 may be a portable electronic device such as a mobile phone, a tablet computer, a Personal Digital Assistant (PDA), and a wearable device. Exemplary embodiments of portable electronic devices include, but are not limited to, portable electronic devices that carry an iOS, android, microsoft, or other operating system. The portable electronic device may also be other portable electronic devices such as laptop computers (laptop) with touch sensitive surfaces (e.g., touch panels), etc. It should also be understood that in some other embodiments of the present application, the electronic device 100 may not be a portable electronic device, but may be a desktop computer with a touch-sensitive surface (e.g., a touch pad).
The electronic device 100 may be installed with an application (APP) for managing the smart home device, or the electronic device 100 may access a world wide web (web) page for managing the smart home device. The application or web page for managing smart home devices may be developed and provided by a manufacturer of smart home devices, such as a manufacturer of smart routers (e.g., Huawei).
Regarding the smart home device 200:
the intelligent home equipment is intelligent equipment which can realize information exchange and even autonomous learning through a wireless communication technology, can provide convenient and effective service for users, and reduces the labor capacity of the users. The smart home devices 200 may include smart sockets, smart door locks, smart lamps, smart fans, smart air conditioners, smart curtains, smart televisions, smart electric rice cookers, smart routers, and the like. Illustratively, as shown in fig. 1a, the smart home device 200 may include a smart light fixture 201, a smart tv 202, and a smart sound box 203. The intelligent lighting fixture 201 can control changes of lighting, such as changes of lighting color and brightness. The smart television 202 may perform voice interaction with the user, for example, may receive a voice control instruction of the user to play a favorite television program of the user. Smart speaker 203 may interact with the user by voice, for example, receiving a user's voice control command to play a song that the user likes. In some implementations, smart speaker 203 may have an integrated voice assistant module that may provide interactive voice dialog or query functionality via a "wake up word" (e.g., "hello, art").
The smart home device 200 may be configured with a radar system (the architecture of the radar system may be as shown in fig. 2), the radar system may transmit a radar signal to a monitored area, and receive a reflected signal of the radar signal (which may be referred to as radar data in this embodiment), and through analysis and processing of the reflected signal, state determination (e.g., a moving state, a sleeping state, a static state, etc.) of an object in the monitored area, or information recognition of a gesture (e.g., determination of a gesture category, determination of a motion characteristic of the gesture) may be implemented.
With regard to radar systems:
depending on the implementation of the radar system, the radar signal may have a variety of carriers, such as: when the radar system is a microwave radar, the radar signal is a microwave signal; when the radar system is an ultrasonic radar, the radar signal is an ultrasonic signal; when the radar system is a lidar, the radar signal is a laser signal. It should be noted that, when the radar system is configured to integrate a plurality of different radars, the radar signal may be a set of a plurality of radar signals, which is not limited herein.
The radar system may generate and transmit radar signals into an area that the radar system is monitoring. Referring to fig. 2, the generation and transmission of signals may be implemented by a Radio Frequency (RF) signal generator 12, a radar transmission circuit 14, and a transmission antenna 32. The radar transmit circuitry 14 generally includes any circuitry required to generate signals for transmission via the transmit antenna 32, such as pulse shaping circuitry, transmit trigger circuitry, RF switching circuitry, or other suitable transmit circuitry. The RF signal generator 12 and the radar transmission circuit 14 may be controlled via a processor 20 which issues command and control signals via control lines 34 so that a desired RF signal having a desired configuration and signal parameters is transmitted at the transmission antenna 32.
The radar system may also receive a returned radar signal, which may be referred to as an "echo," "radar data," "echo signal," "echo data," or "reflected signal," at analog processing circuitry 16 via receive antenna 30. Analog processing circuitry 16 generally includes any circuitry required to process signals received via receive antenna 30 (e.g., signal separation, mixing, heterodyne and/or homodyne conversion, amplification, filtering, receive signal triggering, signal switching and routing, and/or other suitable radar signal receiving functions). Accordingly, analog processing circuitry 16 generates one or more analog signals, such as an in-phase (I) analog signal and a quadrature (Q) analog signal. The resulting analog signal is transmitted to and digitized by an analog-to-digital converter (ADC) circuit 18. The digitized signal is then forwarded to processor 20 for reflected signal processing.
It should be understood that the radar system may not be deployed in the smart home devices 200, but may be deployed independently from the smart home devices 200.
For example, taking the smart home device 200 as a smart screen, referring to fig. 1b, the radar system may be deployed at, but not limited to, a corner position of an upper frame of the display screen shown in fig. 1b, and referring to fig. 1c, the radar system may also be deployed independently from the smart home device 200, and is set in a smart home scene as an independent sensing unit.
With respect to the processor:
the processor 20 may be one of various types of processors that implement the following functions: which is capable of processing the digitized received signals and controlling RF signal generator 12 and radar transmission circuit 14 to provide radar operation and functionality of terminal device 100. Thus, the processor 20 may be a Digital Signal Processor (DSP), microprocessor, microcontroller, or other such device.
In some implementations, the processor 20 may include a hardware circuit (e.g., an Application Specific Integrated Circuit (ASIC), a field-programmable gate array (FPGA), a general-purpose processor, a Digital Signal Processor (DSP), a microprocessor or a microcontroller, etc.), or a combination of these hardware circuits, for example, the processor 20 may be a hardware system with an instruction execution function, such as a CPU, a DSP, etc., or a hardware system without an instruction execution function, such as an ASIC, an FPGA, etc., or a combination of the above hardware systems without an instruction execution function and a hardware system with an instruction execution function.
To perform the radar operations and functions of the radar system, the processor 20 interfaces via the system bus 22 with one or more other desired circuits (e.g., one or more memory devices 24 comprised of one or more types of memory, any desired identification of peripheral circuits 26, and any desired input/output circuits 28).
As described above, the processor 20 may interface the RF signal generator 12 and the radar transmission circuit 14 via the control line 34. In an alternative embodiment, the RF signal generator 12 and/or the radar transmit circuit 14 may be connected to the bus 22 such that they may communicate with one or more of the processor 20, the memory device 24, the peripheral circuits 26, and the input/output circuits 28 via the bus 22.
In this embodiment, a target object (e.g., a gesture of a user in this embodiment) may be located in a monitoring area of the radar system, so that the radar system may receive a reflected signal (e.g., the first radar data, the second radar data, and the third radar data in this embodiment) of the target object after the radar signal is reflected by the target object.
In an alternative implementation, processor 20, upon receiving the radar data, may process the radar data to determine the gesture indicated by the reflected signal and the gesture-related information, and perform the related function control based on the gesture-related information.
It should be understood that the smart home system may include a plurality of smart home devices having data processing capability, and a communication connection relationship exists between the smart home devices, so that distributed computation may be implemented by the plurality of smart home devices in the smart home system, and the action of processing to determine the gesture indicated by the reflection signal and the information related to the gesture may be implemented by the plurality of smart home devices in the smart home system.
In the embodiment of the present application, the processor 20 may acquire code stored in the memory device 24 (or a memory device disposed separately from the processor 20) to implement the function adjustment method in the embodiment of the present application.
Specifically, the processor 20 may be a hardware system having a function of executing instructions, the function adjustment method provided in the embodiment of the present application may be a software code stored in a memory, and the processor 20 may acquire the software code from the memory and execute the acquired software code to implement the function adjustment method provided in the embodiment of the present application.
It should be understood that the processor 20 may also be a combination of a hardware system without a function of executing instructions and a hardware system with a function of executing instructions, and some steps in the function adjustment method provided by the embodiment of the present application may also be implemented by a hardware system without a function of executing instructions in the processor 20, which is not limited herein.
In some possible implementations, the step of determining the identity of the target object may also be implemented based on the interaction between the smart home device 200 and the cloud server 300.
The smart home devices 200 may be configured with a wireless communication module, and the smart home devices 200 may establish a communication connection with the cloud server 300 through the wireless communication module.
With respect to the wireless communication module:
the wireless communication module may provide one or more wireless communication modes including a Wireless Local Area Network (WLAN) (e.g., a wireless fidelity (Wi-Fi) network), Bluetooth (BT), a Near Field Communication (NFC), an Infrared (IR) technology, and the like, which are applied to the smart home device 200. In some embodiments, the smart home device 200 may further be configured with a mobile communication module, which may provide a solution including wireless communication technologies such as 2G/3G/4G/5G applied on the electronic device 100.
The smart home devices 200 may be connected to a network through the wireless communication module or the mobile communication module, and further communicate with the cloud server 300 to receive data, instructions, and the like of the cloud server 300, or the smart home devices 200 may report the data, their operating states, operating parameters, and the like to the cloud server 300.
In an alternative implementation, the processor 20 may transmit the radar data to the cloud server 300 after receiving the radar data, and the server may process the radar data to determine the gesture indicated by the reflected signal and the gesture-related information.
In an alternative implementation, the processor 20 may receive a gesture indicated by a reflected signal sent by the cloud server, information related to the gesture, and the like.
With respect to the cloud server 300:
the cloud server 300 is a device providing safe and reliable elastic computing service, and can be used as a media platform to realize communication between the inside of a home and an external control device, so as to meet the requirements of remote control, detection and information exchange. It is to be appreciated that the cloud server 300 can include one or more servers, for example the cloud server 300 can be a cluster of servers, different servers can be used to provide different services. The cloud server 300 is associated with a manufacturer or service provider of the smart home device 200. For example, the cloud server 300 may automatically send a software update to the smart home devices 200 or provide cloud services to the smart home devices 200. In the embodiment of the present application, the cloud server 300 provides an interface for managing an application or a web page of the smart home device. The cloud server 300 may receive, through the interface, an instruction for managing the smart home devices sent by the electronic device 100, and send an instruction to the corresponding smart home devices based on the instruction, so as to manage the smart home devices. For example, the cloud server 300 may instruct the smart light 201 to turn on/off, adjust brightness or color temperature, and the like according to the instruction sent by the electronic device 100.
Scenario two: intelligent cabin
Fig. 3 is a schematic structural diagram of an automobile interior according to an embodiment of the present application. Currently, in the automotive field, a vehicle-mounted terminal such as the head unit (also referred to as the in-vehicle audio/video entertainment system) can be fixed at the center console of the automobile, and its screen may be referred to as the central control display screen or central control screen. In addition, some high-end automobiles are gradually digitizing the displays throughout the cabin, providing one or more display screens for showing content such as a digital instrument panel and the in-vehicle entertainment system. As shown in fig. 3, a plurality of display screens are provided in the cabin, such as a digital instrument display screen 101, a central control screen 102, a display screen 103 in front of the front passenger, a display screen 104 in front of the left rear passenger, and a display screen 105 in front of the right rear passenger.
In addition, a radar system (also simply referred to as a radar in the following embodiments) may be deployed in the automobile. Although fig. 3 shows only one radar 106 near the A-pillar on the driver side, multiple radars may be disposed in the cabin, and their positions are flexible: some may be disposed above the central control screen, some on the left side of the central control screen, some on the A-pillar or B-pillar, and some in the front part of the cabin roof. For a specific description of the radar, refer to the description of the radar system in fig. 2 in the above embodiment.
In order to recognize gesture information of both the primary driver and the front passenger, radars may be provided on the side of the steering wheel near the front passenger seat, on the center console, and on the console box (armrest box) between the driver's seat and the front passenger seat.
For example, referring to fig. 4, which shows a layout of radars in a vehicle cabin: a radar 1 may be provided on the side of the steering wheel near the front passenger seat, and the direction in which radar 1 provides a radar field may point toward the area of the driver's seat near the front passenger seat and the area between the driver's seat and the front passenger seat. Further, a radar 2 may be provided on the side of the center console facing the driver's seat and front passenger seat, and the direction in which radar 2 provides a radar field may point toward the area of the driver's seat near the front passenger seat, the area of the front passenger seat near the driver's seat, and the area between the two seats.
For example, referring to fig. 5, which shows another layout of radars in a vehicle cabin: a radar 1 may be provided on the side of the steering wheel near the front passenger seat, with its radar field directed toward the area of the driver's seat near the front passenger seat and the area between the two seats; further, a radar 2 may be provided on the side of the center console facing the driver's seat and front passenger seat, with its radar field directed toward the area of the driver's seat near the front passenger seat, the area of the front passenger seat near the driver's seat, and the area between the two seats; and further, a radar 3 may be provided on the side of the console box between the driver's seat and front passenger seat that faces the center console, with its radar field directed toward the area of the driver's seat near the front passenger seat, the area of the front passenger seat near the driver's seat, and the area between the two seats.
The embodiment of the application provides deployment positions for three radar systems from the perspectives of convenience for occupants, safety, reduced signal interference, and enhanced gesture-action features.
The first deployment position is mainly intended for operation by the primary driver, and chiefly reduces signal interference caused by movement of the driver's body and by the driver's arm operating the steering wheel.
The radar beam of a radar system deployed near the center console (such as the second radar system in the embodiment of the present application) is directed toward the middle of the cabin, so it can serve the primary and secondary drivers simultaneously, with little interference from occupants' bodies.
The radar beam of a radar system located at the armrest box (such as the third radar system in the embodiment of the present application) is directed upward, so it can likewise serve the primary and secondary drivers simultaneously, with little interference from occupants' bodies.
The embodiment of the application thus allows the user to complete gesture control without contact, without shifting the line of sight, and with a short arm travel distance, ensuring driving safety and convenience of operation.
The vehicle 200 may be a car, a truck, a motorcycle, a bus, a boat, an airplane, a helicopter, a lawn mower, a recreational vehicle, an amusement-park vehicle, construction equipment, a tram, a golf cart, a train, a cart, etc.; the embodiment of the present application is not particularly limited.
Referring to fig. 6, fig. 6 is a schematic diagram illustrating an embodiment of a function adjusting method provided in an embodiment of the present application, where the function adjusting method provided in the embodiment of the present application may be applied to an electronic device or a server, where the electronic device may be a product such as an in-vehicle device, a computer, a smart phone, or a smart watch. As shown in fig. 6, a function adjusting method provided in an embodiment of the present application may include:
601. first radar data is acquired.
Taking an application scene of the smart home as an example, an indoor user can perform radar gesture operation within a detection range of a radar, and the user can adjust a specific function in the smart home through the radar gesture operation.
Taking an application scenario of the intelligent cabin as an example, passengers (such as a primary driver and a passenger in a secondary driving position) in the vehicle can perform radar gesture operation within a detection range of the radar, and a user can adjust a specific function in the vehicle-mounted system through the radar gesture operation.
In this embodiment, the gestures (e.g., the first gesture, second gesture, third gesture, and target gesture) are radar-based, touch-independent gestures (also referred to as "3D gestures"). "Radar gesture" refers to the property that the gesture is performed in space at a distance from the electronic device (the gesture does not require the user to touch the device, although it does not exclude touch). The radar gesture itself may have only two-dimensional activity information, such as a swipe from upper-left to lower-right; but because the gesture occurs at some distance from the electronic device (the "third" dimension, or depth), the radar gestures in the embodiments of the present application may generally be regarded as three-dimensional.
In one possible implementation, when the user performs the gesture operation, the gesture of the user may be located in a monitoring area of the radar system, and the radar system may transmit a radar signal to the monitored area and receive a reflected signal of the gesture of the user to the radar signal. For example, the generation and transmission of signals may be implemented by the RF signal generator 12, the radar transmission circuit 14, and the transmission antenna 32 in the above-described embodiments.
Among other things, the radar system may generate radar signals, which may include, but are not limited to, continuous wave (CW) signals and chirp signals (also known as chirps).
Taking the chirp signal as an example, the chirp signal is an electromagnetic signal whose frequency varies with time. Generally, the frequency of the rising chirp signal increases over time, while the frequency of the falling chirp signal decreases over time. The frequency variation of the chirp signal may take many different forms. For example, the frequency of a Linear Frequency Modulated (LFM) signal varies linearly. Other forms of frequency variation in the chirp signal include exponential variations. In addition to the chirp signal of the type in which the frequency is continuously changed according to some predetermined function (i.e., a linear function or an exponential function), a chirp signal in the form of a stepped chirp signal in which the frequency is changed stepwise may be generated. That is, a typical stepped chirp signal comprises a plurality of frequency steps, where the frequency is constant at each step for some predetermined duration. The step chirp signal may also be pulsed on and off with the pulse being on during some predetermined time period during each step of the chirp scan.
In one possible implementation, the radar system may transmit a chirp signal, whose mathematical expression may be, for example:

s(t) = A · exp(j(2π·f_0·t + π·K·t² + φ_0)), 0 ≤ t < t_c

where K = B/t_c is the frequency-modulation slope, B is the bandwidth of the radio-frequency signal, φ_0 is the fixed initial phase, t_c is the chirp signal period, A is the amplitude, and f_0 is the starting frequency.
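As a concrete illustration of the expression above, the following is a minimal sketch of generating one sampled linear-frequency-modulated chirp. All parameter values (bandwidth, chirp period, sampling rate) are assumptions for the sketch, not values taken from the patent; f_0 is set to 0 to represent the complex-baseband equivalent of the RF chirp.

```python
import numpy as np

# Illustrative parameters (assumptions for this sketch, not from the patent)
B = 250e6        # bandwidth B [Hz]
tc = 60e-6       # chirp signal period t_c [s]
A = 1.0          # amplitude A
phi0 = 0.0       # fixed initial phase
f0 = 0.0         # starting frequency f_0 (0 here: complex-baseband equivalent)
fs = 1e9         # sampling rate [Hz], chosen > 2*B so the full sweep is representable

K = B / tc       # frequency-modulation slope K = B / t_c
t = np.arange(0.0, tc, 1.0 / fs)

# s(t) = A * exp(j * (2*pi*f0*t + pi*K*t**2 + phi0)), 0 <= t < t_c
s = A * np.exp(1j * (2 * np.pi * f0 * t + np.pi * K * t ** 2 + phi0))

# The instantaneous frequency f0 + K*t sweeps linearly from f0 to f0 + B (a rising chirp)
f_inst = f0 + K * t
```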
In one possible implementation, the radar system may transmit radar signals and receive reflected signals from the user's gesture reflections.
Here, the "reflected signal reflected by the gesture of the user" may be understood as: the radar signal impinges on and is reflected by the user's gesture. In the scene of the intelligent home, the radar signal can be a signal which is reflected by a target object when the radar signal strikes a target object during walking, and in the scene of the intelligent cabin, the radar signal can be a signal which is reflected by the target object when the radar signal strikes the target object during getting on or off the vehicle.
Further, the processor may acquire the first radar data, and perform recognition of the gesture of the user and analysis of gesture information based on the first radar data.
It should be understood that the first radar data in the embodiments of the present application may refer to a reflected signal received by a receiving antenna at an analog processing circuit in a radar system, and the reflected signal is an analog signal. After the analog signal is obtained, the analog signal may be transmitted to an analog-to-digital converter circuit and digitized by the circuit to obtain a digital signal.
It should be understood that the analog signal obtained by the analog processing circuit may be transmitted to the analog-to-digital converter circuit and digitized by the analog-to-digital converter circuit to obtain a digital signal, and the first radar data in the embodiment of the present application may also refer to the digitized digital signal, which is not limited herein.
Next, implementations in which the processor obtains the first radar data are described according to the deployment locations of the processor and the radar system:
1. The radar system and the processor are deployed in the same electronic device:
the electronic equipment can be a terminal in an intelligent home or vehicle-mounted equipment in an intelligent cabin;
in one possible implementation, the radar system may be deployed in the electronic device, and after acquiring the first radar data, the radar system may transmit the first radar data to a processor in the electronic device (if the first radar data is an analog signal, the analog-to-digital conversion circuit may convert the analog signal into a digital signal and then transmit the digital signal to the processor), and the processor may process the first radar data.
2. The radar system and the processor are deployed in different electronic devices (for convenience of description, these are referred to below as electronic device A and electronic device B):
In one possible implementation, the radar system may be deployed in electronic device A, and after acquiring the first radar data, it may transmit the first radar data to the processor in electronic device B (if the first radar data is an analog signal, an analog-to-digital conversion circuit in electronic device A may convert it into a digital signal before transmission; alternatively, the analog signal may be transmitted to electronic device B and converted into a digital signal by an analog-to-digital conversion circuit there before being passed to its processor); the processor in electronic device B may then process the first radar data.
3. The radar system is deployed in the electronic device, and the processor is deployed in the cloud server:
In a possible implementation, the radar system may be deployed in the electronic device, and after acquiring the first radar data, it may transmit the first radar data to the processor in the cloud server (if the first radar data is an analog signal, an analog-to-digital conversion circuit in the electronic device may convert it into a digital signal before transmission; alternatively, the analog signal may be transmitted to the cloud server and converted into a digital signal by an analog-to-digital conversion circuit there); the processor in the cloud server may then process the first radar data.
Referring to fig. 7a, fig. 7a illustrates an example operation of radar system 102, implemented here as a frequency-modulated continuous-wave (FMCW) radar. In this environment, user 302 is located within the monitoring range of radar system 102. To detect user 302, radar system 102 transmits radar signal 306 (depicted in fig. 7a as radar transmitted signal 306). At least a portion of radar signal 306 is reflected by user 302; this reflected portion is reflected signal 308 (depicted in fig. 7a as radar received signal 308). Radar system 102 receives reflected signal 308 and processes it to extract data for the radar-based application 206. As depicted, the amplitude of reflected signal 308 is smaller than that of radar signal 306 because of losses incurred during propagation and reflection.
Radar signal 306 includes a sequence of chirps 310-1 through 310-N, where N represents a positive integer greater than one. Radar system 102 may transmit chirps 310-1 to 310-N in consecutive bursts or as time-separated pulses. For example, the duration of each chirp 310-1 to 310-N may be on the order of tens to thousands of microseconds (e.g., between approximately 30 microseconds (μs) and 5 milliseconds (ms)).
The individual frequencies of the chirps 310-1 to 310-N may increase or decrease over time. In the depicted example, radar system 102 employs a dual-slope cycle (e.g., triangular frequency modulation) to linearly increase and then linearly decrease the frequencies of chirps 310-1 to 310-N over time. The dual-slope cycle enables radar system 102 to measure the Doppler shift caused by the motion of user 302.
In general, the transmission characteristics (e.g., bandwidth, center frequency, duration, and transmission power) of the chirps 310-1 to 310-N may be tailored to achieve a particular detection range, range resolution, or doppler sensitivity to detect one or more characteristics of the user 302 or one or more gesture actions performed by the user 302.
At radar system 102, reflected signal 308 represents a delayed version of radar signal 306. The amount of delay is proportional to the slant range (i.e., distance) from the antenna array 212 of radar system 102 to user 302. Specifically, this delay is the sum of the time it takes radar signal 306 to propagate from radar system 102 to user 302 and the time it takes reflected signal 308 to propagate from user 302 back to radar system 102. If user 302 and/or radar system 102 is moving, reflected signal 308 is shifted in frequency relative to radar signal 306 due to the Doppler effect; in other words, the characteristics of reflected signal 308 depend on the motion of the hand and/or of radar system 102. Similar to radar signal 306, reflected signal 308 is composed of one or more of chirps 310-1 to 310-N.
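As a quick numerical illustration of the two relationships just described (round-trip delay proportional to slant range; Doppler shift proportional to radial speed), here is a small sketch using the standard radar relations; the distance, speed, and 60 GHz carrier are made-up example numbers, not values from the patent.

```python
C = 299_792_458.0  # speed of light [m/s]

def round_trip_delay(distance_m: float) -> float:
    """Delay of the reflected signal relative to the transmitted signal: 2*d/c."""
    return 2.0 * distance_m / C

def doppler_shift(radial_speed_mps: float, carrier_hz: float) -> float:
    """Doppler shift for a target moving radially at radial_speed_mps: 2*v/lambda."""
    wavelength = C / carrier_hz
    return 2.0 * radial_speed_mps / wavelength

# Example: a hand 0.4 m away moving toward a 60 GHz radar at 0.5 m/s (assumed numbers)
print(round_trip_delay(0.4))     # ~2.7e-9 s
print(doppler_shift(0.5, 60e9))  # ~200 Hz
```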
In one possible implementation, motion information of an object in the radar field (e.g., motion information of a user's gesture) may be extracted based on the first radar data. The motion information may include, but is not limited to, distance information, velocity information, angle information, and the like. The distance information is contained in the frequency of each echo pulse: performing a fast Fourier transform on a single pulse in fast time yields the distance information of the gesture at the current pulse time, and integrating the distance information of each pulse gives the overall distance-change information of a single gesture. After the FFT over fast time of the raw gesture echo, an FFT is performed over the slow-time dimension; its peak reflects the Doppler frequency of the target, i.e., it contains the target's velocity information. The slow-time FFT must be performed within the same range gate; because overall motion of the target causes range migration, the FFT cannot be applied directly to a single range gate of the whole gesture. The number of accumulated pulses is therefore set reasonably, so that the gesture segment in each FFT operation has essentially no range migration.
Specifically, after the first radar data is acquired, it may be subjected to preliminary processing (e.g., fast Fourier transform (FFT)). The processor may perform a one-dimensional (1D) FFT on the first radar data to obtain a range spectrum (Range-FFT), and obtain a range-Doppler spectrum (Range-Doppler) through a two-dimensional (2D) FFT. The 1D-FFT and 2D-FFT processes are described in detail below:
In one possible implementation, the first radar data may include a plurality of chirp signals, and each chirp signal may be processed to obtain a corresponding range spectrum (Range-FFT). For example, if r(n) is the digitized reflected signal, where n is the number of samples in a single chirp signal period, an N_1-point FFT (also referred to as 1D-FFT) may be computed over r(n) to obtain R(k):

R(k) = FFT(r(n), N_1), N_1 ≥ n;

that is, the 1D-FFT is computed over the reflected signal to obtain the corresponding Range-FFT. The Range-FFT may be composed of a plurality of range bins, which may be expressed as [α_1, α_2, …, α_{N_1/2}], where α_i is the modulus of the complex value of R(k) in the positive frequency domain. The unit distance corresponding to a single range bin may be defined as the distance resolution d_res; the distance value is then d_i = α_i × d_res, and the maximum detection distance is d_max = (N_1/2) × d_res.

The horizontal axis of the Range-FFT is the distance values above, and the vertical axis is the signal reflection intensity corresponding to each distance value, where the signal reflection intensity may be defined as the modulus of the complex signal (for example, for a complex signal a + bj, the intensity is √(a² + b²)). The Range-FFT thus includes N_1/2 distance values and the signal reflection intensity corresponding to each distance value.

Illustratively, referring to fig. 5, a schematic Range-FFT distance spectrum: the abscissa is the distance value d (including d_res, 2·d_res, …, (N_1/2)·d_res), and the vertical axis represents the signal reflection intensity.
In one possible implementation, after a Range-FFT of a chirp signal is calculated, similarly, 1D-FFT processing may be performed on all K chirp signals within a frame to obtain K Range-FFTs.
In one possible implementation, a sequence of K values in the same Range bin of the Range-FFT may be FFT-computed (also referred to as 2D-FFT) to obtain the Range-Doppler spectrum.
Taking chirp signals as the radar data for example, the first radar data may include a plurality of chirp signals. Referring to fig. 7b, each row of the matrix shown in fig. 7b is one chirp signal, and the chirp signals are stacked row by row to form the gesture data (e.g., the first radar data).
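The 1D-FFT/2D-FFT pipeline described above can be sketched as follows: rows of the frame matrix are chirps (fast time), columns are slow time; a 1D-FFT per chirp gives K Range-FFTs, and an FFT across the K values of each range bin gives the range-Doppler spectrum. This is a minimal sketch; array sizes and parameter names are illustrative assumptions, not taken from the patent.

```python
import numpy as np

def range_doppler_map(frame, n1=None):
    """frame: (K, n) complex matrix, one digitized chirp r(n) per row (fig. 7b layout).

    Returns a (K, N1/2) range-Doppler intensity map.
    """
    K, n = frame.shape
    n1 = n if n1 is None else n1               # N1-point FFT, N1 >= n

    # 1D-FFT over fast time: R(k) = FFT(r(n), N1) for each of the K chirps,
    # keeping the positive-frequency half (N1/2 range bins, d_i = alpha_i * d_res)
    range_ffts = np.fft.fft(frame, n=n1, axis=1)[:, : n1 // 2]

    # 2D-FFT over slow time: FFT of the K values in each range bin; its peak
    # reflects the target's Doppler frequency, i.e., its velocity information
    rd = np.fft.fftshift(np.fft.fft(range_ffts, axis=0), axes=0)

    return np.abs(rd)                          # signal reflection intensity (modulus)

# Example with made-up sizes: K = 64 chirps, n = 256 samples per chirp
frame = np.random.randn(64, 256) + 1j * np.random.randn(64, 256)
print(range_doppler_map(frame).shape)          # (64, 128)
```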
In a possible implementation, the radar system can have a long-distance and large-angle detection range, meanwhile, due to excellent radar performance, the radar system is very sensitive to micro-motion, and interference information irrelevant to gestures can be filtered through distance dimensional filtering and speed dimensional filtering.
The distance dimensional filtering refers to filtering out targets (including moving targets and static targets, such as limb motions of a driver in an intelligent cabin scene, and breathing of the driver) outside a gesture area. As shown in fig. 8, the gesture motion is separated in distance from the human body range, and other objects than the gesture distance may be filtered out by using a filter in the distance dimension.
Velocity-dimension filtering uses a fourth-order feedback filter to filter out static and low-speed objects within the gesture area (such as an in-vehicle display, or a stationary or slowly swinging object).
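Continuing the sketch above, the two filtering steps might look as follows; the gesture-region bins and the high-pass cutoff are assumptions, and the fourth-order feedback filter mentioned above is approximated here by a fourth-order Butterworth IIR filter applied along slow time.

```python
import numpy as np
from scipy.signal import butter, lfilter

# range_fft: K x (N1 // 2) matrix from the 1D-FFT sketch above
# Distance-dimension filtering: keep only Range-bins inside the gesture area.
gesture_bins = slice(5, 30)                  # assumed gesture distance region
gated = np.zeros_like(range_fft)
gated[:, gesture_bins] = range_fft[:, gesture_bins]

# Velocity-dimension filtering: a fourth-order feedback (IIR) high-pass along
# slow time suppresses static and low-speed reflectors in the gesture area.
b, a = butter(4, 0.05, btype="highpass")     # cutoff of 0.05*Nyquist is assumed
filtered = lfilter(b, a, gated, axis=0)
```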
602. Opening an adjustment function for a target function based on the first radar data indicating a first gesture and a duration of the first gesture exceeding a first threshold.
In this embodiment, after the first radar data is acquired, data analysis related to the gesture may be performed on the first radar data.
In one possible implementation, part or all of the first radar data may be radar data corresponding to the first gesture. After the first radar data is acquired, the radar data related to the user's gesture must first be identified; gesture-related processing (e.g., determining the gesture category or the gesture duration) can then be performed on the identified radar data.
In one possible implementation, the start time and the end time of the first gesture in the first radar data may be determined by a variance detection method. For example, the variance of each chirp of the gesture echo can be computed: during a gesture, the echo variance increases markedly compared with the no-gesture case, and this property can be used to judge the start and end of a gesture. When the echo variance of a segment of radar signal data exceeds a set threshold θ, the gesture start time is detected: as shown in fig. 9, if the echo variance at a certain point exceeds θ, the radar data from that point onward is treated as gesture data (see "gesture starting point a" in fig. 9). When judging gesture termination, a brief pause may occur during the gesture, so the variance may fall below the threshold for a short period (as in the segment from point b to point c in fig. 9); counting such data as gesture-recognition data would add redundancy and increase the computation load. The end flag of a gesture is therefore set as the echo variances of n consecutive frames (each about 1/30 s) all being smaller than θ: as shown in fig. 9, the echo variances of n consecutive frames starting from point b are all below θ, so point b is identified as the termination point of the gesture. After the echo of one gesture is judged to have been received, the point n frames later (point c in fig. 9) is not taken as the gesture end point; instead, point b in fig. 9 is taken as the end point of the gesture data.
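A hedged sketch of the variance-detection logic described above; the threshold θ and the frame count n_end are assumed inputs rather than values fixed by this embodiment.

```python
import numpy as np

def detect_gesture_bounds(chirp_vars, theta, n_end):
    """Locate gesture start/end from per-chirp echo variances.

    chirp_vars: 1-D array of per-chirp echo variances
    theta:      variance threshold
    n_end:      number of consecutive sub-threshold frames marking termination
    """
    start = None
    for i, v in enumerate(chirp_vars):
        if start is None:
            if v > theta:
                start = i                      # gesture starting point (point a)
        elif v <= theta:
            window = chirp_vars[i:i + n_end]
            # point b: first of n_end consecutive frames below the threshold
            if len(window) == n_end and np.all(window <= theta):
                return start, i
    return start, None
```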
In this embodiment of the application, timing may start when gesture data is detected; if the duration of the gesture data exceeds a first threshold, a fine adjustment mode may be turned on. The first threshold may be greater than 0.7 second and less than 1.5 seconds; for example, it may be 0.7 s, 0.8 s, 0.9 s, 1 s, 1.1 s, and the like.
Fine adjustment may include turning on a function and adjusting the degree of that function, where the degree may be an increase or decrease of a numerical value, the direction of a display position, the zoom of a display area, or the position or form of a piece of hardware. For example, fine adjustment may cover adjusting the volume, the display brightness, the zoom of a displayed image, the movement of a display interface, the height of a window, or the front-rear position of a seat in the cabin. Because degree adjustment is involved, a fine-adjustment gesture needs a certain amount of time for the user to select the degree of adjustment, so its duration is long.
In one possible implementation, a timer may be started when the gesture data is detected; if the gesture ends before its duration exceeds the first threshold, an independent gesture adjustment mode may be started instead.
The independent gesture adjustment mode may include turning a function on or off. Since no degree adjustment is involved, an independent gesture can be a short-duration standalone gesture, such as waving left or waving right.
As interaction schemes are gradually enriched, the number of gesture actions grows and their characteristics differ; independent gestures and fine-adjustment gestures overlap considerably in their motion characteristics and are not easy to distinguish directly. Their biggest difference is gesture duration: independent gestures are standalone actions with a short duration. Table 1 below gives a statistical example of the duration of a continuous circling motion.
TABLE 1: duration statistics of a continuous circling gesture (reproduced as an image in the original publication)
In table 1, taking continuous circling as an example and considering individual gesture differences, the judgment limit is taken as 1 s, which corresponds to 1440 chirps under the test radar parameter configuration of table 1. A gesture shorter than 1 s is determined to be an independent gesture, and a gesture longer than 1 s is regarded as a fine-adjustment gesture; this ensures that the system knows fine adjustment is intended by the beginning of the second circle of the motion.
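As a small illustrative sketch under the parameters of the circling example (1440 chirps per second and a 1 s judgment limit, both taken from the text above), the duration-based mode selection might be expressed as:

```python
# Assumed configuration from the circling example: 1440 chirps per second,
# judgment limit (first threshold) of 1 s.
CHIRPS_PER_SECOND = 1440
FIRST_THRESHOLD_CHIRPS = 1 * CHIRPS_PER_SECOND

def select_mode(gesture_len_chirps: int) -> str:
    """Duration-based mode selection: shorter gestures are independent
    gestures, longer ones wake the fine adjustment mode."""
    if gesture_len_chirps < FIRST_THRESHOLD_CHIRPS:
        return "independent"
    return "fine_adjustment"
```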
In this embodiment of the application, the duration of the gesture indicated by the first radar data is used as the basis for deciding whether to turn on the fine adjustment mode. This part of the radar data need not later serve as a basis for determining the degree of adjustment; it is only a trigger condition for turning on fine adjustment (also referred to in this embodiment as a wake-up gesture). Turning on fine adjustment based on gesture duration has the following advantages. Because gesture types are limited, as function types continue to grow, a scheme that starts the fine adjustment function by gesture type alone may run out of gesture types (independent-gesture functions occupy one part of the gesture types and wake-up gestures another, and the two cannot overlap, or errors occur). With gesture duration as the basis for whether the fine adjustment mode is turned on, the gesture types used by independent-gesture functions and wake-up gestures can overlap, reducing the number of gesture types required for gesture-based function adjustment. In addition, in gesture-based adjustment scenes, especially fine adjustment, the overall gesture design should be as continuous as possible: when the user wants to perform fine adjustment by gesture, the user expects to hold the adjusting gesture for a continuous period of time, and if the rule for the wake-up gesture is likewise defined by whether its duration is long enough, the user perceives the wake-up gesture and the subsequent operation as one coherent process.
When the duration of the first gesture indicated by the first radar data exceeds the first threshold, gesture-category identification may be performed on the radar data associated with the first gesture (i.e., the gesture category of the first gesture is identified). This identification is needed because the gesture category of the first gesture determines the type of function to be fine-tuned subsequently (i.e., determines the target function). It should be understood that gesture categories here can be understood as hand-shape categories: gestures of different categories differ in hand-shape characteristics.
In one possible implementation, the processor may determine that the user's gesture is the first gesture based on the first radar data indicating a gesture of the user whose duration exceeds the first threshold; the first gesture indicates that the adjustment function is to be turned on.
In one possible implementation, a portion of the radar data may be intercepted from the first radar data, and the user's gesture may be determined to be the first gesture based on that portion. Optionally, the portion is the first N radar data items in the first radar data. Unlike gesture detection, gesture interception cuts out a segment of suitable length from the gesture action for gesture recognition. The interception length is therefore key: intercepting too short or too long a segment may cause the partial-gesture recognition to fail. Optionally, gesture interception may be performed by a time interception method or a gesture-feature interception method.
The time interception method follows the same idea as independent-gesture judgment: from the time perspective, it intercepts N radar data items (for example, N chirp signals) after the gesture starts and performs gesture recognition on the intercepted signals. The method is simple, direct, and effective. Its effectiveness comes from two aspects. First, a gesture recognition algorithm based on a multi-dimensional feature-fusion network with an attention mechanism can be used when the gesture category is subsequently determined; this algorithm is insensitive to changes in the length of the gesture signal, because similar gestures have similar features, small differences in duration (i.e., gesture speed) have little influence on the recognition result, and the same gesture differs in length across users. Second, the length of an independent gesture is shorter than the first threshold, and the duration-based judgment guarantees that independent gestures are never intercepted, so their recognition is unaffected.
Taking continuous-circling fine adjustment as an example, table 1 shows that the first clockwise or counterclockwise circle of the wake-up gesture generally does not exceed 1000 chirps, so the interception length N can be set to 1000 chirps. Referring to fig. 10, fig. 10 is a schematic diagram of a 2-circle counterclockwise gesture before interception; referring to fig. 11a, fig. 11a is a schematic diagram of the same gesture after interception.
In the continuous-circling fine-adjustment example above, when the gesture has lasted 1 s, the system determines that a fine-adjustment operation is being performed, intercepts the first 1000 chirps of that 1 s (under the current radar parameter configuration, 1 s equals 1440 chirps, which is more than 1000 chirps) for identification, and the identification result is the category of the wake-up gesture.
The gesture-feature interception method completes interception by analyzing the feature changes of a specific gesture; the interception length is not fixed and varies with the gesture. For example, interception of a hover can be completed from the change of distance or speed, and interception of the first circle of a continuous circling action can be completed from the changes of distance, angle, and speed. Gesture-feature interception avoids the influence of gesture-length differences across users and gestures, and intercepts a single gesture more accurately.
Based on the above description, the time interception method suits cases where different wake-up gestures have similar durations, while the gesture-feature interception method suits wake-up gestures that share a certain feature, so that a single feature can be used to intercept different wake-up gestures. In practical applications, the interception mode can be chosen by comprehensively considering the types and characteristics of the wake-up gestures.
After the intercepted radar data is obtained, the motion characteristics of the user's gesture may be acquired from the first radar data and the gesture determined to be the first gesture according to those motion characteristics; alternatively, the gesture may be determined to be the first gesture through a pre-trained gesture classification network applied to the first radar data.
The following first describes how to obtain the motion characteristics of the user's gesture from the first radar data and determine that the gesture is the first gesture according to those characteristics:
In one possible implementation, the motion feature is a single salient feature of the gesture action (e.g., a distance, speed, or angle feature), and a certain type of gesture can be recognized by analyzing that feature alone. Gesture-type recognition based on motion features requires only partial feature analysis, without feature fusion or a neural network, which simplifies part of the recognition work, reduces computation, and improves real-time performance.
Among the wake-up gestures, take the hover action as an example. Hovering is a static action in which the distance, speed, and angle features all remain unchanged, so it is well suited to judgment at the signal level; and since, with conventional hardware, the distance resolution is far higher than the angle resolution, the hover action is judged using the distance feature. Specifically, the distance feature of the currently received signal can be obtained and the position of the energy maximum extracted at each time instant. Taking the end of the signal as a reference, the analysis extends back toward the start of the signal until the distance difference between the gesture positions at several time instants and the reference position continuously exceeds a threshold, at which point the hand is considered to be no longer hovering and to have a large displacement. In this way the hover time can be obtained quantitatively. In addition, similarly to the fine quantization of distance features, when the slope of a first-order straight-line fit to the distance change stays low over a period of time, a hover action can be assumed.
Referring to fig. 11b, fig. 11b shows an example of signal-level judgment of the hover gesture by the above method; the slope values of the successive segments are -7.2967, -1.5059e-14, -2.8360, -1.5059e-14, 1.2219, -1.5059e-14, 0.1450, 0.5360, 0.5165, -0.0956, -0.7544, 2.3507, -0.8567, -0.6055, and 0.4787. If the slope judgment limit is set to 3, the hover gesture can be seen to last about 2 seconds.
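A sketch of the slope-based hover judgment under assumed segment sizes; the segment length and duration are assumptions, while the slope limit of 3 is taken from the example above.

```python
import numpy as np

def hover_time(range_track, seg_len, seg_duration, slope_limit=3.0):
    """Estimate hover duration from the per-chirp position of the energy
    maximum in the Range-FFT (range_track).

    seg_len:      chirps per fitted segment (assumed)
    seg_duration: duration of one segment in seconds (assumed)
    slope_limit:  judgment limit on the first-order fit slope
    """
    hovering_segments = 0
    t = np.arange(seg_len)
    for i in range(0, len(range_track) - seg_len + 1, seg_len):
        slope = np.polyfit(t, range_track[i:i + seg_len], 1)[0]
        if abs(slope) < slope_limit:           # continuously low slope -> hover
            hovering_segments += 1
    return hovering_segments * seg_duration
```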
Next, how to determine the gesture of the user as a first gesture through a pre-trained gesture classification network according to the first radar data is described:
In a possible implementation, the pre-trained gesture classification network may be a network based on a convolutional multi-dimensional feature-fusion recognition algorithm combined with a self-attention mechanism; the distance, angle, and speed features of the gesture action are fused to obtain the recognition result. It should be understood that this pre-trained gesture classification network may also be applied to independent-gesture category recognition in the independent gesture mode.
Gesture recognition is implemented mainly in two ways: network recognition and signal-level recognition. Network recognition uses the multi-dimensional feature-fusion recognition algorithm combined with an attention mechanism mentioned above; the overall architecture of the algorithm is shown in fig. 11c.
By the above method, the gesture type of the first gesture can be obtained; the target function corresponding to the gesture type of the first gesture is then determined based on a preset correspondence, where the preset correspondence includes mappings between gesture types and functions.
That is, what the adjustment object (i.e., the target function) is at the time of fine adjustment of the subsequent opening can be determined based on the gesture type of the first gesture.
In one possible implementation, the gesture type of the first gesture is finger pinch (for example, as shown in fig. 12 a), and the target function is progress adjustment of video or audio played by an application; or the gesture type of the first gesture is circle drawing, and the target function is volume adjustment; or the gesture type of the first gesture is palm hovering, and the target function is display brightness adjustment or zoom adjustment of a display image; or the gesture type of the first gesture is fist making, and the target function is movement adjustment of a display interface; or the gesture type of the first gesture is light palm shaking, and the target function is vehicle window height adjustment; or the gesture type of the first gesture is fist making and thumb extending, and the target function is front-back position adjustment of a seat in the vehicle cabin.
Specifically, the processor may recognize that a user's gesture exists in the monitoring area of the radar system at a certain moment and determine that the duration of the gesture exceeds the first threshold. If the gesture category is then identified as a finger pinch, a function adjustment mode for progress adjustment of video or audio played by an application can be opened.

Likewise, if the gesture category is identified as circling, a function adjustment mode for volume adjustment can be opened.

If the gesture category is identified as palm hovering, a function adjustment mode for display brightness adjustment or zoom adjustment of a displayed image can be opened.

If the gesture category is identified as making a fist, a function adjustment mode for movement adjustment of the display interface can be opened.

If the gesture category is identified as lightly shaking the palm, a function adjustment mode for window height adjustment can be opened.

If the gesture category is identified as making a fist with the thumb extended, a function adjustment mode for front-rear position adjustment of a seat in the vehicle cabin can be opened.
In an embodiment of the application, when it is determined that the first gesture exists based on the first radar data and the duration of the first gesture exceeds a first threshold, an adjustment function for the target function may be turned on.
Specifically, referring to fig. 12b, when the adjustment function for the target function is turned on, feedback information may be presented, where the feedback information indicates that the adjustment function for the target function is on. In particular, a target presentation corresponding to the target function may be performed, where the target presentation is used to indicate that the adjustment function for the target function has been turned on.
In one possible implementation, the target presentation may include: and displaying a control for adjusting the target function. In an application scene of the smart home, target presentation can be performed on electronic equipment with a display screen; in the intelligent cabin scene, can carry out the target presentation on the well accuse screen in the cabin.
Taking the target function as progress adjustment of video or audio played by an application as an example, the target presentation may be the display of a progress bar.

Taking the target function as volume adjustment as an example, the target presentation may be the display of a volume adjustment control.

Taking the target function as display brightness adjustment as an example, the target presentation may be the display of a brightness adjustment control.

Taking the target function as zoom adjustment of a displayed image as an example, the target presentation may be the display of a zoom control for the image.

Taking the target function as movement adjustment of the display interface as an example, the target presentation may be the display of a movement control for the display interface.
In one possible implementation, the target presentation may include: the vibration prompt of the hardware related to the target function, for example in the scene of the intelligent vehicle cabin, the target presentation may be the vibration prompt of the seat.
Taking the target function as the front-rear position adjustment of the seat in the vehicle cabin as an example, the target presents a vibration prompt that may be a seat for which the seat position adjustment is to be performed.
In one possible implementation, the target presentation may include: an audio prompt, which may include a voice that has enabled a tuning function for the target function, such as, for example, a progress tuning of a video or audio played by the target function for the application, then the target presentation may be "the progress tuning function of the video or audio played by the application has enabled" playing the voice; taking the target function as volume adjustment, the target presentation may be that the volume adjustment function is turned on; taking the target function as the display brightness adjustment as an example, the target presentation may be playing voice "the display brightness adjustment function is turned on"; taking the target function as zoom adjustment of the displayed image as an example, the target presentation may be playing voice "zoom function of image is turned on"; taking the target function as the car window height adjustment as an example, the target presentation may be playing a voice that the car window height adjustment function is turned on; taking the target function as the front-rear position adjustment of the seat in the vehicle cabin as an example, the target presentation may be playing voice "the front-rear position adjustment function of the seat in the vehicle cabin is turned on".
603. Second radar data is acquired.
After the adjustment function for the target function is turned on (fine adjustment), the user can adjust the target function through gestures within the monitoring area of the radar system.
Specifically, the processor may acquire the second radar data, which may be obtained from the reflected signal of a user gesture (the second gesture). For how the processor acquires the second radar data, refer to the description of the acquisition of the first radar data in the above embodiment; details are not repeated here.
In one possible implementation, the first radar data is acquired before the second radar data.
In one possible implementation, the first radar data and the second radar data are radar data acquired continuously in the time domain; alternatively, they are radar data acquired with a target time period between them in the time domain, where the duration of the target time period is smaller than a second threshold. The second threshold may be the time the processor needs to process the first radar data, during which the adjustment function for the target function is not yet turned on.
604. In response to the turning on of the adjustment function for the target function, determining a second gesture indicated by the second radar data and a motion feature of the second gesture according to the second radar data.
In the embodiment of the application, based on the turning on of the adjustment function for the target function, the second gesture indicated by the second radar data and the motion feature of the second gesture may be determined according to the second radar data.
It should be understood that there is also a preset mapping relationship between the first gesture and the second gesture. Specifically, the first gesture may be used to turn on the adjustment mode of the target function corresponding to its gesture type, and once that adjustment mode is on, the user can adjust the target function only with the gesture type corresponding to that adjustment mode (the gesture type of the second gesture).
The second gesture is the gesture the user makes during fine adjustment, while the first gesture serves as the wake-up gesture of the fine adjustment function. The two gestures may differ considerably; that is, the first gesture and the second gesture can be regarded as mutually independent gestures.
In a possible implementation, the first gesture and the second gesture may instead be continuous gesture actions of the user. "Continuous gesture actions" may be understood as the first and second gestures having the same (or only slightly different) hand shape; optionally, the first gesture may be a still gesture or a gesture whose movement amplitude is smaller than a threshold, while the second gesture, which must impose a certain adjustment amplitude on the target function, may be a gesture whose movement amplitude is larger than the threshold.
In this embodiment of the application, the first gesture and the second gesture share the same hand shape, so the gesture used to wake the fine adjustment function and the gesture used to perform the fine adjustment look the same; the user can thus adjust functions accurately at a low learning cost.
In one possible implementation, the first gesture is a finger-pinch gesture and the second gesture is a gesture of keeping the pinch and dragging; or the first gesture is a palm-hover gesture and the second gesture is a lifting or pressing gesture; or the first gesture is a palm-hover gesture and the second gesture is a left-right waving gesture; or the first gesture is a palm-hover gesture and the second gesture is a push-back gesture; or the first gesture is a fist gesture and the second gesture is a gesture of keeping the fist and moving it; or the first gesture is a light palm-shake gesture and the second gesture is a lifting or pressing gesture; or the first gesture is a gesture of making a fist with the thumb extended and the second gesture is a gesture of keeping the thumb extended and moving. The gesture semantics of the second gesture fit the adjustment function and user habits, which helps reduce learning cost.
In one possible implementation, the first gesture and the second gesture are of the same gesture type, and both are gestures whose movement amplitude is greater than a threshold. For example, both the first gesture and the second gesture may be circling gestures.
In an embodiment of the application, the second radar data may be analyzed to determine the motion features of the second gesture, which are used to determine the adjustment information during fine adjustment. The fine adjustment scheme does not require fine quantization of every feature of each adjustment action; a single motion feature, or a subset of features, can be selected according to the characteristics of the gesture to realize the fine adjustment function, and different gestures may employ different motion features.
In one possible implementation, the second radar data is obtained based on reflection of a gesture of the user in a radar field provided by the radar system, and the motion feature of the second gesture may include: distance information of the second gesture, the distance information including at least one of a change in distance between the second gesture and the radar system over time, a rate of change in the distance, and a direction of change in the distance.
Fine adjustment based on the distance feature is applicable when the relative distance between the gesture action and the radar changes continuously and when the reflection points of the hand are relatively concentrated, which favors ranging.
For example, gestures suitable for fine quantization with distance features may include, but are not limited to: palm up/down, left/right waving, back/forth movement, etc.
The distance information is contained in the frequency of each echo pulse. A fast Fourier transform over a single pulse in fast time yields the gesture distance within the current pulse time, and integrating the distance information of all pulses yields the overall distance-change information of a single gesture.
The intermediate frequency (IF) signal can be simplified as

s_IF(t) = A·exp(j2π·f_IF·t + jφ0);

an FFT of this signal yields the signal spectrum, and the position of the spectral peak is found at

f_IF = 2μR/c,

where μ = B/T is the chirp slope. This frequency is proportional to the target distance, so the target distance is obtained as

R = c·f_IF/(2μ).
The schematic diagram of distance information extraction is shown in fig. 13. The distance resolution refers to the ability to distinguish two adjacent targets, namely the minimum distance between targets that ensures the echo signals are not aliased, and it satisfies

d_res = c/(2B);
where c is the speed of light and B is the chirp bandwidth. The correspondence between the common bandwidth and the range resolution in the 60GHz band can be shown in table 2 below:
TABLE 2: common sweep bandwidths in the 60 GHz band and the corresponding range resolutions (reproduced as an image in the original publication)
Thus, increasing the sweep bandwidth improves (i.e., decreases) the range resolution, and the minimum scale for fine adjustment in the distance dimension is then the distance resolution. In this embodiment of the application, the distance resolving capability can be further improved by adjusting the Range-FFT when extracting distance features. Specifically, after Range-FFT processing, one spectral interval corresponds to one range-gate unit, satisfying

Δd = (N_s/N_FFT)·d_res;

where N_s is the number of sampling points of the I/Q chirp signal and N_FFT is the number of FFT points. In general, N_s and N_FFT are equal, in which case Δd = d_res; increasing N_FFT decreases Δd.
By way of example: if N_s is set to 128 and N_FFT to 512, the 128 signal samples form the first 1/4 of the FFT input and the remaining 3/4 is zero-padded; the range-gate spacing is then reduced to 1/4 of its previous value, and the minimum scale for super-resolution fine adjustment becomes 1/4 of the range resolution.
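The zero-padding example might be reproduced as follows; the synthetic beat frequency is an arbitrary assumption used only to show the mechanics.

```python
import numpy as np

Ns, Nfft = 128, 512                     # values from the example above

# A synthetic IF signal with an assumed normalized beat frequency.
x = np.exp(2j * np.pi * 0.1 * np.arange(Ns))

# The 128 samples form the first 1/4 of the FFT input; np.fft.fft with
# n=Nfft zero-pads the remaining 3/4 automatically.
spectrum = np.fft.fft(x, n=Nfft)

# Bin (range-gate) spacing shrinks by Ns/Nfft = 1/4:
# delta_d = (Ns / Nfft) * d_res, i.e. 1/4 of the range resolution here.
```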
In one possible implementation, the distance profile can be obtained by Range-FFT. The horizontal axis of the profile corresponds to the index of the chirp signals, which represents time; the vertical axis represents distance, each unit corresponding to one distance resolution R_res. Because millimeter-wave radar is sensitive to sub-millimeter displacement, echo signals of different intensities fall on different range gates within the same chirp signal. The position of the energy maximum in each chirp echo can then be extracted; the vertical-axis value of that position is the current position of the gesture motion. The results typically contain a few positional discontinuities, caused mainly by noise. To weaken the influence of such outlier points, a sliding-window averaging method is used for smoothing, which also smooths jumps between distance points and facilitates subsequent processing. The sliding-window length may be determined from the length of the extracted fine-feature time window (n frames). First-order fitting can then be performed on each segment of distance-feature data and the slope k of the first-order fitting function extracted, yielding the distance adjustment basis information shown in table 3 below:
TABLE 3: distance adjustment basis information derived from the fitted slope k (reproduced as an image in the original publication)
Taking the above-described lift and press gestures as examples: for the lift-up action, referring to fig. 14, the action lasts about 5/6 second and yields 5 slope values: -4.4657, 2.4895, 5.7985, 8.8039, and 12.5691. The first slope is negative because extending the hand into the radar detection area briefly reduces the relative distance, after which the action moves continuously away from the radar; the magnitude of the values reflects the speed of the action.
For the press-down action, referring to fig. 15, the relative distance decreases continuously, opposite to the lift-up action. The action also lasts about 5/6 second, and all slope values are negative: -9.4675, -8.8054, -4.7695, -8.7964, and -6.6660. Likewise, the magnitude of the absolute value of the slope reflects the speed of the motion.
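A sketch, under an assumed window size, of turning the fitted slopes into adjustment cues as described above (slope sign → direction, slope magnitude → speed, accumulated change → amplitude):

```python
import numpy as np

def distance_adjustment(range_track, window):
    """Derive adjustment cues from a range track: the slope sign gives the
    direction, the slope magnitude the speed, and the accumulated change
    the amplitude. The window size is an assumed parameter."""
    kernel = np.ones(window) / window
    smooth = np.convolve(range_track, kernel, mode="valid")  # sliding-window mean

    slopes = []
    t = np.arange(window)
    for i in range(0, len(smooth) - window + 1, window):
        k = np.polyfit(t, smooth[i:i + window], 1)[0]        # 1st-order fit
        slopes.append(k)

    direction = float(np.sign(np.sum(slopes)))   # e.g. lift-up vs press-down
    amplitude = float(abs(smooth[-1] - smooth[0]))
    speed = float(np.mean(np.abs(slopes)))
    return direction, amplitude, speed
```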
In one possible implementation, the second radar data is obtained based on reflection of a gesture of the user in a radar field provided by the radar system, and the motion characteristic of the second gesture may include: velocity information of the second gesture, the velocity information including a magnitude of change in a rate of movement of the second gesture in the radar field over time.
Fine adjustment based on the speed feature is applicable to fine adjustment processes with significant speed variation, in particular periodic movements, where the speed exhibits a clear sinusoidal relationship with time. Based on this analysis, gestures suited to fine quantization of the speed feature include clockwise or counterclockwise circling, continuous clicking, slider dialing, and so on.
In one possible implementation, after an FFT over the fast time of the original gesture echo, another FFT is performed in the slow-time dimension; its peak reflects the Doppler frequency, i.e., the speed information, of the target. The slow-time FFT must be performed within the same range gate, but the overall movement of the target causes range migration, so the FFT cannot be applied directly to one range gate over the whole gesture. The number of accumulated pulses must be set reasonably so that each FFT operates on a gesture segment with essentially no range migration. Time-frequency analysis of the gesture signal is completed by short-time Fourier transform to extract the gesture's Doppler information, which requires a reasonable number of accumulated pulses and a reasonable window length for the short-time Fourier transform.
In the data processing here, a fast-time FFT is first applied to the original signal to obtain the distance information; the data at each pulse's spectral peak is then extracted and recombined into one column, and an STFT is applied to this column to obtain the Doppler variation of a single gesture.
If the target is in motion with radial velocity v at distance R, the IF signal becomes

s_IF(t) = A·exp{j2π·(2μR/c + 2v/λ)·t + jφ0};

the signal frequency now contains both distance and velocity information, i.e., distance and velocity are coupled and cannot be obtained directly by a one-dimensional FFT. Let the signal sampling period be T_s, the pulse repetition interval be T, the number of sampling points per pulse be N, and the number of received pulses be L; the signal is rewritten as

s(n, l) = A·exp{j2π·[(2μR/c + 2v/λ)·n·T_s + (2v/λ)·l·T] + jφ0};

where n = 0, 1, 2, …, N-1 indexes the sampling points within a single pulse and l = 0, 1, 2, …, L-1 indexes the pulses. It can be observed that the phase of the signal carries the velocity information, and after the one-dimensional FFT this phase appears in the complex envelope of the signal. Performing an FFT on the 1D-FFT result in the second dimension (i.e., with the slow time l·T as the variable) therefore yields the signal center frequency reflecting the target speed, i.e., the target Doppler frequency:

f_d = 2v/λ;

from which the target velocity can be obtained:

v = λ·f_d/2.
the time-frequency analysis of the signal refers to describing the frequency component composition mode of each time range of the signal. Since a stationary signal is often an ideal condition or manufactured by human, and the signal is generally non-stationary, the fourier transform is not sufficient to analyze the stationary signal, and the stationary signal needs to be analyzed by means of time-frequency analysis. And performing time-frequency analysis by short-time Fourier transform.
A short-time Fourier transform (STFT) represents the signal characteristics at a given instant by the segment of signal inside a time window. In the STFT, the time resolution and frequency resolution of the time-frequency map are determined by the window length: increasing the window length lengthens the intercepted signal, so the higher the frequency resolution obtained by the STFT, the worse the time resolution, and vice versa. Operationally, the STFT first multiplies the signal by a window function and then performs a one-dimensional Fourier transform; sliding the window yields a series of Fourier transform results, which are arranged into a two-dimensional representation with time on the horizontal axis and frequency on the vertical axis. Let s(t) be the signal to be analyzed, w(t) the window function, and STFT(t, ω) the time-frequency analysis result; the short-time Fourier transform formula is

STFT(t, ω) = ∫ s(t′)·w(t′ − t)·e^(−jωt′) dt′;
As described above, using the STFT requires setting a window length, which affects both resolutions: high-frequency signals suit a small window for high time-domain resolution, while low-frequency signals suit a large window for high frequency-domain resolution. Because the STFT window length is fixed, its time-frequency analysis capability still has certain limitations. STFT basis functions at different frequencies are shown in fig. 16, and a diagram of Doppler information extraction in fig. 17.
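A minimal sketch of the STFT-based Doppler extraction, assuming a pulse repetition frequency of 1440 Hz and using random placeholder data in place of the recombined peak-position column; scipy.signal.stft is used here as one possible STFT implementation.

```python
import numpy as np
from scipy.signal import stft

# peak_track: the per-pulse complex samples taken at each pulse's spectral
# peak and recombined into one column, as described above. The array here
# is a random placeholder; fs is an assumed pulse repetition frequency.
fs = 1440.0
rng = np.random.default_rng(0)
peak_track = rng.standard_normal(4096) + 1j * rng.standard_normal(4096)

# STFT window length trades time resolution against frequency resolution.
f, t, Zxx = stft(peak_track, fs=fs, nperseg=128, return_onesided=False)

# Dominant Doppler frequency in each time frame.
doppler_track = f[np.argmax(np.abs(Zxx), axis=0)]
```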
Exemplarily, when performing fine adjustment based on velocity information, a velocity feature map may be obtained and the position of the energy maximum in each chirp echo extracted; the vertical-axis value of that position is the current gesture velocity. Local velocity values may fluctuate because a gesture cannot maintain an absolutely steady speed, so a sliding-window averaging method is applied to eliminate local fluctuations of the velocity curve; the resulting curve reflects the relatively uniform movement of the gesture. Variables reflecting the fine gesture features can then be extracted from the shape of the velocity curve. Taking a sinusoidal velocity curve as an example, the number of zero points can be determined by the intermediate value theorem: if a function f(x) is monotonic on [a, b] and f(a)·f(b) < 0, then f has one and only one zero in [a, b]; one zero corresponds to 1/2 period. By first-order differentiation, the number of peaks and valleys (points with derivative 0) can be determined; one peak or one valley also corresponds to 1/2 period. Fitting a sine function then yields the frequency of the sinusoidal velocity variation, and the number of periods equals the total gesture duration divided by the sine period T.
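The zero-point and peak/valley counting might be sketched as follows; the input is assumed to be the already smoothed velocity curve:

```python
import numpy as np

def count_periods(velocity_curve):
    """Count zero points and extrema of a smoothed, roughly sinusoidal
    velocity curve; each corresponds to half a period (see the text)."""
    v = velocity_curve - np.mean(velocity_curve)

    # zero points: sign change between samples (f(a) * f(b) < 0)
    zeros = int(np.sum(v[:-1] * v[1:] < 0))

    # peaks and valleys: sign change of the first-order difference
    dv = np.diff(v)
    extrema = int(np.sum(dv[:-1] * dv[1:] < 0))

    # one zero and one extremum per half period -> average the two counts
    periods = (zeros + extrema) / 4.0
    return zeros, extrema, periods
```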
Similarly to the distance feature, control of the adjustment amplitude and adjustment speed can also be realized from changes of the speed feature in the transform domain. The adjustment direction may come from the recognition result of the wake-up gesture (e.g., clockwise vs. counterclockwise circling), or be determined from the angle feature, since the directions of angle change for clockwise and counterclockwise motion are opposite.
Taking the circling gesture as an example, the above analysis shows that clockwise and counterclockwise circling are typical actions for fine adjustment using the speed feature: after the STFT, the periodic variation of the speed information is more obvious than the periodic fluctuation of the distance.
Fig. 18 shows the speed feature map of a 5-circle counterclockwise motion; the numbers of zero points and of peaks and valleys obtained through the above steps are 11 and 11, respectively. The gesture classification result is judged to be counterclockwise; feedback can be given to the user every 1/4 period according to the numbers of peaks, valleys, and zero points, and the adjustment speed can be judged from the number of peaks and valleys within a fixed processing interval.
In a possible implementation, the second radar data is obtained based on reflection of a gesture of the user in a radar field provided by a radar system, and the motion characteristic of the second gesture may include angle information of the second gesture, the angle information including a change in an angle between the second gesture and the radar system over time, the angle including an azimuth angle and/or a pitch angle.
Fine adjustment based on the angle feature is applicable when the gesture movement shows no large fluctuation in distance or speed, and is especially suitable for gestures whose trajectory lies in XOY planes at different heights of the schematic diagram. For example, it can be applied to motions in different directions (with current hardware the angular resolution is low, so the 360° plane can be divided into 4 areas, as shown in fig. 21). The target angle is obtained from the radar's multiple receiving antennas by measuring the phase difference between the received echoes. A diagram of the target echo received by multiple antennas is shown in fig. 19.
For example, a multiple signal classification (MUSIC) algorithm may be used, measuring the angular change of the gesture with the radar's four-receive-antenna array. Unlike earlier related algorithms that process the covariance matrix of the array received signal directly, the MUSIC algorithm performs an eigendecomposition of the covariance matrix of arbitrary array output data to obtain a signal subspace corresponding to the signal components and a noise subspace orthogonal to it, and then uses the orthogonality of the two subspaces to estimate signal parameters such as incident direction, polarization information, and signal strength. MUSIC is widely applicable and offers high precision and simultaneous measurement of multiple signals, among other advantages. Using MUSIC requires that the radar's array-element spacing be no more than half the carrier wavelength.
Exemplarily, suppose the radar linear array has K elements with spacing d, so that the delay of the received signal between two adjacent elements is d·sinθ/c, and suppose there are M targets at angles θ_m, m = 1, …, M. The M target signals are

S(t) = [S_1(t), S_2(t), …, S_M(t)]^T;

the direction matrix of the signals is

A = [a(θ_1), a(θ_2), …, a(θ_M)];

where the steering vector of the m-th target is

a(θ_m) = [1, e^(−j2πd·sinθ_m/λ), …, e^(−j2π(K−1)·d·sinθ_m/λ)]^T.
Let the array-element noise vector be

N(t) = [n_1(t), n_2(t), …, n_K(t)]^T;

the received signal can then be written as

X(t) = A·S(t) + N(t);
Assuming the signals at the array elements are uncorrelated, the covariance matrix of the received signal is

R = E[XX^H] = A·P·A^H + σ²·I;

where P = E[SS^H] is the signal correlation matrix, σ² is the noise power, and I is the K×K identity matrix. Since R is a full-rank matrix with positive eigenvalues, R can be eigendecomposed into eigenvectors v_i (i = 1, 2, …, K). Because the noise subspace is orthogonal to the signal subspace, the noise matrix is constructed with the noise eigenvectors as columns:
E_n = [v_(M+1), …, v_K];
A spatial spectrum function is defined as

P_mu(θ) = 1/(a^H(θ)·E_n·E_n^H·a(θ));

when a(θ) is orthogonal to the columns of E_n, the denominator reaches its minimum, so a spectral-peak search can be performed on P_mu(θ) and the angle of arrival estimated by finding the peak.
Based on the radar's multiple receiving antennas, the angular change over the course of the gesture can be obtained through the MUSIC algorithm. In this embodiment, each angle calculation uses 8 pulses; that is, the 8 original echo pulses of the single-channel received echo are first spliced as
X_i = [x_i1, x_i2, …, x_iN];
where N = 4096 is the total length of the 8 pulses and i is the channel number. The data of the four channels are then stacked to obtain the input matrix of the MUSIC algorithm:

X = [X_1, X_2, X_3, X_4]^T.
The gesture angle distribution corresponding to this echo segment is then obtained through the MUSIC algorithm. Applying the same operation to every 8 pulses in all the echoes yields the angle information over the whole course of a single gesture. The angle information extraction diagram is shown in fig. 20.
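A sketch of the MUSIC spectral-peak search built from the formulas above; the number of targets M and the candidate angle grid are assumed inputs.

```python
import numpy as np

def music_spectrum(X, d_over_lambda, M, angles_deg):
    """MUSIC pseudo-spectrum for a K-element uniform linear array.

    X:             K x N snapshot matrix (4 x 4096 in the text's example)
    d_over_lambda: element spacing over wavelength (must be <= 0.5)
    M:             assumed number of targets
    angles_deg:    candidate angles for the spectral-peak search
    """
    K = X.shape[0]
    R = X @ X.conj().T / X.shape[1]          # sample covariance E[XX^H]
    _, v = np.linalg.eigh(R)                 # eigenvalues in ascending order
    En = v[:, :K - M]                        # noise subspace E_n

    p = []
    for theta in np.deg2rad(np.asarray(angles_deg)):
        a = np.exp(-2j * np.pi * d_over_lambda * np.arange(K) * np.sin(theta))
        denom = a.conj() @ En @ En.conj().T @ a   # a^H(theta) En En^H a(theta)
        p.append(1.0 / np.abs(denom))             # P_mu(theta)
    return np.array(p)

# The angle of arrival is estimated at the peak of the returned spectrum.
```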
Angular resolution characterizes the ability to resolve two targets at the same distance, expressed as an angle. In general, the angular resolution of FMCW millimeter-wave angle measurement is related to the number of receiving antennas: the more receiving antennas, the higher the accuracy. It satisfies
θ_res = λ/(N·d·cosθ), where N is the number of receiving antennas and d the element spacing.
By using multiple transmitting antennas, the angular resolution can be further improved: the transmitting and receiving antennas of a MIMO array form a virtual array, and the angular resolution then satisfies

θ_res = λ/(N_TX·N_RX·d·cosθ).
According to the above formula, the angular resolutions of two common transceiver configurations are calculated in the table below (reproduced as an image in the original publication).
In summary, the angular resolution can be greatly improved by upgrading the radar hardware configuration.
When the target function is finely adjusted based on the angle feature, an angle feature map can be obtained with the angle-feature extraction method and the distribution of scattering points divided into regions by angle. The palm's moving direction can be determined from the change of the angle over time, and then the direction, magnitude, and speed of the change of the angle feature can be determined: the adjustment direction is reflected by the change of region, the adjustment amplitude by the angle difference, and the adjustment speed by the angle difference per unit time.
Illustratively, due to current angular-resolution limits, the radar detection area may be roughly divided into four sections (as shown in fig. 21); as radar performance improves, the area can be divided more finely for more accurate direction judgment. With 4 regions, each region corresponds to 90°. A palm moving horizontally over the radar from Area 4 to Area 2 produces a profile like that shown in fig. 22.
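A trivial sketch of the four-area division, assuming the azimuth estimate is available in degrees:

```python
def angle_region(azimuth_deg: float) -> int:
    """Map an azimuth estimate to one of the 4 areas of 90 degrees each,
    the coarse division imposed by the current angular resolution."""
    return int((azimuth_deg % 360.0) // 90.0) + 1    # Area 1..4
```

A palm moving over the radar then shows up as the region index changing over time, while the per-frame angle difference gives the adjustment amplitude and, per unit time, the adjustment speed.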
In addition, referring to fig. 23, the target function may be finely adjusted based on fused motion features. Specifically, when multiple high-degree-of-freedom gestures need fine adjustment, a single dimension of distance, speed, or angle (azimuth or pitch) may not suffice, and fusion features can be used instead. By increasing the number of transceiver antennas or adopting a virtual-aperture technique to further improve the radar's angular resolution and accuracy, the angle, distance, and speed features can be fused to locate and track the gesture in real time, achieving a function similar to moving an air mouse.
Optionally, to ensure real-time performance, a short-interval fine feature quantization method may be adopted: fine features are extracted once per time window (n frames), the gesture data of each processed segment is not stored, and the data is refreshed quickly.
In this embodiment of the application, on the basis of gesture feature extraction, the gesture features, including distance, speed, horizontal angle, pitch angle, and the like, are finely quantized to obtain variables reflecting the direction, amount, and speed of feature change, so that bidirectional fine adjustment with different amplitudes and speeds, high stability, and strong generalization can be realized.
605. According to the motion characteristics, adjusting information is determined, wherein the adjusting information comprises at least one of adjusting amplitude, adjusting direction and adjusting speed, and the target function is adjusted based on the adjusting information.
Taking the target function as progress adjustment of video or audio played by an application as an example, the adjustment amplitude may indicate the size of the progress change, the adjustment direction may indicate the direction of the progress change (for example, forward or backward adjustment of the progress), and the adjustment speed may indicate how fast the progress is adjusted.
Taking the target function as volume adjustment as an example, the adjustment amplitude may indicate the size of the volume change, the adjustment direction may indicate whether the volume is turned up or down, and the adjustment speed may indicate how fast the volume is adjusted.
Taking the target function as display brightness adjustment as an example, the adjustment amplitude may indicate the size of the brightness change, the adjustment direction may indicate whether the brightness is increased or decreased, and the adjustment speed may indicate how fast the brightness is adjusted.
Taking the target function as zoom adjustment of the display image as an example, the adjustment amplitude may indicate the zoom scale size of the display image, the adjustment direction may indicate zoom-in or zoom-out adjustment, and the adjustment speed may indicate the adjustment speed of zoom.
Taking the target function as the movement adjustment of the display interface as an example, the adjustment amplitude can indicate the displacement of the movement during adjustment, the adjustment direction can indicate the movement direction during adjustment, and the adjustment speed can indicate the movement speed during adjustment.
Taking the target function as the window height adjustment as an example, the adjustment amplitude can indicate the adjustment size of the window height, the adjustment direction can indicate whether the window height is adjusted upwards or downwards, and the adjustment speed can indicate the adjustment speed of the window height.
Taking the target function as front-rear position adjustment of a seat in the vehicle cabin as an example, the adjustment amplitude may indicate the size of the position change, the adjustment direction may indicate forward or backward adjustment of the seat, and the adjustment speed may indicate how fast the seat position is adjusted.
In one possible implementation, the user may close the fine adjustment function with a terminating gesture (a third gesture). Specifically, the processor may obtain third radar data and close the adjustment function for the target function based on the third gesture indicated by that data, where the third gesture is a hand-withdrawal gesture or a hovering gesture. The terminating action serves as the end mark of fine adjustment and avoids mis-adjustment caused by redundant motions; most fine adjustment functions can be terminated directly by withdrawing the hand, while some that are sensitive to distance change use hovering to terminate. Optionally, the third gesture may follow the second gesture with strong continuity.
For example, refer to fig. 24a, which is a flowchart of a function adjustment method provided in an embodiment of this application. As shown in fig. 24a, the embodiment organically combines fine adjustment with gesture classification recognition: it both ensures that gesture classification recognition outputs a recognition result to complete single-instruction operation, and allows a combined gesture or continuously repeated gesture to enter the fine adjustment function.
An embodiment of this application provides a function adjustment method, including: acquiring first radar data; turning on an adjustment function for a target function based on the first radar data indicating a first gesture whose duration exceeds a first threshold; acquiring second radar data; in response to the turning on of the adjustment function for the target function, determining, from the second radar data, a second gesture indicated by the second radar data and the motion features of the second gesture; and determining adjustment information from the motion features, the adjustment information including at least one of an adjustment amplitude, an adjustment direction, and an adjustment speed, and adjusting the target function based on the adjustment information. As explained above, the duration of the gesture indicated by the first radar data serves only as the trigger condition for turning on fine adjustment (the wake-up gesture), not as a basis for determining the degree of adjustment. This lets the gesture types used by independent-gesture functions and wake-up gestures overlap, reducing the number of gesture types required for gesture-based function adjustment, and it keeps the overall gesture design coherent: the user holds the adjusting gesture for a continuous period of time, and since the wake-up rule is likewise defined by duration, the wake-up gesture and the subsequent operation feel continuous. Using gesture duration as the basis for whether the fine adjustment mode is turned on thus better matches the user's thinking and usage habits and reduces the user's learning cost.
Referring to fig. 24b, fig. 24b is a flowchart illustrating a function adjusting method provided in an embodiment of the present application, where the method may include:
2401. Obtaining target radar data, where the target radar data is obtained based on reflection of a target gesture of a user in a radar field provided by a radar system;
in the design with the wake gesture, the target radar data may be the second radar data described above; similar details are not repeated here.
2402. Determining the motion characteristics of the target gesture according to the target radar data; the feature types of the motion features of the target gesture include at least two of range information, velocity information, or angle information, the range information including changes in distance between the target gesture and the radar system over time, the velocity information including changes in relative velocity of the target gesture and the radar system over time, the angle information including changes in angle of the target gesture in the radar field over time, the angle including azimuth and/or pitch.
In one possible implementation, the variation of the distance over time includes:
at least one of a magnitude of change of the distance over time, a rate of change of the distance over time, or a direction of change of the distance over time;
the adjustment amplitude is related to the value of the change of the distance with time, the adjustment speed is related to the change rate of the distance with time, and the adjustment direction is related to the change direction of the distance with time.
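As a sketch of the mapping above (change value to adjustment amplitude, change rate to adjustment speed, change direction to adjustment direction), assuming distance samples in meters with strictly increasing timestamps in seconds; angle features could be handled analogously. This is an illustration, not the patent's algorithm.

```python
def adjustment_from_distance(distances_m, timestamps_s):
    """Derive adjustment information from distance-over-time samples."""
    delta = distances_m[-1] - distances_m[0]              # change value
    rate = delta / (timestamps_s[-1] - timestamps_s[0])   # change rate
    return {
        "magnitude": abs(delta),                             # adjustment amplitude
        "speed": abs(rate),                                  # adjustment speed
        "direction": "increase" if delta > 0 else "decrease",  # adjustment direction
    }

# Example: a hand moving toward the radar over 0.2 s yields a "decrease"
# adjustment of magnitude 0.15 at speed 0.75.
print(adjustment_from_distance([0.30, 0.22, 0.15], [0.0, 0.1, 0.2]))
```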
In one possible implementation, the target gesture is a periodic gesture, and the change in relative velocity over time is used to determine a number of gesture cycles of the target gesture;
the adjustment magnitude is related to the number of cycles, and the adjustment speed is related to the number of gesture cycles of the target gesture within a fixed time.
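A sketch of counting gesture cycles from relative-velocity samples, under the assumption that one to-and-fro cycle of a periodic gesture produces two zero crossings of the radial velocity; the sampling format is illustrative.

```python
def count_gesture_cycles(velocities_mps):
    """Count full cycles of a periodic gesture from radial-velocity samples."""
    sign_changes = sum(
        1 for v0, v1 in zip(velocities_mps, velocities_mps[1:])
        if v0 * v1 < 0  # velocity changed sign between consecutive samples
    )
    return sign_changes // 2  # two zero crossings per full cycle

def adjustment_speed(cycles, window_s):
    """Adjustment speed as gesture cycles per fixed time window, per the text above."""
    return cycles / window_s
```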
In one possible implementation, the variation of the angle over time includes:
at least one of a magnitude of change of the angle over time, a rate of change of the angle over time, or a direction of change of the angle over time;
the adjustment amplitude is related to the change value of the angle along with the time, the adjustment speed is related to the change rate of the angle along with the time, and the adjustment direction is related to the change direction of the angle along with the time.
In one possible implementation, it may be determined that the feature type of the motion feature of the target gesture includes the speed information based on the target gesture being a periodic gesture or a gesture whose relative velocity with respect to the radar system changes continuously;
determining that the feature type of the motion feature of the target gesture comprises the distance information based on the fact that the target gesture is a gesture with a continuously changing distance from the radar system;
determining that the feature type of the motion feature of the target gesture includes the angle information based on the target gesture being a gesture with a constantly changing angle in the radar field.
According to the embodiment of the application, the corresponding motion feature types can be obtained for different gesture types, reducing the data processing amount while ensuring accurate recognition.
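A minimal dispatch sketch of this per-gesture feature selection; the gesture names and feature sets are illustrative assumptions, not labels from the patent.

```python
FEATURES_BY_GESTURE = {
    "wave_left_right": {"velocity"},   # periodic gesture: speed information
    "push_pull":       {"distance"},   # distance keeps changing
    "circle":          {"angle"},      # angle sweeps through the radar field
}

def features_to_extract(gesture_type):
    # Fall back to all feature types when the gesture type is unknown.
    return FEATURES_BY_GESTURE.get(gesture_type, {"distance", "velocity", "angle"})
```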
For more description about step 2402, reference may be made to the description of step 604 in the foregoing embodiment, which is not described herein again.
2403. According to the motion characteristics, adjusting information is determined, wherein the adjusting information comprises at least one of adjusting amplitude, adjusting direction and adjusting speed, and the target function is adjusted based on the adjusting information.
When a function is finely adjusted, the adjusting action has at least one significantly changing characteristic, such as distance, angle, or speed. On the basis of extracting the motion features of the gesture, the embodiment of the application finely quantizes the gesture characteristics, including distance, speed, horizontal angle, pitch angle, and the like, to obtain variables reflecting the direction, amount, and rate of the characteristic change. Fine adjustment that is bidirectional, of varying amplitude and speed, highly stable, and strongly generalizable can thereby be realized.
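A sketch of applying such adjustment information to a target function, reusing the adjustment-information dictionary from the earlier distance sketch; the volume scale and scaling constant are assumptions for illustration.

```python
def apply_adjustment(current_value, adj, step_per_unit=10.0, lo=0.0, hi=100.0):
    """Apply adjustment information to a target function's value (e.g. volume)."""
    signed = adj["magnitude"] if adj["direction"] == "increase" else -adj["magnitude"]
    return max(lo, min(hi, current_value + signed * step_per_unit))  # clamp to range
```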
Referring to fig. 25, fig. 25 is a schematic structural diagram of a function adjusting device provided in an embodiment of the present application, where the device 2500 includes:
an obtaining module 2501, configured to obtain first radar data;
for a detailed description of the obtaining module 2501, refer to step 601 and step 603, and similar parts are not described herein again.
A function opening module 2502 configured to open an adjustment function for a target function based on the first radar data indicating a first gesture and a duration of the first gesture exceeding a first threshold;
for a detailed description of the function opening module 2502, reference may be made to step 602, and similar parts are not described herein again.
The acquiring module 2501 is further configured to acquire second radar data;
a function adjusting module 2503, configured to determine, according to the second radar data, a second gesture indicated by the second radar data and a motion feature of the second gesture in response to the opening of the adjustment function for the target function; and,
according to the motion characteristics, adjusting information is determined, wherein the adjusting information comprises at least one of adjusting amplitude, adjusting direction and adjusting speed, and the target function is adjusted based on the adjusting information.
For a detailed description of the function adjustment module 2503, reference may be made to step 604 and step 605, and similar parts are not described herein again.
In one possible implementation, the first threshold is greater than 0.7 seconds and less than 1.5 seconds.
In one possible implementation, the first radar data is acquired before the second radar data.
In one possible implementation, the first radar data and the second radar data are radar data acquired continuously in the time domain; or,
the first radar data and the second radar data are radar data acquired at an interval of a target time period in the time domain, and the duration of the target time period is smaller than a threshold.
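A minimal sketch of this temporal relation, with an assumed maximum gap; the threshold value and timestamp convention are illustrative only.

```python
MAX_GAP_S = 0.3  # assumed threshold for the target time period

def temporally_linked(first_data_end_s, second_data_start_s):
    """True if the second radar data follows the first continuously or
    within a gap shorter than the threshold."""
    gap = second_data_start_s - first_data_end_s
    return 0.0 <= gap <= MAX_GAP_S
```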
In one possible implementation, the first gesture and the second gesture are continuous gesture actions of the user.
In one possible implementation, the first gesture and the second gesture are of the same gesture type, the first gesture is a static gesture or a gesture whose movement amplitude is smaller than a threshold, and the second gesture is a gesture whose movement amplitude is larger than the threshold.
In one possible implementation, the first gesture is a finger pinch gesture, and the second gesture is a hold-pinch-and-drag gesture; or,
the first gesture is a palm hovering gesture, and the second gesture is an upward lifting gesture or a downward pressing gesture; or,
the first gesture is a palm hovering gesture, and the second gesture is a left-right waving gesture; or,
the first gesture is a palm hovering gesture, and the second gesture is a push-back gesture; or,
the first gesture is a fist-making gesture, and the second gesture is a fist-holding and moving gesture; or,
the first gesture is a palm shake gesture, and the second gesture is an upward lifting gesture or a downward pressing gesture; or,
the first gesture is a fist-making and thumb-extending gesture, and the second gesture is a gesture of holding the fist with thumb extended while pushing back and forth.
In one possible implementation, the first gesture and the second gesture are of the same gesture type, and both the first gesture and the second gesture are gestures whose movement amplitude is greater than a threshold.
In one possible implementation, the first gesture and the second gesture are both circling gestures.
In a possible implementation, the function opening module 2502 is further configured to:
before the opening of the adjustment function for the target function, determine, based on a preset correspondence, that the gesture type of the first gesture corresponds to the target function, where the preset correspondence includes mappings between gesture types and functions; a mapping sketch in code follows the examples below.
In one possible implementation, the gesture type of the first gesture is finger pinch, and the target function is progress adjustment of video or audio played by an application; or,
the gesture type of the first gesture is circling, and the target function is volume adjustment; or,
the gesture type of the first gesture is palm hovering, and the target function is display brightness adjustment or zoom adjustment of a displayed image; or,
the gesture type of the first gesture is fist making, and the target function is movement adjustment of a display interface; or,
the gesture type of the first gesture is light palm shaking, and the target function is vehicle window height adjustment; or,
the gesture type of the first gesture is fist making with thumb extended, and the target function is front-back position adjustment of a seat in the cabin.
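The promised sketch of such a preset correspondence, encoding the examples above; the keys and values are simplified labels chosen for illustration, not identifiers from the patent.

```python
PRESET_CORRESPONDENCE = {
    "finger_pinch":   "media_progress",      # video/audio progress adjustment
    "circle":         "volume",
    "palm_hover":     "display_brightness",  # or zoom, depending on context
    "fist":           "interface_pan",
    "palm_shake":     "window_height",
    "fist_thumb_out": "seat_position",
}

def target_function_for(gesture_type):
    """Return the target function mapped to a wake-gesture type, if any."""
    return PRESET_CORRESPONDENCE.get(gesture_type)
```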
In one possible implementation, the apparatus further comprises:
and a presentation module, configured to perform target presentation corresponding to the target function before the target function is adjusted based on the adjustment information, where the target presentation is used to indicate that an adjustment function for the target function has been opened.
In one possible implementation, the target presentation includes at least one of:
displaying a control for adjusting the target function;
a vibration alert of hardware associated with the target function; and
a voice prompt.
In one possible implementation, the indicating a first gesture based on the first radar data and a duration of the first gesture exceeding a first threshold includes:
based on the first radar data indicating a gesture of a user and the duration of the gesture of the user exceeding a first threshold, determining the gesture of the user as a first gesture according to the first radar data, wherein the first gesture is used for indicating to start the adjusting function.
In one possible implementation, the determining, from the first radar data, that the gesture of the user is a first gesture includes:
intercepting part of the radar data from the first radar data;
and determining the gesture of the user as a first gesture according to the partial radar data.
In one possible implementation, the partial radar data is the first N pieces of radar data in the first radar data.
In one possible implementation, the determining, from the first radar data, that the gesture of the user is a first gesture includes:
acquiring the motion features of the gesture of the user according to the first radar data, and determining that the gesture of the user is the first gesture according to the motion features of the gesture of the user; or,
determining that the gesture of the user is the first gesture through a pre-trained gesture classification network according to the first radar data.
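A sketch of determining the wake gesture from only the leading pieces of the first radar data; the interception length and the `classifier` object (a hypothetical pre-trained model with a `predict` method) are assumptions.

```python
N_WAKE_FRAMES = 8  # assumed interception length

def detect_wake_gesture(first_radar_data, classifier):
    """Classify the wake gesture from only the leading pieces of the data."""
    partial = first_radar_data[:N_WAKE_FRAMES]  # the first N pieces
    return classifier.predict(partial)          # e.g. a gesture-type label
```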
In one possible implementation, the second radar data is obtained based on reflection of a gesture of the user in a radar field provided by a radar system, and the motion characteristic of the second gesture includes:
distance information of the second gesture, the distance information including at least one of a change in distance between the second gesture and the radar system over time, a rate of change in the distance, and a direction of change in the distance.
In one possible implementation, the second radar data is obtained based on reflection of a gesture of the user in a radar field provided by a radar system, and the motion characteristic of the second gesture includes:
velocity information of the second gesture, the velocity information including a magnitude of change in a rate of movement of the second gesture in the radar field over time.
In one possible implementation, the second radar data is obtained based on reflection of a gesture of the user in a radar field provided by a radar system, and the motion characteristic of the second gesture includes:
angle information of the second gesture, the angle information including a change over time in an angle between the second gesture and a radar system, the angle including an azimuth angle and/or a pitch angle.
In one possible implementation, the obtaining module 2501 is further configured to:
obtaining third radar data after the adjusting the target function based on the adjustment information;
the device further comprises:
a function closing module, configured to close the adjustment function for the target function based on the third radar data indicating a third gesture, where the third gesture is a dismissal gesture or a hover gesture.
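A sketch of this closing step, complementing the `FineAdjustController` sketched earlier; the gesture labels are assumed for illustration.

```python
CLOSE_GESTURES = {"dismissal", "hover"}  # labels assumed for illustration

def maybe_close(controller, gesture_label):
    """Close the adjustment function when a third (closing) gesture appears."""
    if controller.adjusting and gesture_label in CLOSE_GESTURES:
        controller.adjusting = False
```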
As described for the method embodiment above, the duration of the gesture indicated by the first radar data serves only as the trigger condition (wake-up gesture) for opening the fine adjustment mode, not as a basis for determining the adjustment degree. This allows the gesture types used by standalone gesture functions and by wake-up gestures to overlap, reducing the number of gesture types required, and it keeps the wake-up gesture and the subsequent adjustment gesture coherent for the user, which better matches user intuition and habits and reduces the learning cost.
Referring to fig. 26, fig. 26 is a schematic structural diagram of a function adjusting device provided in an embodiment of the present application, where the device 2600 includes:
the obtaining module 2601 is configured to obtain target radar data, where the target radar data is obtained based on reflection of a target gesture of a user in a radar field provided by a radar system.
For a detailed description of the obtaining module 2601, reference may be made to the description of step 2401 in the foregoing embodiment, which is not described herein again.
A motion feature determination module 2602, configured to determine a motion feature of the target gesture according to the target radar data; the feature types of the motion features of the target gesture include at least two of range information, velocity information, or angle information, the range information including changes in distance between the target gesture and the radar system over time, the velocity information including changes in relative velocity of the target gesture and the radar system over time, the angle information including changes in angle of the target gesture in the radar field over time, the angle including azimuth and/or pitch;
for a detailed description of the motion characteristic determining module 2602, reference may be made to the description of step 2402 in the foregoing embodiment, which is not described herein again.
A function adjusting module 2603, configured to determine adjustment information according to the motion characteristics, where the adjustment information includes at least one of an adjustment magnitude, an adjustment direction, and an adjustment speed, and adjust the target function based on the adjustment information.
For a detailed description of the function adjusting module 2603, reference may be made to the description of step 2403 in the foregoing embodiment, which is not described herein again.
In one possible implementation, the variation of the distance over time includes:
at least one of a magnitude of change of the distance over time, a rate of change of the distance over time, or a direction of change of the distance over time;
the adjustment amplitude is related to the value of the change of the distance with time, the adjustment speed is related to the change rate of the distance with time, and the adjustment direction is related to the change direction of the distance with time.
In one possible implementation, the target gesture is a periodic gesture, and the change in relative velocity over time is used to determine a number of gesture cycles of the target gesture;
the adjustment magnitude is related to the number of cycles, and the adjustment speed is related to the number of gesture cycles of the target gesture within a fixed time.
In one possible implementation, the variation of the angle over time includes:
at least one of a magnitude of change of the angle over time, a rate of change of the angle over time, or a direction of change of the angle over time;
the adjustment amplitude is related to the change value of the angle along with the time, the adjustment speed is related to the change rate of the angle along with the time, and the adjustment direction is related to the change direction of the angle along with the time.
In one possible implementation, the motion feature determination module 2602 is further configured to: before the motion feature of the target gesture is determined according to the target radar data, determine that the feature type of the motion feature of the target gesture includes the speed information based on the target gesture being a periodic gesture or a gesture whose relative velocity with respect to the radar system changes continuously;
determining that the feature type of the motion feature of the target gesture comprises the distance information based on the fact that the target gesture is a gesture with a changing distance from the radar system;
determining that the feature type of the motion feature of the target gesture includes the angle information based on the target gesture being a gesture with a constantly changing angle in the radar field.
When a function is finely adjusted, the adjusting action has at least one significantly changing characteristic, such as distance, angle, or speed. On the basis of extracting the motion features of the gesture, the gesture characteristics, including distance, speed, horizontal angle, and pitch angle, are finely quantized to obtain variables reflecting the direction, amount, and rate of the characteristic change, so that bidirectional fine adjustment with different amplitudes and speeds, high stability, and strong generalization can be realized.
Next, a function adjusting device provided in the embodiment of the present application is introduced, please refer to fig. 27, and fig. 27 is a schematic structural diagram of the function adjusting device provided in the embodiment of the present application. Specifically, the function adjusting apparatus 2700 includes: a receiver 2701, a transmitter 2702, a processor 2703 and a memory 2704 (wherein the number of processors 2703 in the function adjusting apparatus 2700 may be one or more, one processor is exemplified in fig. 27), wherein the processor 2703 may include an application processor 27031 and a communication processor 27032. In some embodiments of the application, the receiver 2701, the transmitter 2702, the processor 2703, and the memory 2704 may be connected by a bus or other means.
Memory 2704 may include read-only memory and random access memory and provides instructions and data to processor 2703. A portion of memory 2704 may also include non-volatile random access memory (NVRAM). The memory 2704 stores operating instructions executable by the processor, executable modules or data structures, or a subset or an extended set thereof, where the operating instructions may include various operating instructions for implementing various operations.
The processor 2703 controls the operation of the radar system (including the antenna, receiver 2701 and transmitter 2702). In a particular application, the various components of the radar system are coupled together by a bus system that may include a power bus, a control bus, a status signal bus, etc., in addition to a data bus. For clarity of illustration, the various buses are referred to in the figures as a bus system.
The function adjusting method disclosed in the embodiments of the present application (shown in fig. 6 and fig. 24b) can be applied to the processor 2703 or implemented by the processor 2703. The processor 2703 may be an integrated circuit chip having signal processing capabilities. In implementation, the steps of the above method may be performed by integrated logic circuits of hardware or by instructions in the form of software in the processor 2703. The processor 2703 may be a general-purpose processor, a digital signal processor (DSP), a microprocessor, or a microcontroller, and may further include an application-specific integrated circuit (ASIC), a field-programmable gate array (FPGA) or other programmable logic device, a discrete gate or transistor logic device, or discrete hardware components. The processor 2703 may implement or perform the methods, steps, and logic blocks disclosed in the embodiments of the present application. A general-purpose processor may be a microprocessor, or the processor may be any conventional processor. The steps of the method disclosed in connection with the embodiments of the present application may be directly implemented by a hardware decoding processor, or implemented by a combination of hardware and software modules in a decoding processor. The software module may be located in a storage medium well known in the art, such as RAM, flash memory, ROM, PROM, EPROM, or registers. The storage medium is located in the memory 2704, and the processor 2703 reads the information in the memory 2704 and completes the steps of the function adjusting method provided by the above embodiments in combination with its hardware.
Receiver 2701 may be used to receive input digital or character information and to generate signal inputs related to the relevant settings and function control of the radar system. The transmitter 2702 may be used to output digital or character information through a first interface; the transmitter 2702 may also be used to send instructions to a disk group through the first interface to modify data in the disk group.
In one possible implementation, the apparatus further comprises a radar system for:
providing a radar field;
sensing a reflection from a user in the radar field;
analyzing reflections from the user in the radar field; and
providing radar data based on an analysis of the reflections.
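A minimal sketch, assuming an FMCW front-end, of turning sensed reflections into radar data (here just a range profile via an FFT over one chirp's beat signal); the function name and interface are illustrative assumptions, not the patent's design.

```python
import numpy as np

def provide_radar_data(if_samples):
    """Turn one chirp's complex IF samples into a range profile (radar data)."""
    return np.abs(np.fft.fft(if_samples))  # range bins from beat frequencies
```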
The function adjusting device 2700 may be a vehicle-mounted device in an intelligent cabin scene, a terminal device in an intelligent home scene, or the like.
Embodiments of the present application also provide a computer program product, which when executed on a computer, causes the computer to execute the function adjusting method described in the above embodiments.
Also provided in an embodiment of the present application is a computer-readable storage medium in which a program for signal processing is stored, which, when run on a computer, causes the computer to perform the function adjustment method described in the above embodiment.
The function adjusting device provided by the embodiment of the application may specifically be a chip, and the chip includes: a processing unit, which may be, for example, a processor, and a communication unit, which may be, for example, an input/output interface, a pin, or a circuit. The processing unit may execute the computer-executable instructions stored by the storage unit to cause the chip in the execution device to execute the function adjusting method described in the above embodiments. Optionally, the storage unit is a storage unit in the chip, such as a register or a cache; the storage unit may also be a storage unit located outside the chip, such as a read-only memory (ROM) or another type of static storage device that can store static information and instructions, or a random access memory (RAM).
Specifically, referring to fig. 28, fig. 28 is a schematic structural diagram of a chip provided in the embodiment of the present application, where the chip may be represented as a neural network processor NPU280, and the NPU280 is mounted on a main CPU (Host CPU) as a coprocessor, and the Host CPU allocates tasks. The core portion of the NPU is an arithmetic circuit 2803, and the controller 2804 controls the arithmetic circuit 2803 to extract matrix data in a memory and perform multiplication.
In some implementations, the arithmetic circuit 2803 internally includes a plurality of processing units (PEs). In some implementations, the operational circuitry 2803 is a two-dimensional systolic array. The arithmetic circuit 2803 can also be a one-dimensional systolic array or other electronic circuit capable of performing mathematical operations such as multiplication and addition. In some implementations, the arithmetic circuitry 2803 is a general-purpose matrix processor.
For example, assume that there is an input matrix A, a weight matrix B, and an output matrix C. The arithmetic circuit fetches the data corresponding to matrix B from the weight memory 2802 and buffers it in each PE of the arithmetic circuit. The arithmetic circuit takes the data of matrix A from the input memory 2801, performs matrix operation with matrix B, and stores the partial or final result of the matrix in the accumulator (accumulator) 2808.
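A worked example of that data flow: the weights of B are held by the PEs, columns of A stream in, and partial products accumulate into C. Plain Python stands in for the systolic array purely for illustration.

```python
import numpy as np

A = np.array([[1.0, 2.0], [3.0, 4.0]])  # input matrix from input memory 2801
B = np.array([[5.0, 6.0], [7.0, 8.0]])  # weight matrix from weight memory 2802
C = np.zeros((2, 2))                    # accumulator 2808 contents

for k in range(A.shape[1]):             # stream one column of A / row of B at a time
    C += np.outer(A[:, k], B[k, :])     # accumulate rank-1 partial products

assert np.array_equal(C, A @ B)         # final result equals the matrix product
```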
The unified memory 2806 is used to store input data and output data. The weight data is transferred directly to the weight memory 2802 through a Direct Memory Access Controller (DMAC) 2805. The input data is also carried into the unified memory 2806 by the DMAC.
A Bus Interface Unit (BIU) 2810 is used for the interaction of the AXI bus with the DMAC and the instruction fetch buffer (IFB) 2809. It is used by the instruction fetch memory 2809 to obtain instructions from the external memory, and is also used by the memory unit access controller (DMAC) 2805 to obtain the original data of the input matrix A or the weight matrix B from the external memory.
The DMAC is mainly used to transfer input data in the external memory DDR to the unified memory 2806, to transfer weight data to the weight memory 2802, or to transfer input data to the input memory 2801.
The vector calculation unit 2807 includes a plurality of operation processing units and, if necessary, further processes the output of the arithmetic circuit, such as vector multiplication, vector addition, exponential operation, logarithmic operation, and magnitude comparison. It is mainly used for non-convolutional/fully connected layer computation in the neural network, such as batch normalization, pixel-level summation, and upsampling of a feature plane.
In some implementations, the vector calculation unit 2807 can store the processed output vector to the unified memory 2806. For example, the vector calculation unit 2807 may apply a linear function and/or a nonlinear function to the output of the arithmetic circuit 2803, such as linear interpolation of the feature planes extracted by the convolutional layer, or applying an activation function to a vector of accumulated values to generate activation values. In some implementations, the vector calculation unit 2807 generates normalized values, pixel-level summed values, or both. In some implementations, the processed output vector can be used as an activation input to the arithmetic circuit 2803, for example, for use in subsequent layers of the neural network.
An instruction fetch buffer 2809 connected to the controller 2804 and used to store instructions used by the controller 2804;
the unified memory 2806, the input memory 2801, the weight memory 2802, and the instruction fetch memory 2809 are all on-chip memories, while the external memory is a memory outside the NPU hardware architecture.
The processor mentioned in any of the above embodiments may be a general-purpose central processing unit, a microprocessor, an ASIC, or one or more integrated circuits for controlling the execution of programs related to the steps of the function adjusting method described in the above embodiments.
It should be noted that the above-described embodiments of the apparatus are merely schematic, where the units described as separate parts may or may not be physically separate, and the parts displayed as units may or may not be physical units, may be located in one place, or may be distributed on multiple network units. Some or all of the modules may be selected according to actual needs to achieve the purpose of the solution of the present embodiment. In addition, in the drawings of the embodiments of the apparatus provided in the present application, the connection relationship between the modules indicates that there is a communication connection therebetween, and may be implemented as one or more communication buses or signal lines.
Through the above description of the embodiments, those skilled in the art will clearly understand that the present application can be implemented by software plus the necessary general-purpose hardware, and certainly can also be implemented by special-purpose hardware including application-specific integrated circuits, special-purpose CPUs, special-purpose memories, special-purpose components, and the like. Generally, functions performed by computer programs can easily be implemented by corresponding hardware, and the specific hardware structures used to implement the same function may vary, such as analog circuits, digital circuits, or dedicated circuits. For the present application, however, a software implementation is usually preferable. Based on such understanding, the technical solutions of the present application, or the portions thereof contributing to the prior art, may be embodied in the form of a software product stored in a readable storage medium, such as a floppy disk, a USB flash drive, a removable hard disk, a ROM, a RAM, a magnetic disk, or an optical disk of a computer, and include several instructions for enabling a computer device (which may be a personal computer, a training device, or a network device) to execute the methods described in the embodiments of the present application.
In the above embodiments, the implementation may be wholly or partially realized by software, hardware, firmware, or any combination thereof. When implemented in software, may be implemented in whole or in part in the form of a computer program product.
The computer program product includes one or more computer instructions. When the computer program instructions are loaded and executed on a computer, the procedures or functions described in accordance with the embodiments of the application are generated in whole or in part. The computer may be a general-purpose computer, a special-purpose computer, a computer network, or another programmable device. The computer instructions may be stored in a computer-readable storage medium or transmitted from one computer-readable storage medium to another; for example, the computer instructions may be transmitted from one website, computer, training device, or data center to another website, computer, training device, or data center via wired (e.g., coaxial cable, optical fiber, digital subscriber line (DSL)) or wireless (e.g., infrared, radio, microwave) means. The computer-readable storage medium may be any available medium accessible to a computer, or a data storage device, such as a training device or a data center, that integrates one or more available media. The available medium may be a magnetic medium (e.g., a floppy disk, a hard disk, or a magnetic tape), an optical medium (e.g., a DVD), or a semiconductor medium (e.g., a solid state disk (SSD)).

Claims (12)

1. A radar system, characterized in that the radar system is deployed in a cabin of a vehicle, and the cabin further comprises a main driving position, a secondary driving position, and a steering wheel fixed in front of the main driving position; wherein,
the radar system includes:
a first radar system comprising a first radar integrated circuit, the first radar integrated circuit comprising:
at least one first transmit antenna;
at least one first receiving antenna;
the first radar integrated circuit is located on a side of the steering wheel near the co-driver seat, wherein the steering wheel is in a state of not being rotated by a user.
2. The radar system of claim 1, wherein the at least one first transmit antenna is configured to provide a radar field to at least one of the following areas:
an area in the main driving position close to the secondary driving position; and
an area between the main driving position and the secondary driving position.
3. The radar system of claim 1 or 2, wherein the cabin further comprises a center console;
the radar system further comprises:
a second radar system comprising a second radar integrated circuit, the second radar integrated circuit comprising:
at least one second transmit antenna;
at least one second receiving antenna;
the second radar integrated circuit is located on one side, deviating from the direction of the vehicle head, of the center console.
4. The radar system of claim 3, wherein the at least one second transmit antenna is configured to provide a radar field to at least one of the following areas:
an area in the main driving position close to the secondary driving position;
an area in the secondary driving position close to the main driving position; and
an area between the main driving position and the secondary driving position.
5. The radar system of claim 1 or 2, wherein the cabin further comprises an armrest box fixed in the area between the main driving position and the secondary driving position;
the radar system further includes:
a third radar system comprising a third radar integrated circuit, the third radar integrated circuit comprising:
at least one third transmit antenna;
at least one third receiving antenna;
the third radar integrated circuit is located on one side, facing the center console, of the armrest box.
6. The radar system of claim 5, wherein the at least one third transmit antenna is configured to provide a radar field to at least one of the following areas:
an area in the main driving position close to the secondary driving position;
an area in the secondary driving position close to the main driving position; and
an area between the main driving position and the secondary driving position.
7. A vehicle, characterized by comprising: a radar system located within a cabin of the vehicle; the cabin further comprises a main driving position, a secondary driving position, and a steering wheel fixed in front of the main driving position; wherein,
the radar system includes:
a first radar system comprising a first radar integrated circuit, the first radar integrated circuit comprising:
at least one first transmit antenna;
at least one first receiving antenna;
the first radar integrated circuit is located on a side of the steering wheel near the co-driver seat, wherein the steering wheel is in a state of not being rotated by a user.
8. The vehicle of claim 7, wherein the at least one first transmit antenna is configured to provide a radar field to at least one of the following areas:
an area in the main driving position close to the secondary driving position; and
an area between the main driving position and the secondary driving position.
9. The vehicle of claim 7 or 8, characterized in that the cabin further comprises a center console;
the radar system further comprises:
a second radar system comprising a second radar integrated circuit, the second radar integrated circuit comprising:
at least one second transmit antenna;
at least one second receiving antenna;
the second radar integrated circuit is located on one side, deviating from the direction of the vehicle head, of the center console.
10. The vehicle of claim 9, wherein the at least one second transmit antenna is configured to provide a radar field to at least one of the following areas:
an area in the main driving position close to the secondary driving position;
an area in the secondary driving position close to the main driving position; and
an area between the main driving position and the secondary driving position.
11. The vehicle of claim 7 or 8, wherein the cabin further comprises an armrest box fixed in the area between the main driving position and the secondary driving position;
the radar system further includes:
a third radar system comprising a third radar integrated circuit, the third radar integrated circuit comprising:
at least one third transmit antenna;
at least one third receiving antenna;
the third radar integrated circuit is positioned on one side, facing the center console, of the armrest box.
12. The vehicle of claim 11, wherein the at least one third transmit antenna is configured to provide a radar field to at least one of the following areas:
an area in the main driving position close to the secondary driving position;
an area in the secondary driving position close to the main driving position; and
an area between the main driving position and the secondary driving position.