CN115830715A - Unmanned vehicle control method, device and equipment based on gesture recognition - Google Patents

Unmanned vehicle control method, device and equipment based on gesture recognition

Info

Publication number
CN115830715A
Authority
CN
China
Prior art keywords
gesture
key point
image
control
unmanned vehicle
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202211647098.0A
Other languages
Chinese (zh)
Inventor
刘晨飞
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Tianjin Jinhang Computing Technology Research Institute
Original Assignee
Tianjin Jinhang Computing Technology Research Institute
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Tianjin Jinhang Computing Technology Research Institute filed Critical Tianjin Jinhang Computing Technology Research Institute
Priority to CN202211647098.0A priority Critical patent/CN115830715A/en
Publication of CN115830715A publication Critical patent/CN115830715A/en
Pending legal-status Critical Current

Landscapes

  • User Interface Of Digital Computer (AREA)

Abstract

Embodiments of the application provide an unmanned vehicle control method, apparatus, device and computer-readable storage medium based on gesture recognition. The method comprises: obtaining a gesture image of a user; recognizing the gesture image by means of palm detection, gesture key point extraction and SVM classification to generate a control instruction; and controlling the vehicle based on the control instruction. In this way, the efficiency of controlling the unmanned vehicle can be greatly improved.

Description

Unmanned vehicle control method, device and equipment based on gesture recognition
Technical Field
Embodiments of the present application relate to the field of gesture control, and in particular, to an unmanned vehicle control method, apparatus, device, and computer-readable storage medium based on gesture recognition.
Background
Unmanned vehicles have the characteristics of safety, flexibility and the like. Existing unmanned vehicles are generally commanded through operating equipment, which emits dense electromagnetic signals during use; if those signals are detected and localized, the positions of the unmanned vehicle and its operator are easily exposed.
Gestures are widely used in reconnaissance and similar fields, especially in environments that require concealment or suffer strong noise interference. An unmanned vehicle controller based on gesture recognition can rely on infrared communication, so it neither breaks electromagnetic silence nor is affected by electromagnetic interference, and therefore has broad application scenarios.
Current gesture recognition methods fall mainly into two categories: methods based on wearable equipment, which is expensive, complex to operate, relatively fragile, and inconvenient to use; and traditional convolutional neural network methods, which are not accurate enough, are strongly affected by the environment, recognize slowly, and cannot perform gesture recognition on mobile devices.
Disclosure of Invention
According to an embodiment of the application, an unmanned vehicle control scheme based on gesture recognition is provided.
In a first aspect of the application, an unmanned vehicle control method based on gesture recognition is provided. The method comprises the following steps:
acquiring a gesture image of a user;
recognizing the gesture image in a palm detection, gesture key point extraction and SVM classification mode to generate a control instruction;
and controlling the vehicle based on the control instruction.
Further, before recognizing the gesture image and generating a control instruction by means of palm detection, gesture key point extraction and SVM classification, the method further comprises:
and denoising and graying the gesture image.
Further, the recognizing the gesture image through the palm detection, the gesture key point extraction and the SVM classification, and generating the control instruction comprises:
clipping the gesture image by a palm detection method to obtain a full palm image;
extracting hand key points of the full-palm image to generate a hand key point list; the hand key point list comprises three-dimensional coordinates of key points;
generating a key point feature vector based on the hand key point list;
and inputting the feature vectors of the key points into an SVM multi-class classifier for classification to obtain a control instruction.
Further, the generating a keypoint feature vector based on the hand keypoint list comprises:
normalizing the key point coordinates in the hand key point list;
and generating a key point feature vector based on the normalized key point coordinates.
Further, the SVM multi-class classifier is trained by the following kernel function:
K(x_i, x_j) = exp(−‖x_i − x_j‖² / (2σ²))
where x_j is the kernel function center;
‖x_i − x_j‖ is the distance between vectors x_i and x_j;
σ is the range of action.
Further, the objective function and constraint conditions of the kernel function include:
min_{w,b,ξ} (1/2)‖w‖² + C Σ_{i=1}^{n} ξ(i)
s.t. y_i(wᵀφ(x^(i)) + b) ≥ 1 − ξ(i), i = 1, 2, …, n
where ‖w‖² determines the spacing (margin) between the input vectors and the hyperplane;
ξ is the relaxation factor.
Further, the control commands comprise reverse, stop, left turn, right turn and forward gear; the forward gear is used to control forward speed.
In a second aspect of the present application, an unmanned vehicle control device based on gesture recognition is provided. The device comprises:
the acquisition module is used for acquiring a gesture image of a user;
the generating module is used for identifying the gesture image in a palm detection, gesture key point extraction and SVM classification mode to generate a control instruction;
and the control module is used for controlling the vehicle based on the control instruction.
In a third aspect of the present application, an electronic device is provided. The electronic device includes: a memory having a computer program stored thereon and a processor implementing the method as described above when executing the program.
In a fourth aspect of the present application, a computer-readable storage medium is provided, on which a computer program is stored which, when being executed by a processor, carries out the method as according to the first aspect of the present application.
According to the gesture-recognition-based unmanned vehicle control method of the embodiments of the application, a gesture image of the user is obtained; the gesture image is recognized by means of palm detection, gesture key point extraction and SVM classification to generate a control instruction; and the vehicle is controlled based on the control instruction, so that the efficiency of controlling the unmanned vehicle is greatly improved.
It should be understood that what is described in this summary section is not intended to limit key or critical features of the embodiments of the application, nor is it intended to limit the scope of the application. Other features of the present application will become apparent from the following description.
Drawings
The above and other features, advantages and aspects of embodiments of the present application will become more apparent upon consideration of the following detailed description, taken in conjunction with the accompanying drawings. In the drawings, like or similar reference characters denote like or similar elements, and wherein:
fig. 1 shows a system architecture diagram in accordance with a method provided by an embodiment of the present application.
FIG. 2 illustrates a flow diagram of a method of unmanned vehicle control based on gesture recognition in accordance with an embodiment of the present application;
FIG. 3 shows a flow diagram of gesture recognition according to an embodiment of the present application;
FIG. 4 shows a hand keypoint diagram according to an embodiment of the present application;
FIG. 5 shows a control gesture diagram according to an embodiment of the present application;
FIG. 6 illustrates a block diagram of an unmanned vehicle control device based on gesture recognition according to an embodiment of the present application;
fig. 7 shows a schematic structural diagram of a terminal device or a server suitable for implementing the embodiments of the present application.
Detailed Description
To make the objects, technical solutions and advantages of the embodiments of the present disclosure more clear, the technical solutions of the embodiments of the present disclosure will be described clearly and completely with reference to the drawings in the embodiments of the present disclosure, and it is obvious that the described embodiments are some, but not all embodiments of the present disclosure. All other embodiments, which can be derived by a person skilled in the art from the embodiments disclosed herein without making any creative effort, shall fall within the protection scope of the present disclosure.
In addition, the term "and/or" herein is only one kind of association relationship describing an associated object, and means that there may be three kinds of relationships, for example, a and/or B, which may mean: a exists alone, A and B exist simultaneously, and B exists alone. In addition, the character "/" herein generally indicates that the former and latter related objects are in an "or" relationship.
Fig. 1 illustrates an exemplary system architecture 100 to which embodiments of the gesture recognition based unmanned vehicle control methods or gesture recognition based unmanned vehicle control devices of the present application may be applied.
As shown in fig. 1, the system architecture 100 may include terminal devices 101, 102, 103, a network 104, and a server 105. The network 104 serves as a medium for providing communication links between the terminal devices 101, 102, 103 and the server 105. Network 104 may include various connection types, such as wired, wireless communication links, or fiber optic cables, to name a few.
The user may use the terminal devices 101, 102, 103 to interact with the server 105 via the network 104 to receive or send messages or the like. Various communication client applications, such as a model training application, a video recognition application, a web browser application, social platform software, etc., may be installed on the terminal devices 101, 102, 103.
The terminal devices 101, 102, 103 may be hardware or software. When the terminal devices 101, 102, 103 are hardware, they may be various electronic devices with display screens, including but not limited to smart phones, tablet computers, e-book readers, MP3 (Moving Picture Experts Group Audio Layer III) players, MP4 (Moving Picture Experts Group Audio Layer IV) players, laptop portable computers, desktop computers, and the like. When the terminal devices 101, 102, 103 are software, they can be installed in the electronic devices listed above, implemented either as multiple pieces of software or software modules (e.g., to provide distributed services) or as a single piece of software or software module. No specific limitation is made here.
When the terminals 101, 102, 103 are hardware, a video capture device may also be installed thereon. The video acquisition equipment can be various equipment capable of realizing the function of acquiring video, such as a camera, a sensor and the like. The user may capture video using a video capture device on the terminal 101, 102, 103.
The server 105 may be a server that provides various services, such as a background server that processes data displayed on the terminal devices 101, 102, 103. The background server may perform processing such as analysis on the received data, and may feed back a processing result (e.g., an identification result) to the terminal device.
The server may be hardware or software. When the server is hardware, it may be implemented as a distributed server cluster formed by multiple servers, or may be implemented as a single server. When the server is software, it may be implemented as multiple pieces of software or software modules (e.g., multiple pieces of software or software modules used to provide distributed services), or as a single piece of software or software module. And is not particularly limited herein.
It should be understood that the number of terminal devices, networks, and servers in fig. 1 is merely illustrative. There may be any number of terminal devices, networks, and servers, as desired for implementation. In particular, in the case where the target data does not need to be acquired from a remote place, the above system architecture may not include a network but only a terminal device or a server.
Fig. 2 is a flowchart of an unmanned vehicle control method based on gesture recognition according to an embodiment of the present application. As can be seen from fig. 2, the unmanned vehicle control method based on gesture recognition of the embodiment includes the following steps:
s210, acquiring a gesture image of the user.
In this embodiment, the executing subject of the unmanned vehicle control method based on gesture recognition (for example, the server shown in fig. 1) may acquire the gesture image through a wired or wireless connection.
Further, the executing body may acquire a gesture image transmitted by an electronic device (for example, the terminal device shown in fig. 1) in communication connection with the executing body, or may acquire a gesture image stored locally in advance.
In the present disclosure, the gesture image is typically acquired by an image capture device mounted on the unmanned vehicle.
And S220, recognizing the gesture image through palm detection, gesture key point extraction and SVM classification, and generating a control instruction.
In some embodiments, to ensure the final control effect, the gesture image obtained in step S210 is optimized before recognition:
if the acquired input is a video stream, it is first converted into image frames;
each frame is then cropped to a uniform data size around its center point, after which Gaussian filtering (denoising) and graying are applied, yielding the optimized gesture image used for subsequent recognition. A sketch of this preprocessing follows.
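The application does not fix a toolchain for this step; a minimal sketch, assuming OpenCV for the filtering and graying and an illustrative 256×256 target size (both assumptions, not specified above), might look like this:

```python
import cv2

def preprocess(frame, size=256):
    """Center-crop to a uniform square, denoise with a Gaussian filter, then gray."""
    h, w = frame.shape[:2]
    side = min(h, w)
    top, left = (h - side) // 2, (w - side) // 2
    crop = frame[top:top + side, left:left + side]   # crop around the center point
    crop = cv2.resize(crop, (size, size))            # uniform data size
    crop = cv2.GaussianBlur(crop, (5, 5), 0)         # Gaussian filtering (denoising)
    return cv2.cvtColor(crop, cv2.COLOR_BGR2GRAY)    # graying
```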
In some embodiments, as shown in fig. 3, palm detection with processing such as non-maximum suppression is applied to the gesture image to obtain detection-frame coordinates containing only the palm; the detection frame is rotated and enlarged based on these coordinates to obtain a full-palm detection frame, which is then cropped to obtain a full-palm image;
hand key points are extracted from the full-palm image to generate a hand key point list comprising the three-dimensional coordinates x, y and z of each key point, where z represents depth with the wrist as origin (the smaller the value, the closer to the camera) and x and y are defined by the width and height of the image respectively; the number of hand key points is typically 21, as shown in fig. 4;
the coordinates in the key point list are normalized to [0.0, 1.0] (x and y normalization), and a key point feature vector is generated from the normalized coordinates;
the key point feature vector is input into an SVM multi-class classifier for classification to obtain a control instruction. A sketch of this extraction step is given below.
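The application does not name the palm detector or key point extractor; one hedged illustration uses MediaPipe Hands, which likewise performs palm detection followed by extraction of 21 hand key points whose x and y are already normalized to [0.0, 1.0] by image width and height and whose z is wrist-relative depth:

```python
import mediapipe as mp
import numpy as np

hands = mp.solutions.hands.Hands(static_image_mode=True, max_num_hands=1)

def keypoint_feature_vector(rgb_image):
    """Return a flat 63-dim feature vector (21 key points x 3 coordinates), or None."""
    result = hands.process(rgb_image)              # palm detection + key point extraction
    if not result.multi_hand_landmarks:
        return None                                # no hand found in the image
    lm = result.multi_hand_landmarks[0].landmark   # the 21 hand key points of fig. 4
    pts = np.array([[p.x, p.y, p.z] for p in lm])  # x, y in [0, 1]; z: wrist-relative depth
    return pts.flatten()                           # key point feature vector
```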
In some embodiments, the SVM multi-class classifier is trained with the following kernel function, with the penalty coefficient, kernel parameters and termination conditions adjusted according to the recognition effect until the recognition accuracy reaches the desired level:
K(x_i, x_j) = exp(−‖x_i − x_j‖² / (2σ²))
where x_j is the kernel function center;
‖x_i − x_j‖ is the distance between vectors x_i and x_j;
σ is the range of action, which can be set according to the application scenario.
Further, the kernel support vector machine maps feature vectors into a higher-dimensional space, so that data that is not linearly separable in the original space becomes linearly separable after the mapping. In particular, this kernel function maps the original feature space into an infinite-dimensional space.
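The infinite-dimensional claim can be made concrete for this kernel with a standard series expansion (a known identity, not stated in the application):

K(x_i, x_j) = exp(−‖x_i − x_j‖²/(2σ²)) = e^(−‖x_i‖²/(2σ²)) · e^(−‖x_j‖²/(2σ²)) · Σ_{k=0}^{∞} (1/k!) (x_iᵀx_j / σ²)^k

Since the series contains inner-product terms of every degree k, the implicit feature map φ has components of all polynomial orders, i.e., it is infinite-dimensional.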
In some embodiments, the objective function and constraint conditions for support vector machine optimization are as follows:
min_{w,b,ξ} (1/2)‖w‖² + C Σ_{i=1}^{n} ξ(i)
s.t. y_i(wᵀφ(x^(i)) + b) ≥ 1 − ξ(i), i = 1, 2, …, n
where ‖w‖² determines the spacing (margin) between the input vectors and the hyperplane;
ξ is the relaxation factor;
C is a harmonic coefficient that can be set according to the actual application scenario. The larger C is, the smaller the slack terms ξ(i) become, i.e., the structural risk grows while the empirical risk shrinks, and overfitting becomes likely; conversely, the smaller C is, the lower the complexity of the model and the smaller the structural risk. A training sketch under these settings follows.
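A hedged training sketch, assuming scikit-learn's SVC as the SVM implementation (the library choice, hyperparameter values and random placeholder data are all assumptions): here gamma plays the role of 1/(2σ²) in the kernel above, C is the harmonic (penalty) coefficient, and tol is the termination condition.

```python
import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(0)
X = rng.random((90, 63))          # placeholder for real 63-dim key point feature vectors
y = rng.integers(0, 9, size=90)   # labels for the 9 defined gestures (4 commands + 5 gears)

# RBF-kernel SVM: gamma ~ 1/(2*sigma^2), C the penalty coefficient, tol the stop criterion
clf = SVC(kernel="rbf", C=10.0, gamma="scale", tol=1e-3, probability=True)
clf.fit(X, y)
print(clf.predict(X[:3]))         # predicted gesture classes
```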
And S230, controlling the vehicle based on the control command.
Through step S220, the final gesture is recognized. As shown in fig. 5, the gesture set comprises 4 control commands and 5 speed commands: backward (fist), stop (five fingers extended), left turn, right turn, and 5 forward gears.
Further, the unmanned vehicle is controlled according to the gesture motion (control command).
It should be noted that, in the present disclosure, any gesture other than the 9 defined gestures is invalid, and a control command is generated only when the classification confidence exceeds a threshold (e.g., 0.8), as in the sketch below.
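A minimal sketch of this thresholded mapping, reusing the classifier above (the command names and class ordering are illustrative assumptions; predict_proba requires the classifier to have been trained with probability=True):

```python
import numpy as np

# Assumed class order: the 4 control commands followed by the 5 forward gears
COMMANDS = ["backward", "stop", "left_turn", "right_turn",
            "forward_1", "forward_2", "forward_3", "forward_4", "forward_5"]

def to_command(clf, feature_vector, threshold=0.8):
    """Map a key point feature vector to a control command, or None if not confident."""
    probs = clf.predict_proba(feature_vector.reshape(1, -1))[0]
    best = int(np.argmax(probs))
    # gestures outside the 9 defined ones tend to get low confidence and are rejected
    return COMMANDS[best] if probs[best] > threshold else None
```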
According to the embodiment of the disclosure, the following technical effects are achieved:
the control precision is guaranteed, and meanwhile the efficiency of controlling the unmanned vehicle is greatly improved.
It should be noted that, for simplicity of description, the above-mentioned method embodiments are described as a series of acts or combination of acts, but those skilled in the art will recognize that the present application is not limited by the order of acts described, as some steps may occur in other orders or concurrently depending on the application. Further, those skilled in the art should also appreciate that the embodiments described in the specification are exemplary embodiments and that the acts and modules referred to are not necessarily required in this application.
The above is a description of method embodiments, and the embodiments of the present application are further described below by way of apparatus embodiments.
Fig. 6 shows a block diagram of an unmanned vehicle control apparatus 600 based on gesture recognition according to an embodiment of the present application. As shown in fig. 6, the apparatus 600 includes:
an obtaining module 610, configured to obtain a gesture image of a user;
the generating module 620 is configured to identify the gesture image and generate a control instruction in a palm detection, gesture key point extraction and SVM classification manner;
and a control module 630, configured to control the vehicle based on the control instruction.
Further, the apparatus may also include an obstacle avoidance module:
the obstacle avoidance module can use infrared obstacle avoidance. Its working principle is as follows: an infrared emitting tube emits infrared light in the detection direction; if an obstacle lies in that direction, the infrared light is reflected back and received by a receiving tube, from which it is determined that an obstacle is ahead, and this is reported to the controller; the drive module then stops the unmanned vehicle and the obstacle distance is detected, so that autonomous obstacle avoidance is performed. The drive module controls the rotation of the unmanned vehicle using PWM. A hedged sketch is given below.
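One possible realization of such a module, assuming a Raspberry Pi style controller with RPi.GPIO, an IR receiver that pulls its output low on reflection, and illustrative pin numbers (all assumptions beyond what the application states):

```python
import RPi.GPIO as GPIO

IR_PIN, MOTOR_PIN = 17, 18            # assumed wiring; adapt to the actual vehicle

GPIO.setmode(GPIO.BCM)
GPIO.setup(IR_PIN, GPIO.IN)           # output of the infrared receiving tube
GPIO.setup(MOTOR_PIN, GPIO.OUT)
pwm = GPIO.PWM(MOTOR_PIN, 1000)       # drive module: PWM at an assumed 1 kHz
pwm.start(0)

def obstacle_ahead():
    """A low level means the emitted infrared was reflected, i.e. an obstacle is ahead."""
    return GPIO.input(IR_PIN) == GPIO.LOW

def drive(duty_cycle):
    """Stop when an obstacle is detected, otherwise drive at the requested duty cycle."""
    pwm.ChangeDutyCycle(0 if obstacle_ahead() else duty_cycle)
```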
It can be clearly understood by those skilled in the art that, for convenience and brevity of description, the specific working process of the described module may refer to the corresponding process in the foregoing method embodiment, and is not described herein again.
Fig. 7 shows a schematic structural diagram of a terminal device or a server suitable for implementing the embodiments of the present application.
As shown in fig. 7, the terminal device or server 700 includes a Central Processing Unit (CPU) 701, which can perform various appropriate actions and processes in accordance with a program stored in a Read Only Memory (ROM) 702 or a program loaded from a storage section 708 into a Random Access Memory (RAM) 703. In the RAM 703, various programs and data necessary for the operation of the system 700 are also stored. The CPU 701, the ROM 702, and the RAM 703 are connected to each other via a bus 704. An input/output (I/O) interface 705 is also connected to bus 704.
The following components are connected to the I/O interface 705: an input portion 706 including a keyboard, a mouse, and the like; an output section 707 including a display such as a Cathode Ray Tube (CRT), a Liquid Crystal Display (LCD), and the like, and a speaker; a storage section 708 including a hard disk and the like; and a communication section 709 including a network interface card such as a LAN card, a modem, or the like. The communication section 709 performs communication processing via a network such as the internet. A drive 710 is also connected to the I/O interface 705 as needed. A removable medium 711 such as a magnetic disk, an optical disk, a magneto-optical disk, a semiconductor memory, or the like is mounted on the drive 710 as necessary, so that a computer program read out therefrom is mounted into the storage section 708 as necessary.
In particular, the above method flow steps may be implemented as a computer software program according to embodiments of the present application. For example, embodiments of the present application include a computer program product comprising a computer program embodied on a machine-readable medium, the computer program comprising program code for performing the method illustrated in the flow chart. In such an embodiment, the computer program can be downloaded and installed from a network through the communication section 709, and/or installed from the removable medium 711. The computer program executes the above-described functions defined in the system of the present application when executed by the Central Processing Unit (CPU) 701.
It should be noted that the computer readable medium shown in the present application may be a computer readable signal medium or a computer readable storage medium or any combination of the two. A computer readable storage medium may be, for example, but not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any combination of the foregoing. More specific examples of the computer readable storage medium may include, but are not limited to: an electrical connection having one or more wires, a portable computer diskette, a hard disk, a Random Access Memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing. In the present application, a computer readable storage medium may be any tangible medium that can contain, or store a program for use by or in connection with an instruction execution system, apparatus, or device. In this application, however, a computer readable signal medium may include a propagated data signal with computer readable program code embodied therein, for example, in baseband or as part of a carrier wave. Such a propagated data signal may take many forms, including, but not limited to, electro-magnetic, optical, or any suitable combination thereof. A computer readable signal medium may be any computer readable medium that is not a computer readable storage medium and that can communicate, propagate, or transport a program for use by or in connection with an instruction execution system, apparatus, or device. Program code embodied on a computer readable medium may be transmitted using any appropriate medium, including but not limited to: wireless, wire, fiber optic cable, RF, etc., or any suitable combination of the foregoing.
The flowchart and block diagrams in the figures illustrate the architecture, functionality, and operation of possible implementations of systems, methods and computer program products according to various embodiments of the present application. In this regard, each block in the flowchart or block diagrams may represent a module, segment, or portion of code, which comprises one or more executable instructions for implementing the specified logical function(s). It should also be noted that, in some alternative implementations, the functions noted in the block may occur out of the order noted in the figures. For example, two blocks shown in succession may, in fact, be executed substantially concurrently, or the blocks may sometimes be executed in the reverse order, depending upon the functionality involved. It will also be noted that each block of the block diagrams and/or flowchart illustration, and combinations of blocks in the block diagrams and/or flowchart illustration, can be implemented by special purpose hardware-based systems which perform the specified functions or acts, or combinations of special purpose hardware and computer instructions.
The units or modules described in the embodiments of the present application may be implemented by software or hardware. The described units or modules may also be provided in a processor. Wherein the designation of a unit or module does not in some way constitute a limitation of the unit or module itself.
As another aspect, the present application also provides a computer-readable storage medium, which may be included in the electronic device described in the above embodiments; or may be separate and not incorporated into the electronic device. The computer readable storage medium stores one or more programs that, when executed by one or more processors, perform the methods described herein.
The above description is only a preferred embodiment of the application and is illustrative of the principles of the technology employed. It will be appreciated by those skilled in the art that the scope of the application is not limited to technical solutions formed by the particular combination of the above features, but also covers other technical solutions formed by any combination of the above features or their equivalents without departing from the spirit of the application, for example, technical solutions formed by substituting the above features with (but not limited to) technical features of similar function disclosed in the present application.

Claims (10)

1. An unmanned vehicle control method based on gesture recognition is characterized by comprising the following steps:
acquiring a gesture image of a user;
recognizing the gesture image in a palm detection, gesture key point extraction and SVM classification mode to generate a control instruction;
and controlling the vehicle based on the control instruction.
2. The method according to claim 1, wherein before the recognizing the gesture image by means of palm detection, gesture key point extraction and SVM classification, and generating a control command, the method further comprises:
and denoising and graying the gesture image.
3. The method according to claim 2, wherein the recognizing the gesture image through palm detection, gesture key point extraction and SVM classification, and generating the control command comprises:
clipping the gesture image by a palm detection method to obtain a full palm image;
extracting hand key points of the full-palm image to generate a hand key point list; the hand key point list comprises three-dimensional coordinates of key points;
generating a key point feature vector based on the hand key point list;
and inputting the feature vectors of the key points into an SVM multi-class classifier for classification to obtain a control instruction.
4. The method of claim 3, wherein generating a keypoint feature vector based on the hand keypoint list comprises:
normalizing the key point coordinates in the hand key point list;
based on the normalized keypoint coordinates, keypoint feature vectors are generated.
5. The method of claim 4, wherein the SVM multi-class classifier is trained by the following kernel function:
K(x_i, x_j) = exp(−‖x_i − x_j‖² / (2σ²))
where x_j is the kernel function center;
‖x_i − x_j‖ is the distance between vectors x_i and x_j;
σ is the range of action.
6. The method of claim 5, wherein the objective function and constraint conditions of the kernel function comprise:
min_{w,b,ξ} (1/2)‖w‖² + C Σ_{i=1}^{n} ξ(i)
s.t. y_i(wᵀφ(x^(i)) + b) ≥ 1 − ξ(i), i = 1, 2, …, n
where ‖w‖² determines the spacing (margin) between the input vectors and the hyperplane;
ξ is the relaxation factor;
C is a harmonic coefficient.
7. The method of claim 6, wherein the control commands include reverse, stop, left, right, and forward gears; the forward gear is used to control forward speed.
8. An unmanned vehicle control device based on gesture recognition is characterized by comprising:
the acquisition module is used for acquiring a gesture image of a user;
the generating module is used for identifying the gesture image in a palm detection, gesture key point extraction and SVM classification mode to generate a control instruction;
and the control module is used for controlling the vehicle based on the control instruction.
9. An electronic device comprising a memory and a processor, the memory having stored thereon a computer program, wherein the processor, when executing the computer program, implements the method of any one of claims 1-7.
10. A computer-readable storage medium, on which a computer program is stored, which, when being executed by a processor, carries out the method according to any one of claims 1 to 7.
CN202211647098.0A 2022-12-21 2022-12-21 Unmanned vehicle control method, device and equipment based on gesture recognition Pending CN115830715A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202211647098.0A CN115830715A (en) 2022-12-21 2022-12-21 Unmanned vehicle control method, device and equipment based on gesture recognition

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202211647098.0A CN115830715A (en) 2022-12-21 2022-12-21 Unmanned vehicle control method, device and equipment based on gesture recognition

Publications (1)

Publication Number Publication Date
CN115830715A true CN115830715A (en) 2023-03-21

Family

ID=85517301

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202211647098.0A Pending CN115830715A (en) 2022-12-21 2022-12-21 Unmanned vehicle control method, device and equipment based on gesture recognition

Country Status (1)

Country Link
CN (1) CN115830715A (en)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN117130374A (en) * 2023-10-26 2023-11-28 锐驰激光(深圳)有限公司 Control method of golf cart, golf cart and storage medium
CN117130374B (en) * 2023-10-26 2024-03-15 锐驰激光(深圳)有限公司 Control method of golf cart, golf cart and storage medium

Similar Documents

Publication Publication Date Title
US11798271B2 (en) Depth and motion estimations in machine learning environments
CN108846440B (en) Image processing method and device, computer readable medium and electronic equipment
US10902245B2 (en) Method and apparatus for facial recognition
CN108509915B (en) Method and device for generating face recognition model
CN108898086B (en) Video image processing method and device, computer readable medium and electronic equipment
CN108229419B (en) Method and apparatus for clustering images
CN108230346B (en) Method and device for segmenting semantic features of image and electronic equipment
KR102305023B1 (en) Key frame scheduling method and apparatus, electronic device, program and medium
CN109711508B (en) Image processing method and device
CN113971751A (en) Training feature extraction model, and method and device for detecting similar images
CN108491812B (en) Method and device for generating face recognition model
CN110163171B (en) Method and device for recognizing human face attributes
EP4024270A1 (en) Gesture recognition method, electronic device, computer-readable storage medium, and chip
CN112509047A (en) Image-based pose determination method and device, storage medium and electronic equipment
CN110610191A (en) Elevator floor identification method and device and terminal equipment
CN113378712A (en) Training method of object detection model, image detection method and device thereof
CN111192312B (en) Depth image acquisition method, device, equipment and medium based on deep learning
CN115830715A (en) Unmanned vehicle control method, device and equipment based on gesture recognition
CN113643260A (en) Method, apparatus, device, medium and product for detecting image quality
CN114494747A (en) Model training method, image processing method, device, electronic device and medium
CN113409340A (en) Semantic segmentation model training method, semantic segmentation device and electronic equipment
CN111783889B (en) Image recognition method and device, electronic equipment and computer readable medium
CN113642493B (en) Gesture recognition method, device, equipment and medium
CN116152702A (en) Point cloud label acquisition method and device, electronic equipment and automatic driving vehicle
CN110263743B (en) Method and device for recognizing images

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination