CN117765617B - Instruction generation method, system and storage medium based on gesture behaviors of user - Google Patents


Info

Publication number
CN117765617B
CN117765617B (application CN202410194574.0A)
Authority
CN
China
Prior art keywords
user, information, gesture, acquiring, generating
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202410194574.0A
Other languages
Chinese (zh)
Other versions
CN117765617A (en)
Inventor
郭锦炜
林丽玲
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Shenzhen Tengxin Baina Technology Co ltd
Original Assignee
Shenzhen Tengxin Baina Technology Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Shenzhen Tengxin Baina Technology Co ltd filed Critical Shenzhen Tengxin Baina Technology Co ltd
Priority to CN202410194574.0A
Publication of CN117765617A
Application granted
Publication of CN117765617B
Active legal status
Anticipated expiration legal status

Landscapes

  • User Interface Of Digital Computer (AREA)

Abstract

The application discloses an instruction generation method, system and storage medium based on gesture behaviors of a user, wherein the method comprises the following steps: acquiring target image information through a preset device when a gesture recognition instruction is detected; acquiring gesture information according to a contour recognition mode; acquiring user identification information, and judging whether the current user is a type of user; if yes, acquiring target user profile information from pre-stored user profile information according to the user identification information; generating a first user gesture instruction according to the target user profile information and the gesture information; if not, determining a gesture generation strategy in a preset default data template and generating a second user gesture instruction by combining the gesture information. By judging the user type and executing different gesture recognition operations, the accuracy of generating corresponding instructions according to gesture recognition is improved.

Description

Instruction generation method, system and storage medium based on gesture behaviors of user
Technical Field
The application relates to the field of intelligent recognition, in particular to a command generation method, a system and a storage medium based on gesture behaviors of a user.
Background
Modern consumer and industrial electronics, particularly display devices such as networking-enabled displays, touch screen displays, curved displays, and tablet devices, are providing an increasingly high level of functionality to support modern life, including facilitating interactions with other electronic devices, appliances, and users. Research and development in the prior art may take many different directions. As display devices develop and give users more capability, new and old paradigms begin to take advantage of this new device space, and many solutions exploit this new device capability to communicate with users and other devices.
However, user interactions with such display devices are often inaccurate or imprecise, and there remains a need for an electronic system with a gesture calibration mechanism suitable for today's interactions between users and devices.
Therefore, how to improve the accuracy of generating the instruction according to the gesture in the actual use process is a technical problem to be solved.
Disclosure of Invention
In order to improve the accuracy of generating instructions according to gestures in the actual use process, the application provides an instruction generating method, an instruction generating system and a storage medium based on gesture behaviors of a user.
In a first aspect, the method for generating the instruction based on the gesture behavior of the user provided by the application adopts the following technical scheme:
an instruction generation method based on gesture behaviors of a user comprises the following steps:
when a gesture recognition instruction is detected, acquiring target image information through a preset device;
acquiring gesture information from the target image information according to a contour recognition mode;
acquiring user identification information, and judging whether the current user is a type of user according to the user identification information;
If yes, acquiring target user profile information from pre-stored user profile information according to the user identification information;
generating a first user gesture instruction according to the target user profile information and the gesture information;
if not, determining a gesture generation strategy in a preset default data template and generating a second user gesture instruction by combining the gesture information.
Optionally, the step of acquiring gesture information from the target image information according to the contour recognition mode includes:
denoising the target image information to obtain primary image information;
acquiring secondary image information from the primary image information in a contour recognition mode;
Judging whether the secondary image information is valid or not according to a preset validity condition;
if yes, gesture information is acquired from the secondary image information.
Optionally, if so, the step of acquiring the target user profile information from the pre-stored user profile information according to the user identification information includes:
If the current user is judged to be the user of the type, pre-stored user file information is obtained;
generating traversing conditions according to the user identification information, traversing in the pre-stored user file information and acquiring traversing results;
judging, according to the traversal result, whether it belongs to the first-class traversal result or the second-class traversal result;
if it belongs to the first-class traversal result, determining the target user profile information according to the first-class traversal result;
and if it belongs to the second-class traversal result, determining the target user profile information according to the profile creation time in the second-class traversal result.
Optionally, the step of generating the first user gesture instruction according to the target user profile information and the gesture information includes:
generating a target user portrait according to the target user profile information set;
determining a gesture association set in the target user portrait;
and generating a first user gesture instruction according to the gesture association set and the gesture information.
Optionally, the step of generating the target user portrait according to the target user profile information set includes:
acquiring user type data tags and user preference information from the target user profile information set;
determining target user tag information in the user type data tags according to a correlation condition;
generating a preliminary user portrait according to the target user tag information;
and generating a target user portrait according to the user preference information in combination with the preliminary user portrait.
Optionally, if not, determining a gesture generating policy in a preset default data template and generating a second user gesture instruction in combination with the gesture information, including:
if the current user is judged not to be a type of user, acquiring a preset default data template;
acquiring a current identification scene, and updating the preset default data template according to the current identification scene;
and acquiring the updated default data template and generating a second user gesture instruction according to the gesture information.
Optionally, the step of acquiring the current identification scene and updating the preset default data template according to the current identification scene includes:
acquiring the current identification scene, and judging whether the current identification scene is a special scene;
if not, the preset default data template does not need to be updated;
and if yes, acquiring a scene offset factor from the current identification scene, and updating the preset default data template according to the scene offset factor.
In a second aspect, the present application provides a command generating system based on gesture behaviors of a user, the command generating system based on gesture behaviors of a user comprising:
The information acquisition module is used for acquiring target image information through a preset device when the gesture recognition instruction is detected;
the gesture acquisition module is used for acquiring gesture information from the target image information according to a contour recognition mode;
The judging module is used for acquiring the user identification information and judging whether the current user is a type of user or not according to the user identification information;
The profile information module is used for, if yes, acquiring target user profile information from pre-stored user profile information according to the user identification information;
The first user gesture instruction module is used for generating a first user gesture instruction according to the target user profile information and the gesture information;
And the second user gesture instruction module is used for, if not, determining a gesture generation strategy in a preset default data template and generating a second user gesture instruction in combination with the gesture information.
In a third aspect, the present application provides a computer apparatus, the apparatus comprising: a memory and a processor, wherein the processor, when executing the computer instructions stored in the memory, performs the method described above.
In a fourth aspect, the application provides a computer readable storage medium comprising instructions which, when run on a computer, cause the computer to perform a method as described above.
In summary, the application comprises the following beneficial technical effects:
When a gesture recognition instruction is detected, acquiring target image information through a preset device; acquiring gesture information according to a contour recognition mode; acquiring user identification information, and judging whether the current user is a type of user; if yes, acquiring target user profile information from pre-stored user profile information according to the user identification information; generating a first user gesture instruction according to the target user profile information and the gesture information; if not, determining a gesture generation strategy in a preset default data template and generating a second user gesture instruction by combining the gesture information. By judging the user type and executing different gesture recognition operations, the accuracy of generating corresponding instructions according to gesture recognition is improved.
Drawings
FIG. 1 is a schematic diagram of a computer device in a hardware operating environment according to an embodiment of the present application.
FIG. 2 is a flowchart of a first embodiment of a method for generating commands based on gesture actions of a user according to the present application.
FIG. 3 is a block diagram of a first embodiment of an instruction generation system based on user gesture behavior of the present application.
Detailed Description
The present application will be described in further detail below with reference to the drawings and examples, in order to make the objects, technical solutions and advantages of the present application more apparent. It should be understood that the specific embodiments described herein are for purposes of illustration only and are not intended to limit the scope of the application.
Referring to fig. 1, fig. 1 is a schematic diagram of a computer device structure of a hardware running environment according to an embodiment of the present application.
As shown in fig. 1, the computer device may include: a processor 1001, such as a central processing unit (CPU), a communication bus 1002, a user interface 1003, a network interface 1004, and a memory 1005. The communication bus 1002 is used to implement connection communication among these components. The user interface 1003 may include a display (Display) and an input unit such as a keyboard (Keyboard), and optionally may further include a standard wired interface and a wireless interface. The network interface 1004 may optionally include a standard wired interface and a wireless interface (e.g., a wireless fidelity (Wi-Fi) interface). The memory 1005 may be a high-speed random access memory (Random Access Memory, RAM) or a stable non-volatile memory (NVM), such as a disk memory. The memory 1005 may also optionally be a storage device separate from the processor 1001.
Those skilled in the art will appreciate that the architecture shown in fig. 1 is not limiting of a computer device and may include more or fewer components than shown, or may combine certain components, or a different arrangement of components.
As shown in fig. 1, an operating system, a network communication module, a user interface module, and an instruction generation program based on gesture behaviors of a user may be included in the memory 1005 as one storage medium.
In the computer device shown in fig. 1, the network interface 1004 is mainly used for data communication with a network server; the user interface 1003 is mainly used for data interaction with a user; the processor 1001 and the memory 1005 in the present application may be provided in a computer device, where the computer device invokes, through the processor 1001, an instruction generating program based on user gesture behaviors stored in the memory 1005, and executes the instruction generating method based on user gesture behaviors provided in the embodiment of the present application.
The embodiment of the application provides a command generation method based on gesture behaviors of a user, and referring to fig. 2, fig. 2 is a flow chart of a first embodiment of the command generation method based on gesture behaviors of the user.
In this embodiment, the instruction generating method based on gesture behaviors of a user includes the following steps:
Step S10: and when the gesture recognition instruction is detected, acquiring target image information through a preset device.
It should be noted that, in computer science, gesture recognition is the problem of recognizing human gestures through mathematical algorithms. Gestures may originate from any bodily movement or state, but generally originate from the face or hands. The user can use simple gestures to control or interact with a device without touching it, so that the computer understands human behavior. The core technologies are gesture segmentation, gesture analysis and gesture recognition. Current focuses in the field include emotion recognition from the face and gesture recognition; recognition of posture, gait and human behavior is also a subject of gesture recognition technology. Gesture recognition can be seen as a way for computing machinery to interpret human body language, building a richer bridge between machines and people than the original text user interface or even the GUI (graphical user interface).
In a specific implementation, the gesture recognition instruction may be a recognition instruction manually input by a background manager, or may be generated automatically from a task received by the system.
It should be noted that, the preset device in acquiring the target image information by the preset device may be an image sensor or other devices having the same image acquisition function.
It can be understood that, in this embodiment, acquiring the target image information through the preset device means taking the image information captured by the device as the target image information. Alternatively, video information may be acquired, and the captured video stream may be split into image frames to obtain the target image information.
In a specific implementation, when the gesture recognition instruction is detected, the image information acquisition device, that is, the preset device in the embodiment, is determined to be started according to the gesture recognition instruction. The target image information is acquired after the image information acquisition device is started.
Step S20: and acquiring gesture information from the target image information according to the contour recognition mode.
In this embodiment, profile information in the target image information is obtained by means of profile detection, and gesture information is identified according to the profile information.
It will be appreciated that acquiring gesture information from the target image information according to the contour recognition mode distinguishes valid gesture information from invalid gesture information in the image, wherein validity may be determined from image sharpness and gesture integrity.
Further, in order to reasonably acquire gesture information, the step of acquiring the gesture information from the target image information according to the contour recognition mode includes: denoising the target image information to obtain primary image information; acquiring secondary image information from the primary image information in a contour recognition mode; judging whether the secondary image information is valid or not according to a preset validity condition; if yes, gesture information is obtained from the secondary image information.
It should be noted that, if the secondary image information is determined to be invalid according to the preset validity condition, the invalid determination result is sent to the preset sensing port to continuously perform the image acquisition action.
It is understood that acquiring gesture information from the secondary image information in this embodiment means that matching is performed in the processed secondary image information according to gesture information matching rules, wherein the gesture information matching rules include recognition rules for gesture contours and hand-shape matching rules.
It should be noted that image noise reduction (Image Denoising) is a term of art in image processing, referring to the process of reducing noise in a digital image. This embodiment does not limit the specific noise reduction method. For example, a median filter may be used to eliminate isolated noise points: the basic principle is to replace the value of a point in a digital image or digital sequence with the median of the values in a neighbourhood of that point, so that pixels whose gray values differ greatly from their surroundings are changed to values close to the surrounding pixels. The median filter is therefore very effective for filtering salt-and-pepper noise, and it can remove noise while protecting the edges of the image, giving a satisfactory restoration effect. However, the median filter is not suitable for images with much detail, particularly many points, lines and sharp corners.
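The median filtering described above can be sketched in a few lines of pure Python. This is a minimal illustration only — the patent does not prescribe an implementation, and the function name and the 3×3 window size are assumptions:

```python
from statistics import median

def median_filter(image, k=3):
    """Apply a k x k median filter to a 2D grayscale image (list of lists).

    Each pixel is replaced by the median of its neighbourhood, which
    suppresses isolated salt-and-pepper noise while preserving edges.
    Border pixels use an edge-replicated (clamped) neighbourhood.
    """
    h, w = len(image), len(image[0])
    r = k // 2
    out = [[0] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            window = [
                image[min(max(y + dy, 0), h - 1)][min(max(x + dx, 0), w - 1)]
                for dy in range(-r, r + 1)
                for dx in range(-r, r + 1)
            ]
            out[y][x] = median(window)
    return out
```

An isolated 255-valued spike surrounded by 10-valued pixels is replaced by 10, exactly the salt-and-pepper suppression behaviour the paragraph describes.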
Step S30: and acquiring user identification information, and judging whether the current user is a type of user according to the user identification information.
In a specific implementation, whether the current user is a type of user is determined according to the user identification information; in this embodiment, this means determining whether the current user has a history record that can be matched in a preset database. If not, the current user is judged not to be a type of user.
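The user-type judgment described above reduces to a lookup against the pre-stored history. The following sketch assumes (purely for illustration) that the database is a mapping from user id to past interaction records:

```python
def is_known_user(user_id, history_db):
    """Return True when the user has a matchable history record.

    history_db is assumed to map a user id to a list of past interaction
    records; a user with at least one record is treated as a known
    ("type of") user, otherwise the default-template branch is taken.
    """
    return bool(history_db.get(user_id))
```

The caller would branch to step S40 when this returns True and to step S60 otherwise.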
Step S40: if yes, acquiring target user profile information from the pre-stored user profile information according to the user identification information.
Further, in order to promote the reasonability of obtaining the target user profile information, the step of, if yes, obtaining the target user profile information from the pre-stored user profile information according to the user identification information includes: if the current user is judged to be a type of user, acquiring pre-stored user profile information; generating traversal conditions according to the user identification information, traversing the pre-stored user profile information and obtaining a traversal result; judging, according to the traversal result, whether it belongs to the first-class traversal result or the second-class traversal result; if it belongs to the first-class traversal result, determining the target user profile information according to the first-class traversal result; and if it belongs to the second-class traversal result, determining the target user profile information according to the profile creation time in the second-class traversal result.
In a specific implementation, a first-class traversal result means that only one target object exists in the traversal result, while a second-class traversal result means that more than one target object appears. Therefore, after a second-class traversal result is obtained, the target user profile information is determined according to the time point at which each profile was created, and the target object closest to the current time point is selected as the target user profile information.
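The selection between first-class and second-class traversal results can be sketched as follows; the profile structure and its 'created' timestamp field are illustrative assumptions, not specified by the patent:

```python
def select_profile(matches):
    """Pick the target profile from the traversal result.

    A single match (first-class result) is returned directly; multiple
    matches (second-class result) are resolved by choosing the profile
    whose creation time is closest to the present, i.e. the most
    recently created one. Each match is assumed to be a dict carrying
    a 'created' timestamp.
    """
    if not matches:
        return None
    if len(matches) == 1:
        return matches[0]          # first-class: the only target object
    return max(matches, key=lambda p: p["created"])  # second-class
```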
Step S50: and generating a first user gesture instruction according to the target user profile information set and the gesture information.
Further, in order to realize the generation of the first user gesture instruction, the step of generating the first user gesture instruction according to the target user profile information set and the gesture information includes: generating a target user portrait according to the target user profile information set; determining a gesture association set in the target user portrait; and generating a first user gesture instruction according to the gesture association set and the gesture information.
It should be noted that, in this embodiment, the target user portrait refers to a label set containing a certain number of labels, and by establishing the target user portrait, information in the corresponding target user archive information set can be summarized, and by establishing the portrait, waste of computer resources in the subsequent information mobilizing process is avoided.
In a specific implementation, determining the gesture association set according to the target user portrait refers to acquiring corresponding tag content in the tag according to the gesture association tag in the target user portrait, and the tag content in the gesture association tag specifically refers to a mapping relation table about gesture action forms and corresponding instructions. And establishing a gesture association set by acquiring the mapping relation table.
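The mapping-table lookup described above might look like the following minimal sketch. The 'gesture_map' label name is a hypothetical stand-in for the gesture association tag in the target user portrait:

```python
def generate_instruction(gesture, portrait):
    """Look up the instruction mapped to a recognised gesture form.

    The portrait is assumed to carry a 'gesture_map' label whose content
    is the gesture-form -> instruction mapping table; an unmapped gesture
    yields None so the caller can fall back to the default strategy.
    """
    return portrait.get("gesture_map", {}).get(gesture)
```

Given a portrait whose mapping table associates "swipe_left" with a previous-page instruction, recognising that gesture directly produces the first user gesture instruction.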
The step of generating the target user portrait according to the target user profile information set includes: acquiring user type data tags and user preference information from the target user profile information set; determining target user tag information in the user type data tags according to a correlation condition; generating a preliminary user portrait according to the target user tag information; and generating a target user portrait according to the user preference information in combination with the preliminary user portrait.
It should be noted that generating the target user portrait according to the user preference information in combination with the preliminary user portrait means using each type of information content in the preference information as an object to be filled, and filling each object into the corresponding position in the preliminary user portrait according to its type information.
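The filling of preference content into the preliminary portrait can be sketched as a dictionary merge. All field names here are illustrative assumptions, not taken from the patent:

```python
def build_target_portrait(preliminary, preferences):
    """Merge user preference entries into the preliminary portrait.

    Each preference category is treated as an object to be filled; it is
    placed at the slot of the same category in the preliminary portrait,
    yielding the target user portrait. The input portrait is not mutated.
    """
    portrait = dict(preliminary)           # keep the preliminary labels
    for category, content in preferences.items():
        portrait[category] = content       # fill the slot for that category
    return portrait
```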
Step S60: if not, determining a gesture generation strategy in a preset default data template and generating a second user gesture instruction by combining the gesture information.
Further, in order to reasonably generate the second user gesture instruction, the step of, if not, determining a gesture generation strategy in a preset default data template and generating the second user gesture instruction by combining gesture information includes: if the current user is judged not to be a type of user according to the user identification information, acquiring a preset default data template; acquiring a current identification scene, and updating the preset default data template according to the current identification scene; and acquiring the updated default data template and generating a second user gesture instruction according to the gesture information.
It should be noted that the current identification scene refers to the current operating environment of the execution subject of this embodiment, and different identification scenes may be set according to specific use cases, for example: a school identification scene, an office-building identification scene, or a hospital identification scene. Each different scene and its corresponding scene factors are updated in advance by presetting scene elements. After the identification scene is determined, the preset scene elements are called to fill and update the preset default data template.
In a specific implementation, the step of acquiring the current identification scene and updating the preset default data template according to the current identification scene includes: acquiring the current identification scene, and judging whether the current identification scene is a special scene; if not, the preset default data template does not need to be updated; if yes, acquiring a scene offset factor from the current identification scene, and updating the preset default data template according to the scene offset factor.
The scene offset factor in this embodiment refers to an element that distinguishes the current scene from the general scene factors in the default scene mode; this distinguishing element serves as the scene offset factor. The scene offset factor is determined so that the corresponding portion of the default data template can be adjusted to the actual application scene, making gesture recognition in the current scene more reasonable and effective.
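A minimal sketch of applying a scene offset factor to the default data template follows. The idea that the factor scales a matching threshold is an assumption for illustration only, since the patent leaves the factor's exact use unspecified:

```python
def update_default_template(template, scene):
    """Apply a scene offset factor to the preset default data template.

    Ordinary scenes leave the template untouched; a special scene
    carries an offset factor that scales the gesture-matching threshold
    so recognition suits the current environment. All field names here
    are hypothetical. The input template is not mutated.
    """
    if not scene.get("special"):
        return template                    # no update needed
    factor = scene["offset_factor"]
    updated = dict(template)
    updated["match_threshold"] = template["match_threshold"] * factor
    return updated
```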
In this embodiment, when a gesture recognition instruction is detected, target image information is acquired through a preset device; gesture information is acquired according to a contour recognition mode; user identification information is acquired, and whether the current user is a type of user is judged; if yes, target user profile information is acquired from pre-stored user profile information according to the user identification information; a first user gesture instruction is generated according to the target user profile information and the gesture information; if not, a gesture generation strategy is determined in a preset default data template and a second user gesture instruction is generated by combining the gesture information. By judging the user type and executing different gesture recognition operations, the accuracy of generating corresponding instructions according to gesture recognition is improved.
In addition, the embodiment of the application also provides a computer readable storage medium, wherein the storage medium stores a program generated based on the instruction of the gesture behavior of the user, and the program generated based on the instruction of the gesture behavior of the user realizes the steps of the method for generating the instruction based on the gesture behavior of the user when being executed by a processor.
Referring to fig. 3, fig. 3 is a block diagram illustrating a first embodiment of an instruction generating system according to the present application based on gesture behaviors of a user.
As shown in fig. 3, an instruction generating system based on gesture behaviors of a user according to an embodiment of the present application includes:
An information acquisition module 10, configured to acquire target image information through a preset device when a gesture recognition instruction is detected;
The gesture obtaining module 20 is configured to obtain gesture information from the target image information according to a contour recognition mode;
The judging module 30 is configured to obtain user identification information, and judge whether the current user is a type of user according to the user identification information;
a profile information module 40, configured to obtain target user profile information from the pre-stored user profile information according to the user identification information if yes;
a first user gesture command module 50, configured to generate a first user gesture command according to the target user profile information set and the gesture information;
and the second user gesture instruction module 60 is configured to, if not, determine a gesture generation policy in a preset default data template and generate a second user gesture instruction in combination with the gesture information.
It should be understood that the foregoing is illustrative only and is not limiting, and that in specific applications, those skilled in the art may set the application as desired, and the application is not limited thereto.
In this embodiment, when a gesture recognition instruction is detected, target image information is acquired through a preset device; gesture information is acquired according to a contour recognition mode; user identification information is acquired, and whether the current user is a type of user is judged; if yes, target user profile information is acquired from pre-stored user profile information according to the user identification information; a first user gesture instruction is generated according to the target user profile information and the gesture information; if not, a gesture generation strategy is determined in a preset default data template and a second user gesture instruction is generated by combining the gesture information. By judging the user type and executing different gesture recognition operations, the accuracy of generating corresponding instructions according to gesture recognition is improved.
In an embodiment, the gesture obtaining module 20 is further configured to denoise the target image information to obtain primary image information; acquiring secondary image information from the primary image information in a contour recognition mode; judging whether the secondary image information is valid or not according to a preset validity condition; if yes, gesture information is obtained from the secondary image information.
In an embodiment, the profile information module 40 is further configured to obtain pre-stored user profile information if the current user is determined to be a first-type user; generate a traversal condition according to the user identification information, traverse the pre-stored user profile information, and obtain a traversal result; determine whether the traversal result is a first-type or a second-type traversal result; if it is a first-type traversal result, determine the target user profile information directly from it; and if it is a second-type traversal result, determine the target user profile information according to the profile creation time in the second-type traversal result.
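One plausible reading of this lookup is sketched below: the traversal either matches exactly one profile (the first class of result) or several (the second class), in which case the most recently created profile wins. The field names (`user_id`, `created_at`, `prefs`) are illustrative assumptions.

```python
# Sketch of the profile traversal described above. A single match is
# treated as the first class of traversal result; multiple matches are
# the second class and are disambiguated by profile creation time.
# All field names are assumptions for illustration.

def find_target_profile(user_id, stored_profiles):
    # traversal condition generated from the user identification information
    hits = [p for p in stored_profiles if p["user_id"] == user_id]
    if len(hits) == 1:          # first class of traversal result
        return hits[0]
    if not hits:
        return None
    # second class: pick the profile with the latest creation time
    return max(hits, key=lambda p: p["created_at"])

profiles = [
    {"user_id": "u1", "created_at": "2024-01-01", "prefs": "old"},
    {"user_id": "u1", "created_at": "2024-02-20", "prefs": "new"},
    {"user_id": "u2", "created_at": "2023-12-31", "prefs": "only"},
]
print(find_target_profile("u1", profiles)["prefs"])  # new
```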
In an embodiment, the first user gesture instruction module 50 is further configured to generate a target user portrait according to the target user profile information set; determine a gesture association set in the target user portrait; and generate a first user gesture instruction according to the gesture association set and the gesture information.
In an embodiment, the first user gesture instruction module 50 is further configured to obtain user type data tags and user preference information from the user profile information set; determine target user tag information in the user type data tags according to a correlation condition; generate a preliminary user portrait according to the target user tag information; and generate a target user portrait by combining the preliminary user portrait with the user preference information.
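The portrait build-up described above can be sketched as follows: filter the user's type data tags by a correlation condition, form a preliminary portrait, then refine it with the preference information. The 0.5 cutoff, the dictionary fields, and the tag names are all invented for the example; the patent does not specify them.

```python
# Sketch of: type data tags --(correlation condition)--> target user tag
# information -> preliminary user portrait --(+ preference info)-->
# target user portrait. Cutoff and field names are assumptions.

def build_portrait(type_tags, preferences, min_correlation=0.5):
    # target user tag information: tags meeting the correlation condition
    target_tags = [t["name"] for t in type_tags
                   if t["correlation"] >= min_correlation]
    preliminary = {"tags": target_tags}  # preliminary user portrait
    # combine with user preference information -> target user portrait
    return {**preliminary, "preferred_gestures": preferences}

tags = [{"name": "left_handed", "correlation": 0.9},
        {"name": "power_user", "correlation": 0.2}]
portrait = build_portrait(tags, ["pinch", "swipe"])
print(portrait)  # only the sufficiently correlated tag survives
```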
In an embodiment, the second user gesture instruction module 60 is further configured to obtain a preset default data template if it is determined from the user identification information that the current user is not a first-type user; acquire the current recognition scene and update the preset default data template according to it; and generate a second user gesture instruction from the updated default data template and the gesture information.
In an embodiment, the second user gesture instruction module 60 is further configured to acquire the current recognition scene and determine whether it is a special scene; if not, the preset default data template does not need to be updated; and if so, acquire a scene offset factor from the current recognition scene and update the preset default data template according to the scene offset factor.
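A minimal sketch of that scene-dependent update: ordinary scenes leave the preset default data template untouched, while a special scene contributes an offset factor that adjusts it (here, a recognition tolerance). The scene names, the factor values, and the choice of multiplication are assumptions for illustration.

```python
# Sketch of the special-scene check: non-special scenes return the
# template unchanged; special scenes scale a tolerance field by the
# scene offset factor. Scene names and factors are invented.

DEFAULT_TEMPLATE = {"strategy": "generic", "tolerance": 0.30}
SPECIAL_SCENES = {"low_light": 1.5, "in_vehicle": 1.2}  # scene offset factors

def update_template(template, scene):
    if scene not in SPECIAL_SCENES:   # not a special scene: no update needed
        return dict(template)
    factor = SPECIAL_SCENES[scene]    # scene offset factor from the scene
    updated = dict(template)
    updated["tolerance"] = round(template["tolerance"] * factor, 2)
    return updated

print(update_template(DEFAULT_TEMPLATE, "office"))     # unchanged
print(update_template(DEFAULT_TEMPLATE, "low_light"))  # tolerance relaxed
```

Returning a copy rather than mutating the template keeps the preset default intact for the next recognition request, which matches the "preset" wording in the text.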
It should be noted that the working procedure described above is merely illustrative and does not limit the scope of the present application; in practical applications, a person skilled in the art may select part or all of it according to actual needs to achieve the purpose of the embodiment, which is not limited here.
In addition, for technical details not described in this embodiment, reference may be made to the instruction generation method based on gesture behaviors of a user provided in any embodiment of the present application, which is not repeated here.
Furthermore, it should be noted that, in this document, the terms "comprises," "comprising," or any other variation thereof are intended to cover a non-exclusive inclusion, such that a process, method, article, or system that comprises a list of elements includes not only those elements but may also include other elements not expressly listed or inherent to such process, method, article, or system. Without further limitation, an element defined by the phrase "comprising a …" does not exclude the presence of other like elements in a process, method, article, or system that comprises the element.
The foregoing embodiment numbers of the present application are merely for the purpose of description, and do not represent the advantages or disadvantages of the embodiments.
From the above description of the embodiments, it will be clear to those skilled in the art that the method of the above embodiments may be implemented by software plus a necessary general-purpose hardware platform, or by hardware, though in many cases the former is preferable. Based on such an understanding, the technical solution of the present application, or the part of it contributing to the prior art, may be embodied in the form of a software product stored in a storage medium (e.g., ROM/RAM, magnetic disk, optical disk) comprising instructions for causing a terminal device (which may be a mobile phone, a computer, a server, or a network device, etc.) to perform the methods of the embodiments of the present application.
The foregoing description covers only the preferred embodiments of the present application and is not intended to limit its scope; any equivalent structures or equivalent process transformations made using the contents of this specification, whether applied directly or indirectly in other related technical fields, fall equally within the scope of protection of this application.

Claims (6)

1. An instruction generating method based on gesture behaviors of a user is characterized by comprising the following steps:
when a gesture recognition instruction is detected, acquiring target image information through a preset device;
acquiring gesture information from the target image information by contour recognition;
acquiring user identification information, and determining whether the current user is a first-type user according to the user identification information;
if so, acquiring target user profile information from pre-stored user profile information according to the user identification information;
generating a first user gesture instruction according to the target user profile information and the gesture information;
if not, determining a gesture generation strategy in a preset default data template and generating a second user gesture instruction in combination with the gesture information;
the step of generating a first user gesture instruction according to the target user profile information and the gesture information comprises:
generating a target user portrait according to the target user profile information set;
determining a gesture association set in the target user portrait;
and generating a first user gesture instruction according to the gesture association set and the gesture information;
the step of generating the target user portrait according to the target user profile information set comprises:
acquiring user type data tags and user preference information from the user profile information set;
determining target user tag information in the user type data tags according to a correlation condition;
generating a preliminary user portrait according to the target user tag information;
and generating a target user portrait by combining the preliminary user portrait with the user preference information;
the step of, if not, determining a gesture generation strategy in a preset default data template and generating a second user gesture instruction in combination with the gesture information comprises:
if the current user is determined not to be a first-type user, acquiring a preset default data template;
acquiring a current recognition scene, and updating the preset default data template according to the current recognition scene;
and acquiring the current preset data template and generating a second user gesture instruction according to the gesture information;
the step of acquiring the current recognition scene and updating the preset default data template according to the current recognition scene comprises:
acquiring the current recognition scene, and determining whether it is a special scene;
if not, the preset default data template does not need to be updated;
and if so, acquiring a scene offset factor from the current recognition scene, and updating the preset default data template according to the scene offset factor.
2. The instruction generation method based on gesture behaviors of a user according to claim 1, wherein the step of acquiring gesture information from the target image information by contour recognition comprises:
denoising the target image information to obtain primary image information;
acquiring secondary image information from the primary image information by contour recognition;
determining whether the secondary image information is valid according to a preset validity condition;
and if it is valid, acquiring gesture information from the secondary image information.
3. The instruction generation method based on gesture behaviors of a user according to claim 1, wherein the step of, if so, acquiring target user profile information from pre-stored user profile information according to the user identification information comprises:
if the current user is determined to be a first-type user, acquiring pre-stored user profile information;
generating a traversal condition according to the user identification information, traversing the pre-stored user profile information, and acquiring a traversal result;
determining whether the traversal result is a first-type traversal result or a second-type traversal result;
if it is a first-type traversal result, determining target user profile information according to the first-type traversal result;
and if it is a second-type traversal result, determining target user profile information according to the profile creation time in the second-type traversal result.
4. An instruction generation system based on gesture behaviors of a user, for performing the method of any one of claims 1 to 3, the system comprising:
an information acquisition module, configured to acquire target image information through a preset device when a gesture recognition instruction is detected;
a gesture acquisition module, configured to acquire gesture information from the target image information by contour recognition;
a determination module, configured to acquire user identification information and determine whether the current user is a first-type user according to the user identification information;
a profile information module, configured to, if so, acquire target user profile information from pre-stored user profile information according to the user identification information;
a first user gesture instruction module, configured to generate a first user gesture instruction according to the target user profile information and the gesture information;
and a second user gesture instruction module, configured to, if not, determine a gesture generation strategy in a preset default data template and generate a second user gesture instruction in combination with the gesture information.
5. A computer device, comprising a memory and a processor, wherein the processor, when executing computer instructions stored in the memory, performs the method of any one of claims 1 to 3.
6. A computer readable storage medium comprising instructions which, when run on a computer, cause the computer to perform the method of any one of claims 1 to 3.
CN202410194574.0A 2024-02-22 2024-02-22 Instruction generation method, system and storage medium based on gesture behaviors of user Active CN117765617B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202410194574.0A CN117765617B (en) 2024-02-22 2024-02-22 Instruction generation method, system and storage medium based on gesture behaviors of user


Publications (2)

Publication Number Publication Date
CN117765617A CN117765617A (en) 2024-03-26
CN117765617B true CN117765617B (en) 2024-05-17

Family

ID=90314727

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202410194574.0A Active CN117765617B (en) 2024-02-22 2024-02-22 Instruction generation method, system and storage medium based on gesture behaviors of user

Country Status (1)

Country Link
CN (1) CN117765617B (en)

Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111382644A (en) * 2018-12-29 2020-07-07 Tcl集团股份有限公司 Gesture recognition method and device, terminal equipment and computer readable storage medium
CN113190106A (en) * 2021-03-16 2021-07-30 青岛小鸟看看科技有限公司 Gesture recognition method and device and electronic equipment

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2015095164A (en) * 2013-11-13 2015-05-18 オムロン株式会社 Gesture recognition device and control method for gesture recognition device




Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant