CN114265499A - Interaction method and system applied to customer service terminal - Google Patents

Interaction method and system applied to customer service terminal

Info

Publication number
CN114265499A
CN114265499A
Authority
CN
China
Prior art keywords
target user
human body
body contour
interaction
extracting
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202111554890.7A
Other languages
Chinese (zh)
Inventor
高召松
王任文
包峰
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Traffic Control Technology TCT Co Ltd
Original Assignee
Traffic Control Technology TCT Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Traffic Control Technology TCT Co Ltd filed Critical Traffic Control Technology TCT Co Ltd
Priority to CN202111554890.7A
Publication of CN114265499A

Abstract

Embodiments of the disclosure provide an interaction method and system applied to a customer service terminal. The method comprises: extracting human body contour features of a target user when the face of the target user is repeatedly detected within an interaction area during a preset time period; locking and tracking the target user, and determining a region of interest according to the human body contour features, wherein the region of interest comprises a hand region; extracting gesture image frames of the target user in the hand region at a first preset time interval, and recognizing the gesture image frames to generate a recognition result; and sending feedback information according to the recognition result. In this way, consultation services can be provided effectively for special passengers, improving operation efficiency and, in turn, user experience.

Description

Interaction method and system applied to customer service terminal
Technical Field
The disclosure relates to the technical field of rail transit, and in particular to an interaction method and system applied to a customer service terminal.
Background
With the development of cities, rail transit plays an increasingly important role in people's travel, and passengers inevitably encounter various problems when taking the subway, such as ticket or card abnormalities, information inquiries, and lost property, so the passenger-service burden on subway operators keeps growing. An unmanned customer service terminal can resolve most passengers' problems with information consultation, ticket or card abnormalities, lost and found, and the like, effectively reducing the repetitive labor of subway station staff.
However, special passengers, such as hearing-impaired people and people with reading difficulties, find touch interaction difficult to use; an unattended customer service terminal therefore struggles to provide them with effective consultation services, which reduces operation efficiency and degrades user experience.
Disclosure of Invention
In order to solve the technical problems in the prior art, the embodiments of the present disclosure provide an interaction method and system applied to a customer service terminal.
According to a first aspect of the present disclosure, an interaction method applied to a customer service terminal is provided, including:
extracting human body contour features of a target user when the face of the target user is repeatedly detected within an interaction area during a preset time period;
locking and tracking the target user, and determining a region of interest according to the human body contour features, wherein the region of interest comprises a hand region;
extracting gesture image frames of the target user in the hand region at a first preset time interval, and recognizing the gesture image frames to generate a recognition result;
and sending feedback information according to the recognition result.
In some embodiments, the repeatedly detecting that the face of the target user is located in the interaction area within the preset time period includes:
starting timing when a face image is first detected in the interaction area;
and when the face image is detected again in the interaction area at a preset time interval within the preset time period, determining that the face of the target user has been repeatedly detected in the interaction area within the preset time period.
In some embodiments, the extracting the human body contour features of the target user includes:
extracting the human body contour features of the target user by combining HOG features with an SVM classifier, and masking image regions outside the human body contour features.
In some embodiments, the extracting the human body contour features of the target user by combining HOG features with an SVM classifier includes:
performing graying and normalization on the image in the interaction area to generate a grayscale image;
calculating a horizontal gradient and a vertical gradient for each pixel in the grayscale image, and from these calculating the corresponding gradient magnitude and gradient direction;
determining a feature vector of the grayscale image according to the gradient magnitude and gradient direction;
and recognizing the feature vector with a pre-trained SVM classifier to extract the human body contour features of the target user.
In some embodiments, the method further comprises:
tracking the human body of the target user by using a three-frame difference method and a contour feature extraction algorithm, identifying the moving direction and distance of the human body contour feature points, tracking the target user, and dynamically generating the region of interest.
In some embodiments, the region of interest further comprises a mouth region, and the method further comprises:
recognizing features of the mouth region, and in response to detecting lip motion, recording a start time and a first offset time, wherein the first offset time is the detection time of the lip motion;
in response to detecting that the lip motion has stopped, recording a termination time and a second offset time, wherein the second offset time is the detection time of the lip motion stopping;
clipping the recording according to the start time, the first offset time, the termination time and the second offset time;
and outputting reply information according to the result of the background server recognizing the clipped recording.
In some embodiments, the sending feedback information according to the recognition result includes:
sending corresponding voice information and/or displaying corresponding content on an interactive interface according to the recognition result.
According to a second aspect of the present disclosure, there is provided an interaction system applied to a customer service terminal, including:
a human body contour feature extraction module, configured to extract the human body contour features of a target user in response to repeatedly detecting that the face of the target user is located in an interaction area within a preset time period;
a region-of-interest determining module, configured to lock and track the target user, and determine a region of interest according to the human body contour features, wherein the region of interest comprises a hand region;
a recognition result generating module, configured to extract gesture image frames of the target user in the hand region at a first preset time interval, recognize the gesture image frames, and generate a recognition result;
and an information feedback module, configured to send feedback information according to the recognition result.
According to a third aspect of the present disclosure, an electronic device is provided. The electronic device includes: a memory having a computer program stored thereon and a processor implementing the method as described in the first aspect above when executing the program.
According to a fourth aspect of the present disclosure, there is provided a computer readable storage medium having stored thereon a computer program which, when executed by a processor, implements the method as described in the first aspect above.
It should be understood that the statements herein reciting aspects are not intended to limit the critical or essential features of the embodiments of the present disclosure, nor are they intended to limit the scope of the present disclosure. Other features of the present disclosure will become apparent from the following description.
The interaction method applied to the customer service terminal can effectively provide consultation services for special passengers, improve operation efficiency and further improve user experience.
Drawings
The above and other features, advantages and aspects of various embodiments of the present disclosure will become more apparent by referring to the following detailed description when taken in conjunction with the accompanying drawings. The accompanying drawings are included to provide a further understanding of the present disclosure, and are not intended to limit the disclosure thereto, and the same or similar reference numerals will be used to indicate the same or similar elements, where:
fig. 1 shows a flowchart of an interaction method applied to a customer service terminal according to an embodiment of the present disclosure;
FIG. 2 shows a block diagram of an interaction system applied to a customer service terminal according to an embodiment of the present disclosure;
FIG. 3 illustrates a block diagram of an exemplary electronic device capable of implementing embodiments of the present disclosure;
fig. 4 shows a schematic structural diagram of an interaction system applied to a customer service terminal according to an embodiment of the present disclosure.
Detailed Description
To make the objects, technical solutions and advantages of the embodiments of the present disclosure more clear, the technical solutions of the embodiments of the present disclosure will be described clearly and completely with reference to the drawings in the embodiments of the present disclosure, and it is obvious that the described embodiments are some, but not all embodiments of the present disclosure. All other embodiments, which can be derived by a person skilled in the art from the embodiments disclosed herein without making any creative effort, shall fall within the protection scope of the present disclosure.
In addition, the term "and/or" herein is only one kind of association relationship describing an associated object, and means that there may be three kinds of relationships, for example, a and/or B, which may mean: a exists alone, A and B exist simultaneously, and B exists alone. In addition, the character "/" herein generally indicates that the former and latter related objects are in an "or" relationship.
The interaction method applied to the customer service terminal of the embodiments of the present disclosure can replace and reduce the repetitive labor of subway station service personnel and lower labor costs, while providing passengers with rich information and improving travel satisfaction. It can offer consultation information to normal passengers and consultation services to special passengers, thereby improving operation efficiency and user experience.
Specifically, fig. 1 is a flowchart of an interaction method applied to a customer service terminal according to an embodiment of the present disclosure. The interaction method of this embodiment may include the following steps:
s101: and extracting the human body contour characteristics of the target user when the face of the target user is repeatedly detected to be positioned in the interaction area within a preset time period.
S102: and locking and tracking the target user, and determining an interested area according to the human body contour characteristics, wherein the interested area comprises a hand area.
S103: and extracting gesture image frames of the target user in the hand area according to a first preset time interval, and identifying the gesture image frames to generate an identification result.
S104: and sending feedback information according to the identification result.
In particular, the method of the embodiments of the present disclosure may be applied to a rail transit system, and in particular to an unmanned interactive system of a rail transit system. Fig. 4 is a schematic structural diagram of an interaction system applied to a customer service terminal according to an embodiment of the present disclosure. The unmanned interactive system comprises an edge computing module and a camera, a touch screen, a loudspeaker and a voice recognition module connected with the edge computing module; the edge computing module is connected with a cloud intelligent customer service platform, and the voice recognition module is connected with a sound pickup. In some embodiments, the edge computing module, the camera, the touch screen, the loudspeaker, the voice recognition module and the sound pickup connected to it may be integrated on a customer service terminal for providing passengers with consultation services. The process and principle of providing consultation services for normal passengers are the same as for prior-art service terminals and are not described in detail here. The disclosed embodiments are directed to providing consultation services to special passengers (e.g., hearing-impaired people and people with reading difficulties).
First, image information in the interaction area can be collected through the camera. When the face of a target user is repeatedly detected within the interaction area during a preset time period, the face position of the target user is locked and the human body contour features of the target user are extracted.
Specifically, when a face image is first detected in the interaction area, timing is started; when the face image is detected again in the interaction area within the preset time period, it is determined that the face of the target user has been repeatedly detected in the interaction area within the preset time period, the target user is locked, and the human body contour features of the target user are extracted. This avoids triggering the subsequent procedure for passengers who merely pass through the interaction area, or who briefly enter it for other reasons without needing consultation, thereby improving operation efficiency.
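For illustration only, this repeat-detection logic can be sketched as the following loop (a minimal sketch in Python/OpenCV; the Haar-cascade detector and the constants PRESET_PERIOD and CHECK_INTERVAL are assumptions, not values fixed by this disclosure):

    import time
    import cv2

    PRESET_PERIOD = 2.0    # seconds the face must stay in the interaction area (assumed)
    CHECK_INTERVAL = 0.5   # re-detection interval within the preset period (assumed)

    detector = cv2.CascadeClassifier(
        cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

    def face_present(frame):
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        return len(detector.detectMultiScale(gray, 1.1, 5)) > 0

    def wait_for_target_user(capture):
        """Return True once a face has been detected repeatedly for PRESET_PERIOD."""
        start = None
        while True:
            ok, frame = capture.read()
            if not ok:
                return False
            if face_present(frame):
                if start is None:
                    start = time.monotonic()                 # first detection: start timing
                elif time.monotonic() - start >= PRESET_PERIOD:
                    return True                              # repeatedly detected: lock the target user
            else:
                start = None                                 # face left: reset the timer
            time.sleep(CHECK_INTERVAL)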
When the face of the target user is repeatedly detected within the interaction area during the preset time period, the user can be judged to be a target user seeking consultation; at this point, the face position of the target user is locked and the human body contour features of the target user are extracted.
In this embodiment, the human body contour features of the target user are extracted by combining HOG features with an SVM classifier, and image regions outside the human body contour are masked. Specifically, graying and normalization can be performed on the image in the interaction area to generate a grayscale image; a horizontal gradient and a vertical gradient are calculated for each pixel in the grayscale image, and from these the corresponding gradient magnitude and gradient direction are obtained; a feature vector of the grayscale image is determined according to the gradient magnitude and gradient direction; and the feature vector is recognized with the pre-trained SVM classifier to extract the human body contour features of the target user.
For example, after graying and normalization, a horizontal gradient and a vertical gradient are calculated for each pixel in the image:
Gx(x, y) = H(x+1, y) − H(x−1, y),
Gy(x, y) = H(x, y+1) − H(x, y−1),
where Gx(x, y), Gy(x, y) and H(x, y) respectively denote the horizontal gradient, the vertical gradient and the pixel value at pixel (x, y). The gradient magnitude and gradient direction at pixel (x, y) are then
G(x, y) = √(Gx(x, y)² + Gy(x, y)²),
α(x, y) = arctan(Gy(x, y) / Gx(x, y)),
where G(x, y) is the gradient magnitude and α(x, y) is the gradient direction, taken as an absolute value between 0 and 180 degrees. The direction range is divided into 9 intervals of 20 degrees each.
The image is divided into cells of 8×8 pixels each. The gradient of every pixel in a cell is computed, the gradient direction is evenly divided into 9 bins, and a histogram of the gradient directions of all pixels in the cell is accumulated over those bins. Every 4 adjacent cells form a block, and the feature vectors within a block are concatenated to obtain a 36-dimensional feature vector. The feature vectors of all blocks are computed to obtain the full feature vector of the image, which is input into the pre-trained SVM classifier to extract the human body contour. To train the SVM classifier, each block of an image is taken as a sample and its feature vector is manually labeled: feature vectors containing part of a human body contour are positive samples, and vectors containing none are negative samples. The pre-created classifier model is then trained on these samples to generate the SVM classifier.
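As a concrete illustration of this pipeline, the following sketch uses OpenCV's HOGDescriptor with the parameters given above (8×8-pixel cells, 9 orientation bins, 2×2-cell blocks, hence 4 × 9 = 36 dimensions per block); OpenCV's stock pedestrian detector stands in for the custom-trained SVM classifier described here, so this is an assumed substitute, not the disclosed classifier itself:

    import cv2
    import numpy as np

    # 64x128 detection window, 16x16 blocks (2x2 cells), 8x8 stride, 8x8 cells, 9 bins
    hog = cv2.HOGDescriptor((64, 128), (16, 16), (8, 8), (8, 8), 9)
    hog.setSVMDetector(cv2.HOGDescriptor_getDefaultPeopleDetector())

    def extract_body_region(frame):
        """Detect the human contour and mask out all image regions outside it."""
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        gray = cv2.equalizeHist(gray)                # graying + normalization
        rects, _ = hog.detectMultiScale(gray, winStride=(8, 8))
        mask = np.zeros_like(frame)
        for (x, y, w, h) in rects:
            mask[y:y + h, x:x + w] = frame[y:y + h, x:x + w]   # keep the body region only
        return mask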
After the human body contour is extracted, a region of interest is determined according to the human body contour features; the region of interest includes a hand region and a mouth region. The region of interest in this embodiment is the region used for image recognition. Since this embodiment provides consultation services to special passengers, only the hand region may be recognized. In particular, the target user may be tracked during the interaction: a three-frame difference method and a contour feature extraction algorithm are used to track the target user, identify the moving direction and distance of the human body contour feature points, follow the passenger's body movement, dynamically generate the region of interest, and eliminate the interference of a complex background with gesture recognition.
The three-frame difference algorithm is as follows: let fn(x, y) denote the pixel value at point (x, y) of the n-th frame. The inter-frame difference between fn and fn−1 is
Dn(x, y) = |fn(x, y) − fn−1(x, y)|,
and the inter-frame difference between fn+1 and fn is
Dn+1(x, y) = |fn+1(x, y) − fn(x, y)|.
The difference over the three frames is the logical AND of the two (binarized) differences:
DN(x, y) = Dn(x, y) ∧ Dn+1(x, y).
After this calculation, DN marks the position of the moving object.
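This calculation can be sketched directly (a minimal sketch; the binarization threshold is an assumption, as the disclosure does not specify one):

    import cv2

    def three_frame_diff(prev, curr, nxt, thresh=25):
        """Return a binary mask of moving pixels from three consecutive grayscale frames."""
        d1 = cv2.absdiff(curr, prev)       # Dn   = |fn   - fn-1|
        d2 = cv2.absdiff(nxt, curr)        # Dn+1 = |fn+1 - fn|
        _, b1 = cv2.threshold(d1, thresh, 255, cv2.THRESH_BINARY)
        _, b2 = cv2.threshold(d2, thresh, 255, cv2.THRESH_BINARY)
        return cv2.bitwise_and(b1, b2)     # DN = Dn AND Dn+1: position of the moving object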
Based on this algorithm, the workflow of the system for recognizing passenger gestures and eliminating background interference is as follows (a combined code sketch follows the list):
1) The camera detects a face and starts timing.
2) If after 2 seconds the face is still in front of the camera, go to 3); if the face has left, return to 1).
3) Identify the human body contour, lock the features, and obtain the region of interest.
4) Perform sign language recognition, track the body movement, and move the region of interest accordingly.
5) If the human body contour leaves the camera's field of view, return to 1); if it remains in view, loop back to 4).
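Combining the earlier sketches, steps 3) to 5) can be read as the following tracking loop; recognize_gesture is a hypothetical callback standing in for the sign language recognizer, and three_frame_diff is the illustrative helper sketched above:

    import cv2

    def track_and_recognize(capture, recognize_gesture):
        """Track body movement and feed the moving region of interest to the recognizer."""
        history = []                                   # last three grayscale frames
        while True:
            ok, frame = capture.read()
            if not ok:
                break
            gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
            history = (history + [gray])[-3:]
            if len(history) < 3:
                continue
            motion = three_frame_diff(*history)        # where the contour feature points moved
            if cv2.countNonZero(motion) == 0:
                break                                  # body left the view: return to step 1)
            x, y, w, h = cv2.boundingRect(motion)      # region of interest follows the movement
            recognize_gesture(frame[y:y + h, x:x + w])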
While the movement of the target user is tracked, the target user's gestures are recognized, the content the target user intends to express is determined, and a response is given, for example by voice broadcast or by switching the interactive interface.
The interaction method applied to the customer service terminal of this embodiment can provide consultation services for special passengers, thereby improving operation efficiency and user experience.
In addition, as an alternative embodiment of the present disclosure, the method of the above embodiments may also be applied to providing consultation services for normal passengers. Specifically, when providing consultation services for normal passengers, a mouth region is obtained when the human body contour is identified, the features are locked and the region of interest is obtained, and the features of the mouth region are recognized. In response to detecting lip motion, a start time and a first offset time are recorded, where the first offset time is the detection time of the lip motion; in response to detecting that the lip motion has stopped, a termination time and a second offset time are recorded, where the second offset time is the detection time of the lip motion stopping. The recording is clipped according to the start time, the first offset time, the termination time and the second offset time, and the clipped recording is sent to a background server, which recognizes it and returns the result to the front end; the front end outputs reply information according to the recognition result. Specifically, after the recording is clipped, speech recognition can convert it into text, the text is transmitted to the cloud unmanned customer service, and a response is output through NLP semantic understanding.
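One plausible reading of the clipping step is sketched below, under the assumptions that the recording is a mono PCM buffer and that the offset times are the detection delays of lip motion starting and stopping, so the clip is widened by them on each side; the sample rate and helper name are illustrative, not specified by this disclosure:

    import numpy as np

    SAMPLE_RATE = 16000  # Hz, an assumed value

    def clip_recording(audio, start, first_offset, stop, second_offset):
        """Cut the detected utterance out of a continuous recording.

        start/stop are seconds from the start of the recording; the offsets
        compensate for the lag between lip motion and its detection."""
        begin = max(0.0, start - first_offset)
        end = stop + second_offset
        return audio[int(begin * SAMPLE_RATE):int(end * SAMPLE_RATE)]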
The method of this embodiment can provide consultation services for passengers, thereby improving operation efficiency and user experience.
It is noted that while for simplicity of explanation, the foregoing method embodiments have been described as a series of acts or combination of acts, it will be appreciated by those skilled in the art that the present disclosure is not limited by the order of acts, as some steps may, in accordance with the present disclosure, occur in other orders and concurrently. Further, those skilled in the art should also appreciate that the embodiments described in the specification are exemplary embodiments and that acts and modules referred to are not necessarily required by the disclosure.
The above is a description of embodiments of the method, and the embodiments of the apparatus are further described below.
Fig. 2 is a block diagram of an interactive system applied to a customer service terminal according to an embodiment of the disclosure. The interactive system applied to the customer service terminal of the embodiment comprises:
the human body contour feature extraction module 201 is configured to extract a human body contour feature of a target user in response to repeatedly detecting that a human face of the target user is located in an interaction region within a preset time period.
And an interested region determining module 202, configured to perform lock tracking on the target user, and determine an interested region according to the human body contour feature, where the interested region includes a hand region.
The recognition result generating module 203 is configured to extract a gesture image frame of the target user in the hand region according to a first preset time interval, recognize the gesture image frame, and generate a recognition result.
And an information feedback module 204, configured to send feedback information according to the identification result.
It can be clearly understood by those skilled in the art that, for convenience and brevity of description, the specific working processes of the modules described above may refer to the corresponding processes in the foregoing method embodiments, and are not described herein again.
The present disclosure also provides an electronic device, a readable storage medium, and a computer program product according to embodiments of the present disclosure.
FIG. 3 shows a schematic block diagram of an electronic device 300 that may be used to implement embodiments of the method of the present disclosure. As shown, device 300 includes a Central Processing Unit (CPU) 301 that may perform various appropriate actions and processes in accordance with computer program instructions stored in a Read Only Memory (ROM) 302 or loaded from a storage unit 308 into a Random Access Memory (RAM) 303. In the RAM 303, various programs and data necessary for the operation of the device 300 can also be stored. The CPU 301, ROM 302, and RAM 303 are connected to each other via a bus 304. An input/output (I/O) interface 305 is also connected to bus 304.
Various components in device 300 are connected to I/O interface 305, including: an input unit 306 such as a keyboard, a mouse, or the like; an output unit 307 such as various types of displays, speakers, and the like; a storage unit 308 such as a magnetic disk, optical disk, or the like; and a communication unit 309 such as a network card, modem, wireless communication transceiver, etc. The communication unit 309 allows the device 300 to exchange information/data with other devices via a computer network such as the internet and/or various telecommunication networks.
The various methods and processes described above may be implemented as a computer program tangibly embodied in a machine-readable medium, such as the storage unit 308. In some embodiments, part or all of the computer program may be loaded and/or installed onto device 300 via ROM 302 and/or communication unit 309. When the computer program is loaded into the RAM 303 and executed by the CPU 301, one or more steps of the method described above may be performed. Alternatively, in other embodiments, the CPU 301 may be configured to perform the above-described method in any other suitable manner (e.g., by way of firmware).
The functions described herein above may be performed, at least in part, by one or more hardware logic components. For example, without limitation, exemplary types of hardware logic components that may be used include: a Field Programmable Gate Array (FPGA), an Application Specific Integrated Circuit (ASIC), an Application Specific Standard Product (ASSP), a system on a chip (SOC), a complex programmable logic device (CPLD), and the like.
Program code for implementing the methods of the present disclosure may be written in any combination of one or more programming languages. These program codes may be provided to a processor or controller of a general purpose computer, special purpose computer, or other programmable data processing apparatus, such that the program codes, when executed by the processor or controller, cause the functions/operations specified in the flowchart and/or block diagram to be performed. The program code may execute entirely on the machine, partly on the machine, as a stand-alone software package partly on the machine and partly on a remote machine or entirely on the remote machine or server.
In the context of this disclosure, a machine-readable medium may be a tangible medium that can contain, or store a program for use by or in connection with an instruction execution system, apparatus, or device. The machine-readable medium may be a machine-readable signal medium or a machine-readable storage medium. A machine-readable medium may include, but is not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any suitable combination of the foregoing. More specific examples of a machine-readable storage medium would include an electrical connection based on one or more wires, a portable computer diskette, a hard disk, a Random Access Memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing.
Further, while operations are depicted in a particular order, this should be understood as requiring that such operations be performed in the particular order shown or in sequential order, or that all illustrated operations be performed, to achieve desirable results. Under certain circumstances, multitasking and parallel processing may be advantageous. Likewise, while several specific implementation details are included in the above discussion, these should not be construed as limitations on the scope of the disclosure. Certain features that are described in the context of separate embodiments can also be implemented in combination in a single implementation. Conversely, various features that are described in the context of a single implementation can also be implemented in multiple implementations separately or in any suitable subcombination.
Although the subject matter has been described in language specific to structural features and/or methodological acts, it is to be understood that the subject matter defined in the appended claims is not necessarily limited to the specific features or acts described above. Rather, the specific features and acts described above are disclosed as example forms of implementing the claims.

Claims (10)

1. An interaction method applied to a customer service terminal is characterized by comprising the following steps:
extracting human body contour features of a target user when the face of the target user is repeatedly detected within an interaction area during a preset time period;
locking and tracking the target user, and determining a region of interest according to the human body contour features, wherein the region of interest comprises a hand region;
extracting gesture image frames of the target user in the hand region at a first preset time interval, and recognizing the gesture image frames to generate a recognition result;
and sending feedback information according to the recognition result.
2. The interaction method according to claim 1, wherein the repeatedly detecting that the face of the target user is located in the interaction area within the preset time period comprises:
starting timing when a face image is first detected in the interaction area;
and when the face image is detected again in the interaction area at a preset time interval within the preset time period, determining that the face of the target user has been repeatedly detected in the interaction area within the preset time period.
3. The interaction method according to claim 2, wherein the extracting the human body contour features of the target user comprises:
extracting the human body contour features of the target user by combining HOG features with an SVM classifier, and masking image regions outside the human body contour features.
4. The interaction method according to claim 3, wherein the extracting the human body contour features of the target user by combining HOG features with an SVM classifier comprises:
performing graying and normalization on the image in the interaction area to generate a grayscale image;
calculating a horizontal gradient and a vertical gradient for each pixel in the grayscale image, and from these calculating the corresponding gradient magnitude and gradient direction;
determining a feature vector of the grayscale image according to the gradient magnitude and gradient direction;
and recognizing the feature vector with a pre-trained SVM classifier to extract the human body contour features of the target user.
5. The interaction method of claim 4, further comprising:
tracking the human body of the target user by using a three-frame difference method and a contour feature extraction algorithm, identifying the moving direction and distance of the human body contour feature points, tracking the target user, and dynamically generating the region of interest.
6. The interaction method of claim 1, wherein the region of interest further comprises a mouth region, and the method further comprises:
recognizing features of the mouth region, and in response to detecting lip motion, recording a start time and a first offset time, wherein the first offset time is the detection time of the lip motion;
in response to detecting that the lip motion has stopped, recording a termination time and a second offset time, wherein the second offset time is the detection time of the lip motion stopping;
clipping the recording according to the start time, the first offset time, the termination time and the second offset time;
and outputting reply information according to the result of the background server recognizing the clipped recording.
7. The interaction method according to claim 1, wherein the sending feedback information according to the recognition result comprises:
sending corresponding voice information and/or displaying corresponding content on an interactive interface according to the recognition result.
8. An interaction system applied to a customer service terminal, characterized by comprising:
a human body contour feature extraction module, configured to extract the human body contour features of a target user in response to repeatedly detecting that the face of the target user is located in an interaction area within a preset time period;
a region-of-interest determining module, configured to lock and track the target user and determine a region of interest according to the human body contour features, wherein the region of interest comprises a hand region;
a recognition result generating module, configured to extract gesture image frames of the target user in the hand region at a first preset time interval, recognize the gesture image frames, and generate a recognition result;
and an information feedback module, configured to send feedback information according to the recognition result.
9. An electronic device comprising a memory and a processor, wherein the memory stores a program, and the processor executes the program to implement the interaction method applied to the customer service terminal according to any one of claims 1 to 7.
10. A computer-readable storage medium, on which a program is stored, wherein the program, when executed by a processor, implements the interaction method applied to a customer service terminal according to any one of claims 1 to 7.
CN202111554890.7A 2021-12-17 2021-12-17 Interaction method and system applied to customer service terminal Pending CN114265499A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202111554890.7A CN114265499A (en) 2021-12-17 2021-12-17 Interaction method and system applied to customer service terminal


Publications (1)

Publication Number Publication Date
CN114265499A (en) 2022-04-01

Family

ID=80827890

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202111554890.7A Pending CN114265499A (en) 2021-12-17 2021-12-17 Interaction method and system applied to customer service terminal

Country Status (1)

Country Link
CN (1) CN114265499A (en)

Citations (12)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20100295782A1 (en) * 2009-05-21 2010-11-25 Yehuda Binder System and method for control based on face ore hand gesture detection
JP2011221787A (en) * 2010-04-09 2011-11-04 Casio Comput Co Ltd Information processor and program
US20110317874A1 (en) * 2009-02-19 2011-12-29 Sony Computer Entertainment Inc. Information Processing Device And Information Processing Method
CN103593594A (en) * 2012-01-09 2014-02-19 明智视觉有限公司 System and method for providing secure access to an electronic device using facial biometric identification and screen gesture
US8922485B1 (en) * 2009-12-18 2014-12-30 Google Inc. Behavioral recognition on mobile devices
CN105184301A (en) * 2015-09-07 2015-12-23 复旦大学 Method for distinguishing vehicle azimuth by utilizing quadcopter
US20180004275A1 (en) * 2016-07-01 2018-01-04 Microsoft Technology Licensing, Llc Face detection circuit
US20180025239A1 (en) * 2016-07-19 2018-01-25 Tamkang University Method and image processing apparatus for image-based object feature description
US20190392840A1 (en) * 2018-06-21 2019-12-26 Casio Computer Co., Ltd. Voice recognition device and voice recognition method
CN111062312A (en) * 2019-12-13 2020-04-24 RealMe重庆移动通信有限公司 Gesture recognition method, gesture control method, device, medium and terminal device
US20200209950A1 (en) * 2017-05-18 2020-07-02 Guohua Liu Device-facing human-computer interaction method and system
WO2022116282A1 (en) * 2020-12-04 2022-06-09 郑州中业科技股份有限公司 Method and system for human-machine interaction based on dynamic gesture recognition


Similar Documents

Publication Publication Date Title
US20200167554A1 (en) Gesture Recognition Method, Apparatus, And Device
US11600008B2 (en) Human-tracking methods, systems, and storage media
CN112488073A (en) Target detection method, system, device and storage medium
CN110610127B (en) Face recognition method and device, storage medium and electronic equipment
CN105160318A (en) Facial expression based lie detection method and system
CN110363748B (en) Method, device, medium and electronic equipment for processing dithering of key points
US9218540B2 (en) Apparatus and computer readable medium for signal classification using spectrogram and templates
CN110245554B (en) Pedestrian movement trend early warning method, system platform and storage medium
CN111160202A (en) AR equipment-based identity verification method, AR equipment-based identity verification device, AR equipment-based identity verification equipment and storage medium
US20120128255A1 (en) Part detection apparatus, part detection method, and program
CN111839551A (en) Multi-modal emotion recognition method and system based on video and physiological signals
CN114627526A (en) Fusion duplicate removal method and device based on multi-camera snapshot image and readable medium
CN111191556A (en) Face recognition method and device and electronic equipment
CN111310595B (en) Method and device for generating information
US20220076005A1 (en) Person authentication apparatus, control method, and non-transitory storage medium
CN111091056A (en) Method and device for identifying sunglasses in image, electronic equipment and storage medium
CN114265499A (en) Interaction method and system applied to customer service terminal
CN113642493B (en) Gesture recognition method, device, equipment and medium
CN113657277A (en) System and method for judging shielded state of vehicle
CN113286086A (en) Camera use control method and device, electronic equipment and storage medium
CN112580633A (en) Public transport passenger flow statistical device and method
CN113064118A (en) Sound source positioning method and device
CN111062337B (en) People stream direction detection method and device, storage medium and electronic equipment
CN111860066B (en) Face recognition method and device
CN117292338B (en) Vehicle accident identification and analysis method based on video stream analysis

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination