CN102750555A - Expression robot applied to instant messaging tool - Google Patents

Expression robot applied to instant messaging tool

Info

Publication number
CN102750555A
CN102750555A (application CN201210224496.1A)
Authority
CN
China
Prior art keywords
module
expression
image
emoticon
chat window
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN2012102244961A
Other languages
Chinese (zh)
Other versions
CN102750555B (en)
Inventor
张纯纯
王崇文
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Beijing Institute of Technology BIT
Original Assignee
Beijing Institute of Technology BIT
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Beijing Institute of Technology BIT filed Critical Beijing Institute of Technology BIT
Priority to CN201210224496.1A priority Critical patent/CN102750555B/en
Publication of CN102750555A publication Critical patent/CN102750555A/en
Application granted Critical
Publication of CN102750555B publication Critical patent/CN102750555B/en
Expired - Fee Related legal-status Critical Current
Anticipated expiration legal-status Critical

Landscapes

  • Image Analysis (AREA)

Abstract

The invention discloses an expression robot applied to an instant messaging tool. The robot recognizes the emoticons used by a user in real time and responds with the same meaning that each emoticon represents. The recognition method also avoids the drawbacks of existing approaches: high complexity, poor real-time performance, and the need to crack the encryption key again whenever the instant messaging tool is upgraded. In the system, a chat window monitoring module captures the chat window image and saves it as a picture whenever the instant messaging tool displays a new chat message, or alternatively captures and saves it at fixed intervals; an emoticon positioning module searches the captured picture for emoticons and sends their positions to an emoticon recognition module; the emoticon recognition module compares the emoticon at the reported position with the emoticons in an existing emoticon library to determine its meaning and sends the result to a response module; and the response module, after receiving the result, responds in a preset expression mode.

Description

Expression robot applied to instant messaging tool
Technical Field
The invention relates to an expression robot based on an instant messaging tool, in particular to the technical field of screen capture and image recognition.
Background
Instant messaging tools, also called instant chat tools, are Internet-based services that have accumulated a large, stable user base over more than a decade of development and now permeate every aspect of life and work. Emoticons were born on the Internet, originally as a network subculture, and have been widely accepted as the network has developed and spread.
Most instant messaging tools provide a function for inserting emoticons, which greatly facilitates expression and adds interest to communication while improving the user experience. Emoticons are currently developing toward greater variety, vividness and complexity; the invention proposes a new direction of development: making emoticons more lifelike. The scheme monitors the window of the instant messaging tool, identifies the meaning of the emoticon used by the user, and responds to the identified emoticon in a way that matches its meaning. Through the expression robot, the emoticons used during chatting can be perceived more intuitively, which increases the experience and enjoyment of instant messaging users. This is an innovative product, with no direct precedent to draw on.
One way to design an emoticon recognition scheme is to decrypt and parse the data packets transmitted by the instant messaging tool and extract the emoticon codes from them. However, current instant messaging tools heavily encrypt the chat content during transmission; cracking the key takes a long time and is difficult, and the tools are usually upgraded irregularly, with the key reset at every upgrade. The key therefore has to be cracked again after every upgrade, so a scheme based on decryption is highly complex, has poor real-time performance, and cannot be mass-produced or popularized.
Disclosure of Invention
In view of the above, the present invention provides an expression robot applied to an instant messaging tool, which can recognize the emoticons used by a user in real time and respond with the same meaning that each emoticon represents. Moreover, the recognition method does not decrypt or parse the data packets transmitted by the instant messaging tool, and thus avoids the drawbacks of that approach: high complexity, poor real-time performance, and the need to crack the key again after every upgrade of the instant messaging tool.
The invention is realized by the following technical scheme:
an expression robot based on an instant messaging tool comprises a chat window monitoring module, an expression symbol positioning module, an expression symbol recognition module and a response module;
the chat window monitoring module is used for monitoring the chat window after determining that the current focus window is a chat window of the instant messaging tool, capturing an image of the chat window at fixed intervals or whenever a new chat message is displayed, and saving the image as a picture;
the expression symbol positioning module is used for analyzing the picture captured by the chat window monitoring module, searching it for expression symbols, and sending the position of any expression symbol found to the expression symbol recognition module;
the expression symbol recognition module is used for comparing, after receiving the position sent by the expression symbol positioning module, the expression symbol at that position with the expression symbols in an existing expression symbol library, thereby determining the meaning the expression symbol represents, and then sending the result to the response module;
and the response module is used for responding in a preset expression mode after receiving the result sent by the expression symbol recognition module.
Wherein the response module responds by sound, image and/or motion.
Preferably, the expression symbol positioning module comprises a cutting module, a graying module, a smoothing module and a Hough detection module:
the cutting module is used for cutting the image of the dialog bar from the image of the chat window according to the position of the dialog bar in the chat window and sending the cut image to the graying module;
the graying module is used for graying the received image to obtain a grayscale image and sending the grayscale image to the smoothing module;
the smoothing module is used for carrying out Gaussian smoothing processing on the received gray level image;
the Hough detection module is used for performing a Hough transform on the Gaussian-smoothed image to detect the positions of circular expression symbols; among the expression symbols identified, only the position of the one with the largest x and y coordinates is output, where the x-axis and y-axis take the upper left corner of the dialog bar as the origin. Preferably, the Hough transform of the Hough detection module starts from the lower right corner of the dialog bar, proceeds from right to left and from bottom to top, and when the first circle is detected its centre position is output and the detection stops.
Preferably, the expression symbol recognition module comprises an extraction module, a matching module and a recognition module;
the extraction module is used for obtaining an emoticon image from the image of the dialog bar obtained by the cutting module according to the position output by the emoticon positioning module and sending the emoticon image to the matching module;
the matching module is used for taking the received emoticon image as a template, taking a pre-stored image containing all default emoticons as a global image, then carrying out template matching and finally finding out the position of the emoticon image in the global image;
the recognition module is used for judging which expression meaning corresponds to the position of the expression symbol image found by the matching module according to the position range and the expression meaning of each default expression symbol in the known global image, and sending the expression meaning to the response module.
Beneficial effects:
the invention provides an expression robot applied to an instant messaging tool, which can identify an expression symbol used by a user in real time and make a response with the same meaning as the expression symbol, and more importantly, the expression and the expression identification method of the user do not adopt a method for decrypting and analyzing a data packet transmitted by the instant messaging tool, and the instant messaging tool does not need to decode a secret key again after being upgraded, so that the universality is good, and the method is suitable for various instant messaging tools. The method is simple, can be realized by means of a plurality of existing image processing methods, is very simple, and has small calculation amount, thereby being beneficial to improving the real-time performance of the expression recognition.
Secondly, Hough circle detection may return several circles at different positions; according to the characteristics of the instant messaging tool, only the position of the circle with the largest x and y coordinates is output, which avoids the heavy subsequent matching workload and confused responses that outputting every circle position would cause.
Thirdly, the Hough detection can start directly from the lower right corner of the dialog bar and proceed from right to left and from bottom to top; the position of the first circle detected is output and no further calculation is needed, which reduces the amount of computation.
Drawings
FIG. 1 is a functional block diagram of the system of the present invention.
FIG. 2 is a flow diagram of a screenshot module.
FIG. 3 is a diagram of the effect of the screenshot module.
Fig. 4 is a QQ default emoticon diagram.
FIG. 5 is a flow chart of the processing of the emoticon location module.
Fig. 6 is an exploded view of a QQ chat window.
Fig. 7 is an effect diagram after graying.
Fig. 8 is a graph of the effect after gaussian smoothing.
Fig. 9 is a diagram of the emoticon positioning result.
FIG. 10 is a flow diagram of an emoticon identification module.
Fig. 11 is a diagram of QQ full default emoticon divisions.
Fig. 12 is a graph showing the expression symbol recognition result.
Detailed Description
The invention is described in detail below by way of example with reference to the accompanying drawings.
The invention adopts a method that does not decrypt or parse the chat content; instead, it analyzes the emoticons by monitoring and capturing screenshots of the chat window. This is referred to as the screenshot comparison method.
The chat window of the instant messaging user is monitored, a suitable moment is chosen to capture and save a screenshot, the image of the chat window is analyzed and compared to find the position of the emoticon, and finally the meaning represented by the emoticon is identified. Because the size of the emoticon is fixed, this can be implemented with the template-matching scheme used in pattern recognition.
The advantage of the screenshot comparison method is that it bypasses the step of cracking the encrypted messages, so it is simple, offers good real-time performance, and can easily work across different instant messaging tools.
Fig. 1 is a block diagram of the expression robot for an instant messaging tool according to the present invention. As shown in fig. 1, the expression robot includes a chat window monitoring module, an emoticon positioning module, an emoticon recognition module, and a response module. Wherein,
the chat window monitoring module is used for monitoring the chat window after the current focus window is determined to be the chat window of the instant messaging tool, intercepting the image of the chat window at regular time or when a new chat message is displayed by a user, and storing the image as a picture; the focus window refers to a window currently operated by a user, and can be easily judged through an API (application program interface) function under Windows.
The emoticon positioning module is used for analyzing the picture intercepted by the chat window monitoring module, searching the emoticon in the picture, and sending the position of the emoticon to the emoticon identification module after the emoticon is found;
the expression symbol recognition module is used for comparing the expression symbol at the position with the expression symbols in the existing expression symbol library after receiving the position of the expression symbol sent by the expression symbol positioning module, so as to determine the meaning represented by the expression symbol, and then sending the result to the response module;
and the response module is used for responding through a set expression method after receiving the result sent by the expression symbol recognition module.
The function of each module is described in detail below.
● chat window monitoring module
The main function of the chat window monitoring module is to monitor the user's chat window; when a new chat message is displayed, the chat window is captured and saved as a picture for later processing and analysis. The workflow of the chat window monitoring module is shown in fig. 2.
Taking Tencent's QQ as an example, the general flow of the chat window monitoring module is as follows: while the system is running, it first judges whether the user's current focus window is a QQ chat window; if so, it proceeds to the next step, and if not, it does nothing. Once the focus window is confirmed to be a QQ chat window, the module checks whether a new message has arrived and, if so, takes and saves a screenshot. Alternatively, a timed screenshot method can be used, capturing the chat window at fixed intervals. Whether a new chat message has arrived can be determined by monitoring changes in the image of the chat window: when the image changes, a new chat message is considered to have arrived. The effect of the screenshot module is shown in fig. 3.
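A minimal sketch of this monitoring loop is given below, assuming pywin32 for the Windows API calls and Pillow for the screenshots (the text only states that the focus window can be judged through a Windows API function); the title check and polling interval are illustrative assumptions.

```python
import time
import win32gui                                   # pywin32 (assumed binding for the Windows API)
from PIL import ImageGrab, ImageChops             # Pillow (assumed screenshot library)

def grab_chat_window():
    """Return a screenshot of the focus window if it looks like a QQ chat window, else None."""
    hwnd = win32gui.GetForegroundWindow()         # current focus window
    if "QQ" not in win32gui.GetWindowText(hwnd):  # heuristic title check (assumption)
        return None
    left, top, right, bottom = win32gui.GetWindowRect(hwnd)
    return ImageGrab.grab(bbox=(left, top, right, bottom))

def monitor(on_new_screenshot, interval=2.0):
    """Poll the chat window and call on_new_screenshot(img) whenever its content changes."""
    previous = None
    while True:
        img = grab_chat_window()
        if img is not None:
            changed = (previous is None or img.size != previous.size
                       or ImageChops.difference(img, previous).getbbox() is not None)
            if changed:                           # "new chat message" = any change in the window image
                on_new_screenshot(img)
            previous = img
        time.sleep(interval)
```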
● emoticon positioning module
The screenshot of the QQ chat window is obtained through the chat window monitoring module; the next problems are how to determine whether the captured picture contains an emoticon and how to determine the emoticon's position.
The default emoticons carried by QQ itself are analyzed first, as shown in fig. 4. For practical and technical reasons, the present system currently supports only the circular classical emoticons among the QQ default emoticons. This series of emoticons has two very prominent features:
1) They are circular in shape.
2) They all use yellow as the dominant color.
Considering that users mainly type Chinese characters, English letters, numbers and emoticons while chatting, the first three act as interference items during recognition. They do not have the circular shape of the emoticons, but because QQ users can customize the color of the chat text, they may also appear yellow. After weighing these factors, the circular shape is used as the feature for marking and extracting emoticons; this is also a feature of the emoticons used by most instant messaging tools.
The features that distinguish emoticons from Chinese characters, English letters and numbers have been analyzed above; some image-processing techniques are now needed to search for the circular emoticons, as shown in fig. 5. First, the image is cropped to remove unnecessary interference regions, which improves the efficiency of subsequent processing. After cropping, the image is grayed and smoothed; both steps enhance the image so as to improve detection accuracy. Finally, the emoticon is detected with a Hough circle detection function, which returns the position of the emoticon, i.e. the coordinates of the circle.
Therefore, the emoticon positioning module comprises a cutting module, a graying module, a smoothing module and a Hough detection module.
And the cutting module is used for cutting the image of the dialog bar from the image of the chat window according to the position of the dialog bar in the chat window and sending the cut image to the graying module.
Still taking QQ as an example, a screenshot of the QQ chat window is shown in fig. 6. Analyzing the input screenshot, the QQ chat window can be divided into four areas, separated in the figure by red lines: the function bar, the chat record display window, the input bar, and the QQ show bar. The purpose of the cutting is to separate the chat record display window from the other three areas. Experiments show that although the size of the QQ chat window can be changed by the user, the height of the function bar H_function, the height of the input bar H_input, and the width of the QQ show bar W_show are fixed values and do not change as the window is scaled. By measurement, H_function = 105 pixels, H_input = 155 pixels, and W_show = 145 pixels. From these three values it is easy to deduce that the chat record display window starts at coordinate (0, 105) and has its bottom right corner at (W - 145, H - 155), where W is the width of the whole screenshot and H is its height. With these data the chat record display window can be extracted on its own, eliminating a large interference area and making subsequent processing more efficient.
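The cutting step can be sketched with plain array slicing, using the three measured offsets above; the NumPy/OpenCV image layout (rows first) is an assumption of this sketch.

```python
import numpy as np

H_FUNCTION, H_INPUT, W_SHOW = 105, 155, 145       # measured fixed offsets (pixels)

def cut_chat_record(window_img: np.ndarray) -> np.ndarray:
    """Crop the chat record display window out of the full chat window screenshot."""
    H, W = window_img.shape[:2]
    # top-left corner (0, 105), bottom-right corner (W - 145, H - 155)
    return window_img[H_FUNCTION:H - H_INPUT, 0:W - W_SHOW]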
And the graying module is used for graying the received image to obtain a grayscale image and sending the grayscale image to the smoothing module.
Graying converts a color image into a grayscale image. The image acquired by the screenshot module is a color image, in which each pixel has three components (R, G, B) representing red, green and blue; each component takes an integer value from 0 to 255, and each combination of values represents a color. A single pixel can therefore take about 16 million different color values, which is a large number even for a computer. For convenience of processing, the image is grayed. A grayscale image also represents each pixel with R, G and B components, but the three components share the same value, so a pixel in a grayscale image has only 256 possible values, while still reflecting the distribution and characteristics of the overall and local chromaticity and brightness levels of the image, as the color image does. Therefore, in digital image processing the object to be processed is usually first converted into a grayscale image, which greatly reduces the amount of subsequent computation. The graying result is shown in fig. 7.
And the smoothing module is used for carrying out Gaussian smoothing processing on the received gray level image.
The main purpose of image smoothing is to eliminate noise in the original image while preserving its edges, contours and lines as far as possible. Noise is not limited to distortion and deformation visible to the human eye; much of it only becomes apparent during computer image processing, and it is randomly distributed with irregular size and shape. There are many smoothing methods, including linear and nonlinear smoothing, sharpening, pseudo-color processing and filtering. Gaussian smoothing removes chaotic, randomly distributed noise better than the other smoothing methods and, most importantly, preserves image edges well, so the subsequent circle detection can achieve good accuracy. The input image is therefore Gaussian-smoothed; the effect after processing is shown in fig. 8.
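The graying and smoothing steps correspond to two standard OpenCV calls; the 5 x 5 kernel and sigma used here are assumptions, since the text only specifies Gaussian smoothing.

```python
import cv2

def preprocess(dialog_img):
    """Gray and Gaussian-smooth the cropped chat record image for circle detection."""
    gray = cv2.cvtColor(dialog_img, cv2.COLOR_BGR2GRAY)  # color -> grayscale
    return cv2.GaussianBlur(gray, (5, 5), 1.5)           # suppress random noise, keep edges
```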
The image after Gaussian smoothing is a denoised grayscale image, which satisfies the preconditions for circle detection. When simple geometric shapes need to be recognized in an image, a basic method is the Hough transform, an effective method for edge-based detection and one of the basic methods in image processing for recognizing simple geometric shapes in an original image. Its basic principle is to exploit the duality of points and lines: selected points in the original image space are transformed into curves or surfaces in a parameter space, and points sharing the same parameters intersect in that space after the transformation, so detection of the target curve is completed by judging the degree of accumulation, i.e. the peak, at the intersection points. The Hough transform thus converts a curve-detection problem in the original image into a peak-search problem in the parameter space, i.e. converts a global detection feature into a local one. Depending on the choice of parameters, the Hough transform can detect straight lines, circles, ellipses, other curves, and so on.
The Hough transform is a global detection method; it is insensitive to random noise and partial occlusion, has strong anti-interference capability, and suppresses well the interference produced by over-concentration of data points. It has good fault tolerance and robustness when detecting objects of known shape, and can identify them correctly even when they are defective or contaminated. The present invention uses this method to detect the circular shape of the emoticon. Since the radius of the emoticon is known to be about 25 pixels, the emoticon can be found and separated once the centre of the circle is located; the effect is shown in fig. 9.
To test the effectiveness and anti-interference capability of the method, the screenshot was deliberately chosen to contain the main interference items, namely numbers, English letters and Chinese characters, and the detection result is marked in the output for easy observation. Two circles are detected in the result image, and both are exactly the emoticons; their centres are marked with green dots and their coordinates are output, so the result is essentially as expected.
After the positioning module processes one screenshot, there is a wait of a few seconds for the next screenshot. Because the interval is short, the chat record content in the new screenshot partially overlaps the previous one; if the overlapping part contains an emoticon, that emoticon has already been located, and outputting the detection result again would produce a duplicate.
The reason for this problem is that the area in which QQ displays the chat record is fixed: when a new chat record arrives, the old record is pushed upward. If an emoticon appears at the bottom of a screenshot, i.e. it has just been received, it will keep appearing in subsequent screenshots until enough messages have arrived to push it out of the chat record display area.
Because of this, the module cannot simply store and output every detected emoticon. To solve the problem of repeated positioning, the module is designed as follows: among the identified emoticons, only the position of the emoticon whose x and y coordinates are the largest is output.
Therefore, the function of the Hough detection module is to perform a Hough transform on the Gaussian-smoothed image and detect the positions of the circular emoticons; among the identified emoticons, only the position of the one with the largest x and y coordinates is output, where the x-axis and y-axis take the upper left corner of the dialog bar as the origin. To reduce the amount of computation, the Hough transform of the Hough detection module preferably starts from the lower right corner of the dialog bar and proceeds from right to left and from bottom to top; when the first circle is detected, its centre position is output and the detection stops.
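A sketch of the Hough detection step follows. OpenCV's HoughCircles returns every circle it finds rather than scanning right-to-left and stopping early, so this sketch implements the selection rule instead: keep only the circle whose centre has the largest x and y. The detection parameters are assumptions; only the roughly 25-pixel radius comes from the text.

```python
import cv2
import numpy as np

def locate_latest_emoticon(smoothed_gray):
    """Return (x, y, r) of the bottom-right-most circle (the newest emoticon), or None."""
    circles = cv2.HoughCircles(smoothed_gray, cv2.HOUGH_GRADIENT, dp=1, minDist=30,
                               param1=100, param2=30, minRadius=20, maxRadius=30)
    if circles is None:
        return None
    centres = np.round(circles[0]).astype(int)                     # rows of (x, y, r)
    x, y, r = max(centres, key=lambda c: (int(c[1]), int(c[0])))   # bottom-most, then right-most
    return x, y, r
```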
The emoticon positioning module yields the position of the emoticon, so the emoticon can be separated out on its own. However, at this point the system only knows that an emoticon has appeared in the chat window; it does not yet know the meaning the emoticon represents, so it cannot make the corresponding response. Identifying the meaning represented by the separated emoticon, so that different emoticons can be responded to correctly, is the function of the emoticon recognition module.
● emoticon recognition module
The main idea of the emoticon recognition module is to use the emoticon image acquired by the positioning module as a template and the image containing all default emoticons as a global image, then perform template matching, and finally find the position of the emoticon within the global image. Since the global image contains all the emoticons and each has a fixed position, the meaning the emoticon represents can be determined from that position. The basic flow of the module is shown in fig. 10.
Therefore, the expression symbol recognition module comprises an extraction module, a matching module and a recognition module; wherein,
the extraction module is used for obtaining an emoticon image from the image of the dialog bar obtained by the cutting module according to the position output by the emoticon positioning module and sending the emoticon image to the matching module;
the matching module is used for taking the received emoticon image as a template, taking a pre-stored image containing all default emoticons as a global image, then carrying out template matching and finally finding out the position of the emoticon image in the global image;
and the recognition module is used for judging which expression meaning corresponds to the position of the expression symbol image found by the matching module according to the position range and the expression meaning of each default expression symbol in the known global image and sending the expression meaning to the response module.
Still taking QQ as an example, fig. 11 is a screenshot of all default emoticons of QQ.
It can be seen from the figure that the emoticons are arranged in 8 rows of 14 columns, 105 emoticons in total; each emoticon occupies a fixed position with a fixed bounding rectangle, and the area occupied by each rectangle is measured to be 33 x 33 pixels. The image in fig. 11 can thus be divided into 112 small rectangular areas, each corresponding to one emoticon position, and to identify an emoticon it is only necessary to determine into which area the coordinates returned by template matching fall, for example:
the smile emoticon area is 0 < x < 33, and 0 < y < 33.
The area of the left-hand emoticons is 33 < x < 66, and 0 < y < 33.
The regions of the surprising emoticons are 0 < x < 33, 33 < y < 66.
The difficult emoticons are 33 < x < 66, 33 < y < 66.
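The matching and recognition steps can be sketched as follows; the 33-pixel cell size and 14 columns come from the text, while the similarity measure (normalised cross-correlation) is an assumption. Both images are expected in the same colour format.

```python
import cv2

CELL = 33     # each default emoticon occupies a 33 x 33 pixel rectangle
COLS = 14     # 14 emoticons per row in the global image (fig. 11)

def recognize(emoticon_img, global_img):
    """Template-match the extracted emoticon against the global image and return its grid index."""
    result = cv2.matchTemplate(global_img, emoticon_img, cv2.TM_CCOEFF_NORMED)
    _, _, _, top_left = cv2.minMaxLoc(result)      # (x, y) of the best match
    x, y = top_left
    col, row = x // CELL, y // CELL
    return row * COLS + col                        # zero-based index; the text numbers emoticons 1-105
```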
As shown in fig. 12, to make the recognition result easy to observe, the emoticon image to be recognized is displayed together with the image of all default emoticons and the recognition output, and the template-matching result is marked with a line. It can be seen that template matching succeeds with this method and the meaning of the emoticon can be recognized from its position.
● response module
The ideal response of the response module is accurate and somewhat entertaining: the module should accurately make the response corresponding to the emoticon, so that the user can easily understand its meaning, and it should also be interesting enough to improve the user experience.
The response module originally planned at the early stage of this work was a specially made peripheral with the following characteristics:
1) the face can be simulated to make various expressions such as joy, anger, sadness and the like.
2) May be connected to the user's host in some manner to communicate with the user's host.
3) The peripheral can be programmed to make different expressions.
The transmission mode can be selected according to actual needs. The content of the transmitted data is designed as follows: the 105 emoticons are numbered in the recognition module and the recognition result is the number of the matched emoticon, so the states of the peripheral can be numbered in the same way, consistent with the emoticon numbers. Only the number then needs to be sent to the response module for the corresponding expression response to be made; the transmitted data can therefore be an integer between 1 and 105, and when the response module receives the number it switches to the corresponding state, which is exactly the state corresponding to that emoticon.
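A sketch of this link is shown below; a serial connection via pyserial and the port name are assumptions, since the text leaves the transmission mode open.

```python
import serial   # pyserial (assumed transport)

def send_expression(number: int, port: str = "COM3"):
    """Send the recognised emoticon number (1-105) to the peripheral as a single byte."""
    assert 1 <= number <= 105
    with serial.Serial(port, 9600, timeout=1) as link:
        link.write(bytes([number]))
```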
In summary, the above description is only a preferred embodiment of the present invention, and is not intended to limit the scope of the present invention. Any modification, equivalent replacement, or improvement made within the spirit and principle of the present invention should be included in the protection scope of the present invention.

Claims (5)

1. An expression robot based on an instant messaging tool is characterized by comprising a chat window monitoring module, an expression symbol positioning module, an expression symbol recognition module and a response module;
the chat window monitoring module is used for monitoring the chat window after determining that the current focus window is a chat window of the instant messaging tool, capturing an image of the chat window at fixed intervals or whenever a new chat message is displayed, and saving the image as a picture;
the emoticon positioning module is used for analyzing the picture intercepted by the chat window monitoring module, searching the emoticon therein, and sending the position of the emoticon to the emoticon identification module after finding the emoticon;
the expression symbol recognition module is used for comparing the expression symbol at the position with the expression symbols in the known expression symbol library after receiving the position of the expression symbol sent by the expression symbol positioning module, so as to determine the meaning represented by the expression symbol, and then sending the result to the response module;
and the response module is used for responding through a set expression method after receiving the result sent by the expression symbol recognition module.
2. The expression robot of claim 1, wherein the response module responds by sound, image and/or motion.
3. The expression robot of claim 1, wherein the emoticon positioning module comprises a cutting module, a graying module, a smoothing module, and a Hough detection module:
the cutting module is used for cutting the image of the dialog bar from the image of the chat window according to the position of the dialog bar in the chat window and sending the cut image to the graying module;
the graying module is used for graying the received image to obtain a grayscale image and sending the grayscale image to the smoothing module;
the smoothing module is used for carrying out Gaussian smoothing processing on the received gray level image;
the Hough detection module is used for performing a Hough transform on the Gaussian-smoothed image and detecting the positions of the circular expression symbols; among the identified expression symbols, only the position of the one with the largest x and y coordinates is output; wherein the x-axis and the y-axis take the upper left corner of the dialog bar as the origin.
4. The expression robot of claim 3, wherein the Hough transform of the Hough detection module starts from the lower right corner of the dialog bar, proceeds from right to left and from bottom to top, and when the first circle is detected the position of its centre is output and the detection stops.
5. The expression robot of claim 3 or 4, wherein the emoticon recognition module comprises an extraction module, a matching module, and a recognition module;
the extraction module is used for obtaining an emoticon image from the image of the dialog bar obtained by the cutting module according to the position output by the emoticon positioning module and sending the emoticon image to the matching module;
the matching module is used for taking the received emoticon image as a template, taking a pre-stored image containing all default emoticons as a global image, then carrying out template matching and finally finding out the position of the emoticon image in the global image;
the recognition module is used for judging which expression meaning corresponds to the position of the expression symbol image found by the matching module according to the position range and the expression meaning of each default expression symbol in the known global image, and sending the expression meaning to the response module.
CN201210224496.1A 2012-06-28 2012-06-28 Expression identification device applied to instant messaging tool Expired - Fee Related CN102750555B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201210224496.1A CN102750555B (en) 2012-06-28 2012-06-28 Expression identification device applied to instant messaging tool

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201210224496.1A CN102750555B (en) 2012-06-28 2012-06-28 Expression identification device applied to instant messaging tool

Publications (2)

Publication Number Publication Date
CN102750555A (en) 2012-10-24
CN102750555B CN102750555B (en) 2015-04-22

Family

ID=47030719

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201210224496.1A Expired - Fee Related CN102750555B (en) 2012-06-28 2012-06-28 Expression identification device applied to instant messaging tool

Country Status (1)

Country Link
CN (1) CN102750555B (en)


Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101494618A (en) * 2008-11-28 2009-07-29 腾讯科技(深圳)有限公司 Display system and method for instant communication terminal window
CN102289339A (en) * 2010-06-21 2011-12-21 腾讯科技(深圳)有限公司 Method and device for displaying expression information

Patent Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101494618A (en) * 2008-11-28 2009-07-29 腾讯科技(深圳)有限公司 Display system and method for instant communication terminal window
CN102289339A (en) * 2010-06-21 2011-12-21 腾讯科技(深圳)有限公司 Method and device for displaying expression information

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
崔忠艾: "聊天工具仿真表情插件的设计与实现", 《中国优秀硕士学位论文全文数据库 信息科技辑》, 30 September 2010 (2010-09-30), pages 9 - 54 *

Cited By (18)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103914005A (en) * 2012-12-31 2014-07-09 北京新媒传信科技有限公司 Robot control method and terminal
CN104680166A (en) * 2013-11-27 2015-06-03 施耐德电器工业公司 Information identification method and information identification device
CN105204748A (en) * 2014-06-27 2015-12-30 阿里巴巴集团控股有限公司 Terminal interaction method and device
CN105204748B (en) * 2014-06-27 2019-09-17 阿里巴巴集团控股有限公司 Terminal interaction method and its device
CN104699662A (en) * 2015-03-18 2015-06-10 北京交通大学 Method and device for recognizing whole symbol string
CN104699662B (en) * 2015-03-18 2017-12-22 北京交通大学 The method and apparatus for identifying overall symbol string
CN106445478A (en) * 2015-08-12 2017-02-22 腾讯科技(深圳)有限公司 Graphic expression conversion method and apparatus
CN106228156B (en) * 2016-07-18 2019-09-20 百度在线网络技术(北京)有限公司 A kind of method and apparatus of determining information alert content
CN106228156A (en) * 2016-07-18 2016-12-14 百度在线网络技术(北京)有限公司 A kind of method and apparatus determining information alert content
CN106530096A (en) * 2016-10-08 2017-03-22 广州阿里巴巴文学信息技术有限公司 Emotion icon processing method, device and electronic apparatus
CN108268583A (en) * 2017-08-21 2018-07-10 广州市动景计算机科技有限公司 The method and apparatus of emoticon meaning displaying
CN108268583B (en) * 2017-08-21 2022-06-14 阿里巴巴(中国)有限公司 Method and equipment for displaying emoticon meanings
CN110689009A (en) * 2019-09-18 2020-01-14 北京三快在线科技有限公司 Information identification method and device, electronic equipment and computer readable storage medium
CN110689009B (en) * 2019-09-18 2021-09-07 北京三快在线科技有限公司 Information identification method and device, electronic equipment and computer readable storage medium
CN112784293A (en) * 2019-11-08 2021-05-11 游戏橘子数位科技股份有限公司 Recording notification method for picture capture
CN112784293B (en) * 2019-11-08 2024-06-04 游戏橘子数位科技股份有限公司 Method for recording notice of picture acquisition
CN111597966A (en) * 2020-05-13 2020-08-28 北京达佳互联信息技术有限公司 Expression image recognition method, device and system
CN111597966B (en) * 2020-05-13 2023-10-10 北京达佳互联信息技术有限公司 Expression image recognition method, device and system

Also Published As

Publication number Publication date
CN102750555B (en) 2015-04-22

Similar Documents

Publication Publication Date Title
CN102750555B (en) Expression identification device applied to instant messaging tool
CN110275834B (en) User interface automatic test system and method
CN103543277B (en) A kind of blood group result recognizer based on gray analysis and category identification
CN108563559A (en) A kind of test method of identifying code, device, terminal device and storage medium
CN105718783B (en) Verification code interaction method and device, client and server
CN110490141B (en) Method, device, terminal and storage medium for identifying filling information
CN111368744A (en) Method and device for identifying unstructured table in picture
WO2021159802A1 (en) Graphical captcha recognition method, apparatus, computer device, and storage medium
US20120237131A1 (en) Information processing apparatus to acquire character information
CN113704111A (en) Page automatic testing method, device, equipment and storage medium
CN104794485A (en) Method and device for recognizing written words
CN113569677B (en) Paper test report generation method based on scanning piece
CN109857499B (en) Universal method for acquiring cash register software screen amount based on windows system
JP6628336B2 (en) Information processing system
CN112835807B (en) Interface identification method and device, electronic equipment and storage medium
CN110598575B (en) Form layout analysis and extraction method and related device
CN110766001A (en) Bank card number positioning and end-to-end identification method based on CNN and RNN
CN111125672A (en) Method and device for generating image verification code
JP6883199B2 (en) Image processor, image reader, and program
CN111738250B (en) Text detection method and device, electronic equipment and computer storage medium
CN111580902B (en) Mobile terminal element positioning method and system based on picture analysis
CN110502990B (en) Method and system for data acquisition by image processing
CN108537225A (en) A method of for hollow character in automatic identification identifying code
CN112861843A (en) Method and device for analyzing selection frame based on feature image recognition
KR102064974B1 (en) Method for recogniting character based on blob and apparatus using the same

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
C14 Grant of patent or utility model
GR01 Patent grant
CF01 Termination of patent right due to non-payment of annual fee

Granted publication date: 20150422

Termination date: 20170628

CF01 Termination of patent right due to non-payment of annual fee