CN112199015A - Intelligent interaction all-in-one machine and writing method and device thereof - Google Patents


Info

Publication number
CN112199015A
CN112199015A (application CN202010965080.XA)
Authority
CN
China
Prior art keywords
writing
coordinate system
dimensional world
target
world coordinate
Prior art date
Legal status
Granted
Application number
CN202010965080.XA
Other languages
Chinese (zh)
Other versions
CN112199015B (en
Inventor
冯森 (Feng Sen)
Current Assignee
Anhui Hongcheng Opto Electronics Co Ltd
Original Assignee
Anhui Hongcheng Opto Electronics Co Ltd
Priority date
Filing date
Publication date
Application filed by Anhui Hongcheng Opto Electronics Co Ltd
Priority to CN202010965080.XA
Publication of CN112199015A
Application granted
Publication of CN112199015B
Legal status: Active

Classifications

    • G06F3/04845 — Interaction techniques based on graphical user interfaces [GUI] for the control of specific functions or operations, for image manipulation, e.g. dragging, rotation, expansion or change of colour
    • G06F3/017 — Gesture based interaction, e.g. based on a set of recognized hand gestures
    • G06F3/04883 — Interaction techniques based on graphical user interfaces [GUI] using a touch-screen or digitiser for inputting data by handwriting, e.g. gesture or text

Abstract

The invention discloses an intelligent interactive all-in-one machine and a writing method and device thereof. The method comprises: acquiring multiple frames of images collected by a front-facing camera, the frames containing a target writing point; inputting the frames into a coordinate conversion model that converts the coordinate system of each frame into a two-dimensional world coordinate system, where the two-dimensional world coordinate system is parallel to the display screen and the coordinates of a preset area in it map one-to-one onto the pixels of the display screen; identifying the target writing point in the two-dimensional world coordinate system and determining its movement track there, the movement track representing the writing track of the target writing point; and displaying the movement track on the display screen to present the writing track. Large-screen writing is thus realized without adding auxiliary equipment, hardware cost is reduced, and the user's writing experience is improved.

Description

Intelligent interaction all-in-one machine and writing method and device thereof
Technical Field
The invention relates to the technical field of data processing of intelligent interactive all-in-one machines, in particular to an intelligent interactive all-in-one machine and a writing method and device thereof.
Background
With continuous technological progress, intelligent interactive all-in-one machines have become increasingly mature. Because they are equipped with a large-size display screen (hereinafter "large screen") convenient for presentation, they are increasingly applied in scenarios such as classroom teaching and meetings.
When a large screen is used, a presenter who needs to explain or annotate some content on it often has to walk up to the screen and touch it by hand to write, which is inconvenient during a conference. Most current writing schemes are contact-based, that is, writing requires touching the screen. Mid-air writing schemes do exist, but they require dedicated equipment or media, which is likewise inconvenient in a conference.
Therefore, how to realize large-screen writing without adding extra auxiliary equipment has become an urgent technical problem to be solved.
Disclosure of Invention
Based on the current situation, the invention mainly aims to provide an intelligent interactive all-in-one machine and a writing method and device thereof, so as to realize large-screen writing on the premise of not adding additional auxiliary equipment.
To achieve the above object, the technical solution adopted by the invention is as follows:
in a first aspect, the embodiment of the invention discloses a writing method of an intelligent interactive all-in-one machine, wherein the intelligent interactive all-in-one machine comprises a front camera and a display screen, and the writing method comprises the following steps: acquiring a plurality of frames of images collected by a front camera, wherein the plurality of frames of images comprise target writing points; inputting a plurality of frames of images into a coordinate conversion model, converting a coordinate system of each frame of image into a two-dimensional world coordinate system, wherein the two-dimensional world coordinate system is a coordinate system parallel to a display screen, and coordinates of a preset area in the two-dimensional world coordinate system are mapped with pixel points of the display screen one by one; recognizing a target writing point in a two-dimensional world coordinate system, and determining a moving track of the target writing point in the two-dimensional world coordinate system, wherein the moving track is used for expressing a writing track of the target writing point; and displaying the moving track in the display screen to present the writing track.
Optionally, between determining the movement track of the target writing point in the two-dimensional world coordinate system and displaying the movement track on the display screen, the method further includes: smoothing the moving track in the multi-frame image to obtain a smooth moving track; displaying the movement track in the display screen includes: and displaying the smooth moving track in the display screen.
Optionally, between converting the coordinate system of each frame image into a two-dimensional world coordinate system and identifying the target writing point in the two-dimensional world coordinate system, the method further includes: judging whether a trigger event representing the recognition writing point is acquired; and if the trigger event is acquired, identifying a target writing point in the two-dimensional world coordinate system in response to the trigger event, and determining the moving track of the target writing point in the two-dimensional world coordinate system.
Optionally, the trigger event is a preset gesture of a hand of the user; the target writing point is a characteristic point of a designated finger in a preset gesture; judging whether a trigger event representing the recognition writing point is acquired comprises the following steps: recognizing a hand gesture of a user; judging whether the hand gesture is a preset gesture or not; if the hand gesture is a preset gesture, determining to acquire a trigger event representing the recognition writing point; identifying a target writing point in a two-dimensional world coordinate system includes: and identifying the characteristic points of the designated fingers in the preset gesture in the two-dimensional world coordinate system to obtain the target writing points.
Optionally, the trigger event is a specific gesture presented by the first hand and the second hand of the user; the target writing point is a characteristic point of a second hand designated finger in the specific gesture; the first hand and the second hand are different hands; judging whether a trigger event representing the recognition writing point is acquired comprises the following steps: judging whether specific gestures presented by the first hand and the second hand are recognized or not; if a particular gesture presented by the first and second hands is recognized, it is determined that a trigger event representing a recognition writing point is acquired. Identifying a target writing point in a two-dimensional world coordinate system includes: and in the two-dimensional world coordinate system, recognizing the characteristic point of the second hand-designated finger in the specific gesture to obtain the target writing point.
Optionally, identifying the target writing point in the two-dimensional world coordinate system comprises: identifying a target object in two-dimensional world coordinates; extracting the edge of the target object in the preselection frame to obtain the contour of the target object; and extracting preset characteristic points in the contour of the target object to obtain target writing points.
Optionally, between identifying the target object in the two-dimensional world coordinates and extracting an edge of the target object within the preselected frame, further comprising: marking a foreground portion of the target object; extracting the edge of the target object, and obtaining the target object contour comprises the following steps: and extracting the foreground part to obtain the contour of the target object.
In a second aspect, the embodiment of the invention discloses a writing device of an intelligent interactive all-in-one machine, the intelligent interactive all-in-one machine comprises a front camera and a display screen, and the writing device comprises: the image acquisition module is used for acquiring multi-frame images collected by the front camera, and the multi-frame images comprise target writing points; the coordinate conversion module is used for inputting a plurality of frames of images into the coordinate conversion model, converting the coordinate system of each frame of image into a two-dimensional world coordinate system, wherein the two-dimensional world coordinate system is parallel to the coordinate system of the display screen, and the coordinates of the preset area in the two-dimensional world coordinate system are mapped with the pixel points of the display screen one by one; the track determining module is used for identifying a target writing point in the two-dimensional world coordinate system and determining a moving track of the target writing point in the two-dimensional world coordinate system, wherein the moving track is used for expressing a writing track of the target writing point; and the track presenting module is used for displaying the moving track in the display screen so as to present the writing track.
In a third aspect, an embodiment of the present invention discloses a computer storage medium, on which a computer program is stored, the computer program being configured to be executed to implement the method disclosed in the first aspect.
In a fourth aspect, an embodiment of the present invention discloses an intelligent interactive all-in-one machine, including: a front camera; a display screen; and a processor for executing a program to implement the method disclosed in the first aspect.
[ ADVANTAGEOUS EFFECTS ]
According to the intelligent interactive all-in-one machine and the writing method and device thereof disclosed in this embodiment, the front-facing camera of the all-in-one machine acquires multiple frames of images containing the target writing point; the coordinate system of each frame is then converted into a two-dimensional world coordinate system, the target writing point is identified in that coordinate system, and its movement track, which represents the writing track, is determined. Because the two-dimensional world coordinate system is parallel to the display screen's coordinate system and the coordinates of the preset area map one-to-one onto the display screen's pixels, the writing track is presented when the movement track is displayed on the screen. Unlike the prior art, in which an additional sensor must be configured, in the present application the writing track can be acquired through the all-in-one machine's existing front-facing camera; that is, large-screen writing is realized without adding auxiliary equipment. On one hand, this reduces hardware cost; on the other hand, since no auxiliary equipment is needed, the user can write through hand movements alone, which facilitates the writing operation and improves the user's writing experience.
As an optional scheme, the moving trajectory in the multi-frame image is smoothed to obtain a smooth moving trajectory, and the smooth moving trajectory is displayed on the display screen. Therefore, the writing track displayed in the display screen is smooth, and the readability of the writing track is improved.
As an optional scheme, it is judged whether a trigger event representing recognition of a writing point is acquired; only if the trigger event is acquired is the target writing point identified in the two-dimensional world coordinate system in response to it. This avoids the track confusion that would result from acquiring the hand's movement track at all times: because the target writing point is identified only in response to the trigger event, the identification operation is controllable and interference from invalid hand actions is reduced.
As an optional scheme, the trigger event is a preset gesture of the hand of the user, and the target writing point is a feature point of the designated finger in the preset gesture, so that the trigger event is not required to be provided through additional auxiliary equipment, the hardware cost is reduced, in addition, the feature point of the designated finger in the preset gesture can be identified only by making the preset gesture by the hand of the user, and the writing operation of the single hand of the user is facilitated.
As an optional scheme, if the specific gestures presented by the first hand and the second hand are recognized, it is determined that the trigger event representing the recognition writing point is obtained, and the target writing point is the feature point of the second hand, that is, the second hand of the user can be kept in a moving state, and when recognition is needed, the feature point of the second hand can be recognized by stretching out the first hand, so that frequent switching of the gestures of the second hand of the user is avoided, the writing operation mode of the user is simplified, and the user experience is improved.
Other advantages of the present invention will be described in the detailed description, through the introduction of specific technical features and technical solutions, which will be understood by those skilled in the art.
Drawings
Embodiments according to the present invention will be described below with reference to the accompanying drawings. In the figure:
fig. 1 is a schematic structural diagram of an intelligent interactive all-in-one machine disclosed in this embodiment;
FIG. 2 is a flow chart of a writing method of the intelligent interactive all-in-one machine disclosed in the present embodiment;
FIG. 3 is a schematic diagram of a target writing point according to the present disclosure;
fig. 4 is a schematic diagram of a front camera for collecting a target writing point according to the embodiment;
fig. 5 is a schematic structural diagram of a writing device of the intelligent interactive all-in-one machine disclosed in this embodiment.
Detailed Description
To implement large-screen writing without adding auxiliary devices, this embodiment discloses a writing method for an intelligent interactive all-in-one machine. Please refer to fig. 1, a schematic structural diagram of the intelligent interactive all-in-one machine disclosed in this embodiment. The machine includes a front camera 1 and a display screen 2, where the display screen 2 displays the machine's image and text data and, generally speaking, the front camera 1 is the machine's built-in camera, used to collect video image data of the scene.
Referring to fig. 2, a flow chart of a writing method of the intelligent interactive all-in-one machine disclosed in this embodiment is shown, where the writing method includes:
and S100, acquiring a plurality of frames of images collected by the front camera, wherein the plurality of frames of images comprise target writing points. In this embodiment, the front-facing camera is a monocular camera, and the target writing point is a preset feature point of the hand of the user.
Referring to fig. 1, in this embodiment, multi-frame image acquisition is performed by a front-facing camera 1 of the intelligent interactive all-in-one machine, and of course, for an intelligent interactive all-in-one machine not equipped with a front-facing camera, the front-facing camera 1 may be replaced by an external camera.
Please refer to fig. 3, which is a schematic diagram of a target writing point disclosed in this embodiment, the target writing point is a preset feature point 3 of a hand of a user, for example, a right hand of the user, a finger tip of a finger of the right hand may be used as the preset feature point 3, and the preset feature point 3 is a writing point. In a specific implementation process, the front-facing camera 1 is required to acquire and capture the preset feature points 3, and obtain the moving track of the preset feature points 3.
Referring to fig. 4, a schematic diagram of the front camera acquiring the target writing point disclosed in this embodiment: the front camera 1 can acquire a target writing point located within a preset distance interval in front of it. In a specific implementation, the preset distance interval is related to the resolution of the front camera 1: the higher the resolution, the farther the acquisition distance can be; conversely, the lower the resolution, the shorter the acquisition distance.
In this embodiment, the term "multi-frame image" refers to multiple frames that are consecutive in time sequence; of course, a certain time interval may exist between frames, and the intervals may be equal or unequal. Specifically, they may be determined by the sampling rate.
Step S200, inputting the multiple frames of images into a coordinate conversion model, and converting the coordinate system of each frame into a two-dimensional world coordinate system. In this embodiment, the two-dimensional world coordinate system XOY is parallel to the display screen, and the coordinates of the preset area in XOY map one-to-one onto the pixels of the display screen. In a specific embodiment, the front camera 1 captures the target writing point in its image (pixel) coordinate system O-uv; the coordinate conversion model converts O-uv coordinates into coordinates in the two-dimensional world coordinate system XOY, ignoring the Z-axis coordinate, that is, ignoring the distance from the area where the target writing point is located to the front camera 1. Specifically, the internal and external parameters of the front camera 1 may be obtained by calibrating it (for example, with Zhang Zhengyou's calibration method), and the image coordinates of the target writing point converted into two-dimensional world coordinates. In this embodiment, since the coordinates in XOY correspond one-to-one to the display screen's pixels, the converted position of the target writing point can be represented on the screen: for example, if the coordinates of the target writing point are (x0, y0), its position on the display screen is also (x0, y0).
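The image-to-world-plane conversion described above can be sketched as a planar homography: once calibration has produced a 3×3 matrix H relating the camera's pixel plane to the screen-parallel world plane, each (u, v) point maps through H with a perspective divide. The function name and the example matrices below are illustrative assumptions, not part of the patent; the patent only specifies that calibration yields the conversion.

```python
import numpy as np

def to_world_2d(points_uv, H):
    """Map image coordinates (u, v) to the screen-parallel
    two-dimensional world plane via a 3x3 homography H
    (obtained offline, e.g. from Zhang-style calibration)."""
    pts = np.asarray(points_uv, dtype=float)
    ones = np.ones((pts.shape[0], 1))
    homog = np.hstack([pts, ones])          # (u, v, 1) homogeneous coords
    mapped = homog @ H.T                    # apply the homography
    return mapped[:, :2] / mapped[:, 2:3]   # perspective divide by w

# Toy homography: the world plane is a 2x-scaled image plane
H = np.diag([2.0, 2.0, 1.0])
print(to_world_2d([[10, 20]], H))  # -> [[20. 40.]]
```

In practice H would absorb the camera's intrinsic and extrinsic parameters for the chosen plane; the Z coordinate (distance to the camera) is ignored exactly as the embodiment describes.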
Step S300, identifying the target writing point in the two-dimensional world coordinate system, and determining its movement track there. In this embodiment, the movement track indicates the writing track of the target writing point; specifically, the movement track of, for example, the tip of the index finger may be used as the writing track. In a specific embodiment, the target writing point can be recognized by a neural network model. In particular, to recognize the human hand accurately, a deep learning network model (e.g., YOLOv3) can be trained on a large number of pictures of human hands. Taking the YOLOv3 network model as an example, a YOLOv3 recognition program can be constructed in Python, and the weight file obtained from sample training imported into the model, so that hand recognition can be performed on the images acquired by the front camera 1.
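After the detector runs, its raw output still has to be filtered down to the one hand box the writing method uses. The sketch below shows only that post-processing step, under the assumption (not stated in the patent) that detections arrive as (class_id, confidence, x, y, w, h) tuples, which is the usual shape of YOLO-style output.

```python
def best_hand_box(detections, conf_threshold=0.5, hand_class=0):
    """From raw detector output (class_id, confidence, x, y, w, h),
    keep 'hand' detections above the confidence threshold and return
    the most confident box, or None if no hand was found. This stands
    in for the post-processing of a YOLOv3 model trained on hand images."""
    hands = [d for d in detections
             if d[0] == hand_class and d[1] >= conf_threshold]
    if not hands:
        return None
    return max(hands, key=lambda d: d[1])

dets = [(0, 0.92, 120, 80, 60, 60),   # confident hand detection
        (0, 0.30, 10, 10, 20, 20),    # low-confidence hand, discarded
        (1, 0.99, 0, 0, 5, 5)]        # some other class, discarded
print(best_hand_box(dets))  # -> (0, 0.92, 120, 80, 60, 60)
```

The chosen box becomes the preselection frame used in the contour-extraction steps that follow.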
In an alternative embodiment, identifying the target writing point in the two-dimensional world coordinate system comprises: identifying a target object in two-dimensional world coordinates; extracting the edge of the target object in the preselection frame to obtain the contour of the target object; and extracting preset characteristic points in the contour of the target object to obtain target writing points. Specifically, the target writing point may be obtained through a neural network, for example, the target recognition may be performed through a pre-selection box, and specifically, the recognition of the target writing point in the two-dimensional world coordinate system includes: identifying a target object in two-dimensional world coordinates through a preselected frame of a neural network model; extracting the edge of the target object in the preselection frame by using an edge extraction function to obtain the contour of the target object; and extracting preset characteristic points in the contour of the target object to obtain target writing points. In particular, the pre-selection box may be a rectangular area. In the specific implementation process, before the edge of the target object is extracted, a series of preprocessing including graying, filtering, opening operation, closing operation, binarization and the like can be performed on the image in the preselected frame. In this embodiment, the preset feature point may be, for example, a fingertip of a finger, and the preset feature point is a target writing point.
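The "extract the preset feature point from the contour" step can be illustrated with a deliberately minimal stand-in: given a binary hand mask (the segmented foreground), take the topmost foreground pixel as the fingertip, assuming the finger points upward in the image. This heuristic and the function name are illustrative assumptions; the patent leaves the exact feature-point extractor open.

```python
def fingertip(mask):
    """Return (row, col) of the topmost foreground pixel of a binary
    hand mask -- a simple stand-in for extracting the preset feature
    point (e.g. the index fingertip) from the target-object contour,
    assuming the finger points upward in the image."""
    for r, row in enumerate(mask):
        for c, val in enumerate(row):
            if val:
                return (r, c)
    return None

mask = [[0, 0, 0, 0],
        [0, 0, 1, 0],   # fingertip: topmost foreground pixel
        [0, 1, 1, 0],
        [1, 1, 1, 1]]   # palm
print(fingertip(mask))  # -> (1, 2)
```

A production implementation would instead analyse the extracted contour (e.g. curvature extrema) so that the hand's orientation does not matter.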
To extract the contour of the human hand accurately, the region may be segmented (for example, with the GrabCut segmentation algorithm): between identifying the target object in the two-dimensional world coordinates and extracting its edge within the preselection frame, the foreground portion of the target object is marked. Extracting the edge of the target object to obtain its contour then comprises: extracting the foreground portion to obtain the contour of the target object. Specifically, the foreground portion of the target object within the preselection frame is marked by the segmentation algorithm, and the edge extraction function is applied to the foreground portion to obtain the contour. In this embodiment, the part within the preselection frame is marked as the foreground, and most interfering elements can be removed after segmentation, as shown in fig. 3.
And step S400, displaying the moving track in the display screen to present the writing track. In a specific embodiment, since the pixel points of the display screen correspond to the coordinates of the two-dimensional world coordinate system one to one, after the coordinates of the target writing point in the two-dimensional world coordinate system are obtained according to the time sequence, the target writing point can be presented on the pixel points corresponding to the display screen, and therefore, the moving track, namely the writing track, of the target writing point is presented on the display screen according to the time sequence.
Referring to fig. 4, for example: the user writes the character "人" ("person") in the air, taking a fingertip as the target writing point; the movement track of the target writing point can be acquired through the front camera 1 and then displayed on the display screen, thereby presenting the writing track "人".
In order to smooth the writing track displayed on the display screen and improve the readability of the writing track, in an optional embodiment, between determining the moving track of the target writing point in the two-dimensional world coordinate system and displaying the moving track on the display screen, the method further comprises the following steps: smoothing the moving track in the multi-frame image to obtain a smooth moving track; displaying the movement track in the display screen includes: and displaying the smooth moving track in the display screen. Specifically, after target writing points are acquired by the front-facing camera 1 according to a time sequence, the target writing points can be displayed at corresponding positions of the display screen, and then all the obtained target writing points are connected on the display screen by a smooth curve, so that the writing track of the human hand can be obtained.
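The smoothing step can be realized in many ways; a simple moving average over the time-ordered track points is one possibility (the patent does not fix a particular filter, so the window size and function below are assumptions).

```python
def smooth_track(points, window=3):
    """Smooth a time-ordered track of (x, y) writing points with a
    simple moving average. Each output point averages `window`
    consecutive input points, damping jitter in the fingertip track."""
    if len(points) < window:
        return list(points)
    out = []
    for i in range(len(points) - window + 1):
        chunk = points[i:i + window]
        out.append((sum(p[0] for p in chunk) / window,
                    sum(p[1] for p in chunk) / window))
    return out

track = [(0, 0), (1, 3), (2, 0), (3, 3), (4, 0)]   # jittery input
print(smooth_track(track))  # -> [(1.0, 1.0), (2.0, 2.0), (3.0, 1.0)]
```

Drawing the smoothed points connected by a curve, as the embodiment describes, then yields a readable stroke instead of a jagged one.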
In order to make the operation of identifying the target writing point controllable and reduce the interference of the ineffective action of the hand, referring to fig. 2, in an alternative embodiment, between converting the coordinate system of each frame image into the two-dimensional world coordinate system and identifying the target writing point in the two-dimensional world coordinate system, the method further includes:
in step S210, it is determined whether a trigger event indicating that a writing point is recognized is acquired, specifically, see the following description. If the trigger event is acquired, step S300 is executed, specifically, the target writing point is identified in the two-dimensional world coordinate system in response to the trigger event, and the movement track of the target writing point in the two-dimensional world coordinate system is determined.
In this embodiment, it is determined whether a trigger event indicating recognition of a writing point is acquired, and if the trigger event is acquired, a target writing point is recognized in a two-dimensional world coordinate system in response to the trigger event, so that track confusion caused by acquiring a movement track of a hand at any time can be avoided, that is, the target writing point is recognized in response to the trigger event, so that the recognition operation of the target writing point is controllable, and interference of invalid actions of the hand is reduced.
For step S210, the target writing point is a preset feature point of the hand of the user, and in one embodiment, the trigger event is a preset gesture of the hand, specifically, for example, the user makes a fist with the right hand and stretches out the index finger. The target writing points are: the preset gesture specifies a characteristic point of a finger, such as a tip of an index finger.
Judging whether a trigger event representing recognition of the writing point has been acquired includes: recognizing the user's hand gesture; judging whether the hand gesture is the preset gesture; and, if it is, determining that the trigger event has been acquired. Recognizing the target writing point in the two-dimensional world coordinate system then consists of identifying, in that coordinate system, the feature point of the finger designated by the preset gesture to obtain the target writing point. Taking the preset gesture "right fist with index finger extended" as an example: when a right fist with an extended index finger is recognized, the user has started writing, so the trigger event representing recognition of the writing point is deemed acquired; when the user's right hand is not in that pose, the user is not in the writing state and the target writing point need not be recognized. Writing the two-stroke character "人" ("man") illustrates the flow: (1) the user makes the "right fist, index finger extended" gesture and moves the index fingertip to trace the first, left-falling stroke (丿), which the display screen renders as a moving trajectory; (2) the user releases the fist, so the hand is no longer in the preset gesture and the user is not in the writing state, and repositions the fingertip to the start of the right-falling stroke (㇏); the display screen shows no trajectory during this move; (3) the user makes the gesture again and moves the fingertip to trace the right-falling stroke, which the display screen renders, thereby completing the writing and display of the character "人".
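The pen-down/pen-up behaviour in the example above, where trajectory points are recorded only while the trigger gesture is held, can be sketched as a small state machine. This is a minimal illustration under stated assumptions; the class and names are hypothetical, not the patent's implementation:

```python
from dataclasses import dataclass, field

@dataclass
class WritingSession:
    """Accumulates pen strokes: points are recorded only while the
    trigger gesture (e.g. 'right fist + extended index finger') is held;
    releasing the gesture acts as a pen-up that closes the stroke."""
    strokes: list = field(default_factory=list)
    _current: list = field(default_factory=list)

    def feed(self, gesture_active: bool, point):
        if gesture_active:
            self._current.append(point)          # pen down: extend stroke
        elif self._current:
            self.strokes.append(self._current)   # pen up: close the stroke
            self._current = []

    def finish(self):
        self.feed(False, None)
        return self.strokes

# Two strokes with a pen-up reposition in between (like writing "人"):
session = WritingSession()
for p in [(5, 0), (4, 1), (3, 2)]:   # first stroke, gesture held
    session.feed(True, p)
session.feed(False, None)            # fist released: reposition, no drawing
for p in [(4, 1), (5, 2), (6, 3)]:   # second stroke, gesture held again
    session.feed(True, p)
strokes = session.finish()
```

Only the points fed while the gesture was active end up on screen; the reposition between the two strokes leaves no trace, matching the behaviour described above.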
In this embodiment, the trigger event is a preset gesture of the user's hand and the target writing point is a feature point of the finger designated by that gesture. No additional auxiliary equipment is therefore needed to provide the trigger event, which reduces hardware cost; moreover, since the user only has to make the preset gesture for the designated finger's feature point to be recognized, writing can conveniently be performed with one hand.
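One plausible way to detect a "fist with index finger extended" pose is a heuristic over hand landmarks, e.g. the 21-point hand model popularized by hand-tracking libraries. The landmark indices and the distance rule below are assumptions for illustration, not the patent's method:

```python
def finger_extended(landmarks, tip, pip, wrist=0):
    """A finger counts as extended when its tip lies farther from the
    wrist than its middle (PIP) joint does."""
    def dist(a, b):
        return ((a[0] - b[0]) ** 2 + (a[1] - b[1]) ** 2) ** 0.5
    return dist(landmarks[tip], landmarks[wrist]) > dist(landmarks[pip], landmarks[wrist])

def is_writing_gesture(landmarks):
    """'Fist with index finger extended': index extended while middle,
    ring and pinky are curled (21-point hand-model indices assumed)."""
    index = finger_extended(landmarks, 8, 6)
    others = [finger_extended(landmarks, t, p)
              for t, p in [(12, 10), (16, 14), (20, 18)]]
    return index and not any(others)

# Synthetic landmarks: wrist at origin, index tip beyond its PIP joint,
# the other three fingertips curled back inside their PIP joints.
fist_index = [(0, 0)] * 21
fist_index[6], fist_index[8] = (0, 2), (0, 4)        # index extended
for pip, tip in [(10, 12), (14, 16), (18, 20)]:
    fist_index[pip], fist_index[tip] = (0, 2), (0, 1)  # curled

open_hand = list(fist_index)
for _, tip in [(10, 12), (14, 16), (18, 20)]:
    open_hand[tip] = (0, 4)                            # all fingers out
```

With real input the landmarks would come from a hand-tracking model per camera frame; the heuristic itself stays the same.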
For step S210, in another embodiment, the trigger event is a specific gesture presented jointly by the user's first and second hands, for example both hands being held out at the same time; the target writing point is a feature point of a designated finger of the second hand in that gesture, such as the tip of the right index finger. The first hand and the second hand are different hands, and the specific gesture here is analogous to the preset gesture above. For example, the first hand may be the left hand and the second hand the right hand, although this correspondence can of course be interchanged.
In this embodiment, determining whether the trigger event representing recognition of the writing point has been acquired includes: judging whether the specific gesture presented by the first hand and the second hand is recognized; if it is, the trigger event is deemed acquired. Taking the first hand as the left hand and the second as the right: when the left and right hands are recognized simultaneously, the user has started writing and the trigger event is acquired; when the left hand is not recognized, the user is not in the writing state and the right hand's target writing point need not be recognized. Writing the character "人" again illustrates the flow: (1) the user holds out both hands and moves the right index fingertip to trace the left-falling stroke (丿), which the display screen renders as a moving trajectory; (2) the user withdraws the left hand, so the specific gesture presented by the first and second hands is no longer recognized and the user is not in the writing state; the right fingertip can then be moved to the start of the right-falling stroke (㇏) without any trajectory being displayed; (3) the user extends the left hand again, i.e. both hands are presented, and moves the fingertip to trace the right-falling stroke, which the display screen renders, thereby completing the writing and display of the character "人".
In this embodiment, recognition of the specific gesture presented by the first and second hands determines that the trigger event representing recognition of the writing point has been acquired, and the target writing point is a feature point of the second hand. The second hand can therefore remain in continuous motion; whenever recognition is needed, the user simply extends the first hand. This avoids frequent pose switching of the second hand, simplifies the writing operation, and improves the user experience.
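The two-hand trigger reduces to a simple per-frame check: the second hand's fingertip is tracked continuously, and it is emitted as a writing point only while the first hand is also in view. A minimal sketch, assuming a hypothetical per-frame hand-detection result laid out as a dictionary:

```python
def detect_trigger(hands):
    """hands: per-frame detection result, e.g.
    {'left': {...}, 'right': {'index_tip': (x, y)}}.
    The trigger fires only while BOTH hands are visible; the writing
    point is always the right (second) hand's index-finger tip, so the
    second hand never has to change pose between strokes."""
    if 'left' in hands and 'right' in hands:
        return hands['right'].get('index_tip')
    return None

# Both hands in view -> the fingertip is a writing point:
tip = detect_trigger({'left': {}, 'right': {'index_tip': (3, 7)}})
# Left hand withdrawn -> no writing point, the fingertip moves freely:
no_tip = detect_trigger({'right': {'index_tip': (3, 7)}})
```

In a full system this check would gate the trajectory accumulation per frame, which is exactly the "second hand stays moving, first hand toggles recording" behaviour described above.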
This embodiment further discloses a writing device for the intelligent interactive all-in-one machine, the machine including a front-facing camera and a display screen. Referring to fig. 5, a schematic structural diagram of the writing device disclosed in this embodiment, the writing device includes an image acquisition module 100, a coordinate conversion module 200, a trajectory determination module 300, and a trajectory presentation module 400, wherein:
the image acquisition module 100 is configured to acquire multiple frames of images captured by the front-facing camera, the frames containing a target writing point; the coordinate conversion module 200 is configured to input the frames into a coordinate conversion model and convert each frame's coordinate system into a two-dimensional world coordinate system, which is parallel to the display screen's coordinate system and whose preset region maps one-to-one onto the display screen's pixels; the trajectory determination module 300 is configured to identify the target writing point in the two-dimensional world coordinate system and determine its movement trajectory there, the trajectory representing the writing trajectory of the target writing point; and the trajectory presentation module 400 is configured to display the movement trajectory on the display screen so as to present the writing trajectory.
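A common way to realize the coordinate conversion and the one-to-one region-to-pixel mapping described here is a planar homography followed by a linear rescale. The patent does not specify the conversion model's internals, so the following is a sketch under that assumption; all names are illustrative:

```python
def apply_homography(H, pt):
    """Map an image-plane point into the 2-D world plane using a 3x3
    homography H (row-major nested lists) -- one way a coordinate
    conversion model between two planes can be realized."""
    x, y = pt
    w = H[2][0] * x + H[2][1] * y + H[2][2]
    return ((H[0][0] * x + H[0][1] * y + H[0][2]) / w,
            (H[1][0] * x + H[1][1] * y + H[1][2]) / w)

def world_to_pixel(pt, region, screen):
    """One-to-one mapping of the preset world region onto screen pixels.
    region = (x0, y0, width, height) in world units; screen = (W, H)."""
    (x0, y0, rw, rh), (sw, sh) = region, screen
    px = round((pt[0] - x0) / rw * (sw - 1))
    py = round((pt[1] - y0) / rh * (sh - 1))
    return px, py

# With the identity homography, coordinates pass through unchanged, and
# the far corner of a unit preset region lands on the far screen corner.
I = [[1, 0, 0], [0, 1, 0], [0, 0, 1]]
world = apply_homography(I, (0.25, 0.75))
corner = world_to_pixel((1.0, 1.0), (0.0, 0.0, 1.0, 1.0), (1920, 1080))
```

In practice the homography would be calibrated once from known correspondences between the camera view and the writing plane, then applied per frame.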
In an optional embodiment, the device further includes a smoothing module configured to smooth the movement trajectory across the multiple frames to obtain a smoothed movement trajectory; the trajectory presentation module then displays the smoothed movement trajectory on the display screen.
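The smoothing module could be realized in many ways; a minimal sketch, assuming a simple moving average over the tracked points (the window size and function name are illustrative, not taken from the patent):

```python
def smooth_trajectory(points, window=3):
    """Moving-average smoothing of a point sequence; endpoints use a
    shrunken window so the stroke keeps its exact start and end span."""
    out = []
    n = len(points)
    half = window // 2
    for i in range(n):
        lo, hi = max(0, i - half), min(n, i + half + 1)
        xs = [p[0] for p in points[lo:hi]]
        ys = [p[1] for p in points[lo:hi]]
        out.append((sum(xs) / len(xs), sum(ys) / len(ys)))
    return out

# A jittery horizontal stroke gets pulled toward its local mean:
traj = [(0.0, 0.0), (3.0, 0.0), (0.0, 0.0), (3.0, 0.0)]
smoothed = smooth_trajectory(traj, window=3)
```

Averaging suppresses the frame-to-frame jitter of fingertip detection so the displayed stroke looks like continuous handwriting rather than a jagged polyline.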
In an optional embodiment, the device further includes a judging module configured to judge whether a trigger event representing recognition of the writing point has been acquired; if the judging module determines that the trigger event has been acquired, the trajectory determination module, in response to the trigger event, identifies the target writing point in the two-dimensional world coordinate system and determines its movement trajectory there.
In an alternative embodiment, the trigger event is a preset gesture of the user's hand, and the target writing point is a feature point of the finger designated by the preset gesture. The judging module is configured to recognize the user's hand gesture, judge whether it is the preset gesture, and, if it is, determine that the trigger event representing recognition of the writing point has been acquired. In the trajectory determination module, identifying the target writing point in the two-dimensional world coordinate system comprises identifying, in that coordinate system, the feature point of the designated finger in the preset gesture to obtain the target writing point.
In an alternative embodiment, the trigger event is a specific gesture presented by the user's first and second hands; the target writing point is a feature point of the designated finger of the second hand in the specific gesture; the first hand and the second hand are different hands. The judging module is configured to judge whether the specific gesture presented by the two hands is recognized and, if it is, determine that the trigger event representing recognition of the writing point has been acquired. In the trajectory determination module, identifying the target writing point in the two-dimensional world coordinate system comprises identifying, in that coordinate system, the feature point of the designated finger of the second hand in the specific gesture to obtain the target writing point.
This embodiment also discloses a computer storage medium storing a computer program which, when executed, implements the method disclosed in the above embodiment.
This embodiment also discloses an intelligent interactive all-in-one machine. Referring to fig. 1, the machine includes a front camera 1, a display screen 2, and a processor (reference numeral not shown) configured to execute a program implementing the method disclosed in the above embodiment.
According to the intelligent interactive all-in-one machine and its writing method and device disclosed in this embodiment, the machine's front-facing camera acquires multiple frames of images containing the target writing point; each frame's coordinate system is then converted into a two-dimensional world coordinate system, in which the target writing point is identified and its movement trajectory, representing the writing trajectory, is determined. Because the two-dimensional world coordinate system is parallel to the display screen's coordinate system and the coordinates of its preset region map one-to-one onto the screen's pixels, displaying the movement trajectory on the screen presents the writing trajectory. Unlike the prior art, which requires an additionally configured sensor, this application captures the writing trajectory with the all-in-one machine's existing front-facing camera, achieving large-screen writing without extra auxiliary equipment. On the one hand this reduces hardware cost; on the other, since no auxiliary equipment is needed, the user can write through hand motion alone, which makes the writing operation more convenient and improves the writing experience.
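Putting the pieces together, the per-frame flow described in this summary might be orchestrated as below. The three callables stand in for the coordinate conversion model, the writing-point detector, and the region-to-pixel mapping; all of them are assumptions for illustration, not disclosed implementations:

```python
def writing_pipeline(frames, to_world, find_tip, to_pixel):
    """End-to-end sketch of the disclosed flow: per camera frame,
    convert into the 2-D world plane, locate the target writing point,
    and map it one-to-one onto display pixels. Frames where no writing
    point is found (no trigger gesture) contribute nothing."""
    trajectory = []
    for frame in frames:
        world = to_world(frame)               # coordinate conversion model
        tip = find_tip(world)                 # identify target writing point
        if tip is not None:
            trajectory.append(to_pixel(tip))  # world region -> screen pixel
    return trajectory

# Stub callables just to exercise the control flow: frame 2 has no
# recognized writing point, so it is skipped.
trajectory = writing_pipeline(
    [1, 2, 3],
    to_world=lambda f: f,
    find_tip=lambda w: None if w == 2 else (w, w),
    to_pixel=lambda p: p,
)
```

The real system would plug in the calibrated coordinate conversion, the gesture-gated fingertip detector, and the pixel mapping in place of the stubs.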
It should be noted that step numbers (letters or numerals) are used in the present invention to refer to specific method steps only for convenience and brevity of description, and do not limit the order of the method steps in any way. It will be clear to a person skilled in the art that the order of the steps is determined by the technology itself and should not be unduly limited by the presence of step numbers.
It will be appreciated by those skilled in the art that the preferred embodiments described above may, where no conflict arises, be freely combined and superimposed.
It will be understood that the embodiments described above are illustrative only and not restrictive, and that various obvious and equivalent modifications and substitutions for details described herein may be made by those skilled in the art without departing from the basic principles of the invention.

Claims (10)

1. A writing method of an intelligent interactive all-in-one machine, the intelligent interactive all-in-one machine comprising a front camera and a display screen, the writing method comprising the following steps:
acquiring a plurality of frames of images collected by the front camera, wherein the plurality of frames of images comprise target writing points;
inputting the multi-frame images into a coordinate conversion model, and converting a coordinate system of each frame image into a two-dimensional world coordinate system, wherein the two-dimensional world coordinate system is parallel to a coordinate system of the display screen, and coordinates of a preset area in the two-dimensional world coordinate system are mapped with pixel points of the display screen one by one;
recognizing the target writing point in the two-dimensional world coordinate system, and determining a moving track of the target writing point in the two-dimensional world coordinate system, wherein the moving track is used for expressing a writing track of the target writing point;
displaying the movement track in the display screen to present the writing track.
2. The writing method of the intelligent interactive all-in-one machine according to claim 1, wherein between the determining of the movement track of the target writing point in the two-dimensional world coordinate system and the displaying of the movement track in the display screen, the method further comprises:
smoothing the moving track in the multi-frame image to obtain a smooth moving track;
the displaying the movement track in the display screen includes: and displaying the smooth moving track in the display screen.
3. The writing method of the smart interactive all-in-one machine according to claim 1, wherein between the converting the coordinate system of each frame image into the two-dimensional world coordinate system and the identifying the target writing point in the two-dimensional world coordinate system, the method further comprises:
judging whether a trigger event representing the recognition writing point is acquired;
and if the trigger event is acquired, identifying the target writing point in the two-dimensional world coordinate system in response to the trigger event, and determining the moving track of the target writing point in the two-dimensional world coordinate system.
4. The writing method of the intelligent interactive all-in-one machine according to claim 3, wherein the trigger event is a preset gesture of a hand of a user; the target writing points are as follows: specifying characteristic points of fingers in the preset gesture;
the judging whether the trigger event representing the recognition writing point is acquired comprises the following steps:
identifying a hand gesture of the user;
judging whether the hand gesture is a preset gesture or not;
if the hand gesture is a preset gesture, determining to acquire a trigger event representing a recognition writing point;
the identifying the target writing point in the two-dimensional world coordinate system comprises: and identifying the characteristic points of the appointed fingers in the preset gesture in the two-dimensional world coordinate system to obtain the target writing point.
5. The writing method of the smart interactive all-in-one machine according to claim 3, wherein the trigger event is a specific gesture presented by a first hand and a second hand of a user; the target writing point is a feature point of a second hand designated finger in the specific gesture; the first hand and the second hand are different hands;
the judging whether the trigger event representing the recognition writing point is acquired comprises the following steps:
judging whether specific gestures presented by the first hand and the second hand are recognized or not;
if the specific gesture presented by the first hand and the second hand is recognized, determining that a trigger event representing recognition of the writing point is acquired;
the identifying the target writing point in the two-dimensional world coordinate system comprises: in the two-dimensional world coordinate system, identifying the feature point of the designated finger of the second hand in the specific gesture to obtain the target writing point.
6. The writing method of the smart interactive all-in-one machine according to any one of claims 1 to 5, wherein the identifying the target writing point in the two-dimensional world coordinate system comprises:
identifying a target object in the two-dimensional world coordinate system;
extracting the edge of the target object in the preselection frame to obtain the contour of the target object;
and extracting preset characteristic points in the contour of the target object to obtain the target writing points.
7. The writing method of the smart interactive all-in-one machine according to claim 6, wherein between the recognition of the target object in the two-dimensional world coordinates and the extraction of the edge of the target object within the preselected frame, the method further comprises:
marking a foreground portion of the target object;
extracting the edge of the target object, and obtaining the target object contour comprises: and extracting the foreground part to obtain the contour of the target object.
8. A writing device of an intelligent interactive all-in-one machine, the intelligent interactive all-in-one machine comprising a front camera and a display screen, characterized in that the writing device comprises:
the image acquisition module is used for acquiring a plurality of frames of images collected by the front camera, and the plurality of frames of images comprise target writing points;
the coordinate conversion module is used for inputting the multi-frame images into a coordinate conversion model and converting a coordinate system of each frame image into a two-dimensional world coordinate system, the two-dimensional world coordinate system is parallel to a coordinate system of the display screen, and coordinates of a preset area in the two-dimensional world coordinate system are mapped with pixel points of the display screen one by one;
the track determining module is used for identifying the target writing point in the two-dimensional world coordinate system and determining a moving track of the target writing point in the two-dimensional world coordinate system, wherein the moving track is used for expressing a writing track of the target writing point;
and the track presenting module is used for displaying the moving track in the display screen so as to present the writing track.
9. A computer storage medium having a computer program stored thereon, the computer program being adapted to be executed to implement the method of any one of claims 1-7.
10. An intelligent interactive all-in-one machine, characterized by comprising:
a front camera;
a display screen; and
a processor for executing a program to implement the method of any one of claims 1 to 7.
CN202010965080.XA 2020-09-15 2020-09-15 Intelligent interaction all-in-one machine and writing method and device thereof Active CN112199015B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010965080.XA CN112199015B (en) 2020-09-15 2020-09-15 Intelligent interaction all-in-one machine and writing method and device thereof


Publications (2)

Publication Number Publication Date
CN112199015A true CN112199015A (en) 2021-01-08
CN112199015B CN112199015B (en) 2022-07-22

Family

ID=74014916

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010965080.XA Active CN112199015B (en) 2020-09-15 2020-09-15 Intelligent interaction all-in-one machine and writing method and device thereof

Country Status (1)

Country Link
CN (1) CN112199015B (en)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113253837A (en) * 2021-04-01 2021-08-13 作业帮教育科技(北京)有限公司 Air writing method and device, online live broadcast system and computer equipment
CN114745579A (en) * 2022-03-18 2022-07-12 阿里巴巴(中国)有限公司 Interaction method based on space writing interface, terminal and storage medium

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102426509A (en) * 2011-11-08 2012-04-25 北京新岸线网络技术有限公司 Method, device and system for displaying hand input
CN104992192A (en) * 2015-05-12 2015-10-21 浙江工商大学 Visual motion tracking telekinetic handwriting system
CN107179839A (en) * 2017-05-23 2017-09-19 三星电子(中国)研发中心 Information output method, device and equipment for terminal
CN110989902A (en) * 2019-11-29 2020-04-10 北京小米移动软件有限公司 Information processing method and device, writing equipment and terminal equipment
CN111382598A (en) * 2018-12-27 2020-07-07 北京搜狗科技发展有限公司 Identification method and device and electronic equipment





Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant