CN107292221B - Track processing method and device and track processing device - Google Patents


Info

Publication number
CN107292221B
Authority
CN
China
Prior art keywords
annotation information
drawing track
intelligent terminal
track
information
Prior art date
Legal status
Active
Application number
CN201610204685.0A
Other languages
Chinese (zh)
Other versions
CN107292221A (en)
Inventor
马腾
李良
Current Assignee
Beijing Sogou Technology Development Co Ltd
Original Assignee
Beijing Sogou Technology Development Co Ltd
Priority date
Filing date
Publication date
Application filed by Beijing Sogou Technology Development Co Ltd filed Critical Beijing Sogou Technology Development Co Ltd
Priority to CN201610204685.0A priority Critical patent/CN107292221B/en
Publication of CN107292221A publication Critical patent/CN107292221A/en
Application granted granted Critical
Publication of CN107292221B publication Critical patent/CN107292221B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V30/00Character recognition; Recognising digital ink; Document-oriented image-based pattern recognition
    • G06V30/10Character recognition
    • G06V30/32Digital ink
    • G06V30/36Matching; Classification
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00Pattern recognition
    • G06F18/20Analysing
    • G06F18/22Matching criteria, e.g. proximity measures

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Data Mining & Analysis (AREA)
  • General Physics & Mathematics (AREA)
  • Physics & Mathematics (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Evolutionary Biology (AREA)
  • Evolutionary Computation (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • General Engineering & Computer Science (AREA)
  • Artificial Intelligence (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Multimedia (AREA)
  • Processing Or Creating Images (AREA)
  • User Interface Of Digital Computer (AREA)

Abstract

The embodiment of the invention provides a track processing method and device and a track processing device. The track processing method specifically includes the following steps: receiving a drawing track sent by a first intelligent terminal; analyzing the drawing track to obtain corresponding annotation information; and sending the drawing track and its corresponding annotation information to a second intelligent terminal and/or the first intelligent terminal. The embodiment of the invention can not only meet children's needs in expressing emotions, feelings, and their understanding of things, improve children's ability to describe things and express themselves, and improve the flexibility and convenience of drawing operations; it also allows information to be transmitted between children and parents, providing an additional bridge for parents to understand their children.

Description

Track processing method and device and track processing device
Technical Field
The present invention relates to the field of communications technologies, and in particular, to a trajectory processing method, a trajectory processing apparatus, and a device for trajectory processing.
Background
Currently, information technology has penetrated into various fields of human life. For example, information technology is changing the traditional way of learning in the field of education, and educational software is becoming more and more widely used.
For example, child education software is a typical category of education software, which specifically includes: idiom story learning software, English learning software, Chinese character/pinyin learning software, number learning software, and the like.
Although child education software has a certain promoting effect on children's education, such software generally only lets children make simple selections and judgments to obtain feedback and cannot be operated according to the children's own wishes, which is not conducive to children's development in expressing emotions, feelings, and their understanding of things. Moreover, when parents are not with the child, child education software cannot let parents know the child's current situation in time.
In addition, in practical applications, children generally perform repetitive selection and judgment operations in child education software, so their parents cannot directly learn about their children's education from these repetitive operations.
Furthermore, since children are easily affected by their surroundings, they need to express themselves anywhere and anytime. For example, a child may miss his or her mother before an afternoon nap; as another example, a child may suddenly want to eat a certain food; as yet another example, a child may suddenly remember a favorite birthday present, and so on. Child education software obviously cannot meet such expression needs anytime and anywhere.
Disclosure of Invention
In view of the above problems, embodiments of the present invention provide a trajectory processing method, a trajectory processing apparatus, and a device for trajectory processing that overcome or at least partially solve the above problems. They can not only meet children's needs in expressing emotions, feelings, and their understanding of things, improve children's ability to describe things and express themselves, and improve the flexibility and convenience of drawing operations; they also allow information to be transmitted between children and parents, providing an additional bridge for parents to understand their children.
In order to solve the above problem, the present invention discloses a trajectory processing method, including:
receiving a drawing track sent by a first intelligent terminal;
analyzing the drawing track to obtain corresponding annotation information;
and sending the drawing track and the corresponding annotation information to the second intelligent terminal and/or the first intelligent terminal.
Optionally, the step of analyzing the drawing trace to obtain corresponding annotation information includes:
determining the similarity between the drawing track and a preset drawing track;
and determining annotation information corresponding to the drawing track according to the label information of the preset drawing track with the similarity meeting the preset similarity condition.
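As a rough sketch of this similarity-based matching (the patent does not specify a similarity measure; the arc-length resampling and the 1/(1 + mean distance) score below are illustrative assumptions), a drawing track could be compared against labeled preset tracks as follows:

```python
import math

def resample(track, n=32):
    """Resample a track (list of (x, y) points) to n points evenly spaced
    along its arc length, so tracks with different point counts can be
    compared point-by-point."""
    dists = [0.0]
    for (x0, y0), (x1, y1) in zip(track, track[1:]):
        dists.append(dists[-1] + math.hypot(x1 - x0, y1 - y0))
    total = dists[-1] or 1.0
    out = []
    for i in range(n):
        target = total * i / (n - 1)
        j = 1
        while j < len(dists) - 1 and dists[j] < target:
            j += 1
        span = (dists[j] - dists[j - 1]) or 1.0
        t = (target - dists[j - 1]) / span
        (x0, y0), (x1, y1) = track[j - 1], track[j]
        out.append((x0 + t * (x1 - x0), y0 + t * (y1 - y0)))
    return out

def similarity(track_a, track_b, n=32):
    """Similarity in (0, 1]: 1 / (1 + mean point-wise distance)."""
    a, b = resample(track_a, n), resample(track_b, n)
    mean_dist = sum(math.hypot(px - qx, py - qy)
                    for (px, py), (qx, qy) in zip(a, b)) / n
    return 1.0 / (1.0 + mean_dist)

def annotate(track, presets, threshold=0.5):
    """Return the label of the preset track most similar to the drawn track,
    provided the similarity meets the preset similarity condition."""
    best_label, best_sim = None, threshold
    for label, preset in presets.items():
        s = similarity(track, preset)
        if s >= best_sim:
            best_label, best_sim = label, s
    return best_label
```

A preset track whose label information survives this threshold test becomes the drawn track's annotation information.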
Optionally, the step of analyzing the drawing trace to obtain corresponding annotation information includes:
clustering intelligent terminals according to historical drawing tracks, historical annotation information, and/or corresponding feedback information;
and acquiring a target intelligent terminal with the same category as the first intelligent terminal, and determining annotation information corresponding to the drawing track according to historical annotation information corresponding to the historical drawing track of the target intelligent terminal.
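One illustrative way to realize this clustering step (the patent does not fix a clustering algorithm; the greedy grouping by Jaccard similarity of annotation label sets below is an assumption) might look like:

```python
def cluster_terminals(profiles, threshold=0.5):
    """Greedily cluster intelligent terminals whose historical annotation
    label sets have Jaccard similarity >= threshold with a cluster's
    representative set. profiles maps terminal id -> set of labels."""
    clusters = []  # list of (representative label set, member ids)
    for tid, labels in profiles.items():
        for rep, members in clusters:
            union = len(labels | rep)
            if union and len(labels & rep) / union >= threshold:
                members.append(tid)
                break
        else:
            clusters.append((set(labels), [tid]))
    return [members for _, members in clusters]

def suggest_annotation(target_id, profiles, history):
    """Suggest annotation info for a terminal from the historical
    annotations of same-cluster terminals. history maps terminal
    id -> list of historical annotation labels."""
    for members in cluster_terminals(profiles):
        if target_id in members:
            pool = [lbl for tid in members if tid != target_id
                    for lbl in history.get(tid, [])]
            # most frequent historical label among cluster peers
            return max(set(pool), key=pool.count) if pool else None
    return None
```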
Optionally, the step of analyzing the drawing trace to obtain corresponding annotation information includes:
extracting features of the drawing track;
inputting the characteristics into a drawing recognition model, and outputting annotation information corresponding to the characteristics by the drawing recognition model.
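As an illustrative sketch of this feature-extraction and recognition-model step (the specific features and model are not fixed by the patent; the geometric features and nearest-centroid classifier below are assumptions):

```python
import math

def extract_features(track):
    """A few simple geometric features of a drawing track: bounding-box
    aspect ratio, path length relative to the bounding-box diagonal, and
    start-to-end distance (near 0 for a closed shape)."""
    xs = [p[0] for p in track]
    ys = [p[1] for p in track]
    width = (max(xs) - min(xs)) or 1e-9
    height = (max(ys) - min(ys)) or 1e-9
    diag = math.hypot(width, height)
    length = sum(math.hypot(x1 - x0, y1 - y0)
                 for (x0, y0), (x1, y1) in zip(track, track[1:]))
    closure = math.hypot(xs[-1] - xs[0], ys[-1] - ys[0]) / diag
    return [width / height, length / diag, closure]

class DrawingRecognizer:
    """Nearest-centroid classifier standing in for the patent's
    'drawing recognition model'."""
    def __init__(self):
        self.centroids = {}

    def fit(self, samples):
        """samples: list of (feature vector, annotation label)."""
        by_label = {}
        for feats, label in samples:
            by_label.setdefault(label, []).append(feats)
        for label, rows in by_label.items():
            self.centroids[label] = [sum(col) / len(rows) for col in zip(*rows)]

    def predict(self, feats):
        """Return the annotation label whose centroid is closest."""
        return min(self.centroids, key=lambda lbl: sum(
            (a - b) ** 2 for a, b in zip(feats, self.centroids[lbl])))
```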
Optionally, the step of analyzing the drawing trace to obtain corresponding annotation information further includes:
and calibrating the annotation information corresponding to the drawing track according to the feedback information of the user on the historical annotation information corresponding to the historical drawing track of the first intelligent terminal so as to obtain the calibrated annotation information.
Optionally, the step of analyzing the drawing trace to obtain corresponding annotation information further includes:
and calibrating the annotation information corresponding to the drawing track according to the association degree of the current environment information of the first intelligent terminal and a preset theme to obtain calibrated annotation information.
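A minimal sketch of this calibration step, assuming candidate annotations carry base scores and each preset theme is described by a set of environment tags (both representations are illustrative, not from the patent):

```python
def calibrate(candidates, environment, theme_tags, weight=0.1):
    """Re-rank candidate annotations by how strongly the terminal's
    current environment relates to each candidate's preset theme.
    candidates: {label: base score}; environment: set of context tags
    (e.g. time of day, location); theme_tags: {label: set of tags}."""
    def score(label):
        overlap = len(environment & theme_tags.get(label, set()))
        return candidates[label] + weight * overlap
    return max(candidates, key=score)
```

For example, an ambiguous round shape drawn at night would calibrate toward "moon" rather than "sun" under this scheme.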
Optionally, the preset drawing trajectory comprises: at least one of a simple drawing track, a basic drawing track and a historical drawing track of at least one intelligent terminal.
In another aspect, the present invention discloses a trajectory processing method, including:
detecting an operation coordinate generated by a user in an operation space corresponding to the first intelligent terminal;
generating a corresponding drawing track according to the operation coordinate;
and sending the drawing track to a server.
Optionally, the method further comprises:
receiving the drawing track sent by the server and the annotation information corresponding to the drawing track;
or sending the drawing track and the corresponding annotation information to the second intelligent terminal.
Optionally, the method further comprises:
displaying the virtual drawing board;
or displaying a virtual drawing board, and displaying the drawing track on the virtual drawing board.
Optionally, the step of displaying the virtual drawing board includes:
and displaying the virtual drawing board in a plane or curved surface form in an optical mode.
In another aspect, the present invention discloses a trajectory processing apparatus, including:
the receiving module is used for receiving the drawing track sent by the first intelligent terminal;
the analysis module is used for analyzing the drawing track to obtain corresponding annotation information; and
and the sending module is used for sending the drawing track and the corresponding annotation information thereof to a second intelligent terminal and/or the first intelligent terminal.
In another aspect, the present invention discloses a trajectory processing apparatus, including:
the detection module is used for detecting an operation coordinate generated by a user in an operation space corresponding to the first intelligent terminal;
the generating module is used for generating a corresponding drawing track according to the operation coordinate; and
and the sending module is used for sending the drawing track to a server.
In yet another aspect, an apparatus for trajectory processing is disclosed that includes a memory and one or more programs, wherein the one or more programs are stored in the memory and configured to be executed by one or more processors, the one or more programs including instructions for:
receiving a drawing track sent by a first intelligent terminal;
analyzing the drawing track to obtain corresponding annotation information;
and sending the drawing track and the corresponding annotation information to the second intelligent terminal and/or the first intelligent terminal.
In yet another aspect, an apparatus for trajectory processing is disclosed that includes a memory and one or more programs, wherein the one or more programs are stored in the memory and configured to be executed by one or more processors, the one or more programs including instructions for:
detecting an operation coordinate generated by a user in an operation space corresponding to the first intelligent terminal;
generating a corresponding drawing track according to the operation coordinates;
and sending the drawing track to a server.
The embodiment of the invention has the following advantages:
According to the embodiment of the invention, a user can perform drawing operations through the first intelligent terminal. For a child user, drawing is a way of expressing emotions, feelings, and an understanding of things: a drawing can project the anxiety, joy, anger, desire, and so on that lie deep in the child's mind and are difficult to express in words. The embodiment can therefore meet children's needs in expressing emotions, feelings, and their understanding of things, and can also improve the flexibility and convenience of drawing operations. For example, when a child misses his or her mother before an afternoon nap, the child can express this need by drawing the mother; as another example, when a child wants to eat a banana, the child can express this need by drawing a banana, and so on.
In addition, the first intelligent terminal is usually a portable device; in particular, a wearable device included in the first intelligent terminal is a portable device worn directly on the body or integrated into the user's clothes or accessories. In practical applications, the drawing operation of the embodiment of the invention can therefore be performed against a wall, on a table, or in open space without any physical support, so the portability of the first intelligent terminal can meet a child user's expression needs anytime and anywhere.
In addition, the embodiment of the present invention may send the drawing track and its corresponding annotation information to the second intelligent terminal, which may be a terminal used by a parent. The embodiment thus enables parents to learn about their child's current situation in real time through the child's current drawing track, for example, whether the child is currently happy or sad, or what the child's current inner needs are. This provides an additional bridge for parents to understand their children.
Drawings
FIG. 1 is a flowchart illustrating a first embodiment of a trajectory processing method according to the present invention;
FIG. 2 is a schematic structural view of a wearable device of the present invention;
FIG. 3 is a schematic diagram of a virtual drawing board and corresponding drawing tracks of the present invention;
FIG. 4 is a flowchart illustrating the steps of a second embodiment of a trajectory processing method according to the present invention;
FIG. 5 is a flowchart illustrating the steps of a third embodiment of a trajectory processing method according to the present invention;
FIG. 6 is a flowchart illustrating the steps of a fourth embodiment of a trajectory processing method according to the present invention;
FIG. 7 is a block diagram of a first embodiment of a trace processing apparatus according to the present invention;
FIG. 8 is a block diagram of a second embodiment of a track processing apparatus according to the present invention;
FIG. 9 is a block diagram of an apparatus 900 for trace processing of the present invention; and
fig. 10 is a schematic diagram of a server according to the present invention.
Detailed Description
In order to make the aforementioned objects, features and advantages of the present invention comprehensible, embodiments accompanied with figures are described in further detail below.
The child education software in existing solutions has a certain promoting effect on children's education. However, because such software generally only lets children make simple selections and judgments to obtain feedback, existing solutions cannot meet children's needs in expressing emotions, feelings, and their understanding of things, nor can they meet children's need to express themselves anytime and anywhere.
Moreover, children, especially young children, have rich inner worlds whose moods are hard to read. In families where parents are busy with work, parents and children are often apart, and parents cannot tell whether their children are happy or sad.
According to the embodiment of the invention, the operation space for generating the drawing operation can be provided for the user through the first intelligent terminal, and the corresponding drawing track is generated according to the operation coordinate generated by the user in the operation space corresponding to the first intelligent terminal.
According to the embodiment of the invention, a user can perform drawing operations through the first intelligent terminal. For a child user, drawing is a way of expressing emotions, feelings, and an understanding of things: a drawing can project the anxiety, joy, anger, desire, and so on that lie deep in the child's mind and are difficult to express in words. The embodiment can therefore meet children's needs in expressing emotions, feelings, and their understanding of things, and can also improve the flexibility and convenience of drawing operations. For example, when a child misses his or her mother before an afternoon nap, the child can express this need by drawing the mother; as another example, when a child wants to eat a banana, the child can express this need by drawing a banana, and so on.
In addition, the first intelligent terminal is usually a portable device; in particular, a wearable device included in the first intelligent terminal is a portable device worn directly on the body or integrated into the user's clothes or accessories. In practical applications, the drawing operation of the embodiment of the invention can therefore be performed against a wall, on a table, or in open space without any physical support, so the portability of the wearable device can meet a child user's expression needs anytime and anywhere.
In addition, the embodiment of the invention can analyze the drawing track of the first intelligent terminal to obtain corresponding annotation information, and send the drawing track and the corresponding annotation information to the second intelligent terminal. The second intelligent terminal can be a terminal used by a parent, so the embodiment of the invention enables parents to learn about their child's current situation in real time through the child's current drawing track, for example, whether the child is currently happy or sad, or what the child's current inner needs are.
In summary, the embodiment of the invention provides children with a channel for expression and for communication between children and parents. It can not only meet children's needs in expressing emotions, feelings, and their understanding of things, and improve children's ability to describe things and express themselves, but also improve the flexibility and convenience of drawing operations; moreover, information can be transmitted between children and parents, providing an additional bridge for parents to understand their children.
The track processing method provided by the embodiment of the invention can be applied to application environments corresponding to the client and the server, wherein the client and the server can be positioned in a wired or wireless network, and the client and the server perform data interaction through the wired or wireless network.
Specifically, the client may run on an intelligent terminal, and the intelligent terminal may specifically include, but is not limited to: smart phones, tablet computers, electronic book readers, MP3 (Moving Picture Experts Group Audio Layer III) players, MP4 (Moving Picture Experts Group Audio Layer IV) players, laptop computers, car-mounted computers, desktop computers, set-top boxes, smart televisions, wearable devices, and the like. Further, wearable devices specifically include, but are not limited to: smart watches, smart shoes, smart glasses, and the like; the embodiments of the present invention do not limit the specific wearable device. The embodiment of the invention mainly takes a wearable device as an example of the first intelligent terminal to describe the track processing flow; the track processing flows of other types of first intelligent terminals are analogous.
In an optional embodiment of the present invention, the client running on the first intelligent terminal may detect an operation coordinate generated by the user in an operation space corresponding to the first intelligent terminal, generate a corresponding drawing track according to the operation coordinate, and send the drawing track to the server. The server can receive the drawing track sent by the first intelligent terminal, analyze the drawing track to obtain corresponding annotation information, and send the drawing track and the corresponding annotation information to the first intelligent terminal and/or the second intelligent terminal. After receiving the drawing track and the annotation information corresponding to the drawing track, the client running on the first intelligent terminal and/or the second intelligent terminal may generate corresponding feedback information for the annotation information, where the feedback information may include: evaluation information (such as evaluation level and evaluation score), shared information, saved information, and the like.
Method embodiment one
Referring to fig. 1, a flowchart illustrating the steps of a first embodiment of a trajectory processing method according to the present invention is shown. The method may specifically include the following steps:
step 101, detecting an operation coordinate generated by a user in an operation space corresponding to a first intelligent terminal;
in the embodiment of the present invention, the operation space may be used to represent a detection space generated according to a wireless signal such as a magnetic field, infrared, ultrasound, and X-ray, and may specifically include a two-dimensional space, a three-dimensional space, and the like. When a user generates drawing operation in the operation space, the energy of the wireless signal in the operation space changes, so that the characteristic of the change of the signal energy in the operation space can be utilized to detect the operation coordinate corresponding to the drawing operation, and the drawing operation detection under the condition of an entity-free drawing board can be realized.
It should be noted that the operation coordinate in the embodiment of the present invention may be a three-dimensional cartesian coordinate, a cylindrical coordinate, or a spherical coordinate, and the expression form of the operation coordinate is not limited in the embodiment of the present invention.
In an alternative embodiment of the present invention, the operation space may be displayed optically, so that the user generates corresponding operation coordinates in the operation space. The optical mode may use light rays such as infrared light, ultraviolet light, and visible light, and may use light rays with preset colors to enable a user to perceive the operation space.
In a specific implementation, the three-dimensional space may be a spherical space centered on the wearable device, or a cubic space with the wearable device as the origin of coordinates, and so on. The center of the spherical space or the origin of the cubic space may also be a spatial point at a preset distance from the wearable device; the embodiments of the present invention do not limit this. In addition, the size of the three-dimensional space can be set by a person skilled in the art according to actual conditions; for example, the radius of the spherical space can be a value calculated from the arm length of an average person.
Referring to fig. 2, a schematic structural diagram of a wearable device of the present invention is shown, which may specifically include: a bracelet 200. The bracelet 200 may include a signal emitting module 201, a USB (Universal Serial Bus) interface 202, a positioner 203, a flexible LED (Light Emitting Diode) strip 204, a support part 205, and a lithium ion battery 206.
The USB interface 202 may be connected to a mobile power supply, a charger, and other devices, and is used to charge the lithium ion battery 206. The flexible LED strip 204 may be used to indicate the working state, so that a user can perceive whether the bracelet 200 is in a standby state or a working state; for example, when the flexible LED strip 204 is lit, the bracelet 200 is in the working state, and when it is not lit, the bracelet 200 is in the standby state. The support part 205 may fix and support the bracelet 200 so that the user can wear the bracelet 200 on the wrist; for example, the support part 205 may be a watch band. The lithium ion battery 206 may provide working power for the signal emitting module 201, the positioner 203, and the flexible LED strip 204.
In practical applications, the bracelet 200 can be worn on the user's wrist, and the positioner 203 can be used to locate the wrist. In the embodiment of the present invention, the wrist position located by the positioner 203 may be used as the origin of coordinates of a three-dimensional space.
The signal emitting module 201 may specifically include a motion sensor and a signal transmitter. The signal transmitter can emit a wireless signal with the positioner 203 as the origin of coordinates to form a three-dimensional space; the motion sensor may detect displacement information of the user's hand or finger in the three-dimensional space, so that the positioner 203 can calculate, from the displacement information obtained by the motion sensor, the operation coordinate of the position point corresponding to the user's hand or finger at each drawing operation.
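A minimal sketch of how the positioner might accumulate motion-sensor displacement information into absolute operation coordinates (simple dead reckoning; the actual computation is not specified in the patent):

```python
def integrate_displacements(origin, displacements):
    """Accumulate per-sample displacement vectors from the motion sensor
    into absolute operation coordinates, starting from the wrist position
    located by the positioner (taken as the origin of coordinates)."""
    coords = []
    x, y, z = origin
    for dx, dy, dz in displacements:
        x, y, z = x + dx, y + dy, z + dz
        coords.append((x, y, z))
    return coords
```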
It should be noted that the above bracelet 200 is only an example of a wearable device, and actually, a person skilled in the art may use various wearable devices to implement the trajectory processing method of the present invention, and the embodiment of the present invention does not limit the specific wearable device.
Step 102, generating a corresponding drawing track according to the operation coordinates;
in practical applications, the step 102 may generate a corresponding drawing track for the continuous operation coordinates according to the trigger time of the drawing operation, where the drawing track may have a corresponding start point and end point.
In an optional embodiment of the invention, the method may further comprise:
displaying the virtual drawing board;
or displaying a virtual drawing board, and displaying the drawing track on the virtual drawing board.
Further, the step of displaying the virtual drawing board may specifically include:
and displaying the virtual drawing board in a plane or curved surface form in an optical mode. Wherein, the curved surface may include: the drawing method comprises the following steps of generating a corresponding plane or three-dimensional drawing track aiming at continuous operation coordinates when a virtual drawing board in a curved surface form is adopted, wherein the drawing method comprises a cylindrical curved surface, a hyperboloid and an irregular curved surface.
Referring to fig. 3, a schematic diagram of a virtual drawing board and a corresponding drawing track of the present invention is shown. The virtual drawing board C may be located on the vertical plane XOY or may form a certain angle with the vertical plane XOY. It can be understood that the embodiment of the invention can generate the corresponding drawing track in real time and display it in real time, so that a user obtains an experience equivalent to that of a physical drawing board. In addition, the light color corresponding to the drawing track may differ from the optical color corresponding to the virtual drawing board, so that the user can clearly distinguish the drawn track.
In an alternative embodiment of the invention, when the operation space is a three-dimensional space, the operation coordinate may not be located on the virtual drawing board. In this case, step 102 may first project the operation coordinate onto the virtual drawing board to obtain a corresponding projection coordinate, and then generate a corresponding drawing track for consecutive projection coordinates according to the trigger time of the drawing operation, where the drawing track may have a corresponding start point and end point. It is to be understood that the embodiment of the present invention does not limit the specific generation process of the drawing track.
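The projection of a three-dimensional operation coordinate onto the virtual drawing board can be sketched as an orthogonal projection onto a plane (the patent does not specify the projection; representing the board by a point and a unit normal is an assumption):

```python
def project_to_board(point, board_origin, board_normal):
    """Orthogonally project a 3-D operation coordinate onto the plane of
    the virtual drawing board, given any point on the board and the
    board's unit normal vector."""
    # signed distance from the point to the board plane
    d = sum((p - o) * n for p, o, n in zip(point, board_origin, board_normal))
    # move the point back along the normal by that distance
    return tuple(p - d * n for p, n in zip(point, board_normal))
```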
Step 103, sending the drawing track to a server.
In practical applications, the embodiment of the present invention does not limit the specific trigger time for sending the drawing track to the server. For example, the drawing track may be sent to the server upon determining that the user has completed a complete drawing. Determining that the user has completed a complete drawing may specifically include: no operation coordinate is detected within a preset time interval, indicating that the user has stopped drawing and thus finished a complete drawing. Alternatively, it may include: the distance between an operation coordinate generated in the most recent time period and the other operation coordinates in the drawing track exceeds a first threshold, in which case the user can be considered to have started a new drawing. Alternatively, it may include: the overlap between an operation coordinate generated in the most recent time period and the other operation coordinates in the drawing track exceeds a second threshold (for example, the user redraws a small flower shape over the face area of the drawing track shown in fig. 3), in which case the user can also be considered to have started a new drawing.
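The first two completion heuristics above (idle timeout and coordinate jump) can be sketched as follows; the thresholds are illustrative and the overlap heuristic is omitted for brevity:

```python
import math

def drawing_complete(track, now, idle_timeout=3.0, jump_threshold=50.0):
    """track: list of (t, x, y) samples. The drawing is considered complete
    if no coordinate arrived within idle_timeout seconds, or if the latest
    point jumped farther than jump_threshold from every earlier point
    (i.e. the user started a new drawing)."""
    if not track:
        return False
    last_t, last_x, last_y = track[-1]
    if now - last_t > idle_timeout:
        return True
    if len(track) > 1:
        nearest = min(math.hypot(last_x - x, last_y - y)
                      for _, x, y in track[:-1])
        if nearest > jump_threshold:
            return True
    return False
```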
In an optional embodiment of the present invention, the virtual drawing board or the first intelligent terminal may be provided with preset controls, which specifically include: a start-drawing control, brush selection controls for different colors or thicknesses, an eraser control, a finish control, and the like, so that the embodiment of the invention can respond to a user's trigger operation on a preset control and execute the corresponding drawing operation. For example, after receiving a user's trigger operation on the start-drawing control, the display of the virtual drawing board may be triggered. As another example, after receiving a trigger operation on a brush selection control, a current brush with the corresponding color and thickness may be selected, and the drawing track on the virtual drawing board adapted to the current brush; for example, if the color of the current brush is red and the line thickness is 3, the color of the drawing track may also be red and its line thickness 3. As another example, after receiving a trigger operation on the eraser control, the eraser may be used to erase the drawing track; or, after receiving a trigger operation on the finish control, the drawing may be considered complete, so that the drawing track can be sent to the server.
In summary, according to the embodiment of the present invention, the user can perform drawing operations through the first intelligent terminal, which can meet children's needs for expressing emotions, moods, and their understanding of things, and can improve the flexibility and convenience of drawing operations. For example, when a child misses his or her mother before an afternoon nap, the corresponding need can be expressed by drawing the mother; for another example, when a child wants to eat a banana, the corresponding need can be expressed by drawing a banana. For another example, when the child is happy, a smiling face may be drawn to release the mood. Alternatively, the child may draw a rocket track similar to that of fig. 4 to express a gift that the child wants, and so on.
In addition, because the first intelligent terminal is portable, in practical applications the drawing operation of the embodiment of the present invention can be performed on a wall, on a desk, or in open space without any physical support; therefore, the portability of the wearable device can meet a child user's expression needs at any time and in any place.
Method embodiment two
Referring to fig. 4, a flowchart illustrating steps of a second embodiment of a trajectory processing method according to the present invention is shown, which may specifically include the following steps:
step 401, displaying a virtual drawing board;
step 402, detecting an operation coordinate generated by a user in an operation space corresponding to a first intelligent terminal;
step 403, generating a corresponding drawing track according to the operation coordinates;
step 404, displaying the drawing track on the virtual drawing board;
step 405, sending the drawing track to a server.
In practical applications, the execution order of step 401 and step 402 is not limited in the embodiment of the present invention, and the two steps may be executed sequentially or in parallel.
Method embodiment three
Referring to fig. 5, a flowchart illustrating steps of a third embodiment of a track processing method according to the present invention is shown, which may specifically include the following steps:
step 501, detecting an operation coordinate generated by a user in an operation space corresponding to a first intelligent terminal;
step 502, generating a corresponding drawing track according to the operation coordinates;
step 503, sending the drawing track to a server;
with respect to the first embodiment of the method shown in fig. 1, the trajectory processing method of this embodiment may further include:
step 504, receiving the drawing track sent by the server and the annotation information corresponding to the drawing track; or
step 505, sending the drawing track and the annotation information corresponding to the drawing track to a second intelligent terminal corresponding to the first intelligent terminal.
In the embodiment of the present invention, the annotation information may be used to explain the meaning of the drawing track, so that the user of the first intelligent terminal or other users can understand the object depicted by the drawing track or the information it expresses. In this way, a parent can, for example, learn the child's current state in real time from the child's current drawing track, such as whether the child is currently happy or unhappy, or what the child's current inner needs are.
In an optional embodiment of the present invention, the annotation information may specifically include: object information and/or expression information. The object information may be used to indicate an object described in the drawing trace, and the object may be an object similar to fig. 3 or a person. The expression information can be used for expressing the emotion or emotion expressed by the drawing track. For example, when the object described by the drawing track is a character avatar, the drawing track may express a happy mood when the eyebrows and eyes of the character avatar are bent and the mouth is wide open. For another example, when the eyebrows of the figure are picked up and the mouth is closed to a slant line, the drawing trace can express an angry emotion.
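The "object information and/or expression information" structure above could be modeled as follows; this is a minimal sketch for illustration, and the field names are assumptions not taken from the embodiment:

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class Annotation:
    """Annotation information attached to a drawing track.

    object_info names what the track depicts (e.g. a face, a rocket);
    expression_info names the mood or emotion it conveys. Either field
    may be absent, matching the "object information and/or expression
    information" wording of the embodiment.
    """
    object_info: Optional[str] = None
    expression_info: Optional[str] = None

    def describe(self) -> str:
        # Produce a short human-readable summary of whatever is present.
        parts = [p for p in (self.object_info, self.expression_info) if p]
        return ", ".join(parts) if parts else "unlabeled"
```

For instance, the happy character avatar described above would carry `Annotation(object_info="character avatar", expression_info="happy")`.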
In an optional embodiment of the present invention, after receiving the drawing track and the annotation information corresponding to the drawing track sent by the server, the annotation information may be played in a voice form, and a preset interface may be used to collect feedback information of the drawer on the annotation information.
In practical applications, a mapping relationship between the first intelligent terminal and the second intelligent terminal may be established. In this way, step 505 may determine, according to the mapping relationship, the second intelligent terminal corresponding to the first intelligent terminal, and send the drawing track and the annotation information corresponding to the drawing track to the second intelligent terminal through any communication mode, such as a 2G, 3G, or 4G mobile network, WIFI, and the like. For example, a parent's mobile phone or mobile phone number may be bound to the first intelligent terminal in advance, and the drawing track and its corresponding annotation information may be sent to the parent's mobile phone number in the form of a short message.
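The binding and forwarding described above might be sketched as follows; the in-memory registry and the transport callback are illustrative assumptions (a real system would persist the mapping and use an actual network or SMS channel):

```python
# Hypothetical in-memory registry mapping a first terminal to its
# bound second terminal (e.g. child's wearable -> parent's phone).
terminal_map = {}

def bind(first_terminal_id, second_terminal_id):
    """Establish the mapping relationship between the two terminals."""
    terminal_map[first_terminal_id] = second_terminal_id

def forward(first_terminal_id, trace, annotation, send):
    """Look up the bound second terminal and forward the drawing track
    plus its annotation via the supplied transport callback (which
    stands in for 2G/3G/4G, WIFI, SMS, etc.)."""
    target = terminal_map.get(first_terminal_id)
    if target is None:
        raise KeyError(f"no terminal bound to {first_terminal_id}")
    send(target, {"trace": trace, "annotation": annotation})
    return target
```

Injecting the transport as a callback keeps the lookup logic independent of the communication mode, mirroring the embodiment's statement that any communication mode may be used.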
Method embodiment four
Referring to fig. 6, a flowchart illustrating a fourth step of an embodiment of a trajectory processing method according to the present invention is shown, which may specifically include the following steps:
step 601, receiving a drawing track sent by a first intelligent terminal;
step 602, analyzing the drawing track to obtain corresponding annotation information;
step 603, sending the drawing track and the corresponding annotation information to the first intelligent terminal and/or the second intelligent terminal.
This embodiment may be applied to a client or a server side, where the server may be an ordinary server or a cloud server. Because this embodiment can analyze and process the drawing track at the server side, it can take advantage of the rich computing resources of the server side (for example, a cloud server may be provided with cloud resources such as large databases and a large number of computing devices), so that both the analysis efficiency and the analysis precision of the drawing track can be improved.
The second intelligent terminal may be a terminal other than the first intelligent terminal, and may have a preset mapping relationship with the first intelligent terminal. According to the embodiment of the invention, a user or other users of the first intelligent terminal can know the object or the expressed information described by the drawing track, so that parents can know the current condition of the child in real time through the current drawing track of the child.
The embodiment of the invention can provide the following technical scheme for analyzing the drawing track:
technical solution 1
In technical solution 1, the step 602 of analyzing the drawing trajectory to obtain corresponding annotation information may specifically include:
step A1, determining the similarity between the drawing track and a preset drawing track;
step A2, according to the label information of the preset drawing track with the similarity meeting the preset similarity condition, determining the annotation information corresponding to the drawing track.
In an optional embodiment of the present invention, the preset drawing track includes: at least one of a children's sketch drawing track, a children's basic drawing track, and a historical drawing track of at least one intelligent terminal. A sketch (simple-stroke drawing) uses a flat, stylized form and a concise drawing method, generally depicting the main characteristics of an object with the simplest lines and planes; a basic drawing may be a relatively simple drawing method. Because the drawing skills of children, especially infants, are generally limited, in practical applications the children's sketch drawing tracks and children's basic drawing tracks may be collected, for example, from national or worldwide children's drawing databases, so as to ensure the richness of the preset drawing tracks and the analysis precision. Alternatively, the children's sketch drawing tracks and children's basic drawing tracks may be crawled from vertical websites related to children's drawing; the embodiment of the present invention does not limit the specific manner of acquiring the sketch drawing tracks and the basic drawing tracks.
In practical application, the server may collect the history drawing tracks uploaded by the client, and use the history drawing tracks as the preset drawing tracks, or select the history drawing tracks meeting preset rules from the history drawing tracks as the preset drawing tracks.
In practical applications, step A1 may determine the similarity between the drawing track and a preset drawing track by using a graph matching method. Assuming the similarity ranges from 0% to 100%, where 0% indicates no similarity and 100% indicates identity, the preset similarity condition may specifically be that the similarity is greater than a similarity threshold, where the similarity threshold may be determined by a person skilled in the art according to practical application requirements and may be, for example, 85% or 90%.
In the embodiment of the present invention, a label may be used to identify a drawing track, and may specifically include: object information and/or expression information, as well as author information, drawing time, and the like. In a specific implementation, the labels of the preset drawing tracks may be obtained in advance. The label may be obtained directly from the description information of a drawing track in a children's drawing database (for example, the description information may describe what the drawing track depicts, what emotion it expresses, etc.); or the label may be obtained directly from information about the drawing track included in the webpage to which the drawing track belongs; or the label may be obtained by analyzing characteristics such as the shape and details of the preset drawing track. It can be understood that the embodiment of the present invention does not limit the specific manner of obtaining the labels of the preset drawing tracks.
In an application example of the present invention, assuming the number of preset drawing tracks is 10000 and the number of preset drawing tracks whose similarity to the drawing track corresponding to fig. 4 exceeds 90% is 10, the annotation information of the drawing track corresponding to fig. 4 may be obtained according to the labels of those 10 preset drawing tracks: a rocket.
In summary, technical solution 1 determines the annotation information corresponding to the drawing track according to the label information of the preset drawing tracks whose similarity meets the preset similarity condition. This has the advantages of simple operation and high processing efficiency, and, provided the richness of the preset drawing tracks is ensured, more accurate annotation information can be obtained; that is, the analysis precision of the drawing track can be improved.
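Steps A1 and A2 of technical solution 1 might be sketched as follows; the track representation, the similarity function, and the majority-label rule are all illustrative assumptions, since the embodiment does not fix a particular matching method:

```python
from collections import Counter

def annotate_by_similarity(trace, presets, similarity, threshold=0.9):
    """Label a drawing track from preset tracks above a similarity threshold.

    presets is a list of (preset_trace, label) pairs; similarity is any
    function returning a value in [0, 1] (standing in for the graph
    matching of step A1). The most frequent label among sufficiently
    similar presets is returned, mirroring step A2.
    """
    labels = [label for preset, label in presets
              if similarity(trace, preset) > threshold]
    if not labels:
        return None  # no preset track meets the preset similarity condition
    return Counter(labels).most_common(1)[0][0]
```

For instance, with tracks rasterized into point sets, a Jaccard overlap could serve as the pluggable similarity function.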
Technical solution 2
In technical scheme 2, the step 602 of analyzing the drawing trajectory to obtain corresponding annotation information may specifically include:
step B1, clustering the intelligent terminals according to the historical drawing tracks, the historical annotation information, and/or the corresponding feedback information;
and step B2, acquiring a target intelligent terminal with the same category as the first intelligent terminal, and determining annotation information corresponding to the drawing track according to historical annotation information corresponding to the historical drawing track of the target intelligent terminal.
In the embodiment of the present invention, the feedback information may be information generated by a user in response to the annotation information, where the user may be the drawer himself or herself, or a related user such as the drawer's parent, and the feedback information may reflect the user's degree of recognition of the annotation information. In practical applications, the feedback information may include: evaluation information (such as an evaluation level or evaluation score), sharing information, saving information, correction information, and the like. If a user's evaluation score for the annotation information is high, or if the user saves or shares the annotation information, it indicates that the user has a high degree of recognition of the annotation information. Alternatively, the user may generate correction information for the annotation information; for example, the historical expression information of a historical drawing track may be corrected from "happy" to "sad", and the corrected historical expression information may be used as the final historical annotation information.
Step B1 may cluster according to the historical annotation information, and/or the corrected historical annotation information whose user recognition degree exceeds a recognition degree threshold, so as to classify intelligent terminals having the same or similar drawing habits and/or expression habits into the same category. For example, some children like to express "happy" by drawing a gesture with both hands raised, and these can be classified into one category; other children like to express "impatience" by the same gesture, and these can likewise be classified into a category of their own.
Step B2 may obtain a target intelligent terminal having the same category as the first intelligent terminal, and since the target intelligent terminal and the first intelligent terminal have the same or similar drawing habit and/or expression habit, the annotation information corresponding to the drawing track may be determined by using the historical annotation information that has been generated by the target intelligent terminal. For example, the target smart terminal has a history drawing track 2 and history annotation information 2 which are not present in the first smart terminal, in addition to the history drawing track 1 and history annotation information 1 which are the same as those of the first smart terminal, and when the first smart terminal has generated a drawing track similar to the history drawing track 2, the annotation information of the drawing track can be obtained according to the history annotation information 2.
In practical applications, in order to improve the accuracy of the annotation information, the annotation information of the drawing trace can be obtained according to the historical annotation information and/or the corrected historical annotation information, wherein the recognition degree of the historical annotation information exceeds a recognition degree threshold value. It can be understood that, in the embodiment of the present invention, a specific process of determining annotation information corresponding to a drawing track according to history annotation information corresponding to the history drawing track of the target intelligent terminal is not limited.
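Steps B1 and B2 of technical solution 2 might be sketched as follows, taking the cluster assignments as given; the dictionary shapes and the majority-vote rule are assumptions made for illustration:

```python
from collections import Counter

def annotate_from_cluster(first_id, clusters, history, trace_key):
    """Reuse historical annotation information from same-category terminals.

    clusters maps terminal id -> cluster id (the result of step B1);
    history maps terminal id -> {trace_key: annotation}, where trace_key
    identifies a (normalized) drawing track. Target terminals sharing
    the first terminal's category are consulted, mirroring step B2.
    """
    cluster = clusters[first_id]
    peers = [t for t, c in clusters.items() if c == cluster and t != first_id]
    votes = [history[t][trace_key] for t in peers
             if trace_key in history.get(t, {})]
    # Majority annotation among peers, or None if no peer has seen this track.
    return Counter(votes).most_common(1)[0][0] if votes else None
```

This captures the idea that a target terminal's historical annotation information (e.g. "historical annotation information 2") can label a track the first terminal has never produced before.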
Technical solution 3
In technical solution 3, the step 602 of analyzing the drawing trace to obtain corresponding annotation information may specifically include:
step C1, extracting features of the drawing track;
and step C2, inputting the characteristics into a drawing recognition model, and outputting annotation information corresponding to the characteristics by the drawing recognition model.
Technical solution 3 may use a drawing recognition model to recognize the drawing track. In practical applications, drawing track samples and corresponding annotation information generated by child users may be collected through the client running on the first intelligent terminal or through other clients, and the drawing track samples may be trained to obtain a corresponding drawing recognition model with drawing-track recognition capability. Specifically, preset features of preset annotation information may be matched against the features of the drawing track, and if the matching succeeds, the annotation information of the features may be recognized as the preset annotation information. The preset annotation information may include: at least one object and its corresponding expression, at least one character and its corresponding expression, and the like. In addition, a drawing recognition model with multi-class recognition capability may be established, where the number of classes may correspond to the number of objects and their corresponding expressions, and the like.
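The train/predict cycle of steps C1-C2 might be sketched with a toy stand-in model; the nearest-centroid classifier and the fixed-length feature vectors are assumptions for illustration, as the embodiment does not specify the model family or the features:

```python
import math

class DrawingRecognitionModel:
    """A toy stand-in for the drawing recognition model of technical
    solution 3: a nearest-centroid classifier over fixed-length feature
    vectors extracted from drawing tracks (step C1 is assumed done)."""

    def __init__(self):
        self.centroids = {}  # annotation label -> mean feature vector

    def train(self, samples):
        """samples: list of (feature_vector, annotation) pairs."""
        sums, counts = {}, {}
        for feats, label in samples:
            acc = sums.setdefault(label, [0.0] * len(feats))
            for i, f in enumerate(feats):
                acc[i] += f
            counts[label] = counts.get(label, 0) + 1
        self.centroids = {label: [v / counts[label] for v in acc]
                          for label, acc in sums.items()}

    def predict(self, feats):
        """Step C2: output the annotation whose centroid is nearest."""
        return min(self.centroids,
                   key=lambda label: math.dist(feats, self.centroids[label]))
```

A production system would of course use a stronger learned model; the point of the sketch is only the interface of step C2 (features in, annotation information out).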
Technical solution 4
With respect to any one of technical solution 1, technical solution 2, and technical solution 3, in technical solution 4, the step 602 of analyzing the drawing trajectory to obtain corresponding annotation information may further include:
and D1, calibrating the annotation information corresponding to the drawing track according to the feedback information of the user on the historical annotation information corresponding to the historical drawing track of the first intelligent terminal, so as to obtain calibrated annotation information.
Step D1 may use the feedback information to calibrate the annotation information obtained in any one of technical solution 1, technical solution 2, and technical solution 3, so as to improve the accuracy of the annotation information.
In practical applications, the annotation information of the drawing track may be calibrated according to the historical annotation information whose recognition degree exceeds a recognition degree threshold and/or the corrected historical annotation information. Because the historical annotation information corresponding to the historical drawing tracks of the first intelligent terminal can reflect the drawing habits and/or expression habits of the current user, the calibration can improve the accuracy of the annotation information. For example, suppose the historical annotation information corresponding to the historical drawing tracks of the first intelligent terminal indicates that the user of the first intelligent terminal is accustomed to expressing "happy" with eyebrows drawn in an upright "八" (eight-character) shape, while in technical solution 1 the label of a preset drawing track with such eyebrows includes "sad"; in this case, technical solution 4 can achieve a better correction effect.
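Step D1 might be sketched as follows; the shape of the per-user correction history and the recognition-score threshold are illustrative assumptions:

```python
def calibrate_with_feedback(annotation, user_history, recognition_threshold=0.8):
    """Calibrate a predicted annotation against the user's own feedback.

    user_history maps a predicted annotation to a pair
    (corrected_annotation, recognition_score), representing how this
    user has previously corrected that annotation and how strongly the
    correction was recognized (saved, shared, highly rated). When the
    score exceeds the threshold, the user's habitual meaning wins.
    """
    entry = user_history.get(annotation)
    if entry is not None:
        corrected, score = entry
        if score > recognition_threshold:
            return corrected
    return annotation  # no strong personal habit recorded; keep prediction
```

This realizes the "八"-shaped-eyebrow example above: a generic "sad" prediction is overridden by the user's recorded habit of meaning "happy".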
Technical solution 5
With respect to any one of technical solution 1, technical solution 2, and technical solution 3, in technical solution 5, the step 602 of analyzing the drawing trajectory to obtain corresponding annotation information may further include:
and E1, calibrating the annotation information corresponding to the drawing track according to the relevance between the current environment information of the first intelligent terminal and a preset theme, so as to obtain calibrated annotation information.
In the embodiment of the present invention, the current environment information of the first intelligent terminal may be used to indicate the environment where the first intelligent terminal is located, and may specifically include at least one of the following: time information, location information, light information, weather information, and the like.
The embodiment of the present invention may pre-establish a mapping relationship between preset environment information and preset themes, where a preset theme may be related to festivals (such as a child's birthday or Children's Day) or children's activities. For example, the preset time range corresponding to the preset theme "Children's Day" may be "June 1 and the 5 days before it"; when the current time information of the first intelligent terminal falls within this preset time range, "Children's Day" may be used to calibrate the annotation information, for example, by adding the "cheerful" atmosphere expressed by "Children's Day" to the calibrated annotation information. Similarly, the preset themes may further include: themes corresponding to children's festivals such as Christmas and Halloween.
For another example, the preset theme "zoo" may have a corresponding preset position range, and when the current position information of the first intelligent terminal is in the preset position range, the "zoo" may be used to calibrate the annotation information, for example, adding an atmosphere such as "lovely", "nature", and the like expressed by the "zoo" to the calibrated annotation information. Similarly, the preset theme may further include: themes corresponding to children's places such as ' amusement parks ' and ' ocean halls ', and the like.
In summary, according to the technical scheme 5, the annotation information corresponding to the drawing track is calibrated according to the association degree between the current environment information of the first intelligent terminal and the preset theme, so that the degree of engagement between the annotation information and the current environment information can be improved, and therefore, the accuracy of the annotation information can be improved.
Technical solution 1 to technical solution 5 above describe the analysis process of the drawing track in detail. It can be understood that a person skilled in the art may adopt any one or a combination of technical solutions 1 to 5 according to practical application requirements, or may adopt other analysis processes for the drawing track; for example, artificial intelligence technology may be used to simulate the expressive thinking of the current user and/or other users with respect to the drawing track, and the annotation information corresponding to the drawing track may be obtained according to that expressive thinking. The embodiment of the present invention does not limit the specific analysis process of the drawing track.
In summary, in the embodiment of the present invention, the intelligent terminal corresponding to the first intelligent terminal may be a terminal used by a parent, so that the parent can learn the child's current state in real time from the child's current drawing track, for example, whether the child is currently happy or sad, or what the child's current inner needs are. This provides an additional bridge for parents to understand their children.
Example of the method
To better understand the embodiments of the present invention, an example of a trajectory processing method of the present invention is provided herein.
In this example, the intelligent terminal may provide a virtual drawing board, such as an optical drawing board, to the child in the form of light beam projection, so that the child draws his or her expression or a desired gift or even mood at the moment through the optical drawing board. Specifically, the client running on the intelligent terminal can detect an operation coordinate generated by a child in an operation space corresponding to the first intelligent terminal, generate a corresponding drawing track according to the operation coordinate, and send the drawing track to the server.
The server can be provided with cloud resources such as a large database and a large number of computing devices for computing, and can analyze the drawing track by using the cloud resources and technologies such as pattern matching, pattern recognition and artificial intelligence to obtain corresponding annotation information; and the server can calibrate the annotation information corresponding to the drawing track according to the feedback information of the user on the historical annotation information corresponding to the historical drawing track of the first intelligent terminal, so that the calibrated annotation information conforms to the drawing habit and/or expression habit of the current child. Or, the server may calibrate the annotation information corresponding to the drawing track according to the association degree between the current environment information of the first intelligent terminal and the preset theme, so that the degree of engagement between the annotation information and the current environment information can be improved, and therefore, the accuracy of the annotation information can be improved.
Further, the server can also send the drawing track and the corresponding annotation information to the intelligent terminal corresponding to the parents.
Suppose a child clicks the "completion" control after drawing a rocket figure on the optical drawing board; the first intelligent terminal can then send the figure to the server or the second intelligent terminal. Through analysis, the server or the second intelligent terminal can determine that the object corresponding to the figure is a rocket and, knowing that the child's birthday is approaching, can obtain the annotation information "rocket, birthday gift" and send the figure and the annotation information to the client account of a preset contact, such as the child's mother. From the annotation information, the mother or another preset contact learns that the child wants to remind her of the rocket toy mentioned as a birthday gift, and will not forget to give the birthday gift to the child.
To sum up, the embodiment of the present invention provides a channel for children to express their needs and communicate with their parents. It can not only meet children's needs for expressing emotions, moods, and their understanding of things, improving children's ability to describe things and express themselves, but also improve the flexibility and convenience of drawing operations; moreover, information can be transmitted between children and parents, providing an additional bridge for parents to understand their children.
It should be noted that, for simplicity of description, the method embodiments are described as a series of action combinations, but those skilled in the art should understand that the present invention is not limited by the described action sequences, because some steps may be performed in other orders or simultaneously according to the present invention. Furthermore, those skilled in the art will appreciate that the embodiments described in this specification are preferred embodiments, and the actions described therein are not necessarily required by the embodiments of the present invention.
Apparatus embodiment one
Referring to fig. 7, a block diagram of a first embodiment of a track processing apparatus according to the present invention is shown, which may specifically include the following modules:
a receiving module 701, configured to receive a drawing track sent by a first intelligent terminal;
an analysis module 702, configured to analyze the drawing trace to obtain corresponding annotation information; and
a sending module 703, configured to send the drawing track and the annotation information corresponding to the drawing track to the first intelligent terminal and/or the second intelligent terminal.
In another optional embodiment of the present invention, the analysis module may specifically include:
the first determining submodule is used for determining the similarity between the drawing track and a preset drawing track; and
and the second determining submodule is used for determining annotation information corresponding to the drawing track according to the label information of the preset drawing track with the similarity meeting the preset similarity condition.
In another optional embodiment of the present invention, the analysis module may specifically include:
the clustering submodule is used for clustering the intelligent terminal according to the historical drawing track, the historical annotation information and/or the corresponding feedback information; and
and the third determining submodule is used for acquiring a target intelligent terminal with the same category as the first intelligent terminal and determining annotation information corresponding to the drawing track according to historical annotation information corresponding to the historical drawing track of the target intelligent terminal.
In another optional embodiment of the present invention, the analysis module may specifically include:
the extraction sub-module is used for extracting the characteristics of the drawing track; and
and the recognition sub-module is used for inputting the features into a drawing recognition model and outputting annotation information corresponding to the features by the drawing recognition model.
In an optional embodiment of the present invention, the analysis module may further include:
and the first calibration sub-module is used for calibrating the annotation information corresponding to the drawing track according to the feedback information of the user on the historical annotation information corresponding to the historical drawing track of the first intelligent terminal so as to obtain calibrated annotation information.
In yet another optional embodiment of the present invention, the analysis module may further include:
and the second calibration submodule is used for calibrating the annotation information corresponding to the drawing track according to the relevance between the current environment information of the first intelligent terminal and a preset theme so as to obtain calibrated annotation information.
In yet another optional embodiment of the present invention, the preset drawing trace may specifically include: at least one of a child sketch trace, a child basic drawing trace and a historical drawing trace of at least one intelligent terminal.
Apparatus embodiment two
Referring to fig. 8, a block diagram of a second embodiment of a track processing apparatus according to the present invention is shown, which may specifically include the following modules:
the detection module 801 is configured to detect an operation coordinate generated by a user in an operation space corresponding to the first intelligent terminal;
a generating module 802, configured to generate a corresponding drawing track according to the operation coordinate; and
a sending module 803, configured to send the drawing trace to a server.
In an optional embodiment of the present invention, the apparatus may further comprise:
a receiving module, configured to receive the drawing track sent by the server and the annotation information corresponding to the drawing track; or
a second sending module, configured to send the drawing track and the annotation information corresponding to the drawing track to the second intelligent terminal.
In another optional embodiment of the present invention, the operation space may specifically include: a virtual drawing board, and the apparatus may further include:
the first display module is used for displaying the virtual drawing board;
and the second display module is used for displaying a virtual drawing board, and displaying the drawing track on the virtual drawing board.
In yet another optional embodiment of the present invention, the first display module may specifically include:
and the display unit is used for displaying the virtual drawing board in a plane or curved surface form in an optical mode.
For the apparatus embodiment, since it is substantially similar to the method embodiment, the description is relatively simple, and reference may be made to the partial description of the method embodiment for relevant points.
The embodiments in the present specification are all described in a progressive manner, and each embodiment focuses on differences from other embodiments, and portions that are the same and similar between the embodiments may be referred to each other.
With regard to the apparatus in the above-described embodiment, the specific manner in which each module performs the operation has been described in detail in the embodiment related to the method, and will not be elaborated here.
FIG. 9 is a block diagram illustrating an apparatus 900 for trajectory processing according to an exemplary embodiment. For example, the apparatus 900 may be a mobile phone, a computer, a digital broadcast terminal, a messaging device, a game console, a tablet device, a medical device, an exercise device, a personal digital assistant, and the like.
Referring to fig. 9, the apparatus 900 may include one or more of the following components: processing component 902, memory 904, power component 906, multimedia component 908, audio component 910, input/output (I/O) interface 912, sensor component 914, and communication component 916.
The processing component 902 generally controls the overall operation of the device 900, such as operations associated with display, telephone calls, data communications, camera operations, and recording operations. The processing component 902 may include one or more processors to execute instructions to perform all or part of the steps of the methods described above. Further, the processing component 902 may include one or more modules that facilitate interaction between the processing component 902 and other components. For example, the processing component 902 may include a multimedia module to facilitate interaction between the multimedia component 908 and the processing component 902.
The memory 904 is configured to store various types of data to support operation at the device 900. Examples of such data include instructions for any application or method operating on device 900, contact data, phonebook data, messages, pictures, videos, and so forth. The memory 904 may be implemented by any type or combination of volatile or non-volatile memory devices such as Static Random Access Memory (SRAM), electrically erasable programmable read-only memory (EEPROM), erasable programmable read-only memory (EPROM), programmable read-only memory (PROM), read-only memory (ROM), magnetic memory, flash memory, magnetic or optical disks.
The power supply component 906 provides power to the various components of the device 900. The power components 906 may include a power management system, one or more power supplies, and other components associated with generating, managing, and distributing power for the device 900.
The multimedia component 908 includes a screen providing an output interface between the device 900 and the user. In some embodiments, the screen may include a liquid crystal display (LCD) and a touch panel (TP). If the screen includes a touch panel, the screen may be implemented as a touch screen to receive input signals from the user. The touch panel includes one or more touch sensors to sense touches, slides, and gestures on the touch panel. The touch sensors may not only sense the boundary of a touch or slide action but also detect the duration and pressure associated with the touch or slide operation. In some embodiments, the multimedia component 908 includes a front-facing camera and/or a rear-facing camera. The front-facing camera and/or the rear-facing camera may receive external multimedia data when the device 900 is in an operating mode, such as a shooting mode or a video mode. Each of the front-facing camera and the rear-facing camera may be a fixed optical lens system or have focus and optical zoom capability.
The audio component 910 is configured to output and/or input audio signals. For example, audio component 910 includes a Microphone (MIC) configured to receive external audio signals when apparatus 900 is in an operating mode, such as a call mode, a recording mode, and a voice recognition mode. The received audio signals may further be stored in the memory 904 or transmitted via the communication component 916. In some embodiments, audio component 910 also includes a speaker for outputting audio signals.
I/O interface 912 provides an interface between processing component 902 and peripheral interface modules, which may be keyboards, click wheels, buttons, etc. These buttons may include, but are not limited to: a home button, a volume button, a start button, and a lock button.
The sensor component 914 includes one or more sensors for providing status assessments of various aspects of the apparatus 900. For example, the sensor component 914 may detect the open/closed state of the device 900 and the relative positioning of components, such as the display and keypad of the apparatus 900. The sensor component 914 may also detect a change in the position of the apparatus 900 or of a component of the apparatus 900, the presence or absence of user contact with the apparatus 900, the orientation or acceleration/deceleration of the apparatus 900, and a change in the temperature of the apparatus 900. The sensor component 914 may include a proximity sensor configured to detect the presence of a nearby object without any physical contact. The sensor component 914 may also include a light sensor, such as a CMOS or CCD image sensor, for use in imaging applications. In some embodiments, the sensor component 914 may also include an acceleration sensor, a gyroscope sensor, a magnetic sensor, a pressure sensor, or a temperature sensor.
The communication component 916 is configured to facilitate communications between the apparatus 900 and other devices in a wired or wireless manner. The apparatus 900 may access a wireless network based on a communication standard, such as WiFi, 2G or 3G, or a combination thereof. In an exemplary embodiment, the communication component 916 receives a broadcast signal or broadcast associated information from an external broadcast management system via a broadcast channel. In an exemplary embodiment, the communication component 916 further includes a Near Field Communication (NFC) module to facilitate short-range communications. For example, the NFC module may be implemented based on Radio Frequency Identification (RFID) technology, infrared data association (IrDA) technology, Ultra Wideband (UWB) technology, Bluetooth (BT) technology, and other technologies.
In an exemplary embodiment, the apparatus 900 may be implemented by one or more Application Specific Integrated Circuits (ASICs), Digital Signal Processors (DSPs), Digital Signal Processing Devices (DSPDs), Programmable Logic Devices (PLDs), Field Programmable Gate Arrays (FPGAs), controllers, micro-controllers, microprocessors or other electronic components for performing the above-described methods.
In an exemplary embodiment, a non-transitory computer-readable storage medium comprising instructions, such as the memory 904 comprising instructions, executable by a processor of the apparatus 900 to perform the above-described method is also provided. For example, the non-transitory computer-readable storage medium may be a ROM, a random access memory (RAM), a CD-ROM, a magnetic tape, a floppy disk, an optical data storage device, and the like.
A non-transitory computer readable storage medium in which instructions, when executed by a processor of a server, enable the processor to perform a trajectory processing method, the method comprising: receiving a drawing track sent by a current intelligent terminal; analyzing the drawing track to obtain corresponding annotation information; and sending the drawing track and the corresponding annotation information to the intelligent terminal corresponding to the current intelligent terminal.
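The server-side method recited above — receive a drawing track from the current intelligent terminal, analyze it to obtain corresponding annotation information, and send the track and annotation onward — can be sketched as a small pipeline. This is a non-authoritative illustration: the `analyze` and `send` callables stand in for the recognition and delivery components, and their signatures are assumptions:

```python
def process_track(track, analyze, send):
    """Server-side pipeline sketch: analyze a received drawing track for
    annotation information, then send both to the corresponding terminal.

    track:   dict with 'terminal_id' and 'points' (the drawing track)
    analyze: callable mapping the track points to annotation information
    send:    callable delivering a message to a terminal by its id
    """
    annotation = analyze(track["points"])
    message = {"track": track["points"], "annotation": annotation}
    # Deliver the drawing track together with its annotation information.
    send(track["terminal_id"], message)
    return message
```

In a real deployment, `analyze` would be backed by one of the analysis steps the description covers (similarity matching against preset tracks, clustering of terminals, or a drawing recognition model), and `send` would route to the second intelligent terminal and/or back to the first.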
Fig. 10 is a schematic structural diagram of a server in an embodiment of the present invention. The server 1900 may vary widely in configuration or performance and may include one or more central processing units (CPUs) 1922 (e.g., one or more processors), memory 1932, and one or more storage media 1930 (e.g., one or more mass storage devices) storing applications 1942 or data 1944. The memory 1932 and the storage media 1930 may be transient or persistent storage. The program stored in a storage medium 1930 may include one or more modules (not shown), and each module may include a series of instruction operations on the server. Further, the central processing unit 1922 may be configured to communicate with the storage medium 1930 to execute the series of instruction operations in the storage medium 1930 on the server 1900.
The server 1900 may also include one or more power supplies 1926, one or more wired or wireless network interfaces 1950, one or more input/output interfaces 1958, one or more keyboards 1956, and/or one or more operating systems 1941, such as Windows Server™, Mac OS X™, Unix™, Linux™, FreeBSD™, and the like.
Other embodiments of the invention will be apparent to those skilled in the art from consideration of the specification and practice of the invention disclosed herein. This invention is intended to cover any variations, uses, or adaptations of the invention following, in general, the principles of the invention and including such departures from the present disclosure as come within known or customary practice within the art to which the invention pertains. It is intended that the specification and examples be considered as exemplary only, with a true scope and spirit of the invention being indicated by the following claims.
It will be understood that the invention is not limited to the precise arrangements described above and shown in the drawings, and that various modifications and changes may be made without departing from the scope thereof. The scope of the invention is limited only by the appended claims.
The above description is only for the purpose of illustrating the preferred embodiments of the present invention and is not to be construed as limiting the invention, and any modifications, equivalents, improvements and the like that fall within the spirit and principle of the present invention are intended to be included therein.
The track processing method, track processing apparatus, and track processing device provided by the present invention have been described in detail above. Specific examples are applied herein to explain the principle and implementation of the present invention, and the description of the above embodiments is only intended to help in understanding the method and core idea of the present invention. Meanwhile, for a person skilled in the art, there may be variations in the specific embodiments and the application scope according to the idea of the present invention. In summary, the content of this specification should not be construed as limiting the present invention.

Claims (32)

1. A trajectory processing method, comprising:
receiving a drawing track sent by a first intelligent terminal;
analyzing the drawing track to obtain corresponding annotation information;
sending the drawing track and corresponding annotation information thereof to a second intelligent terminal and/or the first intelligent terminal;
the step of analyzing the drawing track to obtain corresponding annotation information includes:
calibrating annotation information corresponding to the drawing track according to the relevance between the current environment information of the first intelligent terminal and a preset theme to obtain calibrated annotation information;
the annotation information includes: object information and/or expression information; the object information is used for representing the object described by the drawing track, and the expression information is used for representing the emotion or emotion expressed by the drawing track.
2. The method of claim 1, wherein the step of analyzing the drawing trace for corresponding annotation information comprises:
determining the similarity between the drawing track and a preset drawing track;
and determining annotation information corresponding to the drawing track according to the label information of the preset drawing track with the similarity meeting the preset similarity condition.
3. The method according to claim 1, wherein the step of analyzing the drawing trace for corresponding annotation information comprises:
clustering the intelligent terminals according to the historical drawing tracks, the historical annotation information and/or the corresponding feedback information;
and acquiring a target intelligent terminal with the same category as the first intelligent terminal, and determining annotation information corresponding to the drawing track according to historical annotation information corresponding to the historical drawing track of the target intelligent terminal.
4. The method of claim 1, wherein the step of analyzing the drawing trace for corresponding annotation information comprises:
extracting features of the drawing track;
inputting the characteristics into a drawing recognition model, and outputting annotation information corresponding to the characteristics by the drawing recognition model.
5. The method according to claim 2, 3 or 4, wherein the step of analyzing the drawing trace for corresponding annotation information further comprises:
and calibrating the annotation information corresponding to the drawing track according to the feedback information of the user on the historical annotation information corresponding to the historical drawing track of the first intelligent terminal so as to obtain the calibrated annotation information.
6. The method according to claim 2, wherein the preset drawing trajectory comprises: at least one of a simple drawing track, a basic drawing track and a historical drawing track of at least one intelligent terminal.
7. A trajectory processing method, comprising:
detecting an operation coordinate generated by a user in an operation space corresponding to the first intelligent terminal;
generating a corresponding drawing track according to the operation coordinates;
sending the drawing track to a server; the drawing track is used for the server to analyze the drawing track to obtain corresponding annotation information, and to send the drawing track and the corresponding annotation information to a second intelligent terminal and/or the first intelligent terminal; wherein the step of analyzing the drawing track to obtain corresponding annotation information comprises: calibrating annotation information corresponding to the drawing track according to the relevance between the current environment information of the first intelligent terminal and a preset theme to obtain calibrated annotation information;
the annotation information includes: object information and/or expression information; the object information is used for representing the object described by the drawing track, and the expression information is used for representing the emotion or emotion expressed by the drawing track.
8. The method of claim 7, further comprising:
receiving the drawing track sent by the server and the annotation information corresponding to the drawing track;
or sending the drawing track and the corresponding annotation information to the second intelligent terminal.
9. The method according to claim 7 or 8, characterized in that the method further comprises:
displaying the virtual drawing board;
or displaying a virtual drawing board, and displaying the drawing track on the virtual drawing board.
10. The method of claim 9, wherein the step of displaying the virtual drawing board comprises:
and displaying the virtual drawing board in a plane or curved surface form in an optical mode.
11. A trajectory processing device characterized by comprising:
the receiving module is used for receiving the drawing track sent by the first intelligent terminal;
the analysis module is used for analyzing the drawing track to obtain corresponding annotation information; and
the sending module is used for sending the drawing track and the annotation information corresponding to the drawing track to a second intelligent terminal and/or the first intelligent terminal;
the analysis module may further include:
the second calibration submodule is used for calibrating the annotation information corresponding to the drawing track according to the relevance between the current environment information of the first intelligent terminal and a preset theme so as to obtain calibrated annotation information;
the annotation information includes: object information and/or expression information; the object information is used for representing the object described by the drawing track, and the expression information is used for representing the emotion or emotion expressed by the drawing track.
12. The apparatus of claim 11, wherein the analysis module comprises:
the first determining submodule is used for determining the similarity between the drawing track and a preset drawing track; and
and the second determining submodule is used for determining annotation information corresponding to the drawing track according to the label information of the preset drawing track with the similarity meeting the preset similarity condition.
13. The apparatus of claim 11, wherein the analysis module comprises:
the clustering submodule is used for clustering the intelligent terminal according to the historical drawing track, the historical annotation information and/or the corresponding feedback information; and
and the third determining submodule is used for acquiring a target intelligent terminal with the same category as the first intelligent terminal and determining annotation information corresponding to the drawing track according to historical annotation information corresponding to the historical drawing track of the target intelligent terminal.
14. The apparatus of claim 11, wherein the analysis module comprises:
the extraction submodule is used for extracting the characteristics of the drawing track; and
and the recognition sub-module is used for inputting the features into a drawing recognition model and outputting annotation information corresponding to the features by the drawing recognition model.
15. The apparatus of claim 12, 13 or 14, wherein the analysis module further comprises:
and the first calibration submodule is used for calibrating the annotation information corresponding to the drawing track according to the feedback information of the user on the historical annotation information corresponding to the historical drawing track of the first intelligent terminal so as to obtain calibrated annotation information.
16. The apparatus of claim 12, wherein the preset drawing track comprises: at least one of a simple drawing track, a basic drawing track and a historical drawing track of at least one intelligent terminal.
17. A trajectory processing device characterized by comprising:
the detection module is used for detecting an operation coordinate generated by a user in an operation space corresponding to the first intelligent terminal;
the generating module is used for generating a corresponding drawing track according to the operation coordinate; and
the sending module is used for sending the drawing track to a server; the drawing track is used for the server to analyze the drawing track to obtain corresponding annotation information, and to send the drawing track and the corresponding annotation information to a second intelligent terminal and/or the first intelligent terminal; wherein the step of analyzing the drawing track to obtain corresponding annotation information comprises: calibrating annotation information corresponding to the drawing track according to the relevance between the current environment information of the first intelligent terminal and a preset theme to obtain calibrated annotation information;
the annotation information includes: object information and/or expression information; the object information is used for representing the object described by the drawing track, and the expression information is used for representing the emotion or emotion expressed by the drawing track.
18. The apparatus of claim 17, further comprising:
the receiving module is used for receiving the drawing track sent by the server and the annotation information corresponding to the drawing track; or
And the second sending module is used for sending the drawing track and the annotation information corresponding to the drawing track to the second intelligent terminal.
19. The apparatus of claim 17 or 18, wherein the operating space comprises: virtual drawing board, the device still includes:
the first display module is used for displaying the virtual drawing board;
and the second display module is used for displaying a virtual drawing board, and displaying the drawing track on the virtual drawing board.
20. The apparatus of claim 19, wherein the first display module comprises:
and the display unit is used for displaying the virtual drawing board in a plane or curved surface form in an optical mode.
21. An apparatus for trajectory processing, comprising a memory, and one or more programs, wherein the one or more programs are stored in the memory, and wherein execution of the one or more programs by one or more processors comprises instructions for:
receiving a drawing track sent by a first intelligent terminal;
analyzing the drawing track to obtain corresponding annotation information;
sending the drawing track and corresponding annotation information thereof to a second intelligent terminal and/or the first intelligent terminal;
the analyzing the drawing track to obtain corresponding annotation information includes:
calibrating annotation information corresponding to the drawing track according to the relevance between the current environment information of the first intelligent terminal and a preset theme to obtain calibrated annotation information;
the annotation information includes: object information and/or expression information; the object information is used for representing the object described by the drawing track, and the expression information is used for representing the emotion or emotion expressed by the drawing track.
22. The apparatus of claim 21, wherein the analyzing the drawing trace for corresponding annotation information comprises:
determining the similarity between the drawing track and a preset drawing track;
and determining annotation information corresponding to the drawing track according to the label information of the preset drawing track with the similarity meeting the preset similarity condition.
23. The apparatus of claim 21, wherein the analyzing the drawing trace for corresponding annotation information comprises:
clustering the intelligent terminal according to the historical drawing track, the historical annotation information and/or the corresponding feedback information;
and acquiring a target intelligent terminal with the same category as the first intelligent terminal, and determining annotation information corresponding to the drawing track according to historical annotation information corresponding to the historical drawing track of the target intelligent terminal.
24. The apparatus of claim 21, wherein the analyzing the drawing trace for corresponding annotation information comprises:
extracting features of the drawing track;
inputting the characteristics into a drawing recognition model, and outputting annotation information corresponding to the characteristics by the drawing recognition model.
25. The apparatus according to claim 22, 23 or 24, wherein the analyzing the drawing trace for corresponding annotation information further comprises:
and calibrating the annotation information corresponding to the drawing track according to the feedback information of the user on the historical annotation information corresponding to the historical drawing track of the first intelligent terminal so as to obtain the calibrated annotation information.
26. The apparatus of claim 22, wherein the preset drawing trace comprises: at least one of a simple drawing track, a basic drawing track and a historical drawing track of at least one intelligent terminal.
27. An apparatus for trajectory processing, comprising a memory, and one or more programs, wherein the one or more programs are stored in the memory, and wherein execution of the one or more programs by one or more processors comprises instructions for:
detecting an operation coordinate generated by a user in an operation space corresponding to the first intelligent terminal;
generating a corresponding drawing track according to the operation coordinates;
sending the drawing track to a server; the drawing track is used for the server to analyze the drawing track to obtain corresponding annotation information, and to send the drawing track and the corresponding annotation information to a second intelligent terminal and/or the first intelligent terminal; wherein the step of analyzing the drawing track to obtain corresponding annotation information comprises: calibrating annotation information corresponding to the drawing track according to the relevance between the current environment information of the first intelligent terminal and a preset theme to obtain calibrated annotation information;
the annotation information includes: object information and/or expression information; the object information is used for representing the object described by the drawing track, and the expression information is used for representing the emotion or emotion expressed by the drawing track.
28. The apparatus of claim 27, wherein execution of the one or more programs by the one or more processors further comprises instructions for:
receiving the drawing track sent by the server and the annotation information corresponding to the drawing track;
or sending the drawing track and the corresponding annotation information to the second intelligent terminal.
29. The apparatus of claim 27 or 28, wherein execution of the one or more programs by the one or more processors further comprises instructions for:
displaying the virtual drawing board;
or displaying a virtual drawing board, and displaying the drawing track on the virtual drawing board.
30. The apparatus of claim 29, wherein the displaying of the virtual drawing board comprises:
and displaying the virtual drawing board in a plane or curved surface form in an optical mode.
31. One or more machine-readable media having instructions stored thereon, which when executed by one or more processors, cause an apparatus to perform the method of one or more of claims 1-6.
32. One or more machine readable media having instructions stored thereon that, when executed by one or more processors, cause an apparatus to perform the method of one or more of claims 7-10.
CN201610204685.0A 2016-04-01 2016-04-01 Track processing method and device and track processing device Active CN107292221B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201610204685.0A CN107292221B (en) 2016-04-01 2016-04-01 Track processing method and device and track processing device

Publications (2)

Publication Number Publication Date
CN107292221A CN107292221A (en) 2017-10-24
CN107292221B true CN107292221B (en) 2022-09-30

Family

ID=60087340

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201610204685.0A Active CN107292221B (en) 2016-04-01 2016-04-01 Track processing method and device and track processing device

Country Status (1)

Country Link
CN (1) CN107292221B (en)

Families Citing this family (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111080757B (en) * 2018-10-19 2023-08-22 舜宇光学(浙江)研究院有限公司 Drawing method based on inertial measurement unit, drawing system and computing system thereof
CN110315902A (en) * 2019-05-15 2019-10-11 郑州工程技术学院 A kind of fine arts track display system and its method
CN111063009B (en) * 2019-12-18 2021-04-06 山东山科智能科技有限公司 Chinese character writing animation demonstration method and device
CN112925470B (en) * 2021-05-10 2021-10-01 广州朗国电子科技股份有限公司 Touch control method and system of interactive electronic whiteboard and readable medium

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102637199A (en) * 2012-02-29 2012-08-15 浙江大学 Image marking method based on semi-supervised subject modeling
CN103279583A (en) * 2013-06-28 2013-09-04 百视通新媒体股份有限公司 Real-time search method and system based on electronic drawing board
CN103412677A (en) * 2013-08-05 2013-11-27 广东欧珀移动通信有限公司 Method and device for hand-painted content recognition
CN105260899A (en) * 2015-10-27 2016-01-20 清华大学深圳研究生院 Electronic business subject credibility evaluation method and system

Family Cites Families (11)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
TW200732985A (en) * 2006-02-17 2007-09-01 Univ Chung Yuan Christian System and method for handwriting analysis and psychological diagnosis
US20070239430A1 (en) * 2006-03-28 2007-10-11 Microsoft Corporation Correcting semantic classification of log data
CN101895626A (en) * 2010-05-19 2010-11-24 济南北秀信息技术有限公司 Handwriting analyzing device and method thereof for mobile phone
CN102298786B (en) * 2010-06-22 2015-09-02 上海科技馆 The devices and methods therefor that a kind of virtual drawing realizes
CN102289585B (en) * 2011-08-15 2014-06-18 重庆大学 Real-time monitoring method for energy consumption of public building based on data mining
CN103034439A (en) * 2012-11-30 2013-04-10 广东欧珀移动通信有限公司 Method and device for generating drawing file
CN103926997A (en) * 2013-01-11 2014-07-16 北京三星通信技术研究有限公司 Method for determining emotional information based on user input and terminal
CN103106346A (en) * 2013-02-25 2013-05-15 中山大学 Character prediction system based on off-line writing picture division and identification
JP6359253B2 (en) * 2013-08-27 2018-07-18 株式会社ジオクリエイツ Emotion extraction method, emotion extraction program, emotion extraction device, and building design method
CN104766355B (en) * 2015-04-22 2018-04-10 哈尔滨工业大学 Bold and vigorous colour painting interactive system based on graphology analysis and generate the method that colour painting is sprinkled in digitlization in real time using the system
CN105118518B (en) * 2015-07-15 2019-05-10 百度在线网络技术(北京)有限公司 A kind of semantic analysis and device of sound

Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102637199A (en) * 2012-02-29 2012-08-15 浙江大学 Image marking method based on semi-supervised subject modeling
CN103279583A (en) * 2013-06-28 2013-09-04 百视通新媒体股份有限公司 Real-time search method and system based on electronic drawing board
CN103412677A (en) * 2013-08-05 2013-11-27 广东欧珀移动通信有限公司 Method and device for hand-painted content recognition
CN105260899A (en) * 2015-10-27 2016-01-20 清华大学深圳研究生院 Electronic business subject credibility evaluation method and system


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant