CN108874292B - Comment display method and device and intelligent interactive panel - Google Patents


Info

Publication number
CN108874292B
Authority
CN
China
Prior art keywords: annotation, determining, pixel points, position coordinates, track
Prior art date
Legal status: Active (assumed; not a legal conclusion)
Application number
CN201810779592.XA
Other languages
Chinese (zh)
Other versions: CN108874292A (en)
Inventor
邱伟波
Current Assignee
Guangzhou Shiyuan Electronics Thecnology Co Ltd
Guangzhou Shirui Electronics Co Ltd
Original Assignee
Guangzhou Shiyuan Electronics Thecnology Co Ltd
Guangzhou Shirui Electronics Co Ltd
Priority date
Filing date
Publication date
Application filed by Guangzhou Shiyuan Electronics Thecnology Co Ltd, Guangzhou Shirui Electronics Co Ltd filed Critical Guangzhou Shiyuan Electronics Thecnology Co Ltd
Priority to CN201810779592.XA
Publication of CN108874292A
Priority to PCT/CN2018/118233 (WO2020015269A1)
Application granted
Publication of CN108874292B

Classifications

    • G06F3/04883 — Interaction techniques based on graphical user interfaces [GUI] using a touch-screen or digitiser, for inputting data by handwriting, e.g. gesture or text
    • G06F3/0484 — Interaction techniques based on graphical user interfaces [GUI] for the control of specific functions or operations, e.g. selecting or manipulating an object, an image or a displayed text element


Abstract

The invention discloses a comment display method, a comment display device and an intelligent interactive panel, wherein the method comprises the following steps: detecting an annotation operation in an annotation state, wherein the annotation operation is an operation received by an annotation layer, and the annotation layer is a transparent layer covering the content layer; displaying an annotation track corresponding to the annotation operation; determining a target area in the content layer according to the annotation track; extracting image information of the target area in the content layer; determining a target object corresponding to the annotation track according to the annotation track and the extracted image information; clearing the annotation track; and displaying the target object. The invention achieves the effect of intelligently displaying annotations.

Description

Comment display method and device and intelligent interactive panel
Technical Field
The invention relates to the field of intelligent interactive panels, in particular to a comment display method and device and an intelligent interactive panel.
Background
An intelligent interactive tablet, also called an interactive intelligent tablet, is an integrated device that controls the content displayed on a display tablet (display screen) and implements human-computer interaction through touch technology. The device integrates the functions of a projector, an electronic whiteboard, a projection screen, a sound system, a television and a video conference terminal, is suitable for group communication settings, and meets systematized conference requirements such as remote audio and video communication, high-definition display of conference documents in various formats, video file playback, on-site sound reinforcement, on-screen writing, file annotation, storage, printing and distribution. A built-in television receiver and surround sound can also meet audio-visual entertainment needs after work. The device is widely applied in education and teaching, enterprise meetings, commercial exhibitions and other fields, and can effectively improve the communication environment and group communication efficiency.
After the intelligent interactive tablet enters the annotation mode, an annotation layer is newly created covering the content layer; the annotation layer receives the user's annotation operation, and the display screen displays the annotation track corresponding to that operation. However, a problem arises. For example, a user who wants to annotate touches the display screen with a hand or a pen and slides; the annotation layer receives the annotation operation, and the display screen displays the corresponding annotation track, as shown in fig. 1. The user intends to annotate several characters in the fourth row and therefore draws a track similar to a triangle, but part of the annotation track falls on the region of the annotation layer corresponding to the third row of text, which the user did not intend to annotate. The annotation display is therefore not intelligent.
Disclosure of Invention
The invention mainly aims to provide an annotation display method and device and an intelligent interactive panel, so as to solve the problem in the prior art that annotations cannot be displayed intelligently.
In order to achieve the above object, according to an aspect of the present invention, there is provided a comment display method including: detecting an annotation operation in an annotation state, wherein the annotation operation is an operation received by an annotation layer, and the annotation layer is a transparent layer covering a content layer; displaying an annotation track corresponding to the annotation operation; determining a target area in the content layer according to the annotation track; extracting image information of the target area in the content layer; determining a target object corresponding to the annotation track according to the annotation track and the extracted image information; clearing the annotation track; and displaying the target object.
Further, determining a target area in the content layer according to the annotation track includes: determining position coordinates of pixel points forming the annotation track; and determining the target area according to the position coordinates of the pixel points forming the annotation track.
Further, determining the target area according to the position coordinates of the pixel points forming the annotation track includes: determining the position coordinates of the pixel points at the edge of the target area according to the position coordinates of the pixel points forming the annotation track; and determining the target area according to the position coordinates of the pixel points at the edge of the target area.
Further, determining the position coordinates of the pixel points at the edge of the target area according to the position coordinates of the pixel points forming the annotation track includes: screening edge position coordinates from the position coordinates of all pixel points forming the annotation track, wherein the abscissa or ordinate of an edge position coordinate is the maximum or minimum coordinate value among the position coordinates of all pixel points forming the annotation track; and adding a preset value to, or subtracting it from, the edge position coordinates to obtain the position coordinates of the pixel points at the edge of the target area.
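As a concrete sketch of this computation, the edge coordinates can be taken as the minima and maxima over the track's pixel coordinates, expanded by the preset value (here assumed to be 100 pixels; the function name and tuple layout are illustrative, not taken from the patent):

```python
def target_area(track_points, margin=100):
    """Bounding box of an annotation track, expanded by a preset margin.

    track_points: iterable of (x, y) pixel coordinates forming the track.
    Returns (left, top, right, bottom) of the target area.
    """
    xs = [x for x, _ in track_points]
    ys = [y for _, y in track_points]
    # Screen the edge position coordinates (the min/max values among all
    # pixel points of the track), then add/subtract the preset value.
    return (min(xs) - margin, min(ys) - margin,
            max(xs) + margin, max(ys) + margin)
```

In practice the result would also be clamped to the screen bounds, which is omitted here for brevity.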
Further, before the target object is displayed, the display position of the target object is determined according to the positions of the pixel points forming the annotation track.
In order to achieve the above object, according to one aspect of the present invention, there is provided an annotation display device, the device comprising: a detection unit for detecting an annotation operation in an annotation state, wherein the annotation operation is the operation received by an annotation layer, and the annotation layer is a transparent layer covering the content layer; a first display unit for displaying the annotation track corresponding to the annotation operation; a first determining unit for determining a target area in the content layer according to the annotation track; an extracting unit for extracting image information of the target area in the content layer; a second determining unit for determining a target object corresponding to the annotation track according to the annotation track and the extracted image information; a clearing unit for clearing the annotation track; and a second display unit for displaying the target object.
Further, the first determination unit includes: the first determining subunit is used for determining the position coordinates of the pixel points forming the annotation track; and the second determining subunit is used for determining the target area according to the position coordinates of the pixel points forming the annotation track.
Further, the second determining subunit includes: the first determining module is used for determining the position coordinates of the pixel points at the edge of the target area according to the position coordinates of the pixel points forming the annotation track; and the second determining module is used for determining the target area according to the position coordinates of the pixel points at the edge of the target area.
Further, the first determining module comprises: a screening submodule for screening out edge position coordinates from the position coordinates of all the pixel points forming the annotation track, wherein the abscissa or ordinate of an edge position coordinate is the maximum or minimum coordinate value among the position coordinates of all the pixel points forming the annotation track; and a calculating submodule for adding a preset value to, or subtracting it from, the edge position coordinates to obtain the position coordinates of the pixel points at the edge of the target area.
Further, the apparatus further comprises: and the third determining unit is used for determining the display position of the target object according to the positions of the pixel points forming the annotation track before the target object is displayed by the second display unit.
In order to achieve the above object, according to an aspect of the present invention, an intelligent interactive panel is provided, where the intelligent interactive panel includes a frame, a cover plate, a controller, and a display screen, where the controller is configured to detect an annotation operation in an annotation state, where the annotation operation is an operation received by an annotation layer, and the annotation layer is a transparent layer covering a content layer; the display screen is used for displaying the annotation track corresponding to the annotation operation; the controller is further configured to determine a target area in the content layer according to the annotation track; the controller is further configured to extract image information of the target area in the content layer; the controller is further configured to determine a target object corresponding to the annotation track according to the annotation track and the extracted image information; the controller is further configured to clear the annotation track; and the display screen is further used for displaying the target object.
Further, the controller is configured to: determining position coordinates of pixel points forming the annotation track; and determining the target area according to the position coordinates of the pixel points forming the annotation track.
Further, the controller is configured to: determining the position coordinates of the pixel points at the edge of the target area according to the position coordinates of the pixel points forming the annotation track; and determining the target area according to the position coordinates of the pixel points at the edge of the target area.
Further, the controller is configured to: screen edge position coordinates from the position coordinates of all pixel points forming the annotation track, wherein the abscissa or ordinate of an edge position coordinate is the maximum or minimum coordinate value among the position coordinates of all pixel points forming the annotation track; and add a preset value to, or subtract it from, the edge position coordinates to obtain the position coordinates of the pixel points at the edge of the target area.
Further, the controller is configured to: before the target object is displayed, determine the display position of the target object according to the positions of the pixel points forming the annotation track.
In order to achieve the above object, according to one aspect of the present invention, there is provided a storage medium including a stored program, wherein, when the program runs, the device on which the storage medium is located is controlled to execute the annotation display method described above.
To achieve the above object, according to one aspect of the present invention, there is provided an intelligent interactive tablet, including a memory for storing information including program instructions and a processor for controlling execution of the program instructions, wherein the program instructions are loaded and executed by the processor to implement the steps of the annotation display method described above.
In the embodiment of the application, after the annotation mode is entered, an annotation layer is newly created; the annotation layer is a transparent layer covering the content layer. The annotation layer receives the annotation operation, and the display screen displays the annotation track corresponding to the annotation operation. A target area in the content layer is determined according to the annotation track, image information of the target area is extracted, a target object corresponding to the annotation track is determined according to the annotation track and the extracted image information, the annotation track is cleared, and the target object is displayed. The target object may be a standardized figure such as a triangle, rectangle, circle or ellipse, or a standardized line such as a straight line or wavy line. Because the image information of the content layer is considered in the process of determining the target object corresponding to the annotation track, the target object is displayed exactly where the user wants to annotate, achieving the effect of intelligently displaying the annotation.
Drawings
The accompanying drawings, which are incorporated in and constitute a part of this application, illustrate embodiments of the invention and, together with the description, serve to explain the invention and not to limit the invention. In the drawings:
FIG. 1 is a schematic diagram of a display annotation according to the prior art;
FIG. 2 is a flow chart of an alternative annotation display method according to an embodiment of the application;
FIG. 3 is a schematic diagram of a display annotation according to an embodiment of the present application;
FIG. 4 is a flow chart of another alternative annotation display method according to an embodiment of the application;
FIG. 5 is a schematic diagram of a display annotation according to the prior art;
FIG. 6 is a schematic diagram of a display annotation according to an embodiment of the present application;
FIG. 7 is a flow chart of yet another alternative annotation display method in accordance with an embodiment of the present application;
fig. 8 is a schematic diagram of an alternative annotation display device according to an embodiment of the application.
Detailed Description
It should be noted that the embodiments and features of the embodiments in the present application may be combined with each other without conflict. The present invention will be described in detail below with reference to the embodiments with reference to the attached drawings.
In order to make the technical solutions better understood by those skilled in the art, the technical solutions in the embodiments of the present application will be clearly and completely described below with reference to the drawings in the embodiments of the present application, and it is obvious that the described embodiments are only partial embodiments of the present application, but not all embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present application.
It should be noted that the terms "first," "second," and the like in the description and claims of this application and in the drawings described above are used for distinguishing between similar elements and not necessarily for describing a particular sequential or chronological order. It should be understood that the data so used may be interchanged under appropriate circumstances, so that the embodiments of the application described herein can be implemented in sequences other than those illustrated or described here. Furthermore, the terms "comprises," "comprising," and "having," and any variations thereof, are intended to cover a non-exclusive inclusion, such that a process, method, system, article, or apparatus that comprises a list of steps or elements is not necessarily limited to those steps or elements expressly listed, but may include other steps or elements not expressly listed or inherent to such process, method, article, or apparatus.
Fig. 2 is a flowchart of an alternative annotation display method according to an embodiment of the application, and as shown in fig. 2, the method includes the following steps:
step S102: and detecting the annotation operation in an annotation state, wherein the annotation operation is the operation received by the annotation layer, and the annotation layer is a transparent layer covered on the content layer.
The annotation status can be an annotation mode. In the annotation state, the annotation layer and the content layer are provided, the annotation layer is a transparent layer covering the content layer, and the annotation layer and the content layer may be equal in size. The annotation operation may be an operation in which the user touches the display screen with a hand or a pen.
Step S104: displaying an annotation track corresponding to the annotation operation.
And after the annotating operation is detected, displaying an annotating track corresponding to the annotating operation on the display screen. The annotation trajectory corresponding to the annotation operation can be an irregular pattern or a linear pattern.
Step S106: determining a target area in the content layer according to the annotation track.
Step S106 may specifically be: determining the target area in the content layer according to the position of the annotation track.
Step S108: extracting image information of the target area in the content layer.
Step S110: determining a target object corresponding to the annotation track according to the annotation track and the extracted image information.
The target object may be a standardized figure, such as a triangle, a rectangle, a circle, an ellipse, etc., and the target object may also be a standardized line, such as a straight line, a wavy line, etc.
Determining the target object corresponding to the annotation track according to the annotation track and the extracted image information at least comprises determining whether the type of the target object is a figure or a line. If the type of the target object is determined to be a figure, the size of the target object also needs to be determined.
Step S112: clearing the annotation track.
Step S114: displaying the target object.
The annotation track is as shown in fig. 1. The target object corresponding to the annotation track is determined to be a triangle by a fitting method, and the size of the triangle is determined according to the extracted image information; the size ensures that the triangle is displayed on the part of the annotation layer corresponding to the fourth row of characters but not on the part corresponding to the third row, as shown in fig. 3.
As an alternative embodiment, before step S114, the position at which the target object is displayed in the annotation layer may be determined according to the positions of the pixel points forming the annotation track. Several methods are possible. Method one: establish a coordinate system in the annotation layer, determine the position coordinates of the pixel points forming the annotation track, average the abscissas and the ordinates of those pixel points to obtain X0 and Y0, and take (X0, Y0) as the coordinates of the geometric center of the target object in the annotation layer. Method two: establish a coordinate system in the annotation layer, determine the position coordinates of the pixel points forming the annotation track, screen out from all the pixel points the one with the largest abscissa (X1), the one with the smallest abscissa (X2), the one with the largest ordinate (Y1) and the one with the smallest ordinate (Y2), average X1 and X2 to obtain X3, average Y1 and Y2 to obtain Y3, and take (X3, Y3) as the coordinates of the geometric center of the target object in the annotation layer.
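The two center-finding methods above can be sketched as follows (a minimal illustration, assuming track points are (x, y) tuples; the function names are not from the patent):

```python
def center_by_mean(track_points):
    # Method one: average the abscissas and ordinates of all
    # track pixels to obtain (X0, Y0).
    n = len(track_points)
    x0 = sum(x for x, _ in track_points) / n
    y0 = sum(y for _, y in track_points) / n
    return (x0, y0)

def center_by_extremes(track_points):
    # Method two: midpoint of the extreme abscissas (X1, X2) and the
    # extreme ordinates (Y1, Y2), giving (X3, Y3).
    xs = [x for x, _ in track_points]
    ys = [y for _, y in track_points]
    return ((max(xs) + min(xs)) / 2, (max(ys) + min(ys)) / 2)
```

Note that method two is the center of the track's bounding box, while method one weights the result by how densely the track was sampled, so the two centers can differ for the same track.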
In this embodiment, because the image information of the content layer is considered in the process of determining the target object corresponding to the annotation track, the target object is accurately displayed at the place where the user wants to annotate, achieving the effect of intelligently displaying the annotation. Moreover, because the target object is a standardized figure or a standardized line, the annotation display is attractive.
Optionally, determining the target area in the content layer according to the annotation track includes: determining position coordinates of pixel points forming the annotation track; and determining a target area according to the position coordinates of the pixel points forming the annotation track.
There are various methods for determining the target area according to the position coordinates of the pixel points forming the annotation track. For example, the position coordinates of the pixel points at the edge of the target area may be determined first, and the target area then determined from them. Specifically, the edge position coordinates are screened from the position coordinates of all pixel points forming the annotation track, where the abscissa or ordinate of an edge position coordinate is the maximum or minimum coordinate value among the position coordinates of all those pixel points; a preset value is then added to or subtracted from the edge position coordinates to obtain the position coordinates of the pixel points at the edge of the target area.
Fig. 4 is a flowchart of another alternative annotation display method according to an embodiment of the application, and as shown in fig. 4, the method includes the following steps:
step S200: and entering an annotation mode.
Step S201: and (5) newly building an annotation layer, wherein the original display picture is a background layer.
The annotation layer is a transparent layer covering the content layer.
Step S202: a touch point Down event is detected.
The moment the user touches the display screen with a hand or pen, a touch point Down event is generated. The user touches the display screen by hand or pen and moves from one position of the display screen to another position of the display screen, and a touch point Move event is generated. The user's hand or pen leaves the display screen and a touch point Up event is generated.
Step S203: detecting a Move event of the touch point and displaying the current track in real time.
For example, when the user's hand or pen is detected sliding on the display screen, the current track is displayed in real time, as shown in fig. 5.
Step S204: a touch point Up event is detected.
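The three touch events of steps S202 to S204 might be handled along the following lines (a hypothetical sketch; the class and method names are illustrative and not from the patent):

```python
from enum import Enum, auto

class TouchEvent(Enum):
    DOWN = auto()  # hand or pen touches the screen (step S202)
    MOVE = auto()  # hand or pen slides across the screen (step S203)
    UP = auto()    # hand or pen leaves the screen (step S204)

class AnnotationTrackRecorder:
    """Collects an annotation track from Down/Move/Up touch events."""

    def __init__(self):
        self.track = []

    def on_event(self, event, pos=None):
        if event is TouchEvent.DOWN:
            self.track = [pos]        # start a new track
        elif event is TouchEvent.MOVE:
            self.track.append(pos)    # extend it; the UI would redraw here
        elif event is TouchEvent.UP:
            return list(self.track)   # track finished: hand off for fitting
        return None
```

On Up, the completed track is passed to the fitting step (S205); Down and Move only accumulate and redraw.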
Step S205: judging, according to the input track and the background-layer information, which line or figure needs to be generated.
The input track is fitted to obtain the standardized figure or line corresponding to it. If the fitting result indicates that the input track is a figure, the size of the figure also needs to be calculated.
Step S206: deleting the newly input original track from the annotation layer.
Step S207: in the annotation layer, a new standardized graph or line is generated, as shown in fig. 6.
Step S208: the user may continue to perform other operations on the display screen, such as writing or drawing.
Step S209: exiting the annotation mode.
Fig. 7 is a flowchart of yet another annotation display method according to an embodiment of the application. As shown in fig. 7, after the annotation mode is entered, a new annotation layer is created; the annotation layer is a transparent layer covering the content layer, and it receives the annotation operation. In the annotation layer, a region Zoom1 is cut out centered on the original touch track, obtained by extending the horizontal and vertical coordinates of the pixel points on the original touch track by 100 units in each direction. The original touch track is fitted with an OpenCV fitting algorithm; the fitting result may be a line type, a triangle, a quadrangle, a circle, or an unrecognizable figure or line. If the fitting result is a line type, a region Zoom2 is extracted from the background layer according to the coordinates of the region where Zoom1 is located; the areas of Zoom2 and Zoom1 can be equal. According to the distribution of the pixel points forming the original touch track, it is determined whether the line is an underline, a wavy underline or a strikethrough line, and the determined line is output. If the fitting result is a triangle, it is judged from the side lengths whether the triangle is equilateral or ordinary, and the determined triangle is output. If the fitting result is a quadrangle, it is judged from the side lengths whether the quadrangle is a square or a rectangle, and the determined square or rectangle is output. If the fitting result is a circle, it is judged from the lengths of the major and minor axes whether it is a circle or an ellipse, and the determined circle or ellipse is output.
If the fitting result is an unrecognizable figure or line, the original figure or line is output as drawn.
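The patent leaves the fitting step to OpenCV's algorithms; a rough stand-in for the idea is to simplify the track with Ramer-Douglas-Peucker and count the remaining corners (all names and thresholds below are illustrative assumptions, not the patent's method):

```python
import math

def rdp(points, epsilon):
    """Ramer-Douglas-Peucker polyline simplification."""
    if len(points) < 3:
        return points
    (x1, y1), (x2, y2) = points[0], points[-1]
    dmax, idx = 0.0, 0
    for i in range(1, len(points) - 1):
        x0, y0 = points[i]
        # perpendicular distance from (x0, y0) to the chord start-end
        num = abs((y2 - y1) * x0 - (x2 - x1) * y0 + x2 * y1 - y2 * x1)
        den = math.hypot(y2 - y1, x2 - x1) or 1.0
        if num / den > dmax:
            dmax, idx = num / den, i
    if dmax > epsilon:
        # keep the farthest point and recurse on both halves
        return rdp(points[:idx + 1], epsilon)[:-1] + rdp(points[idx:], epsilon)
    return [points[0], points[-1]]

def classify_track(points, epsilon=8.0):
    """Crude shape classification of a touch track by corner counting."""
    simplified = rdp(points, epsilon)
    closed = math.dist(points[0], points[-1]) < 4 * epsilon
    if not closed:
        return "line" if len(simplified) == 2 else "polyline"
    corners = len(simplified) - 1  # last simplified point ~ first point
    if corners == 3:
        return "triangle"
    if corners == 4:
        return "quadrilateral"
    return "ellipse_like"
```

A closed track that simplifies to three corners is treated as a triangle, four as a quadrilateral, and anything rounder as ellipse-like; distinguishing equilateral vs. ordinary triangles, squares vs. rectangles, or circles vs. ellipses would add side-length and axis-length checks on top of this.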
In this embodiment as well, because the image information of the content layer is considered in the process of determining the target object corresponding to the annotation track, the target object is accurately displayed at the place where the user wants to annotate, achieving the effect of intelligently displaying the annotation.
Fig. 8 is a schematic diagram of an alternative annotation display device according to an embodiment of the application. As shown in Fig. 8, the device includes: a detection unit 10, a first display unit 20, a first determining unit 30, an extracting unit 40, a second determining unit 50, a clearing unit 60 and a second display unit 70.
The detecting unit 10 is configured to detect an annotation operation in an annotation state, where the annotation operation is an operation received by an annotation layer, and the annotation layer is a transparent layer covering the content layer.
The first display unit 20 is configured to display an annotation track corresponding to the annotation operation.
The first determining unit 30 is configured to determine a target area in the content layer according to the annotation track.
The extracting unit 40 is configured to extract image information of the target area in the content layer.
The second determining unit 50 is configured to determine a target object corresponding to the annotation track according to the annotation track and the extracted image information.
The clearing unit 60 is configured to clear the annotation track.
The second display unit 70 is configured to display the target object.
In the embodiment of the application, after entering the annotation mode, an annotation layer is newly created. The annotation layer is a transparent layer covering the content layer and receives the annotation operation, and the display screen displays the annotation track corresponding to the annotation operation. A target area in the content layer is determined according to the annotation track, image information of the target area is extracted, a target object corresponding to the annotation track is determined according to the annotation track and the extracted image information, the annotation track is cleared, and the target object is displayed. The target object may be a standardized figure such as a triangle, a rectangle, a circle, or an ellipse, or a standardized line such as a straight line or a wavy line. Because the image information of the content layer is considered when determining the target object corresponding to the annotation track, the target object is displayed accurately at the place where the user intends to annotate, achieving the effect of intelligently displaying the annotation.
Optionally, the first determining unit 30 includes a first determining subunit and a second determining subunit. The first determining subunit is configured to determine the position coordinates of the pixel points forming the annotation track. The second determining subunit is configured to determine the target area according to the position coordinates of the pixel points forming the annotation track.
Optionally, the second determining subunit includes a first determining module and a second determining module. The first determining module is configured to determine the position coordinates of the pixel points at the edge of the target area according to the position coordinates of the pixel points forming the annotation track. The second determining module is configured to determine the target area according to the position coordinates of the pixel points at the edge of the target area.
Optionally, the first determining module includes a screening submodule and a calculating submodule. The screening submodule is configured to screen out edge position coordinates from the position coordinates of all the pixel points forming the annotation track, where the abscissa or ordinate of an edge position coordinate is the maximum or minimum coordinate value among the position coordinates of all the pixel points forming the annotation track. The calculating submodule is configured to add a preset value to, or subtract it from, the edge position coordinates to obtain the position coordinates of the pixel points at the edge of the target area.
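Taken together, the screening and calculating submodules amount to padding the bounding box of the trajectory pixels. A minimal Python sketch under that reading; the function name, the tuple layout, and the reuse of the 100-unit margin from the Zoom1 example are assumptions:

```python
def target_area(track_points, margin=100):
    """Derive target-area edge coordinates from trajectory pixels.

    track_points: iterable of (x, y) pixel coordinates on the annotation
    track. margin: the preset value added to / subtracted from the
    extreme coordinates (100 units, as in the Zoom1 example).
    Returns (left, top, right, bottom).
    """
    xs = [p[0] for p in track_points]
    ys = [p[1] for p in track_points]
    # Screening: edge position coordinates are the extreme x/y values
    # among all pixel points forming the annotation track.
    left, right = min(xs), max(xs)
    top, bottom = min(ys), max(ys)
    # Calculating: pad each edge by the preset value to obtain the
    # position coordinates of the target-area edge pixels.
    return (left - margin, top - margin, right + margin, bottom + margin)
```

In practice the result would also be clamped to the content layer's dimensions, which this sketch omits.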
Optionally, the apparatus further includes a third determining unit, configured to determine, before the second display unit displays the target object, the display position of the target object according to the positions of the pixel points forming the annotation track.
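One way to realize the third determining unit is to place the standardized object at the geometric center of the trajectory pixels. The sketch below assumes a plain arithmetic mean of the pixel coordinates as the "geometric center"; the patent does not fix a particular formula, and the function name is illustrative:

```python
def display_position(track_points):
    """Return the display position for the target object as the centroid
    of the pixel points forming the annotation track."""
    n = len(track_points)
    cx = sum(p[0] for p in track_points) / n
    cy = sum(p[1] for p in track_points) / n
    return (cx, cy)
```

The standardized figure would then be drawn so that its own geometric center coincides with this point.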
It should be noted that, to avoid redundancy, some technical features already described in the foregoing embodiments are not repeated here; technical features not described in detail in this embodiment are described in the other embodiments.
It should be noted that, for the sake of brevity, the present application does not exhaustively enumerate all possible combinations; any features that are not mutually inconsistent can be freely combined to form alternative embodiments of the present application.
The embodiment of the invention provides a storage medium, which comprises a stored program, wherein when the program runs, equipment where the storage medium is located is controlled to execute the annotation display method of the intelligent interactive tablet.
The device on which the storage medium is located may be a smart interactive tablet.
The embodiment of the invention provides a processor, wherein the processor is used for running a program, and the annotation display method of the intelligent interactive panel is executed when the program runs.
The embodiment of the invention provides an intelligent interactive panel. The intelligent interactive panel comprises a frame, a cover plate, a controller and a display screen.
The controller is used for detecting annotation operation in an annotation state, the annotation operation is operation received by the annotation layer, and the annotation layer is a transparent layer covering the content layer.
The display screen is used for displaying the annotation track corresponding to the annotation operation.
The controller is further configured to determine a target area in the content layer according to the annotation track.
The controller is further configured to extract image information of the target area in the content layer.
The controller is further used for determining a target object corresponding to the annotation track according to the annotation track and the extracted image information.
The controller is also used for clearing the annotation track.
The display screen is also used for displaying the target object.
In the embodiment of the application, after entering the annotation mode, an annotation layer is newly created. The annotation layer is a transparent layer covering the content layer and receives the annotation operation, and the display screen displays the annotation track corresponding to the annotation operation. A target area in the content layer is determined according to the annotation track, image information of the target area is extracted, a target object corresponding to the annotation track is determined according to the annotation track and the extracted image information, the annotation track is cleared, and the target object is displayed. The target object may be a standardized figure such as a triangle, a rectangle, a circle, or an ellipse, or a standardized line such as a straight line or a wavy line. Because the image information of the content layer is considered when determining the target object corresponding to the annotation track, the target object is displayed accurately at the place where the user intends to annotate, achieving the effect of intelligently displaying the annotation.
Optionally, the controller is configured to: determine position coordinates of the pixel points forming the annotation track; and determine the target area according to the position coordinates of the pixel points forming the annotation track.
Optionally, the controller is configured to: determine the position coordinates of the pixel points at the edge of the target area according to the position coordinates of the pixel points forming the annotation track; and determine the target area according to the position coordinates of the pixel points at the edge of the target area.
Optionally, the controller is configured to: screen out edge position coordinates from the position coordinates of all pixel points forming the annotation track, where the abscissa or ordinate of an edge position coordinate is the maximum or minimum coordinate value among the position coordinates of all pixel points forming the annotation track; and add a preset value to, or subtract it from, the edge position coordinates to obtain the position coordinates of the pixel points at the edge of the target area.
Optionally, the controller is configured to: determine, before the target object is displayed, the display position of the target object according to the positions of the pixel points forming the annotation track.
It should be noted that the terms "first," "second," "third," and the like in the description and claims of this application and in the drawings described above are used for distinguishing between similar elements and not necessarily for describing a particular sequential or chronological order.
As will be appreciated by one skilled in the art, embodiments of the present application may be provided as a method, system, or computer program product. Accordingly, the present application may take the form of an entirely hardware embodiment, an entirely software embodiment or an embodiment combining software and hardware aspects. Furthermore, the present application may take the form of a computer program product embodied on one or more computer-usable storage media (including, but not limited to, disk storage, CD-ROM, optical storage, and the like) having computer-usable program code embodied therein.
The present application is described with reference to flowchart illustrations and/or block diagrams of methods, apparatus (systems), and computer program products according to embodiments of the application. It will be understood that each flow and/or block of the flow diagrams and/or block diagrams, and combinations of flows and/or blocks in the flow diagrams and/or block diagrams, can be implemented by computer program instructions. These computer program instructions may be provided to a processor of a general purpose computer, special purpose computer, embedded processor, or other programmable data processing apparatus to produce a machine, such that the instructions, which execute via the processor of the computer or other programmable data processing apparatus, create means for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be stored in a computer-readable memory that can direct a computer or other programmable data processing apparatus to function in a particular manner, such that the instructions stored in the computer-readable memory produce an article of manufacture including instruction means which implement the function specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be loaded onto a computer or other programmable data processing apparatus to cause a series of operational steps to be performed on the computer or other programmable apparatus to produce a computer implemented process such that the instructions which execute on the computer or other programmable apparatus provide steps for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
In a typical configuration, a computing device includes one or more processors (CPUs), input/output interfaces, network interfaces, and memory.
The memory may include volatile memory in a computer-readable medium, such as random access memory (RAM), and/or non-volatile memory, such as read-only memory (ROM) or flash memory (flash RAM). The memory is an example of a computer-readable medium.
Computer-readable media include permanent and non-permanent, removable and non-removable media, and may implement information storage by any method or technology. The information may be computer-readable instructions, data structures, program modules, or other data. Examples of computer storage media include, but are not limited to, phase-change memory (PRAM), static random access memory (SRAM), dynamic random access memory (DRAM), other types of random access memory (RAM), read-only memory (ROM), electrically erasable programmable read-only memory (EEPROM), flash memory or other memory technology, compact disc read-only memory (CD-ROM), digital versatile discs (DVD) or other optical storage, magnetic cassettes, magnetic tape, magnetic disk storage or other magnetic storage devices, or any other non-transmission medium that can be used to store information accessible by a computing device. As defined herein, computer-readable media do not include transitory computer-readable media such as modulated data signals and carrier waves.
It should also be noted that the terms "comprises," "comprising," or any other variation thereof, are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements does not include only those elements but may include other elements not expressly listed or inherent to such process, method, article, or apparatus. Without further limitation, an element defined by the phrase "comprising an … …" does not exclude the presence of other identical elements in the process, method, article, or apparatus that comprises the element.
The above are merely examples of the present application and are not intended to limit the present application. Various modifications and changes may occur to those skilled in the art. Any modification, equivalent replacement, improvement, etc. made within the spirit and principle of the present application should be included in the scope of the claims of the present application.

Claims (5)

1. A method of annotation display, the method comprising:
detecting an annotation operation in an annotation state, wherein the annotation operation is an operation received by an annotation layer, and the annotation layer is a transparent layer covered on a content layer;
displaying an annotation track corresponding to the annotation operation;
determining a target area in the content layer according to the annotation track;
extracting image information of the target area in the content layer;
determining a target object corresponding to the annotation track according to the annotation track and the extracted image information;
clearing the annotation track;
displaying the target object, wherein the target object is a standardized graph or a standardized line, the standardized graph at least comprises a triangle, a rectangle, a circle and an ellipse, the standardized line at least comprises a straight line and a wavy line, and the target area in the content layer is determined according to the annotation track, and the method comprises the following steps: determining position coordinates of pixel points forming the annotation track; determining the target area according to the position coordinates of the pixel points forming the annotation track, and determining the target area according to the position coordinates of the pixel points forming the annotation track, wherein the method comprises the following steps: determining the position coordinates of the pixel points at the edge of the target area according to the position coordinates of the pixel points forming the annotation track; determining the target area according to the position coordinates of the pixel points at the edge of the target area, and determining the position coordinates of the pixel points at the edge of the target area according to the position coordinates of the pixel points forming the annotation track, wherein the method comprises the following steps: screening edge position coordinates from the position coordinates of all pixel points forming the annotation track, wherein the value of the abscissa or ordinate of the edge position coordinates is the maximum or minimum coordinate value in the position coordinates of all pixel points forming the annotation track; adding or subtracting the edge position coordinates and a preset numerical value to obtain the position coordinates of the pixel points at the edge of the target area, wherein before the target object is displayed, the method further comprises the following steps: determining the display position of the target object according to the positions of the pixel points forming the annotation track, and determining the display position of the target object according to the positions of the pixel points forming the annotation track, wherein the method comprises the following steps: determining the position of the geometric center of the target object according to the positions of the pixel points forming the annotation track, and determining the position of the target object display according to the position of the geometric center.
2. An annotation display device, comprising:
the detection unit is used for detecting annotation operation in an annotation state, wherein the annotation operation is the operation received by an annotation layer, and the annotation layer is a transparent layer covered on the content layer;
the first display unit is used for displaying the annotation track corresponding to the annotation operation;
the first determining unit is used for determining a target area in the content layer according to the annotation track;
an extracting unit, configured to extract image information of the target area in the content layer;
the second determining unit is used for determining a target object corresponding to the annotation track according to the annotation track and the extracted image information;
a clearing unit for clearing the annotation track;
a second display unit, configured to display the target object, where the target object is a standardized graph or a standardized line, the standardized graph includes at least a triangle, a rectangle, a circle, and an ellipse, the standardized line includes at least a straight line and a wavy line, and the first determination unit includes: the first determining subunit is used for determining the position coordinates of the pixel points forming the annotation track; a second determining subunit, configured to determine the target area according to position coordinates of pixel points forming the annotation trajectory, where the second determining subunit includes: the first determining module is used for determining the position coordinates of the pixel points at the edge of the target area according to the position coordinates of the pixel points forming the annotation track, the second determining module is used for determining the target area according to the position coordinates of the pixel points at the edge of the target area, and the first determining module comprises: the device comprises a screening submodule and a calculating submodule, wherein the screening submodule is used for screening out edge position coordinates from position coordinates of all pixel points forming an annotation track, the value of the abscissa or the ordinate of the edge position coordinates is the maximum or minimum coordinate value in the position coordinates of all the pixel points forming the annotation track, the calculating submodule is used for adding or subtracting the edge position coordinates and a preset numerical value to obtain the position coordinates of the pixel points at the edge of a target area, and the device also comprises: a third determining unit, configured to determine, before the second displaying unit displays the target object, a position where the target object is displayed according to positions of pixel points constituting the annotation trajectory, where the third determining unit includes: the third determining subunit is used for determining the position of the geometric center of the target object according to the positions of the pixel points forming the annotation track, and the fourth determining subunit is used for determining the position of the target object display according to the position of the geometric center.
3. An intelligent interactive tablet, which comprises a frame, a cover plate, a controller and a display screen and is characterized in that,
the controller is used for detecting annotation operation in an annotation state, wherein the annotation operation is the operation received by an annotation layer, and the annotation layer is a transparent layer covered on the content layer;
the display screen is used for displaying the annotation track corresponding to the annotation operation;
the controller is further used for determining a target area in the content layer according to the annotation track;
the controller is further configured to extract image information of the target area in the content layer;
the controller is also used for determining a target object corresponding to the annotation track according to the annotation track and the extracted image information;
the controller is also used for clearing the annotation track;
the display screen is further used for displaying the target object, the target object is a standardized figure or a standardized line, the standardized figure at least comprises a triangle, a rectangle, a circle and an ellipse, the standardized line at least comprises a straight line and a wave line, and the controller is used for: determining position coordinates of pixel points forming the annotation track; determining the target area according to the position coordinates of the pixel points forming the annotation track, wherein the controller is used for: determining the position coordinates of the pixel points at the edge of the target area according to the position coordinates of the pixel points forming the annotation track; determining the target area according to the position coordinates of the pixel points at the edge of the target area, wherein the controller is used for: screening edge position coordinates from the position coordinates of all pixel points forming the annotation track, wherein the value of the abscissa or ordinate of the edge position coordinates is the maximum or minimum coordinate value in the position coordinates of all pixel points forming the annotation track; adding or subtracting the edge position coordinate and a preset numerical value to obtain the position coordinate of the pixel point at the edge of the target area, wherein the controller is used for: before the target object is displayed, the display position of the target object is determined according to the positions of the pixel points forming the annotation track, the controller is used for determining the position of the geometric center of the target object according to the positions of the pixel points forming the annotation track, and the display position of the target object is determined according to the position of the geometric center.
4. A storage medium characterized by comprising a stored program, wherein a device on which the storage medium is located is controlled to execute the annotation display method according to claim 1 when the program runs.
5. An intelligent interactive tablet comprising a memory for storing information including program instructions and a processor for controlling execution of the program instructions, wherein: the program instructions, when loaded and executed by a processor, implement the steps of the annotation display method of claim 1.
CN201810779592.XA 2018-07-16 2018-07-16 Comment display method and device and intelligent interactive panel Active CN108874292B (en)

Priority Applications (2)

Application Number Priority Date Filing Date Title
CN201810779592.XA CN108874292B (en) 2018-07-16 2018-07-16 Comment display method and device and intelligent interactive panel
PCT/CN2018/118233 WO2020015269A1 (en) 2018-07-16 2018-11-29 Annotation display method and apparatus, and intelligent interactive tablet

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201810779592.XA CN108874292B (en) 2018-07-16 2018-07-16 Comment display method and device and intelligent interactive panel

Publications (2)

Publication Number Publication Date
CN108874292A CN108874292A (en) 2018-11-23
CN108874292B true CN108874292B (en) 2021-12-03

Family

ID=64302109

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201810779592.XA Active CN108874292B (en) 2018-07-16 2018-07-16 Comment display method and device and intelligent interactive panel

Country Status (2)

Country Link
CN (1) CN108874292B (en)
WO (1) WO2020015269A1 (en)

Families Citing this family (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108874292B (en) * 2018-07-16 2021-12-03 广州视源电子科技股份有限公司 Comment display method and device and intelligent interactive panel
CN115185445A (en) * 2019-04-17 2022-10-14 华为技术有限公司 Method for adding annotations and electronic equipment
CN112001924A (en) * 2020-06-30 2020-11-27 深圳点猫科技有限公司 Matting method and device based on local browser
CN114385284A (en) * 2020-10-22 2022-04-22 华为技术有限公司 Display method of annotations and electronic equipment
CN112672199B (en) * 2020-12-22 2022-07-29 海信视像科技股份有限公司 Display device and multi-layer overlapping method
WO2022089043A1 (en) 2020-10-30 2022-05-05 海信视像科技股份有限公司 Display device, geometry recognition method, and multi-pattern layer superimposed display method
CN112347744A (en) * 2020-11-04 2021-02-09 广州朗国电子科技有限公司 Method and device for automatically triggering annotation mode by touch device and storage medium
CN116188628B (en) * 2022-12-02 2024-01-12 广东保伦电子股份有限公司 Free painting page-crossing drawing and displaying method and server
CN117114978B (en) * 2023-10-24 2024-03-29 深圳软牛科技集团股份有限公司 Picture cropping and restoring method and device based on iOS and related medium thereof

Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103365570A (en) * 2012-03-26 2013-10-23 华为技术有限公司 Content selecting method and content selecting device
CN103376921A (en) * 2012-04-25 2013-10-30 鸿富锦精密工业(深圳)有限公司 Laser labeling system and method
CN104156145A (en) * 2014-08-13 2014-11-19 天津三星通信技术研究有限公司 Text content selection method based on handwriting pen and portable terminal
CN104360788A (en) * 2014-10-20 2015-02-18 深圳市天时通科技有限公司 Transparent marking method and desktop writing control method
CN104462039A (en) * 2014-11-19 2015-03-25 北京新唐思创教育科技有限公司 Annotation generating method and device
CN104951234A (en) * 2015-06-26 2015-09-30 武汉传神信息技术有限公司 Data processing method and system based on touch screen terminal
CN106598928A (en) * 2016-12-01 2017-04-26 广州视源电子科技股份有限公司 Method and system thereof for annotating on display screen

Family Cites Families (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102339275B (en) * 2010-07-20 2014-11-19 汉王科技股份有限公司 Comment processing method and device for electronic book
CN101968716A (en) * 2010-10-20 2011-02-09 鸿富锦精密工业(深圳)有限公司 Electronic reading device and method thereof for adding comments
CN102520855A (en) * 2011-12-03 2012-06-27 鸿富锦精密工业(深圳)有限公司 Electronic equipment with touch screen and page turning method for electronic equipment
CN108874292B (en) * 2018-07-16 2021-12-03 广州视源电子科技股份有限公司 Comment display method and device and intelligent interactive panel


Also Published As

Publication number Publication date
WO2020015269A1 (en) 2020-01-23
CN108874292A (en) 2018-11-23

Similar Documents

Publication Publication Date Title
CN108874292B (en) Comment display method and device and intelligent interactive panel
CN108491131B (en) Operation method and device of intelligent interaction panel and intelligent interaction panel
CN110069204B (en) Graph processing method, device and equipment based on writing track and storage medium
WO2021072912A1 (en) File sharing method, apparatus, and system, interactive smart device, source end device, and storage medium
CN110928459B (en) Writing operation method, device, equipment and storage medium of intelligent interactive tablet
CN111580714A (en) Page editing method, device, equipment and storage medium of intelligent interactive tablet
CN110045909B (en) Ellipse processing method, device and equipment based on writing track and storage medium
CN110045840B (en) Writing track association method, device, terminal equipment and storage medium
US8624928B2 (en) System and method for magnifying a webpage in an electronic device
US11372540B2 (en) Table processing method, device, interactive white board and storage medium
CN110941373B (en) Interaction method and device for intelligent interaction panel, terminal equipment and storage medium
CN108492349B (en) Processing method, device and equipment for writing strokes and storage medium
CN108762657B (en) Operation method and device of intelligent interaction panel and intelligent interaction panel
CN113934356A (en) Display operation method, device, equipment and storage medium of intelligent interactive panel
CN111580903B (en) Real-time voting method, device, terminal equipment and storage medium
CN112162669A (en) Screen recording method and device of intelligent terminal, storage medium and processor
CN109873980B (en) Video monitoring method and device and terminal equipment
CN109814787B (en) Key information determination method, device, equipment and storage medium
WO2019218622A1 (en) Element control method, apparatus, and device, and storage medium
CN111428455B (en) Form management method, device, equipment and storage medium
CN107341137B (en) Multi-panel-based annotation following method and system
CN110737417A (en) demonstration equipment and display control method and device of marking line thereof
CN112860157B (en) Display element adjusting method, device, equipment and storage medium
US20220321831A1 (en) Whiteboard use based video conference camera control
US11557065B2 (en) Automatic segmentation for screen-based tutorials using AR image anchors

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant