CN109725804B - Projection-based track identification method, projection equipment and storage medium - Google Patents


Info

Publication number
CN109725804B
CN109725804B (application number CN201811574442.1A)
Authority
CN
China
Prior art keywords
moment
track
track points
points
time
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201811574442.1A
Other languages
Chinese (zh)
Other versions
CN109725804A (en)
Inventor
张士林
陈维亮
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Rongcheng goer Technology Co.,Ltd.
Original Assignee
Goertek Techology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Goertek Techology Co Ltd filed Critical Goertek Techology Co Ltd
Priority to CN201811574442.1A priority Critical patent/CN109725804B/en
Publication of CN109725804A publication Critical patent/CN109725804A/en
Application granted granted Critical
Publication of CN109725804B publication Critical patent/CN109725804B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical


Abstract

An embodiment of the present application provides a projection-based track identification method, a projection device, and a storage medium. In the embodiment, while an interactive object interacts with the projection picture, the number of track points generated by the interactive object on the projection picture is continuously detected. If the number of track points detected at the current moment differs from the set number of track points, the track points at the current moment are corrected to the set number according to the position information of the track points at the previous moment and the position information of the track points detected at the current moment. The motion track of the interactive object from the previous moment to the current moment is then determined from the positions of the track points at the previous moment and the corrected positions at the current moment. This reduces the probability of incorrect track connection, makes the response to the interactive object's coherent operations on the projection picture smoother, and thereby improves the user experience.

Description

Projection-based track identification method, projection equipment and storage medium
Technical Field
The present application relates to the field of projection technologies, and in particular, to a trajectory recognition method based on projection, a projection device, and a storage medium.
Background
With the continuous development of projection technology, projection devices with interactive functions have emerged. For example, household devices such as projection speakers and projection lamps are widely used in daily life and bring people great convenience.
A projection device with an interaction function can project picture content onto a projection plane to form a projection picture, and the user can interact with the device by performing corresponding operations on the projection picture, for example touching, dragging, sliding, zooming in, or zooming out.
In practice, a user may perform continuous operations, such as dragging or zooming, on the projection picture with two fingers. During such operations, track points may be connected to the wrong tracks, so that the user's gesture cannot be recognized or is recognized incorrectly; the projection picture then jitters or even freezes, and the user experience is poor.
Disclosure of Invention
Aspects of the present application provide a projection-based trajectory recognition method, a projection device, and a storage medium, so as to reduce the probability of incorrect track connection when a user performs a coherent operation on a projection picture, make the response to the coherent operation smoother, and improve the user experience.
The embodiment of the application provides a trajectory identification method based on projection, which comprises the following steps:
in the interaction process of at least one interactive object and a projection picture, detecting the number M of track points generated on the projection picture by the at least one interactive object at a second moment, wherein the projection picture is formed by projection of the projection equipment;
if the number M of track points detected at the second moment is not equal to the set number N of track points, correcting the number of track points detected at the second moment from M to N according to the position information of the N track points at the first moment and the position information of the M track points detected at the second moment, wherein M and N are positive integers, and the first moment is the moment immediately preceding the second moment;
and determining a sliding track formed by the at least one interactive object on the projection picture from the first moment to the second moment according to the position information of the N track points at the first moment and the positions of the N track points obtained by correcting the second moment.
An embodiment of the present application further provides a projection device, including a memory, a processor, and a projection module, wherein:
the projection module is used for projecting a projection picture;
the memory is used for storing the computer program and the position information of N track points generated by at least one interactive object on the projection picture at a first moment;
the processor is coupled to the memory for executing the computer program for:
in the interaction process of the at least one interactive object and the projection picture, detecting the number M of track points generated on the projection picture by the at least one interactive object at a second moment;
if the number M of track points detected at the second moment is not equal to the set number N of track points, correcting the number of track points detected at the second moment from M to N according to the position information of the N track points at the first moment and the position information of the M track points detected at the second moment, wherein M and N are positive integers, and the first moment is the moment immediately preceding the second moment;
and determining a sliding track formed by the at least one interactive object on the projection picture from the first moment to the second moment according to the position information of the N track points at the first moment and the positions of the N track points obtained by correcting the second moment.
Embodiments of the present application also provide a computer-readable storage medium storing computer instructions, which, when executed by one or more processors, cause the one or more processors to perform the steps of the above-described projection-based trajectory recognition method.
In the embodiment of the present application, while the interactive object interacts with the projection picture, the number of track points generated by the interactive object on the projection picture is continuously detected. If the number of track points detected at the current moment differs from the set number of track points, the track points at the current moment are corrected to the set number according to the position information of the track points at the previous moment and the position information of the track points detected at the current moment. Thus, if there are more track points at the current moment than the set number, the erroneous track points can be eliminated; if there are fewer, the missing track points can be filled in. The motion track of the interactive object from the previous moment to the current moment is then determined from the positions of the track points at the previous moment and the corrected positions at the current moment, which reduces the probability of incorrect track connection, makes the response to the interactive object's coherent operations on the projection picture smoother, and thereby improves the user experience.
Drawings
The accompanying drawings, which are included to provide a further understanding of the application and are incorporated in and constitute a part of this application, illustrate embodiment(s) of the application and together with the description serve to explain the application and not to limit the application. In the drawings:
fig. 1a is a schematic flowchart of a trajectory recognition method based on projection according to an embodiment of the present disclosure;
fig. 1b is a schematic diagram of trace points at a first time and a second time provided in this embodiment of the present application;
fig. 1c is a schematic diagram of another trace point at a first time and a second time provided in this embodiment of the present application;
fig. 1d is a schematic diagram of a sliding track according to an embodiment of the present application;
fig. 1e is a schematic view of another sliding track provided in the embodiment of the present application;
fig. 1f is a schematic view of another sliding track provided in the embodiment of the present application;
fig. 2 is a schematic structural diagram of a projection apparatus according to an embodiment of the present application.
Detailed Description
In order to make the objects, technical solutions and advantages of the present application more apparent, the technical solutions of the present application will be described in detail and completely with reference to the following specific embodiments of the present application and the accompanying drawings. It should be apparent that the described embodiments are only some of the embodiments of the present application, and not all of the embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present application.
In existing projection devices with a human-computer interaction function, while a user performs continuous operations such as dragging, zooming in, or zooming out on the projection picture, track points may be connected to the wrong tracks, so that the user's gesture cannot be recognized or is recognized incorrectly; the projection picture then jitters or even freezes, and the user experience is poor. For this technical problem, an embodiment of the present application provides a solution whose basic idea is as follows: while the interactive object interacts with the projection picture, continuously detect the number of track points generated by the interactive object on the projection picture; if the number of track points detected at the current moment differs from the set number of track points, correct the track points at the current moment to the set number according to the position information of the track points at the previous moment and the position information of the track points detected at the current moment. Thus, if there are more track points at the current moment than the set number, the erroneous track points can be eliminated; if there are fewer, the missing track points can be filled in. The motion track of the interactive object from the previous moment to the current moment is then determined from the positions of the track points at the previous moment and the corrected positions at the current moment, which reduces the probability of incorrect track connection, makes the response to the interactive object's coherent operations on the projection picture smoother, and thereby improves the user experience.
The technical solutions provided by the embodiments of the present application are described in detail below with reference to the accompanying drawings.
Fig. 1a is a schematic flowchart of a projection-based trajectory recognition method according to an embodiment of the present application. The method is suitable for a projection device, which may be implemented as a household device such as a projection speaker, a projection lamp, or a projector. As shown in fig. 1a, the method includes:
101. during the interaction process of the at least one interactive object and the projection picture, the number M of track points generated on the projection picture by the at least one interactive object at the second moment is detected, and the projection picture is formed by projection of the projection equipment.
102. And if the number M of the track points detected at the second moment is not equal to the set number N of the track points, correcting the number M of the track points detected at the second moment into N according to the position information of the N track points at the first moment and the position information of the M track points detected at the second moment, wherein M, N is a positive integer, and the first moment is the previous moment of the second moment.
103. And determining a sliding track formed by at least one interactive object on the projection picture from the first moment to the second moment according to the position information of the N track points at the first moment and the positions of the N track points obtained by correcting the second moment.
In this embodiment, the projection mode of the projection apparatus is not limited. The projection screen may be formed by vertical projection of the projection device, or may be formed by horizontal projection or inclined projection of the projection device, but is not limited thereto.
In this embodiment, the interaction process between the at least one interactive object and the projection picture is a process in which the at least one interactive object performs a coherent operation on the projection picture, for example dragging, sliding, zooming in, or zooming out.
In this embodiment, the interactive object may be a user's finger, a stylus, a mechanical arm, a baton, and the like, which is not limited herein. The number of interactive objects is 1 or more; the specific value is determined by the effect that the user's coherent operation is intended to achieve, which is likewise not limited herein. For example, when the user drags or slides an icon, the number of interactive objects may be 1. When the user zooms in or out on target content, the number may be 2, e.g., the user enlarges or shrinks the target content with 2 fingers. When the user drags or slides multiple target contents, the number may be plural. During the interaction of the at least one interactive object with the projection picture, each interactive object generates one track point on the projection picture. Accordingly, the set number N of track points is determined by the effect to be achieved by the user's coherent operation and equals the number of interactive objects, i.e., N is a positive integer.
In this embodiment, in order to determine the sliding track generated on the projection screen during the interaction between the at least one interactive object and the projection screen, in step 101, the number M of track points generated on the projection screen by the at least one interactive object at the current time is continuously detected during the interaction between the interactive object and the projection screen. And M is a positive integer, and the specific value of M is determined by the number of trace points actually detected at the current moment. For convenience of description and distinction, in the embodiment of the present application, a time when the number of track points generated on the projection screen by at least one interactive object is currently detected is defined as a second time, and a time before the current time (second time) is defined as a first time.
Alternatively, a detection period may be set and a timer or counter may be started to time the detection period. And detecting the number M of track points generated on the projection picture by at least one interactive object at a second moment every time when the detection period arrives. The detection period can be flexibly set according to the actual requirement of the user on the interactive fluency, and is not limited herein.
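The periodic detection described above can be sketched as a simple polling loop. This is an illustrative sketch, not the patent's implementation: `detect_track_points` is a hypothetical caller-supplied callable standing in for whatever sensing backend the device uses.

```python
import itertools
import time

def run_detection_loop(detect_track_points, period_s=0.02, max_iters=None):
    """Poll for track points once per detection period.

    `detect_track_points` (hypothetical API) returns the list of track
    points found at the current moment. Returns a list of
    (timestamp, points) pairs, one per detection period.
    """
    history = []
    counter = itertools.count() if max_iters is None else range(max_iters)
    for _ in counter:
        history.append((time.monotonic(), detect_track_points()))
        time.sleep(period_s)  # wait for the next detection period to arrive
    return history
```

The period would be tuned to the desired interaction fluency, as the text notes; 20 ms (50 Hz) is only a placeholder default.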
Considering that the operation action of at least one interactive object is coherent in the interactive process with the projection picture, the track point at the current moment has a certain relation with the track point at the previous moment. Based on this, in step 102, if the number M of trace points detected at the second time is not equal to the set number N of trace points, the number M of trace points detected at the second time is corrected from M to N according to the position information of the N trace points at the first time and the position information of the M trace points detected at the second time. Therefore, if the number of the track points at the current moment is more than the set number of the track points, the wrong track points can be eliminated; if the number of the trace points is less than the set number of the trace points, the missing trace points can be filled up. The N track points at the first time may be N track points actually detected at the first time, or may be N track points corrected by using the method in step 102. The position information of the track point may be, but is not limited to, a position coordinate of the track point, a vector distance of the track point with respect to a specified reference pixel point on the projection screen, and the like. Further, the designated reference pixel point may be any point on the projection image.
Further, in step 103, based on the position information of the N track points at the first moment and the position information of the N track points obtained by the correction at the second moment, the sliding track formed by the at least one interactive object on the projection picture from the first moment to the second moment can be determined. By applying the trajectory identification method of steps 101 to 103 to determine the sliding track between every two adjacent moments, the sliding track of the entire interaction process between the at least one interactive object and the projection picture can be determined.
Correspondingly, if the number M of detected track points at the second time is equal to the set number N of track points, the operation in step 103 is directly performed, that is, the sliding track formed by the at least one interactive object on the projection screen from the first time to the second time is determined according to the positions of the N track points at the first time and the positions of the N track points detected at the second time.
In this embodiment, while the interactive object interacts with the projection picture, the number of track points generated by the interactive object on the projection picture is continuously detected. If the number of track points detected at the current moment differs from the set number, the track points at the current moment are corrected to the set number according to the position information of the track points at the previous moment and the position information of the track points detected at the current moment. Thus, if there are more track points at the current moment than the set number, the erroneous track points can be eliminated; if there are fewer, the missing track points can be filled in. The motion track of the interactive object from the previous moment to the current moment is then determined from the positions of the track points at the previous moment and the corrected positions at the current moment, which reduces the probability of incorrect track connection, makes the response to the interactive object's coherent operations on the projection picture smoother, and thereby improves the user experience.
Optionally, a depth-of-field module may be disposed on the projection device; it may be a distance sensor such as an infrared ranging sensor, a laser ranging sensor, or a depth camera, but is not limited thereto. The depth-of-field module captures depth images within its range. The depth value of each pixel on the depth image reflects the distance of that pixel from the depth-of-field module, i.e., from the projection device.
The depth of field module can collect a first depth of field image when at least one interactive object is not interacted with the projection picture and a second depth of field image at a second moment. Based on this, an alternative implementation of step 101 is: and determining the number of track points generated on the projection picture by the at least one interactive object at the second moment based on the depth difference between the first depth image and the second depth image.
Further, the depth values of corresponding points on the first and second depth-of-field images can be acquired; the depth difference of each corresponding point between the second and first depth-of-field images is calculated, forming a depth difference image. Points whose depth difference lies within a preset difference range are then selected from the depth difference image, and the number of connected regions formed by these points is taken as the number M of track points generated on the projection picture by the at least one interactive object at the second moment. The preset difference range can be set flexibly for different interactive objects; for example, when the interactive object is the user's finger, the range may be [1 mm, 20 mm]. That is, if the interactive object is a finger, the regions formed by pixel points whose depth difference is between 1 mm and 20 mm inclusive are used as the positions of the finger pads (target regions), and the number of such regions is the number M of track points generated on the projection picture by the user's fingers at the second moment.
Based on the connected regions in the depth difference image, calculating the mean value coordinate of the pixel points in each connected region, and taking the mean value coordinate of each connected region as the position of the track point corresponding to the connected region, so as to obtain the position information of the M track points detected at the second moment, wherein the position information of the M track points is the corresponding position coordinate. The coordinate system can be flexibly established according to actual requirements, and is not limited herein.
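The two steps above — thresholding the depth difference image into connected regions and taking each region's mean pixel coordinate as a track point — can be sketched as follows. This is a minimal illustration with a plain breadth-first flood fill over 4-connected pixels; the function name, the use of Python lists for the image, and the (row, col) coordinate convention are all assumptions, not the patent's implementation.

```python
from collections import deque

def track_points_from_depth_diff(diff, lo=1.0, hi=20.0):
    """Find track points in a depth-difference image.

    `diff` is a 2-D list of per-pixel depth differences (mm). Pixels whose
    difference lies in [lo, hi] are candidate contact pixels; each
    4-connected region of candidates is one track point, located at the
    mean coordinate of its pixels. Returns a list of (row, col) centroids,
    so M = len(result).
    """
    rows, cols = len(diff), len(diff[0])
    seen = [[False] * cols for _ in range(rows)]
    centroids = []
    for r in range(rows):
        for c in range(cols):
            if seen[r][c] or not (lo <= diff[r][c] <= hi):
                continue
            # BFS flood fill over one connected region of candidate pixels
            queue, pixels = deque([(r, c)]), []
            seen[r][c] = True
            while queue:
                y, x = queue.popleft()
                pixels.append((y, x))
                for ny, nx in ((y - 1, x), (y + 1, x), (y, x - 1), (y, x + 1)):
                    if (0 <= ny < rows and 0 <= nx < cols
                            and not seen[ny][nx] and lo <= diff[ny][nx] <= hi):
                        seen[ny][nx] = True
                        queue.append((ny, nx))
            cy = sum(p[0] for p in pixels) / len(pixels)
            cx = sum(p[1] for p in pixels) / len(pixels)
            centroids.append((cy, cx))
    return centroids
```

In a real pipeline one would more likely use a library labeling routine (e.g. connected-component labeling in an image-processing library) over NumPy arrays, but the logic is the same.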
Similarly, a third depth-of-field image generated when at least one interactive object interacts with the projection picture at the first moment can be collected, the number of track points generated by the at least one interactive object on the projection picture at the first moment and the position information of the track points can be determined according to the depth difference between the third depth-of-field image and the first depth-of-field image, and the determination process can refer to the above description for determining the position information of the M track points generated by the at least one interactive object on the projection picture at the second moment, which is not described herein again. And when the number of the track points generated on the projection picture by the at least one interactive object at the first moment is not equal to N, correcting the number of the track points to N by adopting the method in the step 102, and the specific implementation manner may refer to the related description of the step 102 in the foregoing or the following embodiments, which is not described herein again.
In actual use, besides the track points generated by the interaction between the interactive object and the projection picture, other interference points may exist on the projection picture. In this scenario, the number M of track points detected at the second moment is greater than N. Further, during the sliding of the at least one interactive object on the projection picture, the displacement of the current moment (second moment) relative to the previous moment (first moment) is not large, so the redundant interference points with large displacement can be removed according to the displacement of the M track points detected at the second moment relative to the N track points at the first moment. Based on this, an optional implementation of step 102 is: respectively calculate the displacement of the M track points at the second moment relative to the first moment according to the position information of the N track points at the first moment and the position information of the M track points at the second moment; then remove (M-N) track points from the M track points at the second moment in descending order of displacement, thereby correcting the number of track points at the second moment from M to N.
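The M-greater-than-N correction can be sketched in a few lines. This is an illustrative sketch only: it assumes points are (x, y) tuples and uses the smallest-abscissa reference point of embodiment 1 below; the function name is invented for illustration.

```python
def remove_extra_points(prev_pts, cur_pts, n):
    """Correct the current detection from M (> N) down to N track points.

    Uses the previous-moment point with the smallest x-coordinate as the
    reference and drops the (M - N) current points farthest from it,
    i.e. those with the largest displacement.
    """
    ref = min(prev_pts, key=lambda p: p[0])  # smallest abscissa

    def disp(p):  # displacement of a current point relative to the reference
        return ((p[0] - ref[0]) ** 2 + (p[1] - ref[1]) ** 2) ** 0.5

    # keep the N points with the smallest displacement
    return sorted(cur_pts, key=disp)[:n]
```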
Further, when the position information of the track points is represented by position coordinates, the displacements of the M track points at the second moment relative to the first moment can be calculated as follows: determine a reference track point at the first moment from the position coordinates of the N track points at the first moment; then calculate the distance from each of the M track points detected at the second moment to the reference track point according to their position coordinates, and take these distances as the displacements of the M track points relative to the first moment. The reference track point at the first moment can be selected in several ways, described below with reference to several optional embodiments.
Embodiment 1: according to the position coordinates of the N track points at the first moment, select the track point with the smallest abscissa as the reference track point.
Embodiment 2: according to the position coordinates of the N track points at the first moment, take the mean coordinate of the N track points as the reference track point.
In the following, taking N as 2, M as 3, and selecting the trace point with the smallest abscissa from the N trace points at the first time as the reference trace point, an exemplary description will be given of a process of correcting the trace point at the second time from M to N.
As shown in fig. 1b, A1, A2, and A3 are the 3 trace points detected at the second moment; B1 and B2 are the 2 trace points at the first moment. B1, which has the smaller abscissa, is selected from B1 and B2 as the reference trace point. The distances from A1, A2, and A3 to B1 are then calculated as d(A1,B1), d(A2,B1), and d(A3,B1). Since d(A3,B1) > d(A2,B1) > d(A1,B1), the trace point A3 is removed, i.e., A1 and A2 are the 2 trace points at the second moment.
Optionally, the (M-N) track points are removed from the M track points at the second moment if the displacements of the M track points relative to the first moment are all smaller than a set pixel threshold. The pixel threshold may be obtained statistically from the moving rate of the interactive object and the image refresh rate of the projection picture; when the interactive object is a user's finger, it may be set to 100 pixels.
In actual use, because of the recognition rate of the projection device for the interactive object or the image refresh rate of the projection picture, track points generated by the interactive object on the projection picture may fail to be recognized, or fewer track points may be recognized than actually exist. In this scenario, the number M of track points detected at the second moment is smaller than N. Further, since the user's operation is continuous while the at least one interactive object slides on the projection picture, the displacement of the current moment (second moment) relative to the previous moment (first moment) is not large, and the current moment also bears a definite relation to the subsequent moment at which N track points are again detected. Based on this, another optional implementation of step 102 is: if M is smaller than N, continue detecting the number of track points generated by the at least one interactive object on the projection picture at subsequent moments until N track points are detected, and take that moment as the target moment; calculate the position information of an intermediate position from the position information of the N track points at the first moment and of the N track points detected at the target moment; then determine the position information of (N-M) new track points from the position information of the M track points at the second moment and the intermediate position, taking the (N-M) new track points as track points at the second moment, thereby correcting the number of track points detected at the second moment from M to N.
The following provides a specific embodiment of determining the middle position and the new track point at the second moment, taking N = 2, M = 1, and position information represented by position coordinates as an example.
As shown in fig. 1C, C1 is the track point detected at the second moment, D1 and D2 are the 2 track points at the first moment, and E1 and E2 are the 2 track points detected at the target moment. Based on this, the mean of the position coordinates of the 2 track points D1 and D2 at the first moment and the 2 track points E1 and E2 detected at the target moment can be calculated and taken as the coordinates of the middle position. Further, with the middle position as the center O, the symmetric point C2 of the track point C1 detected at the second moment with respect to the center O is determined, and the symmetric point C2 is used as a new track point at the second moment; that is, the 2 track points at the second moment are C1 and C2.
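The fig. 1C construction can be written out directly. This is a sketch with my own function name: the center O is the mean of the first-moment points D1, D2 and the target-moment points E1, E2, and the missing second-moment point C2 is the reflection of the detected point C1 through O.

```python
def fill_missing_point(first_pts, target_pts, detected):
    """Reconstruct the missing second-moment track point (N=2, M=1):
    reflect the detected point through the mean of the first-moment
    and target-moment points."""
    pts = list(first_pts) + list(target_pts)
    ox = sum(p[0] for p in pts) / len(pts)  # middle position O, x
    oy = sum(p[1] for p in pts) / len(pts)  # middle position O, y
    cx, cy = detected
    # Symmetric point through the center O: C2 = 2*O - C1.
    return (2 * ox - cx, 2 * oy - cy)
```

For points D1=(0,0), D2=(2,0), E1=(0,4), E2=(2,4) and a detected point C1=(0,2), the center is (1, 2) and the reconstructed point is (2, 2).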
Based on the above embodiments, if the number of track points at the current moment is greater than the set number, the erroneous track points can be removed; if it is less than the set number, the missing track points can be filled in. After the M track points detected at the second moment are corrected to N, the sliding track formed by the at least one interactive object on the projection picture from the first moment to the second moment can be determined according to the position information of the N track points at the first moment and the position information of the N track points obtained by correcting the second moment. When determining this sliding track, the corresponding connection relation between the N track points at the first moment and the N corrected track points at the second moment also needs to be determined, so that the probability of track misconnection can be reduced.
Based on this, when the position information of the track point is represented by using the position coordinate, an optional implementation of step 103 is: respectively calculating the distance between the N track points at the first moment and the N track points obtained by correcting the second moment according to the position information of the N track points at the first moment and the position information of the N track points obtained by correcting the second moment; determining the corresponding connection relation between the N track points at the first moment and the N track points obtained by correcting the second moment according to the distance between the N track points at the first moment and the N track points obtained by correcting the second moment; and determining a sliding track of at least one interactive object on the projection picture from the first moment to the second moment according to the corresponding connection relation between the N track points obtained by correcting the second moment.
Taking N = 2 as an example, a specific embodiment of determining the corresponding connection relation between the N track points at the first moment and the N track points obtained by correcting the second moment is provided below.
As shown in fig. 1d, F1 and F2 are the 2 track points at the first moment, and G1 and G2 are the 2 track points obtained by correcting the second moment. The distances between track point F1 and track points G1 and G2 are calculated as dF1G1 and dF1G2, respectively, and the distances between track point F2 and track points G1 and G2 as dF2G1 and dF2G2. Further, the forward nearest principle is used to determine the corresponding connection relation between the first-moment track points F1 and F2 and the 2 corrected track points G1 and G2. The specific implementation is: if the distance dF1G1 between track point F1 and track point G1 is less than the distance dF1G2 between track point F1 and track point G2, and the distance dF2G1 between track point F2 and track point G1 is greater than the distance dF2G2 between track point F2 and track point G2, i.e. dF1G1 < dF1G2 and dF2G1 > dF2G2, it is determined that track point F1 is correspondingly connected with track point G1, and track point F2 with track point G2.
Accordingly, as shown in fig. 1e, if the distance dF1G1 between track point F1 and track point G1 is greater than the distance dF1G2 between track point F1 and track point G2, and/or the distance dF2G1 between track point F2 and track point G1 is less than the distance dF2G2 between track point F2 and track point G2, i.e. dF1G1 > dF1G2 and/or dF2G1 < dF2G2, the reverse nearest principle is used to determine the corresponding connection relation between the first-moment track points F1 and F2 and the 2 corrected track points G1 and G2. The specific implementation is: further judge whether the distance dF1G1 between track point G1 and track point F1 is less than its distance dF2G1 to track point F2, and whether the distance between track point G2 and track point F1 is greater than the distance between track point G2 and track point F2, i.e. whether dF1G1 < dF2G1 and dF1G2 > dF2G2 hold simultaneously; if both judgment results are yes, it is determined that track point F1 is correspondingly connected with track point G1, and track point G2 with track point F2.
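The forward and reverse nearest principles for N = 2 can be sketched as follows. The function names are my own; note that, as described in the text, both principles only ever confirm the pairing F1-G1 / F2-G2, and when neither confirms it the decision is deferred to later frames (returned as None here).

```python
import math

def dist(p, q):
    """Euclidean distance between two (x, y) track points."""
    return math.hypot(p[0] - q[0], p[1] - q[1])

def match_pairs(f1, f2, g1, g2):
    """Hedged sketch of the forward/reverse nearest matching (N = 2)."""
    # Forward nearest principle: each first-moment point is nearer
    # to its own corrected point than to the other one.
    if dist(f1, g1) < dist(f1, g2) and dist(f2, g1) > dist(f2, g2):
        return {f1: g1, f2: g2}
    # Reverse nearest principle: each corrected point is nearer
    # to its own first-moment point than to the other one.
    if dist(f1, g1) < dist(f2, g1) and dist(f1, g2) > dist(f2, g2):
        return {f1: g1, f2: g2}
    return None  # ambiguous: defer to the K later moments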
Correspondingly, if at least one of the two judgment conditions of the above reverse nearest principle does not hold, i.e. dF1G1 > dF2G1 and/or dF1G2 < dF2G2, the N track points generated on the projection picture at K moments after the second moment are determined next. The N track points at each of the K moments after the second moment may be the actually detected N track points, or N track points obtained by the correction of step 102 and its optional embodiments, which is not limited herein. Considering the refresh rate and sampling rate of the projection apparatus, K may be an integer greater than or equal to 2; preferably, K is 2.
Further, according to the distance between the N track points at the Kth moment after the second moment and the N track points at the first moment, determining the corresponding connection relation between the N track points at the Kth moment after the second moment and the N track points at the first moment.
Further, according to the distances between the N track points at the Kth moment after the second moment and the N track points corresponding to the second moment and the (K-1) moments after the second moment, the corresponding connection relations between the N track points corresponding to the second moment and the (K-1) moments after the second moment and the N track points at the Kth moment after the second moment are determined. Then, according to the corresponding connection relationship between the N track points at the kth time after the second time and the N track points at the first time, and the corresponding connection relationship between the N track points and the respective N track points at the (K-1) times after the second time, the sliding track formed by the at least one interactive object on the projection picture from the first time to the kth time after the second time is determined.
For a way of determining the corresponding connection relationship between the N track points at the kth time after the second time and the N track points at the first time, and the corresponding connection relationship between the N track points and the respective N track points at the (K-1) times after the second time, reference may be made to the specific implementation manners of the forward closest principle and the backward closest principle in fig. 1d and fig. 1e, which are not described herein again.
In order to further reduce the probability of trajectory misconnection, an alternative embodiment of determining a sliding trajectory formed by at least one interactive object on the projection screen from the first time to the kth time is as follows: calculating N mean value coordinates of the track points correspondingly connected from the first moment to the Kth moment after the second moment according to the corresponding connection relationship between the N track points at the Kth moment after the second moment and the N track points at the first moment and the corresponding connection relationship between the N track points and the N track points at the (K-1) moments after the second moment, and taking the N mean value coordinates as N middle track points correspondingly connected with the N track points at the Kth moment after the second moment; and determining a sliding track formed by the at least one interactive object on the projection picture from the first moment to the Kth moment after the second moment according to the corresponding connection relation between the N track points at the Kth moment after the second moment and the N track points at the first moment and the corresponding connection relation between the N track points and the N middle track points.
Taking N = 2 and K = 2 as an example, the following describes how to determine the sliding track formed by the at least one interactive object on the projection picture from the first moment to the Kth moment after the second moment.
As shown in fig. 1F, F1 and F2 are the 2 track points at the first moment, G1 and G2 are the 2 track points obtained by correcting the second moment, H1 and H2 are the 2 track points at the 1st moment after the second moment, and J1 and J2 are the 2 track points at the 2nd moment after the second moment. The distances between track point F1 at the first moment and track points J1 and J2 at the 2nd moment after the second moment are dF1J1 and dF1J2, respectively, and the distances between track point F2 and track points J1 and J2 are dF2J1 and dF2J2. Combining the forward nearest principle and the reverse nearest principle described in connection with fig. 1d and fig. 1e above, it is determined that F1 is correspondingly connected with J1 and F2 with J2.
Further, according to the distances between 2 track points J1 and J2 at the 2 nd time after the second time and 2 track points G1, G2, H1, and H2 corresponding to the second time and 1 time after the second time, respectively, and in combination with the forward closest principle and the reverse closest principle described in association with fig. 1d and 1e above, the corresponding connection relations between the 2 track points at the second time and 1 time after the second time and the 2 track points J1 and J2 at the 2 nd time after the second time are determined as follows: g1 and H1 are connected correspondingly to J1, and G2 and H2 are connected correspondingly to J2.
Further, since F1 and J1 are correspondingly connected, G1, H1 are correspondingly connected with J1, F2 and J2 are correspondingly connected, and G2, H2 are correspondingly connected with J2, it is determined that the sliding trajectory formed by the 2 interaction objects on the projection screen from the first time to the 2 nd time after the second time is as shown by a solid line in fig. 1F.
In order to further reduce the probability of track misconnection, since F1, G1 and H1 are correspondingly connected with J1, and F2, G2 and H2 are correspondingly connected with J2, the mean coordinates of the correspondingly connected track points from the first moment to the 2nd moment after the second moment are calculated; that is, the mean coordinates of track points F1, G1, H1 and J1, and the mean coordinates of track points F2, G2, H2 and J2, are calculated respectively. These two mean coordinates correspond to the two middle track points L1 and L2. Since L1 is the track point corresponding to the mean coordinates of F1, G1, H1 and J1, L1 is correspondingly connected with J1; similarly, L2 is connected with J2. Further, since J1 is connected with F1 and L1, and J2 is connected with F2 and L2, the sliding track formed by the 2 interactive objects on the projection picture from the first moment to the 2nd moment after the second moment is as shown by the dashed lines in fig. 1F. As shown in fig. 1F, the distance between the 2 sliding tracks given by the dashed lines is greater than that between the 2 sliding tracks shown by the solid lines, so the probability of track misconnection can be further reduced and the fluency of interaction improved.
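The fig. 1F smoothing step can be sketched as follows (the function name is my own). Each chain of correspondingly connected points, e.g. F1-G1-H1-J1, is collapsed to its mean point L1, and the drawn trajectory becomes start, mean, end; pulling each trajectory toward its own mean widens the gap between the two trajectories.

```python
def smooth_chain(chain):
    """chain: list of (x, y) track points correspondingly connected
    across consecutive moments, e.g. [F1, G1, H1, J1].
    Returns the simplified polyline start -> mean point -> end."""
    mx = sum(p[0] for p in chain) / len(chain)
    my = sum(p[1] for p in chain) / len(chain)
    return [chain[0], (mx, my), chain[-1]]
```

For the chain [(0, 0), (2, 0), (4, 0), (6, 0)], the middle track point is (3.0, 0.0) and the drawn polyline is [(0, 0), (3.0, 0.0), (6, 0)].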
It should be noted that the distance calculations involved in fig. 1b to 1f may be performed using the Manhattan distance formula or the Euclidean distance formula, but are not limited thereto. These distance calculation methods are well known in the art and are not described here again.
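For concreteness, the two distance formulas mentioned above can be written as:

```python
def manhattan(p, q):
    """Manhattan (L1) distance between two (x, y) points."""
    return abs(p[0] - q[0]) + abs(p[1] - q[1])

def euclidean(p, q):
    """Euclidean (L2) distance between two (x, y) points."""
    return ((p[0] - q[0]) ** 2 + (p[1] - q[1]) ** 2) ** 0.5
```

For the points (0, 0) and (3, 4), the Manhattan distance is 7 and the Euclidean distance is 5.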
It should be noted that the execution subjects of the steps of the methods provided in the above embodiments may be the same device, or different devices may be used as the execution subjects of the methods. For example, the execution subject of step 101-103 may be device a; for another example, the execution subject of step 101 may be device a, and the execution subject of step 102 may be device B; and so on.
In addition, in some of the flows described in the above embodiments and the drawings, a plurality of operations are included in a specific order, but it should be clearly understood that the operations may be executed out of the order presented herein or in parallel, and the sequence numbers of the operations, such as 101, 102, etc., are merely used for distinguishing different operations, and the sequence numbers do not represent any execution order per se. Additionally, the flows may include more or fewer operations, and the operations may be performed sequentially or in parallel.
Correspondingly, the embodiment of the application also provides a computer readable storage medium storing computer instructions. The computer instructions, when executed by one or more processors, cause the one or more processors to perform the steps in the projection-based trajectory recognition method described above.
Fig. 2 is a schematic structural diagram of a projection apparatus according to an embodiment of the present application. The projection apparatus may be implemented as a projection stereo, a projection lamp, a projector, or the like. As shown in fig. 2, the projection apparatus includes: a memory 20a, a processor 20b, and a projection module 20c.
The projection module 20c is used for projecting a projection image.
The memory 20a is used for storing computer programs and may be configured to store various other data to support operations on the projection device. Wherein the processor 20b may execute a computer program stored in the memory 20a to implement the corresponding control logic. The memory 20a may be implemented by any type or combination of volatile or non-volatile memory devices, such as Static Random Access Memory (SRAM), electrically erasable programmable read-only memory (EEPROM), erasable programmable read-only memory (EPROM), programmable read-only memory (PROM), read-only memory (ROM), magnetic memory, flash memory, magnetic or optical disks.
The processor 20b is coupled to the memory 20a for executing the above-mentioned computer program for: in the interaction process of at least one interactive object and a projection picture, detecting the number M of track points generated on the projection picture by the at least one interactive object at a second moment, wherein the projection picture is formed by projection of projection equipment; if the number M of the track points detected at the second moment is not equal to the set number N of the track points, correcting the number M of the track points detected at the second moment into N according to the position information of the N track points at the first moment and the position information of the M track points detected at the second moment, wherein M, N is a positive integer, and the first moment is the previous moment of the second moment; and determining a sliding track formed by at least one interactive object on the projection picture from the first moment to the second moment according to the position information of the N track points at the first moment and the position information of the N track points obtained by correcting the second moment.
In an alternative embodiment, the projection device further comprises a depth of field module 20d. The depth of field module 20d may be disposed on the front surface of the projection apparatus body, and is configured to collect a first depth image when the at least one interactive object is not interacting with the projection picture, and a second depth image at the second time. The depth of field module 20d may include an infrared ranging sensor, a laser ranging sensor, and the like, but is not limited thereto. Based on this, when detecting the number M of track points generated on the projection picture by the at least one interactive object at the second time, the processor 20b is specifically configured to: determine the number of track points generated on the projection picture by the at least one interactive object at the second time based on the depth difference between the first depth image and the second depth image. For the corresponding specific implementation, reference may be made to the related description in the above method embodiments, which is not repeated here.
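The patent only states that the track-point count is derived from the depth difference between the two depth images. One plausible sketch is below; the tolerance value, the connected-component counting, and all names are my assumptions, not the patent's method. Pixels whose depth changed beyond a tolerance are treated as touch candidates, and each connected region counts as one track point.

```python
from collections import deque

def count_track_points(depth_empty, depth_now, tol=10):
    """Count 4-connected regions where the depth changed by more than
    tol between the empty-screen image and the current image.
    Both images are lists of lists of depth values (same shape)."""
    h, w = len(depth_empty), len(depth_empty[0])
    mask = [[abs(depth_now[i][j] - depth_empty[i][j]) > tol
             for j in range(w)] for i in range(h)]
    seen = [[False] * w for _ in range(h)]
    count = 0
    for i in range(h):
        for j in range(w):
            if mask[i][j] and not seen[i][j]:
                count += 1          # new touch region found
                q = deque([(i, j)])
                seen[i][j] = True
                while q:            # flood-fill the whole region
                    y, x = q.popleft()
                    for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                        ny, nx = y + dy, x + dx
                        if (0 <= ny < h and 0 <= nx < w
                                and mask[ny][nx] and not seen[ny][nx]):
                            seen[ny][nx] = True
                            q.append((ny, nx))
    return count
```

On a 5x5 depth map where two isolated pixels changed, this counts 2 track points.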
In another optional embodiment, when correcting the number of track points detected at the second time from M to N, the processor 20b is specifically configured to: if M is larger than N, respectively calculate the displacements of the M track points at the second time relative to the first time according to the position information of the N track points at the first time and the position information of the M track points at the second time; and remove (M-N) track points from the M track points at the second time in descending order of displacement, so as to correct the M track points at the second time to N track points.
Further, when calculating the displacements of the M track points at the second time relative to the first time, the processor 20b is specifically configured to: select the reference track point with the smallest abscissa from the N track points according to the position coordinates of the N track points at the first time; and respectively calculate the distances between the M track points at the second time and the reference track point according to the position coordinates of the M track points at the second time and the position coordinates of the reference track point, taking these distances as the displacements of the M track points at the second time relative to the first time.
In another alternative embodiment, when correcting the number of track points detected at the second time from M to N, the processor 20b is further configured to: if M is smaller than N, continue to detect the number of track points generated by the at least one interactive object on the projection picture at subsequent times until N track points are detected, and take the time at which N track points are detected as the target time; calculate the position information of the middle position according to the position information of the N track points at the first time and the position information of the N track points detected at the target time; and determine the position information of (N-M) new track points according to the position information of the M track points at the second time and the position information of the middle position, taking the (N-M) new track points as track points at the second time, so as to correct the number of track points detected at the second time from M to N.
Further, when N is 2 and M is 1, the processor 20b is specifically configured to: calculate the mean of the position coordinates of the 2 track points at the first time and the position coordinates of the 2 track points detected at the target time, and take it as the coordinates of the middle position.
Accordingly, when determining the position information of the (N-M) new track points, the processor 20b is specifically configured to: determine the symmetric point of the 1 track point at the second time with the middle position as the center, and take the position coordinates of the symmetric point as the position information of the 1 new track point at the second time.
In a further alternative embodiment, when determining the sliding track of the at least one interactive object on the projection picture from the first time to the second time, the processor 20b is specifically configured to: respectively calculate the distances between the N track points at the first time and the N track points obtained by correcting the second time, according to the position information of the N track points at the first time and the position information of the N corrected track points; determine the corresponding connection relation between the N track points at the first time and the N corrected track points according to those distances; and determine the sliding track of the at least one interactive object on the projection picture from the first time to the second time according to that corresponding connection relation.
Further, when N is 2, the N track points at the first time include track point F1 and track point F2, and the N track points obtained by correcting the second time include track point G1 and track point G2. When determining the corresponding connection relation between the N track points at the first time and the N track points obtained by correcting the second time, the processor 20b is specifically configured to: if the distance between track point F1 and track point G1 is smaller than the distance between track point F1 and track point G2, and the distance between track point F2 and track point G1 is larger than the distance between track point F2 and track point G2, determine that track point F1 is correspondingly connected with track point G1, and track point F2 with track point G2; if the distance between track point F1 and track point G1 is greater than the distance between track point F1 and track point G2, and/or the distance between track point F2 and track point G1 is less than the distance between track point F2 and track point G2, judge whether the distance between track point G1 and track point F1 is less than the distance between track point G1 and track point F2, and judge whether the distance between track point G2 and track point F1 is greater than the distance between track point G2 and track point F2; and if both judgment results are yes, determine that track point F1 is correspondingly connected with track point G1, and track point F2 with track point G2.
Accordingly, the processor 20b is further configured to: if at least one judgment result is negative, continue to determine the N track points generated on the projection picture at K times after the second time, where K is an integer greater than or equal to 2; determine the corresponding connection relation between the N track points at the Kth time and the N track points at the first time according to the distances between them; determine the corresponding connection relations between the N track points at the Kth time and the respective N track points at the second time and the (K-1) times after the second time according to the distances between them; and determine the sliding track formed by the at least one interactive object on the projection picture from the first time to the Kth time according to the corresponding connection relation between the N track points at the Kth time and the N track points at the first time, and the corresponding connection relations with the respective N track points at the second time and the (K-1) times.
Further, when determining the sliding track formed by the at least one interactive object on the projection picture from the first time to the Kth time, the processor 20b is specifically configured to: respectively calculate the mean coordinates of the (K+2) correspondingly connected track points from the first time to the Kth time, according to the corresponding connection relation between the N track points at the Kth time and the N track points at the first time and the corresponding connection relations with the respective N track points at the second time and the (K-1) times, and take the resulting N mean coordinates as N middle track points correspondingly connected with the N track points at the Kth time; and determine the sliding track formed by the at least one interactive object on the projection picture from the first time to the Kth time according to the corresponding connection relation between the N track points at the Kth time and the N track points at the first time and their corresponding connection relation with the N middle track points.
In some embodiments, the projection device further includes a communication component 20e. The communication component 20e is configured to facilitate wired or wireless communication between the projection device and other devices. The projection device may access a wireless network based on a communication standard, such as WiFi, 2G, or 3G, or a combination thereof. In an exemplary embodiment, the communication component receives a broadcast signal or broadcast-related information from an external broadcast management system via a broadcast channel. In an exemplary embodiment, the communication component may also be implemented based on Near Field Communication (NFC) technology, Radio Frequency Identification (RFID) technology, Infrared Data Association (IrDA) technology, Ultra Wideband (UWB) technology, Bluetooth (BT) technology, and other technologies.
In other embodiments, the projection device further includes a power supply assembly 20f. The power supply assembly 20f is configured to provide power to the various components of the projection device. The power supply assembly 20f may include a power management system, one or more power supplies, and other components associated with generating, managing, and distributing power for the device in which it is located.
In some embodiments, the projection device may further include a sound input/output unit 20g configured to output and/or input audio signals, such as the projected sound. For example, the sound input/output unit 20g includes a microphone (MIC) configured to receive external audio signals when the device in which the audio component is located is in an operation mode such as a call mode, a recording mode, or a voice recognition mode. The received audio signal may further be stored in the memory or transmitted via the communication component 20e. In some embodiments, the audio component further comprises a speaker for outputting audio signals. For example, for a projection apparatus with a voice interaction function, voice interaction with the user can be realized through the sound input/output unit 20g.
Accordingly, the projection apparatus may further include a sound processing unit 20h for processing the sound signals input or output by the sound input/output unit 20g.
In some embodiments, the projection device further comprises a display 20i. The display 20i may include a Liquid Crystal Display (LCD) and/or a Touch Panel (TP). If the screen includes a touch panel, it may be implemented as a touch screen to receive input signals from the user. The touch panel includes one or more touch sensors to sense touches, slides, and gestures on the touch panel. The touch sensor may not only sense the boundary of a touch or slide action, but also detect the duration and pressure associated with the touch or slide operation.
Accordingly, the projection apparatus may further include an image processing unit 20j for performing signal processing, such as image quality correction, on the image signal output from the processor 20b, and for converting its resolution to a resolution matching the screen of the display 20i. A display driving unit 20k then sequentially selects each row of pixels of the display 20i and scans them row by row, thus providing pixel signals based on the signal-processed image signals.
It should be noted that only some components are schematically shown in fig. 2; this does not mean that the projection device must include all of the components shown in fig. 2, nor that it can only include those components. In addition to the components shown in fig. 2, the projection apparatus may further include an input operation unit (not shown in fig. 2). The input operation unit includes at least one operation member for performing input operations, such as a key, a button, a switch, or another member with a similar function, receives user instructions through the operation member, and outputs the instructions to the processor 20b. Optionally, the projection device may further include a bracket, a fixing table, and other components for mounting the projection device, according to application requirements.
In the projection device provided by this embodiment, during the interaction between the interactive object and the projection picture, the number of track points generated by the interactive object on the projection picture is continuously detected. If the number of track points detected at the current time differs from the set number, the number of track points at the current time is corrected to the set number according to the position information of the track points at the previous time and the position information of the track points detected at the current time. In this way, if the number of track points at the current time is greater than the set number, the erroneous track points can be removed; if it is less than the set number, the missing track points can be filled in. The motion track of the interactive object from the previous time to the current time is then determined according to the positions of the track points at the previous time and the corrected positions at the current time, which reduces the probability of track misconnection, improves the fluency of response to the interactive object's continuous operations on the projection picture, and thereby improves user experience.
It should be noted that the designations "first", "second", etc. in this document are used to distinguish different messages, devices, modules, and the like; they do not denote a sequential order, nor do they require that the items labelled "first" and "second" be of different types.
As will be appreciated by one skilled in the art, embodiments of the present invention may be provided as a method, system, or computer program product. Accordingly, the present invention may take the form of an entirely hardware embodiment, an entirely software embodiment or an embodiment combining software and hardware aspects. Furthermore, the present invention may take the form of a computer program product embodied on one or more computer-usable storage media (including, but not limited to, disk storage, CD-ROM, optical storage, and the like) having computer-usable program code embodied therein.
The present invention is described with reference to flowchart illustrations and/or block diagrams of methods, apparatus (systems), and computer program products according to embodiments of the invention. It will be understood that each flow and/or block of the flow diagrams and/or block diagrams, and combinations of flows and/or blocks in the flow diagrams and/or block diagrams, can be implemented by computer program instructions. These computer program instructions may be provided to a processor of a general purpose computer, special purpose computer, embedded processor, or other programmable data processing apparatus to produce a machine, such that the instructions, which execute via the processor of the computer or other programmable data processing apparatus, create means for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be stored in a computer-readable memory that can direct a computer or other programmable data processing apparatus to function in a particular manner, such that the instructions stored in the computer-readable memory produce an article of manufacture including instruction means which implement the function specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be loaded onto a computer or other programmable data processing apparatus to cause a series of operational steps to be performed on the computer or other programmable apparatus to produce a computer implemented process such that the instructions which execute on the computer or other programmable apparatus provide steps for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
In a typical configuration, a computing device includes one or more processors (CPUs), input/output interfaces, network interfaces, and memory.
The memory may include volatile memory in a computer readable medium, such as random access memory (RAM), and/or non-volatile memory, such as read only memory (ROM) or flash memory (flash RAM). Memory is an example of a computer-readable medium.
Computer-readable media, including both permanent and non-permanent, removable and non-removable media, may implement information storage by any method or technology. The information may be computer readable instructions, data structures, modules of a program, or other data. Examples of computer storage media include, but are not limited to, phase change memory (PRAM), static random access memory (SRAM), dynamic random access memory (DRAM), other types of random access memory (RAM), read only memory (ROM), electrically erasable programmable read only memory (EEPROM), flash memory or other memory technology, compact disc read only memory (CD-ROM), digital versatile discs (DVD) or other optical storage, magnetic cassettes, magnetic tape, magnetic disk storage or other magnetic storage devices, or any other non-transmission medium that can be used to store information that can be accessed by a computing device. As defined herein, a computer readable medium does not include a transitory computer readable medium such as a modulated data signal and a carrier wave.
It should also be noted that the terms "comprises," "comprising," or any other variation thereof, are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements does not include only those elements but may include other elements not expressly listed or inherent to such process, method, article, or apparatus. Without further limitation, an element defined by the phrase "comprising a ..." does not exclude the presence of other like elements in a process, method, article, or apparatus that comprises the element.
The above description is only an example of the present application and is not intended to limit the present application. Various modifications and changes may occur to those skilled in the art. Any modification, equivalent replacement, improvement, etc. made within the spirit and principle of the present application should be included in the scope of the claims of the present application.

Claims (10)

1. A projection-based trajectory recognition method, applicable to a projection device, characterized by comprising the following steps:
in the interaction process of at least one interactive object and a projection picture, detecting the number M of track points generated on the projection picture by the at least one interactive object at a second moment, wherein the projection picture is formed by projection of the projection equipment;
if the number M of track points detected at the second moment is not equal to the set number N of track points, correcting the track points detected at the second moment from M to N according to the position information of the N track points at the first moment and the position information of the M track points detected at the second moment, wherein M and N are positive integers, and the first moment is the moment preceding the second moment;
determining a sliding track formed by the at least one interactive object on the projection picture from the first moment to the second moment according to the position information of the N track points at the first moment and the position information of the N track points obtained by correction at the second moment;
the correcting the track points detected at the second moment from M to N according to the position information of the N track points at the first moment and the position information of the M track points detected at the second moment comprises:
if M is larger than N, respectively calculating the displacement of the M track points at the second moment relative to the first moment according to the position information of the N track points at the first moment and the position information of the M track points at the second moment;
and removing (M-N) track points from the M track points at the second moment in descending order of displacement, according to the displacement of the M track points at the second moment relative to the first moment, so as to correct the M track points at the second moment into N track points.
2. The method according to claim 1, wherein the calculating, according to the position information of the N track points at the first moment and the position information of the M track points at the second moment, the displacement of the M track points at the second moment relative to the first moment comprises:
selecting, from the N track points, the reference track point with the smallest abscissa according to the position coordinates of the N track points at the first moment;
and respectively calculating the distances between the M track points at the second moment and the reference track point according to the position coordinates of the M track points at the second moment and the position coordinates of the reference track point, and taking these distances as the displacements of the M track points at the second moment relative to the first moment.
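The removal step of claims 1 and 2 can be sketched as follows (a minimal illustration under assumed names such as `remove_extra_points`, not the patented implementation): the reference point is the first-moment point with the smallest abscissa, each of the M current points is measured against it, and the (M-N) points with the largest displacement are discarded.

```python
import math

def remove_extra_points(prev_points, curr_points, n_set):
    """Correct M (> N) detected points down to the set number N.

    prev_points: N (x, y) tuples at the first moment.
    curr_points: M (x, y) tuples at the second moment, M > N.
    """
    # Reference point: the first-moment point with the smallest abscissa (claim 2).
    ref = min(prev_points, key=lambda p: p[0])

    # Displacement of a current point relative to the first moment,
    # taken as its distance to the reference point.
    def displacement(p):
        return math.hypot(p[0] - ref[0], p[1] - ref[1])

    # Keep the N points with the smallest displacement, i.e. remove the
    # (M - N) points with the largest displacement (claim 1).
    return sorted(curr_points, key=displacement)[:n_set]

prev = [(1.0, 2.0), (5.0, 2.0)]               # N = 2 points at the first moment
curr = [(1.2, 2.1), (5.1, 2.0), (9.0, 9.0)]   # M = 3 points at the second moment
print(remove_extra_points(prev, curr, 2))     # the spurious far-away point is dropped
```

A spurious detection far from the previous frame's points thus has the largest displacement and is the first to be removed.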
3. The method of claim 1, further comprising:
if M is smaller than N, continuing to detect the number of track points generated by the at least one interactive object on the projection picture at subsequent moments until N track points are detected, and taking the first subsequent moment at which N track points are detected as a target moment;
calculating the position information of a middle position according to the position information of the N track points at the first moment and the position information of the N track points detected at the target moment;
and determining the position information of (N-M) new track points according to the position information of the M track points at the second moment and the position information of the middle position, and taking the (N-M) new track points as track points at the second moment, so as to correct the track points detected at the second moment from M to N.
4. The method according to claim 3, wherein N is 2 and M is 1, and the calculating the position information of the middle position according to the position information of the N track points at the first moment and the position information of the N track points detected at the target moment comprises:
calculating the mean value of the position coordinates of the 2 track points at the first moment and the position coordinates of the 2 track points detected at the target moment, and taking the mean value as the coordinates of the middle position;
and the determining the position information of the (N-M) new track points according to the position information of the M track points at the second moment and the position information of the middle position comprises:
determining the point symmetric to the 1 track point at the second moment about the middle position, and taking the position coordinates of the symmetric point as the position information of the 1 new track point at the second moment.
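For the N = 2, M = 1 case of claims 3 and 4, the missing point can be reconstructed as the mirror image of the detected point about the middle position, itself the mean of the four known coordinates. A hedged sketch (function and variable names are assumptions, not the patented implementation):

```python
def fill_missing_point(first_pts, target_pts, detected_pt):
    """Reconstruct the missing track point at the second moment (N = 2, M = 1).

    first_pts:   the 2 (x, y) points at the first moment.
    target_pts:  the 2 (x, y) points at the target moment (first later
                 moment at which N points are detected again).
    detected_pt: the single point actually detected at the second moment.
    """
    pts = list(first_pts) + list(target_pts)
    # Middle position: mean of the 4 known coordinates (claim 4).
    mid = (sum(p[0] for p in pts) / 4.0, sum(p[1] for p in pts) / 4.0)
    # New point: reflection of the detected point through the middle position.
    sym = (2 * mid[0] - detected_pt[0], 2 * mid[1] - detected_pt[1])
    return [detected_pt, sym]

# Two fingers moving upward; one finger briefly lost at the second moment.
print(fill_missing_point([(0, 0), (4, 0)], [(0, 2), (4, 2)], (0, 1)))
```

With the fingers at x = 0 and x = 4 throughout, the middle position is (2, 1) and the reconstructed point lands at (4, 1), roughly where the lost finger would have been.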
5. The method according to any one of claims 1 to 3, wherein the determining the sliding track formed by the at least one interactive object on the projection picture from the first moment to the second moment according to the position information of the N track points at the first moment and the position information of the N track points obtained by correction at the second moment comprises:
respectively calculating the distances between the N track points at the first moment and the N track points obtained by correction at the second moment according to the position information of the N track points at the first moment and the position information of the N track points obtained by correction at the second moment;
determining the corresponding connection relationship between the N track points at the first moment and the N track points obtained by correction at the second moment according to these distances;
and determining the sliding track formed by the at least one interactive object on the projection picture from the first moment to the second moment according to the corresponding connection relationship.
6. The method according to claim 5, wherein N is 2, the N track points at the first moment comprise a track point F1 and a track point F2, and the N track points obtained by correction at the second moment comprise a track point G1 and a track point G2;
the determining the corresponding connection relationship between the N track points at the first moment and the N track points obtained by correction at the second moment according to the distances between them comprises:
if the distance between the track point F1 and the track point G1 is smaller than the distance between the track point F1 and the track point G2, and the distance between the track point F2 and the track point G1 is larger than the distance between the track point F2 and the track point G2, determining that the track point F1 is correspondingly connected with the track point G1, and the track point F2 is correspondingly connected with the track point G2;
if the distance between the track point F1 and the track point G1 is larger than the distance between the track point F1 and the track point G2, and/or the distance between the track point F2 and the track point G1 is smaller than the distance between the track point F2 and the track point G2, judging whether the distance between the track point G1 and the track point F1 is smaller than the distance between the track point G1 and the track point F2, and judging whether the distance between the track point G2 and the track point F1 is larger than the distance between the track point G2 and the track point F2;
and if both judgment results are yes, determining that the track point F1 is correspondingly connected with the track point G1, and the track point F2 is correspondingly connected with the track point G2.
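The two-sided distance test of claims 5 and 6 for N = 2 might look like this (names such as `match_two_points` are my own; the ambiguous case is deferred, as claim 7 does, by returning None):

```python
import math

def dist(a, b):
    """Euclidean distance between two (x, y) points."""
    return math.hypot(a[0] - b[0], a[1] - b[1])

def match_two_points(f1, f2, g1, g2):
    """Decide how the 2 first-moment points (F) connect to the 2 corrected
    second-moment points (G). Returns the pairing, or None if ambiguous."""
    # View from the F side: each F point should be strictly closer to
    # its own G point than to the other one.
    if dist(f1, g1) < dist(f1, g2) and dist(f2, g1) > dist(f2, g2):
        return [(f1, g1), (f2, g2)]
    # Otherwise check from the G side (the second judgment of claim 6).
    if dist(g1, f1) < dist(g1, f2) and dist(g2, f1) > dist(g2, f2):
        return [(f1, g1), (f2, g2)]
    # Still ambiguous: claim 7 defers the decision to later moments.
    return None

# Two fingers barely moving between moments: an unambiguous pairing.
print(match_two_points((0, 0), (10, 0), (1, 0), (9, 0)))
```

When both one-sided tests fail, no pairing is emitted and the decision falls through to the multi-moment procedure of claims 7 and 8.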
7. The method of claim 6, further comprising:
if at least one judgment result is negative, continuing to determine the N track points generated on the projection picture at each of K moments after the second moment, wherein K is an integer greater than or equal to 2;
determining the corresponding connection relationship between the N track points at the Kth moment after the second moment and the N track points at the first moment according to the distances between them;
determining the correspondence between the N track points at the Kth moment and the respective N track points at the second moment and the (K-1) moments after the second moment according to the distances between them;
and determining a sliding track formed by the at least one interactive object on the projection picture from the first moment to the Kth moment according to the corresponding connection relationship between the N track points at the Kth moment and the N track points at the first moment, and the correspondence with the respective N track points at the second moment and the (K-1) moments.
8. The method according to claim 7, wherein the determining, according to the corresponding connection relationship between the N track points at the Kth moment and the N track points at the first moment and the correspondence with the respective N track points at the second moment and the (K-1) moments, a sliding track formed by the at least one interactive object on the projection picture from the first moment to the Kth moment comprises:
respectively calculating the mean coordinates of the (K+2) correspondingly connected track points from the first moment to the Kth moment, according to the corresponding connection relationship between the N track points at the Kth moment and the N track points at the first moment and the correspondence with the respective N track points at the second moment and the (K-1) moments, and taking the obtained N mean coordinates as N middle track points correspondingly connected with the N track points;
and determining the sliding track formed by the at least one interactive object on the projection picture from the first moment to the Kth moment according to the corresponding connection relationship between the N track points at the Kth moment and the N track points at the first moment and the N middle track points.
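The averaging of claim 8 reduces, for one trajectory whose (K+2) points are already connected in order, to a coordinate-wise mean (a simplified sketch under my own naming; the cross-moment connection bookkeeping is omitted):

```python
def mean_track_point(chain):
    """Average the (K+2) connected points of one trajectory, spanning the
    first moment through the Kth moment after the second moment.

    chain: list of (x, y) points for one trajectory, connected in time order.
    """
    n = len(chain)
    # Coordinate-wise mean of all points on this trajectory.
    return (sum(p[0] for p in chain) / n, sum(p[1] for p in chain) / n)

# One trajectory observed at 3 moments (K = 1 shown only for brevity).
print(mean_track_point([(0, 0), (2, 0), (4, 0)]))
```

Applied to each of the N trajectories, this yields the N middle track points through which the sliding track is drawn.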
9. A projection device, characterized by comprising: a memory, a processor, and a projection module; wherein:
the projection module is used for projecting a projection picture;
the memory is used for storing the computer program and the position information of N track points generated by at least one interactive object on the projection picture at a first moment;
the processor is coupled to the memory for executing the computer program for:
in the interaction process of at least one interactive object and a projection picture, detecting the number M of track points generated on the projection picture by the at least one interactive object at a second moment, wherein the projection picture is formed by projection of the projection equipment;
if the number M of track points detected at the second moment is not equal to the set number N of track points, and M is larger than N, respectively calculating the displacement of the M track points at the second moment relative to the first moment according to the position information of the N track points at the first moment and the position information of the M track points at the second moment;
removing (M-N) track points from the M track points at the second moment in descending order of displacement, according to the displacement of the M track points at the second moment relative to the first moment, so as to correct the M track points at the second moment into N track points;
wherein M and N are positive integers, and the first moment is a moment preceding the second moment;
and determining a sliding track formed by the at least one interactive object on the projection picture from the first moment to the second moment according to the position information of the N track points at the first moment and the position information of the N track points obtained by correction at the second moment.
10. A computer-readable storage medium having stored thereon computer instructions, which, when executed by one or more processors, cause the one or more processors to perform the steps of the method of any one of claims 1-8.
CN201811574442.1A 2018-12-21 2018-12-21 Projection-based track identification method, projection equipment and storage medium Active CN109725804B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201811574442.1A CN109725804B (en) 2018-12-21 2018-12-21 Projection-based track identification method, projection equipment and storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201811574442.1A CN109725804B (en) 2018-12-21 2018-12-21 Projection-based track identification method, projection equipment and storage medium

Publications (2)

Publication Number Publication Date
CN109725804A CN109725804A (en) 2019-05-07
CN109725804B true CN109725804B (en) 2020-11-17

Family

ID=66297022

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201811574442.1A Active CN109725804B (en) 2018-12-21 2018-12-21 Projection-based track identification method, projection equipment and storage medium

Country Status (1)

Country Link
CN (1) CN109725804B (en)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112365743B (en) * 2020-10-12 2021-11-09 中国民用航空总局第二研究所 Method and device for correcting flight path positioning data offset of aircraft

Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101807130A (en) * 2010-05-17 2010-08-18 友达光电股份有限公司 Touch control position correcting method
WO2015187319A1 (en) * 2014-06-01 2015-12-10 Intel Corporation System and method for determining a number of users and their respective positions relative to a device

Family Cites Families (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102339157B (en) * 2010-07-19 2014-02-12 瑞鼎科技股份有限公司 Touch detection method and touch detection device of touch control panel
TWI507947B (en) * 2013-07-12 2015-11-11 Wistron Corp Apparatus and system for correcting touch signal and method thereof
JP2016053774A (en) * 2014-09-03 2016-04-14 コニカミノルタ株式会社 Handwriting input device, handwriting information acquisition method, and handwriting information acquisition program
CN107506133B (en) * 2017-08-24 2020-09-18 歌尔股份有限公司 Operation track response method and system of projection touch system


Also Published As

Publication number Publication date
CN109725804A (en) 2019-05-07

Similar Documents

Publication Publication Date Title
CN109144360B (en) Screen lighting method, electronic device, and storage medium
KR102045232B1 (en) Gesture identification methods, devices, programs and recording media
RU2667027C2 (en) Method and device for video categorization
EP2953133B1 (en) Method and device of playing multimedia
TW202113680A (en) Method and apparatus for association detection for human face and human hand, electronic device and storage medium
RU2644533C2 (en) Method and device for displaying message
EP3036742A1 (en) Content-based video segmentation
EP2629545A1 (en) Apparatus and method for changing attribute of subtitle in image display device
CN105469056A (en) Face image processing method and device
US11455836B2 (en) Dynamic motion detection method and apparatus, and storage medium
US9535604B2 (en) Display device, method for controlling display, and recording medium
EP3276301A1 (en) Mobile terminal and method for calculating a bending angle
TW202044065A (en) Method, device for video processing, electronic equipment and storage medium thereof
EP3133482A1 (en) Method and device for displaying a target object
RU2635241C2 (en) Method and device for displaying document on touch screen display
CN109725804B (en) Projection-based track identification method, projection equipment and storage medium
CN109660779A (en) Touch-control independent positioning method, projection device and storage medium based on projection
CN105487774A (en) Image grouping method and device
CN104156344A (en) Text editing method and text editing device
CN107977147B (en) Sliding track display method and device
US20220245920A1 (en) Object display method and apparatus, electronic device, and computer readable storage medium
CN109600594B (en) Projection-based touch point positioning method, projection equipment and storage medium
US9843317B2 (en) Method and device for processing PWM data
US11796959B2 (en) Augmented image viewing with three dimensional objects
CN109683775B (en) Projection-based interaction method, projection equipment and storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant
TR01 Transfer of patent right

Effective date of registration: 20210719

Address after: 264300 No. 699, Jiangjun South Road, Rongcheng City, Weihai City, Shandong Province

Patentee after: Rongcheng goer Technology Co.,Ltd.

Address before: 266104 Room 308, North Investment Street Service Center, Laoshan District, Qingdao, Shandong.

Patentee before: GOERTEK TECHNOLOGY Co.,Ltd.
