CN104980722B - Data processing method and apparatus, and electronic device - Google Patents


Info

Publication number
CN104980722B
Authority
CN
China
Legal status
Active
Application number
CN201410141683.2A
Other languages
Chinese (zh)
Other versions
CN104980722A
Inventor
张晶
陈云凯
Current Assignee
Lenovo Beijing Ltd
Original Assignee
Lenovo Beijing Ltd
Application filed by Lenovo Beijing Ltd
Priority application: CN201410141683.2A
Published as application CN104980722A; granted and published as CN104980722B


Classifications

    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N9/00Details of colour television systems
    • H04N9/12Picture reproducers
    • H04N9/31Projection devices for colour picture display, e.g. using electronic spatial light modulators [ESLM]
    • H04N9/3141Constructional details thereof
    • H04N9/3147Multi-projection systems


Abstract

An embodiment of the present invention provides a data processing method, an apparatus, and an electronic device. One data processing method is used for data processing between at least two electronic devices and includes: acquiring the positional relationship between a first projection area and a second projection area; acquiring second projection content from a second electronic device through a data transmission channel; analyzing the first projection content and the second projection content based on the positional relationship to obtain a first analysis result representing the relationship between the first projection content and the second projection content; and generating a first instruction according to the first analysis result and executing the first instruction in the first electronic device, the first instruction instructing the first electronic device to process the first projection content and the second projection content based on the positional relationship. In this way, the first electronic device can share projection content with the second electronic device through the data transmission channel and, by executing the first instruction, process its own projection content together with the second projection content projected by the second electronic device.

Description

Data processing method and device and electronic equipment
Technical Field
The present invention relates to the field of data processing technologies, and in particular, to a data processing method and apparatus, and an electronic device.
Background
With the continuous development of science and technology, electronic conferences in modern business activities usually require participants to carry a notebook computer, and a separate projector is needed to hold the conference.
With the gradual maturation of micro-projection technology, handheld devices with projection functions, such as mobile phones and other portable electronic products, have appeared. A handheld device can project a projection area onto a projection surface through a built-in projection device and project the content to be displayed into that projection area. The handheld device can also change the size and the display position of the content displayed in the projection area.
However, existing handheld devices can modify the content displayed in their own projection areas only in terms of size and display position, so their functionality is relatively limited.
Disclosure of Invention
In view of this, in order to overcome the defect that existing projection technology can only modify the size and display position of the content displayed in its projection area, a data processing method, an apparatus, and an electronic device are provided. The technical scheme is as follows:
the embodiment of the invention provides a data processing method, which is used for data processing between first electronic equipment and second electronic equipment, wherein the first electronic equipment comprises a first projection unit, the first electronic equipment projects a first projection area on a projection surface through the first projection unit and projects first projection content in the first projection area through the first projection unit, the second electronic equipment comprises a second projection unit, the second electronic equipment projects a second projection area on the projection surface through the second projection unit and projects second projection content in the second projection area through the second projection unit, and a data transmission channel is established between the first electronic equipment and the second electronic equipment;
the data processing method is applied to the first electronic equipment and comprises the following steps:
acquiring the position relation between the first projection area and the second projection area;
acquiring the second projection content from the second electronic equipment through the data transmission channel;
analyzing the first projection content and the second projection content based on the position relationship to obtain a first analysis result, wherein the first analysis result is used for representing the relationship between the first projection content and the second projection content;
and generating a first instruction according to the first analysis result, and executing the first instruction in the first electronic device, wherein the first instruction is used for instructing the first electronic device to process the first projection content and the second projection content based on the position relationship.
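As an illustration only (not part of the claimed subject matter), the four steps above can be sketched in Python. All names, the string-valued relations, and the instruction labels are hypothetical stand-ins; the patent does not fix concrete types:

```python
# Illustrative sketch of the claimed processing flow; relations and
# instruction names here are assumptions, not defined by the patent.
def analyze(first_content, second_content, relation):
    """Step 3: derive a first analysis result from the two projection
    contents and their positional relationship (here a bare equality check)."""
    return {"relation": relation, "same": first_content == second_content}

def generate_first_instruction(analysis):
    """Step 4: map the first analysis result to a first instruction."""
    if analysis["relation"] == "contained" and not analysis["same"]:
        return "update"     # e.g. replace stale content with a newer version
    if analysis["relation"] == "overlap":
        return "typeset"    # e.g. re-layout content that collides on the surface
    return "noop"

def process(first_content, second_content, relation):
    # Steps 1-2 (acquiring the positional relationship and the second
    # content over the data transmission channel) are assumed done already.
    analysis = analyze(first_content, second_content, relation)
    return generate_first_instruction(analysis)
```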
Preferably, the first projection content is data generated by a first object at a first time, the second projection content is data generated by the first object at a second time, and the first time is earlier than the second time;
analyzing the first projection content and the second projection content based on the position relationship to obtain a first analysis result, including:
when the position relationship indicates that the second projection area is located in the first projection area, matching the first projection content with the second projection content to determine whether the first projection content and the second projection content are the same;
when the first projection content and the second projection content are not matched, a first analysis result indicating that the first projection content and the second projection content are different is obtained.
Preferably, generating a first instruction according to the first analysis result, and executing the first instruction in the first electronic device includes:
generating an updating instruction according to the first analysis result;
executing the updating instruction to update the first projection content into the second projection content;
deleting the first projection content projected to the first projection area, and projecting the second projection content to the first projection area through the first projection unit.
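A minimal Python sketch of this update path, assuming a hypothetical `(object_id, timestamp, data)` tuple for each projection content (the tuple layout is an illustration, not part of the patent):

```python
def resolve_update(first, second):
    """first/second: hypothetical (object_id, timestamp, data) tuples.
    When both stem from the same first object, the second is newer, and the
    data differ, the stale first content is replaced by the second content
    (the 'updating instruction' above); otherwise the first is kept."""
    obj1, t1, d1 = first
    obj2, t2, d2 = second
    if obj1 == obj2 and t2 > t1 and d1 != d2:
        return d2   # project the second content into the first projection area
    return d1       # keep the first projection content unchanged
```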
Preferably, the first projection content is a first image, and the second projection content is a second image;
analyzing the first projection content and the second projection content based on the position relationship to obtain a first analysis result, including:
judging whether the display of the first object in the first image and the second object in the second image accords with a preset display relation or not based on the position relation; and if not, obtaining a first analysis result indicating that the first object and the second object do not accord with a preset display relationship.
Preferably, generating a first instruction according to the first analysis result, and executing the first instruction in the first electronic device includes:
generating a first instruction for changing the display direction of the first object according to the first analysis result;
executing the first instruction to alter a display orientation of the first object in the first projection area.
Preferably, the method further comprises: acquiring the first object and the second object;
and judging whether the first object and the second object have an association relation according to a preset object association relation, and if so, adjusting the posture of the first object according to the association relation.
Preferably, when the position relationship indicates that the first projection region and the second projection region overlap, analyzing the first projection content and the second projection content based on the position relationship to obtain a first analysis result, including:
acquiring a first position of the first projection content in the first projection area and a second position of the second projection content in the second projection area;
converting the second position to a third position of the second projection content in the first projection area based on the positional relationship;
and comparing the first position with the third position to judge whether the first projection content and the second projection content are overlapped or not, so as to obtain a first analysis result for indicating whether the first projection content and the second projection content are overlapped or not.
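If the positional relationship reduces to a translation between the two areas on the projection surface, the position conversion and comparison can be sketched as follows (illustrative Python; the coordinate model and the tolerance are assumptions):

```python
def to_first_area_coords(second_pos, second_origin, first_origin):
    """Convert a point expressed in the second projection area's coordinates
    into the first area's coordinates, assuming the positional relationship
    is a pure translation between the two areas on the projection surface."""
    sx, sy = second_pos
    ox2, oy2 = second_origin   # second area's origin on the projection surface
    ox1, oy1 = first_origin    # first area's origin on the projection surface
    return (sx + ox2 - ox1, sy + oy2 - oy1)

def overlapped(first_pos, third_pos, tolerance=5):
    """Compare the first position with the converted third position to
    judge whether the two projection contents overlap."""
    fx, fy = first_pos
    tx, ty = third_pos
    return abs(fx - tx) <= tolerance and abs(fy - ty) <= tolerance
```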
Preferably, generating a first instruction according to the first analysis result, and executing the first instruction in the first electronic device includes:
when the first analysis result shows that the first projection content and the second projection content are overlapped, generating a typesetting instruction;
executing the typesetting instruction, and typesetting the first projection content and the second projection content based on a preset typesetting mode;
and projecting the first projection content and the second projection content after typesetting into the first projection area through the first projection unit.
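One possible "preset typesetting mode" is to place the colliding contents side by side, falling back to vertical stacking when the row does not fit. The sketch below is an assumption for illustration; the patent leaves the typesetting mode open:

```python
def typeset(first_rect, second_rect, area_w, gap=10):
    """Re-layout two overlapping content rectangles (x, y, w, h) inside the
    first projection area of width area_w: side by side if possible,
    otherwise stacked vertically."""
    fx, fy, fw, fh = first_rect
    _, _, sw, sh = second_rect
    # place the second content to the right of the first, same top edge
    second_new = (fx + fw + gap, fy, sw, sh)
    if second_new[0] + sw > area_w:
        # fall back to stacking vertically when the row would overflow
        second_new = (fx, fy + fh + gap, sw, sh)
    return first_rect, second_new
```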
Preferably, the first projection content comprises first text content and first option content, the first option content comprises at least one operation on the first text content, and the second projection content is a first graph;
generating a first instruction according to the first analysis result, and executing the first instruction in the first electronic device, wherein the first instruction comprises:
when the first analysis result shows that the first projection content and the second projection content are overlapped, obtaining the operation selected by the first graph according to the third position;
based on the selected operation, a first instruction is generated, and the first instruction is executed on the first projection content.
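Selecting an operation by where the first graph lands can be sketched with a hypothetical list of option rectangles (names and layout are illustrative only):

```python
def select_operation(options, third_position):
    """options: list of (operation_name, (x, y, w, h)) pairs laid out in the
    first projection area. Return the operation whose region contains the
    first graph's converted third position, or None if it hits no option."""
    x, y = third_position
    for name, (rx, ry, rw, rh) in options:
        if rx <= x < rx + rw and ry <= y < ry + rh:
            return name
    return None
```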
Preferably, the first projection content is a first image, the first image is a half image of a third image, and the second projection content is a second image;
generating a first instruction according to the first analysis result, and executing the first instruction in the first electronic device, wherein the first instruction comprises:
when the first analysis result shows that the first image and the second image are overlapped or separated, generating a projection adjusting instruction, wherein the projection adjusting instruction is at least used for adjusting display parameters of an image projected by any electronic equipment;
adjusting the image projected by any electronic equipment according to the display parameters so that the adjusted first image and the adjusted second image are spliced into a fourth image on a projection surface;
and matching the fourth image and the third image, wherein when the fourth image and the third image are matched, the first image and the second image are spliced into the third image.
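Leaving the display-parameter adjustment aside, the final splice-and-match step can be sketched with images modeled as lists of pixel rows (a simplification; the patent does not specify an image representation):

```python
def adjust_and_splice(first_half, second_half, target):
    """first_half/second_half: left and right halves as lists of pixel rows,
    assumed already aligned by the projection adjustment. Returns the
    spliced fourth image and whether it matches the third image (target)."""
    fourth = [left + right for left, right in zip(first_half, second_half)]
    return fourth, fourth == target
```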
Preferably, the third image is a frame image in a video played by the first electronic device, and the method further includes:
when the first image and the second image are spliced into the third image, acquiring a video corresponding to the third image in the first electronic equipment;
selecting a playing application corresponding to the video from the first electronic equipment, and indicating the first electronic equipment to call the playing application;
and loading the video into the playing application, and displaying the played video in the first projection area.
Preferably, the light projected by the first projection unit is structured light, and the method further comprises:
acquiring depth information of an operating body by detecting a first deformation amount of the structured light;
when the depth information of the operation body is the same as the depth information of the projection surface, detecting a second deformation and a third deformation of the structured light, wherein the depth information of the projection surface is acquired by a first acquisition unit in the first electronic device;
determining a trigger operation of the operation body through the second deformation quantity and the third deformation quantity, and generating a second instruction corresponding to the trigger operation;
when the position of the trigger operation is located in an overlapping area of the first projection area and the second projection area, sending an execution prohibition instruction to the second electronic device through the data transmission channel, and instructing the second electronic device to suspend executing the second instruction;
and when the position of the trigger operation is located in a region where the first projection region is not overlapped with the second projection region, executing the second instruction by the first electronic equipment.
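The dispatch rule for a trigger detected via the structured-light deformation can be sketched as follows (illustrative Python; rectangles as (x, y, w, h), instruction labels hypothetical):

```python
def point_in_rect(p, rect):
    x, y = p
    rx, ry, rw, rh = rect
    return rx <= x < rx + rw and ry <= y < ry + rh

def dispatch_trigger(pos, first_area, second_area):
    """Inside the overlap of both projection areas, the second device is
    told to suspend execution (the 'execution prohibition instruction');
    elsewhere in the first area, the first device executes the instruction."""
    if point_in_rect(pos, first_area) and point_in_rect(pos, second_area):
        return "send_prohibit_to_second"
    return "execute_on_first"
```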
Preferably, the first image includes a first key for securely authenticating the device, the second image includes a second key for securely authenticating the device, and a third key matching the first key is included in the other half of the third image;
when the fourth image and the third image are matched, it is indicated that the second key is the same as the third key, the first electronic device and the second electronic device are security devices, and the security devices are used for indicating that the first electronic device and the second electronic device are devices authenticated by keys.
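The key comparison at the heart of this authentication can be sketched in Python; using a constant-time comparison (here the standard library's `hmac.compare_digest`) is a design choice added for illustration, since comparing key bytes with `==` could leak information through timing:

```python
import hmac

def devices_authenticated(second_key: bytes, third_key: bytes) -> bool:
    """Once the fourth (spliced) image matches the third image, the second
    key carried in the second image must equal the third key embedded in
    the other half of the third image for both devices to count as
    key-authenticated security devices."""
    return hmac.compare_digest(second_key, third_key)
```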
The embodiment of the present invention further provides a data processing apparatus, configured to process data between a first electronic device and a second electronic device, where the first electronic device includes a first projection unit, the first electronic device projects a first projection area on a projection surface through the first projection unit, and projects first projection content in the first projection area through the first projection unit, the second electronic device includes a second projection unit, the second electronic device projects a second projection area on the projection surface through the second projection unit, and projects second projection content in the second projection area through the second projection unit, and a data transmission channel is established between the first electronic device and the second electronic device;
the data processing device is applied to the first electronic equipment and comprises:
a first acquisition unit configured to acquire a positional relationship between the first projection area and the second projection area;
a second obtaining unit, configured to obtain the second projection content from the second electronic device through the data transmission channel;
an analysis unit, configured to analyze the first projection content and the second projection content based on the position relationship to obtain a first analysis result, where the first analysis result is used to represent a relationship between the first projection content and the second projection content;
a generating unit, configured to generate a first instruction according to the first analysis result, where the first instruction is used to instruct the first electronic device to process the first projection content and the second projection content based on the position relationship;
an execution unit to execute the first instruction in the first electronic device.
Preferably, the first projection content is data generated by a first object at a first time, the second projection content is data generated by the first object at a second time, and the first time is earlier than the second time;
the analysis unit analyzes the first projection content and the second projection content based on the position relationship to obtain a first analysis result, and the analysis unit includes:
and when the position relation indicates that the second projection area is located in the first projection area, matching the first projection content with the second projection content, and when the first projection content is not matched with the second projection content, obtaining a first analysis result indicating that the first projection content is different from the second projection content.
Preferably, the generating unit generates a first instruction according to the first analysis result, including: generating an updating instruction according to the first analysis result;
the execution unit executes the first instruction in the first electronic device, including: and executing an updating instruction, updating the first projection content into the second projection content, deleting the first projection content projected to the first projection area, and projecting the second projection content to the first projection area through the first projection unit.
Preferably, the first projection content is a first image, and the second projection content is a second image;
the analysis unit analyzes the first projection content and the second projection content based on the position relationship to obtain a first analysis result, and the analysis unit includes:
judging whether the display of the first object in the first image and the second object in the second image accords with a preset display relation or not based on the position relation; and if not, obtaining a first analysis result indicating that the first object and the second object do not accord with a preset display relationship.
Preferably, the generating unit generates a first instruction according to the first analysis result, including: generating a first instruction for changing the display direction of the first object according to the first analysis result;
the execution unit executes the first instruction in the first electronic device, including: executing the first instruction to alter a display orientation of the first object in the first projection area.
Preferably, the apparatus further comprises: a third acquisition unit configured to acquire the first object and the second object;
and the adjusting unit is used for adjusting the posture of the first object according to a preset object association relationship under the condition that the first object and the second object are judged to have the association relationship according to the preset object association relationship.
Preferably, when the positional relationship indicates that the first projection region and the second projection region overlap, the analysis unit includes: a fetch subunit, a convert subunit, and a compare subunit, wherein,
an obtaining subunit, configured to obtain a first position of the first projection content in the first projection area and a second position of the second projection content in the second projection area;
a conversion subunit, configured to convert, based on the positional relationship, the second position into a third position of the second projection content in the first projection area;
a comparison subunit, configured to compare the first position and the third position to determine whether the first projection content and the second projection content are overlapped, so as to obtain the first analysis result indicating whether the first projection content and the second projection content are overlapped.
Preferably, the generating unit generates a first instruction according to the first analysis result, including: when the first analysis result shows that the first projection content and the second projection content overlap, generating a typesetting instruction;
the execution unit executes the first instruction in the first electronic device, including: executing the typesetting instruction, typesetting the first projection content and the second projection content based on a preset typesetting mode, and projecting the typesetted first projection content and second projection content into the first projection area through the first projection unit.
Preferably, the first projection content comprises first text content and first option content, the first option content comprises at least one operation on the first text content, and the second projection content is a first graph;
the generating unit generates a first instruction according to the first analysis result, and the generating unit comprises: and when the first analysis result shows that the first projection content and the second projection content are overlapped, obtaining the operation selected by the first graph according to the third position, and generating a first instruction based on the selected operation.
Preferably, the first projection content is a first image, the first image is a half image of a third image, and the second projection content is a second image;
the generating unit generates a first instruction according to the first analysis result, and the generating unit comprises: when the first analysis result shows that the first image and the second image are overlapped or separated, generating a projection adjusting instruction, wherein the projection adjusting instruction is at least used for adjusting display parameters of an image projected by any electronic equipment;
the execution unit executes the first instruction in the first electronic device, including: adjusting the image projected by either electronic device according to the display parameters, matching a fourth image, formed by splicing the adjusted first image and the adjusted second image on the projection surface, against the third image, and, when the fourth image matches the third image, indicating that the first image and the second image have been spliced into the third image.
Preferably, the third image is a frame of image in a video played by the first electronic device, and the apparatus further includes:
a fourth obtaining unit, configured to obtain, when the first image and the second image are spliced into the third image, a video corresponding to the third image in the first electronic device;
the selecting unit is used for selecting a playing application corresponding to the video from the first electronic equipment and indicating the first electronic equipment to call the playing application;
a loading unit, configured to load the video into the playback application;
and the display unit is used for displaying the played video in the first projection area.
Preferably, the light projected by the first projection unit is structured light, and the apparatus further includes:
a fifth acquisition unit configured to acquire depth information of the operation body by detecting the first deformation amount of the structured light;
a detection unit configured to detect a second deformation amount and a third deformation amount of the structured light when depth information of the operation body is the same as depth information of the projection plane, the depth information of the projection plane being acquired by a first acquisition unit in the first electronic device;
an instruction generating unit configured to determine a trigger operation of the operation body from the second deformation amount and the third deformation amount, and to generate a second instruction corresponding to the trigger operation;
a sixth obtaining unit, configured to obtain a position of the trigger operation, and when the position of the trigger operation is located in a region where the first projection region is not overlapped with the second projection region, trigger the execution unit to execute the second instruction;
and the indicating unit is used for sending an execution prohibition instruction to the second electronic device through the data transmission channel when the position of the trigger operation is located in the overlapping area of the first projection area and the second projection area, and indicating the second electronic device to suspend executing the second instruction.
Preferably, the first image includes a first key for securely authenticating the device, the second image includes a second key for securely authenticating the device, and a third key matching the first key is included in the other half of the third image;
when the fourth image and the third image are matched, it is indicated that the second key is the same as the third key, the first electronic device and the second electronic device are security devices, and the security devices are used for indicating that the first electronic device and the second electronic device are devices authenticated by keys.
An embodiment of the present invention further provides an electronic device, including a first projection unit, where the electronic device projects a first projection area on a projection surface through the first projection unit, and projects first projection content in the first projection area through the first projection unit, and the electronic device further includes: the data processing device is connected with the first projection unit.
As can be seen from the data processing method provided in the foregoing embodiment of the present invention, the first electronic device may obtain a position relationship between a first projection area of the first electronic device and a second projection area projected by the second electronic device, and obtain the second projection content from the second electronic device through a data transmission channel established between the first electronic device and the second electronic device. Then, the first electronic device analyzes the first projection content and the second projection content in the first projection area based on the position relationship to obtain a first analysis result indicating the relationship between the two projection contents, generates a first instruction according to the first analysis result, and executes the first instruction by the first electronic device. The first instruction is used for instructing the first electronic device to process the first projection content and the second projection content based on the position relation, so that the first electronic device can share the projection content with the second electronic device through the data transmission channel, and the first instruction is executed to process the self projection content and the second projection content projected by the second electronic device.
Drawings
In order to more clearly illustrate the technical solutions in the embodiments of the present invention, the drawings needed to be used in the embodiments or the prior art descriptions will be briefly described below, and it is obvious that the drawings in the following description are only embodiments of the present invention, and for those skilled in the art, other drawings can be obtained according to the provided drawings without creative efforts.
Fig. 1 is a schematic diagram of a system to which a data processing method according to an embodiment of the present invention is applied;
FIG. 2 is a first flowchart of a data processing method according to an embodiment of the present invention;
FIG. 3 is a second flowchart of a data processing method according to an embodiment of the present invention;
FIG. 4 is a diagram illustrating a first application provided by an embodiment of the present invention;
FIG. 5 is a third flowchart of a data processing method according to an embodiment of the present invention;
FIG. 6 is a diagram illustrating a second application provided by an embodiment of the present invention;
FIG. 7 is a schematic diagram of the first object in FIG. 6 after the pose is changed;
FIG. 8 is a fourth flowchart of a data processing method according to an embodiment of the present invention;
FIG. 9 is a schematic diagram of a third application provided by an embodiment of the present invention;
FIG. 10 is a fifth flowchart of a data processing method according to an embodiment of the present invention;
FIG. 11 is a schematic diagram of a fourth application provided by an embodiment of the present invention;
fig. 12 is a sixth flowchart of a data processing method according to an embodiment of the present invention;
FIG. 13 is a schematic diagram of a fifth application provided by an embodiment of the present invention;
fig. 14 is a seventh flowchart of a data processing method according to an embodiment of the present invention;
fig. 15 is an eighth flowchart of a data processing method according to an embodiment of the present invention;
FIG. 16 is a diagram illustrating a first structure of a data processing apparatus according to an embodiment of the present invention;
FIG. 17 is a diagram illustrating a second structure of a data processing apparatus according to an embodiment of the present invention;
FIG. 18 is a schematic diagram of an analysis unit in the data processing apparatus according to the embodiment of the present invention;
FIG. 19 is a diagram illustrating a third exemplary configuration of a data processing apparatus according to an embodiment of the present invention;
fig. 20 is a schematic diagram of a fourth structure of the data processing apparatus according to the embodiment of the present invention.
Detailed Description
The data processing method provided by the embodiment of the present invention is applicable to the system shown in fig. 1 and is used to process data between a first electronic device 100 and a second electronic device 200 in the system. The first electronic device 100 includes a first projection unit 101 (not shown in the figure), projects a first projection area 102 on a projection surface through the first projection unit 101, and projects first projection content in the first electronic device 100 into the first projection area through the first projection unit 101. Likewise, the second electronic device 200 includes a second projection unit 201 (not shown), projects a second projection area 202 on the projection surface through the second projection unit 201, and projects second projection content in the second electronic device 200 into the second projection area through the second projection unit 201. In the system shown in fig. 1, the first electronic device 100, as the master device controlling the first projection content and the second projection content, may execute the data processing method provided in the embodiment of the present invention to process the first projection content and the second projection content.
Of course, in actual operation, the second electronic device 200 may also serve as the master device that controls the first projection content and the second projection content, and it may likewise execute the data processing method provided by the embodiment of the present invention. In the following, however, the data processing method provided in the embodiment of the present invention is described with the first electronic device 100 as the execution subject.
The technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the drawings in the embodiments of the present invention, and it is obvious that the described embodiments are only a part of the embodiments of the present invention, and not all of the embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
Referring to fig. 2, a data processing method applied to a first electronic device according to an embodiment of the present invention is shown, which includes the following steps:
201: and acquiring the position relation of the first projection area and the second projection area.
The positional relationship is the relationship formed by the first projection region and the second projection region on the projection surface, and includes: the second projection region is located entirely within the first projection region (complete overlap); the second projection region and the first projection region partially overlap; the second projection region is not in the first projection region but one edge of the second projection region coincides with one edge of the first projection region; and the first projection region and the second projection region are separated (the two projection regions do not overlap at all).
In some examples of the embodiment of the present invention, the first electronic device further includes a first acquisition unit, the projection image formed by the first projection area and the second projection area on the projection surface is captured by the first acquisition unit, and the first electronic device may obtain the positional relationship between the first projection area and the second projection area by analyzing the projection image. However, the acquisition field of view of the first acquisition unit is limited: when the distance between the first electronic device and the projection surface is small relative to the size of the first projection area formed on it, the first acquisition unit may capture only a partial region of the first projection area, so that the first electronic device cannot derive the positional relationship between the first projection area and the second projection area from the projection image.
In other examples of the embodiment of the present invention, the first electronic device may determine the positional relationship of the first projection area and the second projection area by judging whether the distance between the center points of the two areas is less than half the sum of their lengths.
For example, let d be the distance between the center points of the first projection area and the second projection area, a the length of the first projection area, and b the length of the second projection area. If d is less than (a + b)/2, the first electronic device may determine that the first projection area and the second projection area overlap; if d is equal to (a + b)/2, the first electronic device may determine that the second projection region is not in the first projection region and one edge of the second projection region coincides with one edge of the first projection region; if d is greater than (a + b)/2, the first electronic device may determine that the first projection region and the second projection region are separated; and if, in addition to d being less than (a + b)/2, d is not greater than (a − b)/2 (taking a as the larger of the two lengths), the first electronic device may determine that the second projection region is located entirely within the first projection region.
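Treating the two projection areas as intervals along one axis, the comparisons above can be sketched as follows. This is an illustrative sketch, not the patent's implementation: the function name and return labels are invented, and the containment test uses the interval condition d ≤ (a − b)/2.

```python
def classify_regions(d, a, b):
    """Classify the positional relationship of two projection regions
    from the distance d between their center points and their lengths
    a (first region) and b (second region, with b <= a)."""
    if d > (a + b) / 2:
        return "separated"
    if d == (a + b) / 2:
        return "edges coincide"          # adjacent, no overlap
    if d <= (a - b) / 2:                 # interval condition for full containment
        return "second inside first"
    return "partial overlap"


for d in (1.0, 2.5, 3.0, 5.0):
    print(d, classify_regions(d, a=4.0, b=2.0))
```

With a = 4 and b = 2, the threshold (a + b)/2 is 3, so distances 1.0, 2.5, 3.0 and 5.0 fall into the four cases in turn.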
The center points of the first projection area and the second projection area are the points formed on the projection surface by the axial ray among the rays projected by the respective projection units; the axial ray is the ray emitted from the center point of the projection unit, and its direction does not change as it leaves the unit. The distance between the two center points is related to the distance between the axes of the first projection unit and the second projection unit (the axle distance for short). When the first projection unit and the second projection unit project perpendicularly onto the projection surface, the distance between the two center points is equal to the axle distance; when the axes of the two projection units rotate up and down about the horizontal axis, the distance between the two center points is still equal to the axle distance; and when the axes of the two projection units rotate left and right about the vertical axis, the distance between the two center points is related to the axle distance, the rotation direction and the rotation angle.
For example, let the axle distance of the first projection unit and the second projection unit be m. If the first projection unit rotates to the right about the vertical axis and the second projection unit rotates to the left about the vertical axis, the distance between the center points is d = m − p·cos a − q·cos b, where p is the depth information of the first projection unit (i.e., its distance to the projection surface), a is the rotation angle of the first projection unit, q is the depth information of the second projection unit, and b is the rotation angle of the second projection unit. If the first projection unit rotates to the left and the second projection unit rotates to the right, the distance between the center points is d = m + p·cos a + q·cos b.
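As a numerical illustration, the formula can be evaluated as stated in the text (a sketch: the function name and the degree-based angle convention are assumptions; the cosine terms are used exactly as given above):

```python
import math


def center_point_distance(m, p, a_deg, q, b_deg, toward_each_other=True):
    """Distance between the two projected center points when both
    projection units rotate about the vertical axis.

    m              -- axle distance between the two projection units
    p, q           -- depth information (distance to the projection surface)
    a_deg, b_deg   -- rotation angles of the first and second unit (degrees)
    toward_each_other -- True for the 'first right, second left' case
                         (d = m - p*cos a - q*cos b); False for the
                         opposite rotations (d = m + p*cos a + q*cos b).
    """
    a, b = math.radians(a_deg), math.radians(b_deg)
    offset = p * math.cos(a) + q * math.cos(b)
    return m - offset if toward_each_other else m + offset


print(round(center_point_distance(10.0, 2.0, 60.0, 2.0, 60.0), 6))         # 8.0
print(round(center_point_distance(10.0, 2.0, 60.0, 2.0, 60.0, False), 6))  # 12.0
```

With cos 60° = 0.5, rotating toward each other shortens the distance from the axle distance of 10 to 8, and rotating apart lengthens it to 12.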
The length of the first projection area and the length of the second projection area can be determined from the projection parameters of the respective projection units, and the first electronic device can acquire the parameters of the second projection area, including its length, from the second electronic device through the data transmission channel.
202: and acquiring second projection content from the second electronic device through the data transmission channel. The second projection content is the content projected onto the projection surface by the second projection unit; it may be a piece of text, data, or an image, as determined by the user of the second electronic device according to the application scenario.
203: and analyzing the first projection content and the second projection content based on the position relation to obtain a first analysis result, wherein the first analysis result is used for representing the relation between the first projection content and the second projection content.
204: and generating a first instruction according to the first analysis result, and executing the first instruction in the first electronic device, wherein the first instruction is used for instructing the first electronic device to process the first projection content and the second projection content based on the position relation.
In the embodiment of the invention, after the second projection content is acquired, it is analyzed together with the first projection content in combination with the positional relationship of the two projection areas, thereby obtaining a first instruction for processing the two projection contents, which is executed by the first electronic device. Thus, after the first electronic device shares projection content with the second electronic device through the data transmission channel, the projection content can be further processed, extending the electronic device's ability to process projection content.
Referring to fig. 3, a second flowchart of the data processing method according to the embodiment of the present invention is shown. The flowchart illustrates how, when the first electronic device and the second electronic device store data generated by the same object at different times, the data processing method according to the embodiment of the present invention updates the latest data to the electronic device that does not yet store it. The process is as follows:
301: and acquiring the position relation of the first projection area and the second projection area. Step 301 is the same as step 201 and is not described again.
302: and acquiring second projection content from the second electronic equipment through the data transmission channel.
In this embodiment of the present invention, the second projection content is data generated by a first object at a second time, and the first projection content is data generated by the first object at a first time; that is, the first projection content and the second projection content are data generated by the same object at different times. Because the first time is earlier than the second time, the second projection content stored by the second electronic device is an updated version of the first projection content stored by the first electronic device. The first electronic device therefore needs to analyze the two projection contents to determine whether to update the first projection content to the second projection content.
303: and when the position relation indicates that the second projection area is positioned in the first projection area, matching the first projection content with the second projection content to determine whether the first projection content and the second projection content are the same.
In step 201, when the second projection area is located in the first projection area, it indicates that the second projection content is projected into the first projection area, and at this time, the first electronic device may further analyze whether the first projection content and the second projection content are the same.
In some examples of the embodiment of the present invention, when the first projection content and the second projection content are texts, the first electronic device may obtain key information in the first projection content and the second projection content through semantic analysis, and compare the key information to determine whether the first projection content and the second projection content are the same.
For example, suppose the first projection content is: "As of January 20, 2014, the sales volume of product A is 3 million", and the second projection content is: "The sales volume of product A in January 2014 is 5 million". The first electronic device obtains the key information of the first projection content through semantic analysis: product A - 2014-1-20 - sales 3 million; the key information of the second projection content is: product A - 2014-1 - sales 5 million. From these two pieces of key information it can be seen that the second projection content is the sales statistics of product A for the whole of January 2014, while the first projection content is the sales of product A up to a particular day; the two are clearly different, and the data of the second projection content is more up to date than that of the first.
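The key-information comparison in the example above can be sketched with a toy extractor. This is only an illustration: the regular expression, the field names, and the normalized sentence forms are assumptions, not the patent's actual semantic-analysis method.

```python
import re

# Reduce a sentence of the form "Product X <period> sales <amount>"
# to its key fields. Pattern and fields are illustrative assumptions.
PATTERN = re.compile(
    r"(?P<product>product \w+).*?"
    r"(?P<period>\d{4}-\d{1,2}(?:-\d{1,2})?).*?"
    r"sales\D*(?P<amount>\d+)",
    re.IGNORECASE,
)


def key_info(text):
    m = PATTERN.search(text)
    return m.groupdict() if m else None


first = key_info("Product A 2014-1-20 sales 3000000")
second = key_info("Product A 2014-1 sales 5000000")
print(first == second)  # False: the period and amount differ
```

Comparing the extracted dictionaries rather than the raw sentences is what lets the device decide "same content or not" even when the surrounding wording differs.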
In other examples of the embodiments of the present invention, the first projection content and the second projection content may also be images, and the first electronic device may determine whether the first projection content and the second projection content are the same through image comparison.
In other examples of embodiments of the present invention, the first projected content and the second projected content may be audio, and the first electronic device may determine whether the first projected content and the second projected content are the same through audio spectrum analysis.
304: and when the first projection content is not matched with the second projection content, obtaining a first analysis result indicating that the first projection content is different from the second projection content. The first analysis result may also indicate that the first electronic device updates the first projected content to the second projected content.
If the first electronic device determines that the first projection content and the second projection content are the same, the first electronic device does not update the first projection content and continues to project the first projection content into the first projection area.
305: and generating an updating instruction according to the first analysis result.
306: and executing an updating instruction to update the first projection content into the second projection content.
307: and deleting the first projection content projected to the first projection area, and projecting the second projection content to the first projection area through the first projection unit.
For example, in a meeting, the first electronic device of the presenter stores data such as a data report, which is projected as the first projection content onto the projection surface through the first projection unit; the area projected by the first projection unit on the projection surface is the first projection area, from which the other participants can view the first projection content. If one of the other participants has information that is more up to date than the information shown in the first projection content, that participant may use his or her own second electronic device to project the stored updated information as the second projection content onto the projection surface; the area projected by the second projection unit on the projection surface is the second projection area, as shown in fig. 4.
Wherein the first projected content in the first electronic device is projected into a first projection area, the second projected content in the second electronic device is projected into a second projection area, and it can be seen from the figure that the second projection area is located in the first projection area. At this time, the first electronic device can determine that the second projection area is located in the first projection area through distance analysis.
The first projection content and the second projection content both record data as pie charts. A pie chart is an image, so the first electronic device can determine whether the first projection content and the second projection content are the same by image comparison. By comparing the dividing lines on the pie charts, the first electronic device obtains a first analysis result indicating that the first projection content and the second projection content are different, and generates an update instruction according to this result. The first electronic device executes the update instruction, updates the first projection content to the second projection content, deletes the first projection content projected into the first projection area, and projects the second projection content into the first projection area through the first projection unit.
It can be seen from the foregoing technical solutions that, in the data processing method provided in the embodiment of the present invention, the first projection area serves as a shared data area whose projection content can be analyzed; when the first projection content and the second projection content in the shared data area differ, the first electronic device can update the first projection content it stores to the second projection content shared in the shared data area.
Referring to fig. 5, a third flowchart of a data processing method according to an embodiment of the present invention is shown, which illustrates how, when the first electronic device and the second electronic device each project an image as projection content, the image projected by the first electronic device is automatically changed. The specific process is as follows:
501: and acquiring the position relation of the first projection area and the second projection area. Step 501 is the same as step 201 and is not described again.
502: and acquiring second projection content from the second electronic equipment through the data transmission channel.
In the embodiment of the present invention, the second projection content is a second image, and the first projection content is a first image, that is, the first electronic device and the second electronic device respectively project an image as the projection content.
503: and judging whether the display of the first object in the first image and the second object in the second image accords with a preset display relation or not based on the position relation. If not, go to step 504; if so, step 507 is performed.
The preset display relationship is the display relationship that the first object and the second object would have in a real environment. If the first object is a person and the second object is a sofa, the preset display relationship between the first object and the second object is: the person's back faces the sofa. For another example, if the first object is a puppy and the second object is a bone, the preset display relationship between the first object and the second object is: the mouth of the puppy faces the bone.
504: and obtaining a first analysis result which shows that the first object and the second object do not accord with the preset display relation. When the first object and the second object do not conform to the preset display relationship, the first electronic device needs to change the display direction of the first electronic device so that the first object and the second object conform to the preset display relationship.
505: and generating a first instruction for changing the display direction of the first object according to the first analysis result.
506: the display orientation of the first object in the first projection area is altered.
Taking the above puppy and bone as an example, when the mouth of the puppy faces away from the bone, the first electronic device generates a first instruction for changing the display direction of the puppy and executes it to change the display direction of the puppy in the first projection area so that the mouth of the puppy faces the bone. As shown in fig. 6, the first electronic device changes the display direction of the puppy so that its mouth faces the bone instead of facing away from it.
507: a first object and a second object are acquired.
508: judging whether the first object and the second object have an association relationship according to the preset object association relationship, if so, executing step 509; if not, step 510 is performed.
509: and adjusting the posture of the first object according to the association relation.
510: and finishing the operation.
The preset object association relationship is the interactive relationship between two objects specified in an interactive game stored in the first electronic device. After acquiring the first object and the second object, the first electronic device compares them with the objects specified in the interactive game to determine whether they are the two specified objects. If so, step 509 is executed to adjust the posture of the first object according to the association relationship; if not, the first electronic device ends the operation.
As shown in fig. 6, two objects are present: a puppy and a bone. When the first electronic device finds the interactive game in which the puppy bites the bone, it adjusts the posture of the puppy from the sitting posture shown in fig. 6 to a jumping posture, and displays the image of the puppy biting the bone shown in fig. 7.
It can be seen from the foregoing technical solutions that the data processing method provided in the embodiment of the present invention can identify the objects in the images projected by the first electronic device and the second electronic device and adjust the posture of the first object in the first image projected by the first electronic device, thereby implementing interaction between the first object and the second object in a virtual-reality interactive game. In other words, the data processing method provided in the embodiment of the present invention makes it possible to build a virtual-reality interactive game on top of projection technology.
Referring to fig. 8, a fourth flowchart of a data processing method according to an embodiment of the present invention is shown, which illustrates how to supplement the first projection content projected by the first electronic device by using a projection method, and the specific process is as follows:
801: and acquiring the position relation of the first projection area and the second projection area. Step 801 is the same as step 201 and is not described again.
802: and acquiring second projection content from the second electronic equipment through the data transmission channel.
803: when the position relationship indicates that the first projection area and the second projection area overlap, a first position of the first projection content in the first projection area and a second position of the second projection content in the second projection area are acquired.
It can be understood that projection parameters are set in the first electronic device and the second electronic device, and these parameters indicate how each electronic device displays its projection content in its projection area. The first electronic device can therefore determine the first position of the first projection content in the first projection area from its own projection parameters. Likewise, the second electronic device can determine the second position of the second projection content in the second projection area from its own projection parameters, and the second position is sent to the first electronic device through the data transmission channel.
804: the second position is converted into a third position of the second projection content in the first projection area based on the positional relationship.
In some examples of the embodiment of the present invention, the positional relationship may be determined by the first electronic device by analyzing the image captured by the first acquisition unit. In this case, the first electronic device may take the pixel coordinates of the second projection content in the captured image as the third position, i.e., the second position expressed in the coordinates of the first projection area.
In other examples of the embodiment of the present invention, the positional relationship is obtained by comparing the center points formed on the projection surface by the axes of the first electronic device and the second electronic device with the lengths of the two projection areas. In this case, the first electronic device acquires the center point of the second projection area and, by expanding outward from that center point by the known size of the area, obtains the position of the second projection area within the first projection area. Since the second position of the second projection content within the second projection area is known, the second position can then be converted into the third position via the position of the second projection area within the first projection area.
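Once the position of the second projection area inside the first is known, the conversion of step 804 reduces to a coordinate-origin shift. A minimal sketch (the tuple layout and names are assumptions):

```python
def to_third_position(second_position, second_area_origin):
    """Convert the second position (coordinates of the second projection
    content inside the second projection area) into the third position
    (the same content's coordinates inside the first projection area).

    second_area_origin is the top-left corner of the second projection
    area expressed in the first projection area's coordinate system,
    e.g. derived from its center point and length as described above.
    """
    (sx, sy) = second_position
    (ox, oy) = second_area_origin
    return (ox + sx, oy + sy)


print(to_third_position((2, 3), (10, 20)))  # (12, 23)
```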
805: and comparing the first position with the third position to judge whether the first projection content and the second projection content are overlapped or not, so as to obtain a first analysis result for judging whether the first projection content and the second projection content are overlapped or not.
The first position indicates the display position of the first projection content in the first projection area, and the third position indicates the display position of the second projection content in the first projection area. When the first position and the third position overlap, the first projection content and the second projection content also overlap, so that the overlapping contents cannot be clearly distinguished; the first electronic device therefore needs to adjust the display of the first projection content and the second projection content so that they do not overlap.
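Treating the first position and the third position as rectangles in the first projection area's coordinate system, the overlap judgment of step 805 is an axis-aligned intersection test. A sketch; the (x, y, w, h) layout is an assumption:

```python
def positions_overlap(first_pos, third_pos):
    # Each position is (x, y, w, h): top-left corner, width, height.
    x1, y1, w1, h1 = first_pos
    x2, y2, w2, h2 = third_pos
    # The rectangles overlap unless one lies entirely to the side of,
    # above, or below the other.
    return x1 < x2 + w2 and x2 < x1 + w1 and y1 < y2 + h2 and y2 < y1 + h1


print(positions_overlap((0, 0, 4, 4), (2, 2, 4, 4)))  # True
print(positions_overlap((0, 0, 2, 2), (3, 3, 1, 1)))  # False
```

Note that rectangles that merely touch along an edge are not counted as overlapping here, matching the idea that contents sharing only a boundary remain distinguishable.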
806: and generating a typesetting instruction when the first analysis result indicates that the first projection content and the second projection content overlap. The typesetting instruction instructs the first projection content and the second projection content to be rearranged in a preset typesetting mode, so that their overlapping parts are separated.
807: and executing the typesetting instruction, and typesetting the first projection content and the second projection content based on a preset typesetting mode.
The preset typesetting mode is different based on the different types of the first projection content and the second projection content. The preset typesetting mode includes but is not limited to the following modes:
when the first projection content is characters and the second projection content is characters or multimedia information, the first projection content is displayed on the upper half part of the first projection area, and the second projection content is displayed on the lower half part of the first projection area;
and when the first projection content and the second projection content are images and the first projection content and the second projection content are half images of a complete image, splicing the first projection content and the second projection content into the complete image.
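The first of the preset typesetting modes above (first content in the upper half of the first projection area, second content in the lower half) can be sketched as follows; the region layout and names are illustrative assumptions:

```python
def typeset_upper_lower(area):
    """Split the first projection area (x, y, w, h) into an upper half
    for the first projection content and a lower half for the second,
    as in the first preset typesetting mode."""
    x, y, w, h = area
    upper = (x, y, w, h // 2)
    lower = (x, y + h // 2, w, h - h // 2)
    return upper, lower


print(typeset_upper_lower((0, 0, 800, 600)))
# ((0, 0, 800, 300), (0, 300, 800, 300))
```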
808: and projecting the typeset first projection content and the typeset second projection content into a first projection area through a first projection unit.
The first electronic device may further store the second projection content, thereby copying content by projection. For example, when the first electronic device projects the pictures in an album, the second electronic device may project a picture stored in the second electronic device into the first projection area, and the first electronic device may add that picture to the album.
It should be noted that, when the first analysis result indicates that the first projection content and the second projection content do not overlap, the first electronic device displays the second projection content according to the position of the second projection area in the first projection area, as shown in fig. 9.
By applying the data processing method provided by the embodiment of the invention, the first electronic device can judge the positions of the first projection content and the second projection content, and when the first projection content and the second projection content are judged to be overlapped, the first projection content and the second projection content are automatically typeset. In addition, the first electronic device and the second electronic device can also share the projection content in a projection mode, and the second projection content of the second electronic device is shared to the first electronic device.
Of course, the first projection content in the first electronic device may also be copied by the second electronic device as shared content. Fig. 10 describes how the second electronic device obtains the first projection content from the first electronic device by projection. Here the first projection content includes first text content and first option content, the first option content includes at least one operation on the first text content, and the second projection content is a first graphic used to select one of the operations included in the first option content. The data processing method shown in fig. 10 may include the following steps:
1001: and acquiring the position relation of the first projection area and the second projection area. Step 1001 is the same as step 201 and is not described again.
1002: and acquiring second projection content from the second electronic equipment through the data transmission channel.
1003: when the position relationship indicates that the first projection area and the second projection area overlap, a first position of the first projection content in the first projection area and a second position of the second projection content in the second projection area are acquired.
1004: the second position is converted into a third position of the second projection content in the first projection area based on the positional relationship.
1005: and comparing the first position with the third position to judge whether the first projection content and the second projection content are overlapped or not, so as to obtain a first analysis result for judging whether the first projection content and the second projection content are overlapped or not.
Steps 1002 to 1005 are the same as steps 802 to 805 and are not described again.
1006: and when the first analysis result indicates that the first projection content and the second projection content overlap, obtaining the operation selected by the first graphic according to the third position. The operation selected by the first graphic is the operation that the first electronic device is to perform on the first projection content.
1007: and generating a first instruction based on the selected operation, and executing the first instruction on the first projection content to execute the selected operation of the second projection content on the first projection content.
As shown in fig. 11, the first electronic device projects a video as the first projection content into the first projection area, together with three operations A, B and C that can be performed on the first projection content; the second electronic device projects the second projection area, in which the second projection content is a rectangle, and operation C is selected. The first electronic device generates a first instruction corresponding to operation C and executes it to perform operation C on the first projection content.
For example, when operation C is a copy operation, selecting operation C with the first graphic indicates that the second electronic device wants to acquire the video file currently projected by the first electronic device, and the first electronic device sends the video file to the second electronic device through the data transmission channel to implement data sharing.
Referring to fig. 12, a sixth flowchart of a data processing method according to an embodiment of the present invention is shown, which illustrates automatic image adjustment when the first projection content and the second projection content are images. The method may include the following steps:
1201: acquiring the positional relationship between the first projection area and the second projection area.
1202: acquiring the second projection content from the second electronic device through the data transmission channel.
In this embodiment of the present invention, the first projection content is a first image that is one half of a third image, and the second projection content is a second image. The second image may be the other half of the third image or an entirely different image, as determined by the user of the second electronic device.
1203: analyzing the first image and the second image based on the positional relationship to obtain a first analysis result.
In this embodiment of the present invention, analyzing the first image and the second image mainly means analyzing their positions to determine whether the two images are spliced into one image. For example, when the first image is displayed to the left of the second image and the right edge of the first image coincides with the left edge of the second image, the two images are spliced into one image. If the right edge of the first image is far from the left edge of the second image, the two images are not spliced into one image; likewise, if a portion of the first image overlaps a portion of the second image, the images are not spliced into one image.
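The edge comparison described above reduces to a one-dimensional check, sketched here under the assumption that the first image sits to the left of the second; the tolerance parameter is an added assumption, not from the patent.

```python
def stitch_state(first_right_edge, second_left_edge, tol=0.0):
    """Classify how the first image (on the left) relates to the second:
    'stitched' when the edges coincide, 'separated' when there is a gap,
    and 'overlapping' when the images intersect."""
    gap = second_left_edge - first_right_edge
    if abs(gap) <= tol:
        return "stitched"
    return "separated" if gap > 0 else "overlapping"
```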
1204: when the first analysis result indicates that the first image and the second image overlap or are separated, generating a projection adjustment instruction.
When the first analysis result indicates that the first image and the second image overlap or are separated, the two images are not spliced into one image, so the first electronic device needs to generate a projection adjustment instruction, which is used at least to adjust the display parameters of the image projected by either electronic device.
For example, the projection adjustment instruction may adjust the display parameters of the first image projected by the first electronic device or those of the second image projected by the second electronic device; it may also adjust both sets of display parameters simultaneously.
1205: adjusting the image projected by either electronic device according to the display parameters, so that the adjusted first image and second image are spliced into a fourth image on the projection surface. Here, either electronic device is whichever device the projection adjustment instruction targets; by executing the instruction, the devices splice the first image and the second image into one image (the fourth image).
1206: matching the fourth image against the third image. When the two match, the first image and the second image have been spliced into the third image, that is, the second image is the other half of the third image.
Taking fig. 13 as an example, the first electronic device projects the first image in the first projection area, the second electronic device projects the second image in the second projection area, and the two images are separated. On this basis, the first electronic device generates a projection adjustment instruction for adjusting the first image and the second image, and sends it to the second electronic device through the data transmission channel. Each device executes the instruction to adjust its own projected image; the adjusted first image and second image are spliced into a fourth image, and matching shows that the fourth image matches the third image, so the second image is the other half of the third image.
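One plausible form of the projection adjustment is a simple translation of the second image so that its left edge meets the first image's right edge. The rectangle representation (x0, y0, x1, y1) and the translation-only model are assumptions for this sketch.

```python
def projection_adjustment(first_rect, second_rect):
    """Compute the (dx, dy) translation that moves the second image so
    its left edge coincides with the first image's right edge and the
    two top edges align, splicing them into one fourth image."""
    fx0, fy0, fx1, fy1 = first_rect
    sx0, sy0, sx1, sy1 = second_rect
    return (fx1 - sx0, fy0 - sy0)

def apply_adjustment(rect, delta):
    """Apply the computed translation to a rectangle."""
    dx, dy = delta
    x0, y0, x1, y1 = rect
    return (x0 + dx, y0 + dy, x1 + dx, y1 + dy)
```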
In some examples of this embodiment of the present invention, the first image contains a first key for device security authentication, the second image contains a second key for the same purpose, and the other half of the third image contains a third key that matches the first key. That is, when the first image and another image can be spliced into the third image, the electronic device projecting that other image is a secure device authenticated by the first electronic device, and the two devices may perform the operations described above, such as data sharing. Therefore, when the second image projected by the second electronic device and the first image projected by the first electronic device are spliced into the third image, the second key is the same as the third key, the second electronic device passes the key authentication of the first electronic device, and the second electronic device may perform operations such as data sharing with the first electronic device.
In other examples of the embodiment of the present invention, the third image is a frame image in a video played by the first electronic device, and when the first image and the second image are spliced into the third image, the first electronic device may further call the corresponding video to play, and a process of the third image may be shown in fig. 14, and based on fig. 12, the method may further include the following steps:
1207: and acquiring a video corresponding to the third image in the first electronic equipment. In the embodiment of the invention, the first electronic device can record the video name corresponding to each frame of image, and after the third image is spliced, the video corresponding to the third image can be found through the video name.
1208: and selecting a playing application corresponding to the video from the first electronic equipment, and indicating the first electronic equipment to call the playing application.
1209: and loading the video into a playing application, and displaying the played video in the first projection area.
Based on the above scheme, after the third image is spliced, the first electronic device can call up the corresponding video and start the playing application, so the video is located through image matching and played automatically. In this embodiment of the present invention, the video may be played from its beginning; of course, playback may instead continue from the position in the video where the third image is located.
In addition, when the first electronic device displays the playing video in the first projection area, it may further execute corresponding instructions by detecting operations on the first projection area. As shown in fig. 15, on the basis of fig. 14, the method may further include the following steps:
1210: acquiring depth information of the operation body by detecting a first deformation amount of the structured light projected by the first projection unit.
The first deformation amount is the deformation caused in the structured light when the operation body moves within the first projection area. The closer the operation body is to the first projection unit, the larger the first deformation amount; the farther away it is, the smaller the amount. The first electronic device therefore binds the first deformation amount to depth information, and acquires the depth information of the operation body by detecting the first deformation amount.
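The binding between deformation and depth can be illustrated with a toy model; the inverse-proportional form and the calibration constant k are assumptions (a real structured-light system would use a calibration table), not details from the patent.

```python
def depth_from_deformation(deformation, k=2.0):
    """Model depth as inversely proportional to the first deformation
    amount: the closer the operation body is to the first projection
    unit, the larger the deformation and the smaller the depth."""
    if deformation <= 0:
        raise ValueError("deformation must be positive")
    return k / deformation

def on_projection_surface(body_deformation, surface_depth, k=2.0, tol=1e-6):
    """True when the operation body's depth matches the projection
    surface's depth, i.e. the body is touching the surface."""
    return abs(depth_from_deformation(body_deformation, k) - surface_depth) <= tol
```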
1210: when the depth information of the operation body is the same as the depth information of the projection surface, detecting the second deformation amount and the third deformation amount of the structured light.
The depth information of the projection surface is acquired by a first acquisition unit in the first electronic device. When the depth information of the operation body is the same as that of the projection surface, the operation body is operating on the projection surface within the first projection area. The first electronic device then needs to determine what operation the operation body is performing, which can be judged from the second deformation amount and the third deformation amount.
1211: determining the trigger operation of the operation body from the second deformation amount and the third deformation amount, and generating a second instruction corresponding to the trigger operation.
In this embodiment of the present invention, if the second deformation amount shows a trend from large to small, the third deformation amount shows a trend from small to large, and the two amounts change by essentially the same distance, the first electronic device determines that the operation body has performed a click operation and generates a second instruction corresponding to the click operation.
The currently generated second instruction is related to the second instruction generated for the previous click operation: if the previously generated second instruction was a play instruction, the current one is a pause instruction; if the previously generated one was a pause instruction, the current one is a play instruction.
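The play/pause alternation above can be sketched as a tiny state machine; the class and method names are illustrative, not from the patent.

```python
class ClickInstructionGenerator:
    """Generates the second instruction for each click operation; the
    new instruction depends on the one generated for the previous
    click, alternating between play and pause."""

    def __init__(self):
        self._last = "pause"  # so the very first click produces "play"

    def next_instruction(self):
        self._last = "play" if self._last == "pause" else "pause"
        return self._last
```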
If the second deformation amount is zero and the third deformation amount is the deformation of a plurality of light beams in the structured light, the operation body is dragging a video file from the playlist to be played by the first electronic device, and the first electronic device generates a second instruction for retrieving the video file and adding it to the playing application.
In this embodiment of the present invention, the first projection area and the second projection area overlap, so before executing the second instruction the first electronic device must determine the position of the trigger operation. In particular, when the trigger operation is a click operation located in the overlapping area of the two projection areas, both the first electronic device and the second electronic device would execute the second instruction, causing it to be executed twice, so that the result produced would differ from the result the trigger operation intended. The operations performed by the first electronic device for the different trigger positions are described in steps 1212 and 1213.
1212: when the position of the trigger operation lies in the overlapping area of the first projection area and the second projection area, sending an execution prohibition instruction to the second electronic device through the data transmission channel, instructing the second electronic device to suspend execution of the second instruction.
1213: when the position of the trigger operation lies in the part of the first projection area that does not overlap the second projection area, executing the second instruction on the first electronic device.
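Steps 1212 and 1213 can be sketched together as one dispatch routine. The rectangle representation of the overlap region and the channel object's `send` method are assumptions for this sketch.

```python
def handle_trigger(position, overlap_region, data_channel):
    """Execute the second instruction on the first electronic device,
    first telling the second device to suspend execution when the
    trigger position lies in the overlapping area, so the instruction
    is not executed twice."""
    x, y = position
    x0, y0, x1, y1 = overlap_region
    if x0 <= x <= x1 and y0 <= y <= y1:
        data_channel.send({"type": "prohibit_execution"})
    return "execute_second_instruction"

class FakeChannel:
    """Stand-in for the data transmission channel, for demonstration."""
    def __init__(self):
        self.sent = []
    def send(self, msg):
        self.sent.append(msg)
```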
Corresponding to the above method embodiment, an embodiment of the present invention provides a data processing apparatus for processing data between a first electronic device and a second electronic device, where the first electronic device includes a first projection unit, the first electronic device projects a first projection area on a projection surface through the first projection unit and projects first projection content in the first projection area through the first projection unit, the second electronic device includes a second projection unit, the second electronic device projects a second projection area on the projection surface through the second projection unit and projects second projection content in the second projection area through the second projection unit, and a data transmission channel is established between the first electronic device and the second electronic device.
The data processing apparatus 900 is applied to a first electronic device, and the structure diagram thereof shown in fig. 16 includes: a first acquisition unit 901, a second acquisition unit 902, an analysis unit 903, a generation unit 904, and an execution unit 905, wherein,
a first obtaining unit 901, configured to obtain a position relationship between the first projection area and the second projection area.
Wherein the positional relationship is the relationship formed by the first projection area and the second projection area on the projection surface, and includes: the second projection area is located within the first projection area (complete overlap); the second projection area and the first projection area partially overlap; the second projection area is not inside the first projection area but one of its sides coincides with a side of the first projection area; and the first projection area and the second projection area are separated (no overlap at all).
In some examples of this embodiment of the present invention, the first electronic device further includes a first acquisition unit, which captures a projection image formed by the first projection area and the second projection area on the projection surface; the first obtaining unit 901 may then derive the positional relationship by analyzing the projection image. However, the acquisition field of view of the first acquisition unit is limited: when the distance between the first electronic device and the projection surface is large, the first projection area formed on the projection surface is also large, and the first acquisition unit may capture only part of it, so that the first obtaining unit 901 cannot determine the positional relationship from the projection image.
In other examples of this embodiment of the present invention, the first obtaining unit 901 may determine the positional relationship by judging whether the distance between the center points of the first projection area and the second projection area is less than half the sum of the lengths of the two areas.
For example, let the distance between the two center points be d, the length of the first projection area be a, and the length of the second projection area be b. If d is less than (a + b)/2, the first obtaining unit 901 may determine that the first projection area and the second projection area overlap; if d equals (a + b)/2, it may determine that the second projection area is not inside the first projection area and one side of the second projection area coincides with one side of the first projection area; if d is greater than (a + b)/2, it may determine that the two projection areas are separated. On the basis that d is less than (a + b)/2, if it is further determined that d is less than a/2 and greater than b/2, the first obtaining unit 901 may determine that the second projection area is located within the first projection area.
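The comparisons above can be collected into one classifier. Note that for the containment case this sketch uses the standard one-dimensional condition d ≤ (a − b)/2 rather than the looser "d less than a/2 and greater than b/2" test stated in the text, so the thresholds are partly an assumption of this sketch.

```python
def area_relationship(d, a, b):
    """Classify the positional relationship of the two projection areas
    from the distance d between their center points and their lengths
    a (first area) and b (second area), assuming a >= b."""
    half_sum = (a + b) / 2
    if d > half_sum:
        return "separated"
    if d == half_sum:
        return "edge_coincident"
    # d < half_sum: the areas overlap; check for containment.
    if d <= (a - b) / 2:
        return "second_inside_first"
    return "partially_overlapping"
```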
The center points of the first projection area and the second projection area are the points formed on the projection surface by the axial ray among the rays projected by the respective projection units; the axial ray is the ray emitted from the center of the projection unit, and its direction does not change during projection. The distance between the two center points is related to the distance between the axes of the first projection unit and the second projection unit (the axis distance for short). When the two projection units project perpendicularly, the distance between the two center points equals the axis distance; when their axes rotate up and down about the horizontal axis, the distance between the two center points still equals the axis distance; when their axes rotate left and right about the vertical axis, the distance between the two center points depends on the axis distance, the rotation direction, and the rotation angle.
For example, if the first projection unit rotates right about the vertical axis and the second projection unit rotates left, the distance between the center points is d = m − p·cos a − q·cos b, where m is the axis distance, p is the depth information of the first projection unit (i.e., its distance to the projection surface), a is the rotation angle of the first projection unit, q is the depth information of the second projection unit, and b is the rotation angle of the second projection unit. If the first projection unit rotates left and the second rotates right, the distance between the center points is d = m + p·cos a + q·cos b.
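The two formulas can be transcribed directly as a helper; the degree-based interface and the boolean flag are assumptions of this sketch, while the formulas themselves are taken as stated above.

```python
import math

def center_distance(m, p, a_deg, q, b_deg, opposite_inward=True):
    """Distance between the two projected center points when the units
    rotate about their vertical axes, per the formulas above:
      first right / second left (opposite_inward=True):
          d = m - p*cos(a) - q*cos(b)
      first left / second right:
          d = m + p*cos(a) + q*cos(b)
    m is the axis distance; p, q are the units' depths; a, b are the
    rotation angles in degrees."""
    pa = p * math.cos(math.radians(a_deg))
    qb = q * math.cos(math.radians(b_deg))
    return m - pa - qb if opposite_inward else m + pa + qb
```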
The length of the first projection area and the length of the second projection area may be determined from the projection parameters of the respective projection units, and the first obtaining unit 901 may obtain the parameters of the second projection area, including its length, from the second electronic device through the data transmission channel.
A second obtaining unit 902, configured to obtain the second projection content from the second electronic device through the data transmission channel. The second projection content is the content projected onto the projection surface by the second projection unit; it may be a passage of text, data, or an image, as determined by the user of the second electronic device according to the application scenario.
An analyzing unit 903, configured to analyze the first projection content and the second projection content based on the position relationship to obtain a first analysis result, where the first analysis result is used to indicate a relationship between the first projection content and the second projection content.
A generating unit 904, configured to generate a first instruction according to the first analysis result, where the first instruction is used to instruct the first electronic device to process the first projection content and the second projection content based on the location relationship.
An execution unit 905 is configured to execute the first instruction in the first electronic device.
In this embodiment of the present invention, after the second projection content is acquired, it is analyzed together with the first projection content in combination with the positional relationship between the two projection areas, yielding a first instruction for processing the two projection contents, and the execution unit 905 executes the first instruction on the first electronic device. In this way, after the first electronic device shares projection content with the second electronic device through the data transmission channel using the data processing apparatus provided by this embodiment, the projection content can be further processed, extending the electronic device's ability to process projection content.
A first application scenario of the data processing apparatus provided in this embodiment of the present invention is as follows: the first projection content is data generated by a first object at a first time, the second projection content is data generated by the same first object at a second time, and the first time is earlier than the second time. When data generated by the same object at different times is stored on the first electronic device and the second electronic device, the data processing apparatus 900 can thus update the device holding the older data with the latest data.
In a first application scenario, the analyzing unit 903 analyzes the first projection content and the second projection content based on the position relationship to obtain a first analysis result, and includes:
When the positional relationship indicates that the second projection area is located within the first projection area, the first projection content is matched against the second projection content; when the two do not match, a first analysis result indicating that the first projection content differs from the second projection content is obtained.
In some examples of the first application scenario, when the first projection content and the second projection content are text, the analysis unit 903 may obtain the key information in each through semantic analysis and compare the key information to determine whether the two contents are the same.
For instance, suppose the first projection content is "As of January 20, 2014, the sales volume of product A is 3 million" and the second projection content is "The sales volume of product A in January 2014 is 5 million". Through semantic analysis, the analysis unit 903 obtains the key information of the first projection content as "product A - 2014-01-20 - sales 3 million" and the key information of the second projection content as "product A - 2014-01 - sales 5 million". From these two pieces of key information it can be seen that the second projection content is the sales statistics of product A for the whole of January 2014, while the first projection content is the sales up to a particular day; the two are clearly different, and the data of the second projection content is clearly more up to date than that of the first.
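A minimal sketch of the comparison the analysis unit performs, with key information already reduced to (product, period, sales) tuples; the tuple representation and the function name are assumptions, and real semantic analysis would be far richer.

```python
def first_analysis_result(first_key_info, second_key_info):
    """Compare the extracted key information field by field; any
    difference yields a 'different' result, indicating the two
    projection contents are not the same."""
    return "same" if first_key_info == second_key_info else "different"

first_info = ("product A", "2014-01-20", 3_000_000)   # sales as of a day
second_info = ("product A", "2014-01", 5_000_000)     # sales for the month
```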
In other examples of the first application scenario, the first projection content and the second projection content may also be images, and the analysis unit 903 may determine whether the first projection content and the second projection content are the same through image comparison.
In other examples of the first application scenario, the first projection content and the second projection content may be audio, and the analysis unit 903 may determine whether the first projection content and the second projection content are the same through audio spectrum analysis.
Accordingly, the generating unit 904 may generate an update instruction according to the first analysis result. The execution unit 905 executing the first instruction in the first electronic device then means: executing the update instruction, updating the first projection content to the second projection content, deleting the first projection content projected in the first projection area, and projecting the second projection content into the first projection area through the first projection unit.
In a second application scenario provided in this embodiment of the present invention, the first projection content is a first image and the second projection content is a second image. In this scenario, when the first electronic device and the second electronic device each project an image as projection content, the data processing apparatus 900 automatically changes the image projected by the first electronic device.
In the second application scenario, the analysis unit 903 may determine, based on the positional relationship, whether the display of a first object in the first image and a second object in the second image conforms to a preset display relationship; if not, it obtains a first analysis result indicating that the first object and the second object do not conform to the preset display relationship.
The preset display relationship is the display relationship the first object and the second object would have in a real environment. If the first object is a person and the second object is a sofa, the preset display relationship should be that the person leans back against the sofa. For another example, if the first object is a puppy and the second object is a bone, the preset display relationship is that the puppy's mouth faces toward the bone.
When the first object and the second object do not conform to the preset display relationship, the data processing apparatus 900 needs to change the display direction of the first object so that the two objects conform to the preset display relationship. The generating unit 904 therefore generates, according to the first analysis result, a first instruction for changing the display direction of the first object, and the execution unit 905 executes the first instruction to alter the display direction of the first object in the first projection area.
In a second application scenario, the data processing apparatus 900 provided in the embodiment of the present invention may further include, on the basis of fig. 16: a third acquiring unit 906 and an adjusting unit 907, as shown in fig. 17, wherein:
a third acquiring unit 906 for acquiring the first object and the second object.
The adjusting unit 907 is configured to adjust the posture of the first object according to an association relationship when, according to a preset object association relationship, the first object and the second object are associated.
In this embodiment of the present invention, the preset object association relationship is the interactive relationship between two objects specified in an interactive game stored on the first electronic device. After the first object and the second object are acquired, they are compared with the objects specified in the interactive game to determine whether they are the two specified objects; if so, the adjusting unit 907 is triggered to adjust the posture of the first object according to the association relationship.
A third application scenario of this embodiment of the present invention describes how the data processing apparatus 900 supplements, in a projection manner, the first projection content projected by the first electronic device. When the positional relationship indicates that the first projection area and the second projection area overlap, the structure of the analysis unit 903 in the data processing apparatus 900 is shown in fig. 18 and may include: an acquiring subunit 9031, a converting subunit 9032, and a comparing subunit 9033, wherein,
an acquiring subunit 9031, configured to acquire a first position of the first projection content in the first projection area, and a second position of the second projection content in the second projection area.
It can be understood that projection parameters are set in the first electronic device and the second electronic device, and these parameters instruct each device how to display projection content in its projection area. The acquiring subunit 9031 may therefore determine the first position of the first projection content in the first projection area from its own projection parameters. Likewise, the second electronic device may determine the second position of the second projection content in the second projection area from its own projection parameters, and send the second position to the acquiring subunit 9031 through the data transmission channel.
A conversion subunit 9032, configured to convert the second location into a third location of the second projection content in the first projection area based on the location relationship.
In some examples of this embodiment of the present invention, the positional relationship is determined by the first electronic device by analyzing the image captured by the first acquisition unit. As known to those skilled in the art, positions in that image can be expressed as pixel-point coordinates, so the converting subunit 9032 may take the pixel coordinates corresponding to the second position in the captured image as the third position in the first projection area.
In other examples of this embodiment of the present invention, the positional relationship is obtained by comparing the distance between the center points formed on the projection surface by the axes of the first electronic device and the second electronic device with the lengths of the two projection areas. In this case, the acquiring subunit 9031 obtains the center point of the second projection area; expanding from this center point gives the position of the second projection area within the first projection area. Since the second position of the second projection content within the second projection area is known, the converting subunit 9032 can then convert the second position into the third position using the position of the second projection area within the first projection area.
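The expansion from the center point and the conversion of the second position can be sketched as two small helpers; the top-left-origin coordinate convention and the function names are assumptions of this sketch.

```python
def second_area_origin(center_in_first, second_width, second_height):
    """Expand the second projection area's center point (expressed in
    first-projection-area coordinates) to its top-left origin."""
    cx, cy = center_in_first
    return (cx - second_width / 2, cy - second_height / 2)

def to_third_position(second_position, origin_in_first):
    """Convert the second position (second-area coordinates) into the
    third position (first-area coordinates)."""
    sx, sy = second_position
    ox, oy = origin_in_first
    return (ox + sx, oy + sy)
```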
The comparison subunit 9033 is configured to compare the first position and the third position to determine whether the first projection content and the second projection content overlap, so as to obtain a first analysis result used for determining whether the first projection content and the second projection content overlap.
The first position indicates where the first projection content is displayed in the first projection area, and the third position indicates where the second projection content is displayed in the first projection area. When the first position and the third position overlap, the first projection content and the second projection content also overlap and cannot be clearly distinguished, so the data processing apparatus 900 needs to adjust the display of the two projection contents so that they no longer overlap.
At this time, the generating unit 904 generates a typesetting instruction when the first analysis result indicates that the first projection content and the second projection content overlap. The execution unit 905 executes the typesetting instruction, typesets the first projection content and the second projection content according to a preset typesetting mode, and projects the typeset contents into the first projection area through the first projection unit.
The preset typesetting mode varies with the types of the first projection content and the second projection content. The preset typesetting mode includes but is not limited to the following modes:
when the first projection content is characters and the second projection content is characters or multimedia information, the first projection content is displayed on the upper half part of the first projection area, and the second projection content is displayed on the lower half part of the first projection area;
and when the first projection content and the second projection content are images and the first projection content and the second projection content are half images of a complete image, splicing the first projection content and the second projection content into the complete image.
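The choice among the preset typesetting modes above can be sketched as a simple dispatch on the content types. The type labels and returned descriptions below are illustrative assumptions, not part of the embodiment:

```python
def choose_typesetting_mode(first_type, second_type,
                            first_is_half_image=False,
                            second_is_half_image=False):
    """Pick a preset typesetting mode from the two content types and
    return a short description of how the contents are laid out."""
    if first_type == "text" and second_type in ("text", "multimedia"):
        return {"first": "upper half of first projection area",
                "second": "lower half of first projection area"}
    if (first_type == "image" and second_type == "image"
            and first_is_half_image and second_is_half_image):
        return {"mode": "splice into complete image"}
    return {"mode": "unspecified"}
```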
In a fourth application scenario provided in the embodiment of the present invention, it is described how a second electronic device obtains the first projection content from a first electronic device in a projection manner. Here, the first projection content includes first text content and first option content, the first option content includes at least one operation on the first text content, and the second projection content is a first graphic used for selecting one of the operations included in the first option content.
In the fourth application scenario, when the first analysis result indicates that the first projection content and the second projection content overlap, the generating unit 904 obtains the operation selected by the first graphic according to the third position and generates the first instruction based on the selected operation. The execution unit 905 executes the first instruction on the first projection content, so that the operation selected by the second projection content is performed on the first projection content.
As shown in fig. 11, the first electronic device projects a certain video as the first projection content into the first projection area, together with three operations A, B and C that can be performed on the first projection content. The second electronic device projects a second projection area in which the second projection content is a rectangle, and the rectangle selects operation C. The first electronic device generates a first instruction corresponding to operation C and executes the first instruction to perform operation C on the first projection content.
For example, when operation C is a copy operation, selecting operation C with the first graphic indicates that the second electronic device wants to acquire the video file currently projected by the first electronic device. The first electronic device then sends the video file to the second electronic device through the data transmission channel to implement data sharing.
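The selection of an operation by the first graphic can be sketched as a point-in-rectangle test over the projected option regions. The region coordinates and operation names below are hypothetical:

```python
def generate_first_instruction(third_position, option_regions):
    """Return the operation whose projected region contains the third
    position (the first graphic's position in the first projection
    area), or None if no option is selected. Regions are (x0, y0, x1, y1)."""
    x, y = third_position
    for operation, (x0, y0, x1, y1) in option_regions.items():
        if x0 <= x <= x1 and y0 <= y <= y1:
            return operation  # e.g. "C" -> copy: send file over the channel
    return None
```

For instance, with options A, B and C projected side by side, a rectangle landing inside C's region selects operation C.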
In a fifth application scenario provided in the embodiment of the present invention, it is described how the data processing apparatus 900 provided in the embodiment of the present invention automatically adjusts an image when the first projection content is the first image, the first image is a half image of the third image, and the second projection content is the second image.
The generating unit 904 generates the first instruction according to the first analysis result as follows: when the first analysis result indicates that the first image and the second image overlap or are separated, a projection adjustment instruction is generated, the projection adjustment instruction being at least used for adjusting the display parameters of the image projected by either electronic device.
For example, the projection adjustment instruction may adjust a display parameter of the first image projected by the first electronic device, or a display parameter of the second image projected by the second electronic device. The projection adjustment instruction may also adjust both simultaneously.
The execution unit 905 executes the first instruction in the first electronic device as follows: the image projected by either electronic device is adjusted according to the display parameters so that the adjusted first image and the adjusted second image are spliced into a fourth image on the projection surface; the fourth image is then matched against the third image, and when the fourth image matches the third image, it indicates that the first image and the second image have been spliced into the third image.
Either electronic device here refers to the electronic device whose display parameters are adjusted by the projection adjustment instruction; by executing the projection adjustment instruction, that device splices the first image and the second image into one image (the fourth image).
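The adjust-and-match loop can be sketched with strings standing in for images: the display parameter being adjusted is a horizontal offset of the second image, and splicing is concatenation. This is a toy illustration of the matching step under those assumptions, not the embodiment's actual image processing:

```python
def adjust_until_matched(first_image, second_image, third_image, max_offset=10):
    """Try successive offsets for the second image; splice first+second
    into a fourth image and match it against the reference third image.
    Returns the offset at which the splice matches, or None."""
    for offset in range(max_offset + 1):
        fourth_image = first_image + second_image[offset:]  # spliced fourth image
        if fourth_image == third_image:                     # match against third
            return offset
    return None
```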
In a fifth application scenario, the third image is a frame of image in a video played by the first electronic device, and the data processing apparatus provided in the embodiment of the present invention may further include, on the basis of fig. 16: a fourth acquiring unit 908, a selecting unit 909, a loading unit 910, and a display unit 911, as shown in fig. 19. Wherein,
a fourth obtaining unit 908, configured to obtain the video corresponding to the third image in the first electronic device when the first image and the second image are spliced into the third image. In the embodiment of the present invention, the first electronic device can record the video name corresponding to each frame of image, so that after the third image is spliced, the video corresponding to the third image can be found through the video name.
A selecting unit 909 is configured to select a playback application corresponding to the video from the first electronic device, and instruct the first electronic device to call the playback application.
A loading unit 910, configured to load the video into the playing application.
A display unit 911 for displaying the played video in the first projection area.
Based on the above scheme, after the third image is spliced, the data processing apparatus 900 may call the video corresponding to the third image and start the playing application to play it, so that the video is obtained based on image matching and played automatically. The embodiment of the present invention may play the video from its starting position, or may continue playing from the position in the video where the third image is located.
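Looking up the video through the recorded frame-to-name mapping, and choosing between starting from the beginning or resuming at the frame's position, can be sketched as follows. The frame index, file name, and frame rate are hypothetical:

```python
# Hypothetical per-frame record kept by the first electronic device:
# frame name -> (video name, frame number within the video)
FRAME_INDEX = {"frame_0420": ("holiday.mp4", 420)}

def video_for_frame(frame_name, resume=False, fps=30.0):
    """Find the video containing the spliced third image; start from the
    beginning, or resume at the frame's position in the video."""
    video_name, frame_number = FRAME_INDEX[frame_name]
    start_seconds = frame_number / fps if resume else 0.0
    return video_name, start_seconds
```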
In addition, in a fifth application scenario, light projected by the first projection unit is structured light, and the data processing apparatus 900 provided in the embodiment of the present invention may further include, on the basis of fig. 19: a fifth acquiring unit 912, a detecting unit 913, an instruction generating unit 914, a sixth acquiring unit 915, and an indicating unit 916, as shown in fig. 20. Wherein,
a fifth acquiring unit 912, configured to acquire depth information of the operating body by detecting a first deformation amount of the structured light. The first deformation amount is the deformation produced in the structured light when the operating body moves within the first projection area: the closer the operating body is to the first projection unit, the larger the first deformation amount; the farther away, the smaller. The fifth acquiring unit 912 therefore binds the first deformation amount to depth information and acquires the depth information of the operating body by detecting the first deformation amount.
A detection unit 913, configured to detect second and third deformation amounts of the structured light when the depth information of the operating body is the same as the depth information of the projection surface, the depth information of the projection surface being acquired by the first acquisition unit in the first electronic device. When the two depth values are the same, the operating body is operating on the projection surface within the first projection area; the first electronic device then needs to determine what operation the operating body is performing, which can be judged from the second deformation amount and the third deformation amount.
The instruction generating unit 914 is configured to determine the trigger operation of the operating body from the second deformation amount and the third deformation amount, and to generate a second instruction corresponding to the trigger operation.
In the embodiment of the present invention, if the second deformation amount shows a variation trend from large to small, the third deformation amount shows a variation trend from small to large, and the variation distance of the deformation is substantially the same each time, the instruction generating unit 914 determines that the operating body performs a single-click operation and generates the second instruction corresponding to the single-click operation.
The currently generated second instruction is related to the second instruction generated when the previous single-click operation was executed: if the previously generated second instruction is a play instruction, the currently generated second instruction is a pause instruction; if the previously generated second instruction is a pause instruction, the currently generated second instruction is a play instruction.
If the second deformation amount is zero and the third deformation amount is the deformation of a plurality of light beams in the structured light, it indicates that the operating body is dragging a video file from the playlist to be played by the first electronic device; at this time, the instruction generating unit 914 generates a second instruction for retrieving the video file and adding it to the playing application.
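The gesture classification from the two deformation traces, and the play/pause alternation for successive single-clicks, can be sketched as below. The monotonicity test and the drag condition are simplified assumptions, not the embodiment's detection logic:

```python
def classify_trigger(second_deform, third_deform):
    """Classify the operating body's trigger operation from two
    structured-light deformation traces (lists of deformation amounts)."""
    if all(d == 0 for d in second_deform) and len(third_deform) > 1:
        return "drag"  # several beams deformed while the second trace stays flat
    shrinking = all(a >= b for a, b in zip(second_deform, second_deform[1:]))
    growing = all(a <= b for a, b in zip(third_deform, third_deform[1:]))
    if shrinking and growing:
        return "single_click"  # large-to-small then small-to-large trend
    return "unknown"

def next_instruction(previous):
    """Alternate play/pause on successive single-click operations."""
    return "pause" if previous == "play" else "play"
```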
In the embodiment of the present invention, the first projection area and the second projection area overlap, so when the second instruction is executed, the position of the trigger operation needs to be determined first. In particular, when the trigger operation is a single-click operation and its position falls in the overlapping area of the first projection area and the second projection area, both the first electronic device and the second electronic device would execute the second instruction, causing the second instruction to be executed twice and producing a result different from the one the trigger operation intended. Therefore, in the embodiment of the present invention, the sixth obtaining unit 915 and the indicating unit 916 execute the second instruction based on the position of the trigger operation.
A sixth obtaining unit 915, configured to obtain the position of the trigger operation and, when that position is located in an area of the first projection area that does not overlap with the second projection area, trigger the execution unit 905 to execute the second instruction.
The indicating unit 916 is configured to send an execution prohibition instruction to the second electronic device through the data transmission channel when the position of the trigger operation is located in an overlapping area of the first projection area and the second projection area, and instruct the second electronic device to suspend executing the second instruction.
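The dispatch of the second instruction based on where the trigger operation lands can be sketched as a point-in-rectangle decision. Rectangle tuples are (x0, y0, x1, y1) and the coordinates are illustrative:

```python
def dispatch_second_instruction(trigger_pos, first_area, overlap_area):
    """Decide who executes the second instruction: inside the overlap,
    the first device executes and a prohibition is sent to the second
    device, so the instruction does not run twice."""
    def inside(point, rect):
        x0, y0, x1, y1 = rect
        return x0 <= point[0] <= x1 and y0 <= point[1] <= y1

    if inside(trigger_pos, overlap_area):
        return {"first_executes": True, "prohibit_second": True}
    if inside(trigger_pos, first_area):
        return {"first_executes": True, "prohibit_second": False}
    return {"first_executes": False, "prohibit_second": False}
```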
In a sixth application scenario of the embodiment of the present invention, the first image includes a first key for performing security authentication on a device, the second image includes a second key for performing security authentication on a device, and the other half image of the third image includes a third key matching the first key. That is, when the first image and another image are spliced into the third image, the electronic device projecting that other image is a security device that has been authenticated by the first electronic device through the key, and the two electronic devices may perform operations such as the above-mentioned data sharing. Therefore, when the second image projected by the second electronic device and the first image projected by the first electronic device are spliced into the third image, it indicates that the second key is the same as the third key; the second electronic device is then a security device that has passed the key authentication of the first electronic device and can perform the above-mentioned operations such as data sharing with the first electronic device.
In addition, an embodiment of the present invention further provides an electronic device including a first projection unit and the data processing apparatus. The data processing apparatus is connected to the first projection unit, projects a first projection area on a projection surface through the first projection unit, and projects first projection content in the first projection area through the first projection unit. The data processing apparatus processes the first projection content and second projection content projected by another electronic device; for the processing procedure of the data processing apparatus, refer to the description in the method embodiment, which is not repeated here.
It should be noted that, in the present specification, the embodiments are described in a progressive manner, each embodiment focuses on differences from other embodiments, and the same and similar parts among the embodiments may be referred to each other. The device disclosed by the embodiment corresponds to the method disclosed by the embodiment, so that the description is simple, and the relevant points can be referred to the method part for description.
Finally, it is further noted that, herein, relational terms such as first, second, third, fourth, and the like may be used solely to distinguish one entity or action from another entity or action without necessarily requiring or implying any actual such relationship or order between such entities or actions. Also, the terms "comprises," "comprising," or any other variation thereof, are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements does not include only those elements but may include other elements not expressly listed or inherent to such process, method, article, or apparatus. Without further limitation, an element defined by the phrase "comprising a …" does not exclude the presence of other identical elements in a process, method, article, or apparatus that comprises the element.
The previous description of the disclosed embodiments is provided to enable any person skilled in the art to make or use the present invention. Various modifications to these embodiments will be readily apparent to those skilled in the art, and the generic principles defined herein may be applied to other embodiments without departing from the spirit or scope of the invention. Thus, the present invention is not intended to be limited to the embodiments shown herein but is to be accorded the widest scope consistent with the principles and novel features disclosed herein.

Claims (27)

1. A data processing method is used for data processing between a first electronic device and a second electronic device, wherein the first electronic device comprises a first projection unit, the first electronic device projects a first projection area on a projection surface through the first projection unit and projects first projection content in the first projection area through the first projection unit, the second electronic device comprises a second projection unit, the second electronic device projects a second projection area on the projection surface through the second projection unit and projects second projection content in the second projection area through the second projection unit, and a data transmission channel is established between the first electronic device and the second electronic device;
the data processing method is applied to the first electronic equipment and comprises the following steps:
acquiring the position relation between the first projection area and the second projection area;
acquiring the second projection content from the second electronic equipment through the data transmission channel;
analyzing the first projection content and the second projection content based on the position relationship to obtain a first analysis result, wherein the first analysis result is used for representing the relationship between the first projection content and the second projection content;
and generating a first instruction according to the first analysis result, and executing the first instruction in the first electronic device, wherein the first instruction is used for instructing the first electronic device to process the first projection content and the second projection content based on the position relationship.
2. The method of claim 1, wherein the first projected content is data generated by a first object at a first time, wherein the second projected content is data generated by the first object at a second time, and wherein the first time is earlier than the second time;
analyzing the first projection content and the second projection content based on the position relationship to obtain a first analysis result, including:
when the position relationship indicates that the second projection area is located in the first projection area, matching the first projection content with the second projection content to determine whether the first projection content and the second projection content are the same;
when the first projection content and the second projection content are not matched, a first analysis result indicating that the first projection content and the second projection content are different is obtained.
3. The method of claim 2, wherein generating a first instruction according to the first analysis result and executing the first instruction in the first electronic device comprises:
generating an updating instruction according to the first analysis result;
executing the updating instruction to update the first projection content into the second projection content;
deleting the first projection content projected to the first projection area, and projecting the second projection content to the first projection area through the first projection unit.
4. The method of claim 1, wherein the first projected content is a first image and the second projected content is a second image;
analyzing the first projection content and the second projection content based on the position relationship to obtain a first analysis result, including:
judging whether the display of the first object in the first image and the second object in the second image accords with a preset display relation or not based on the position relation; and if not, obtaining a first analysis result indicating that the first object and the second object do not accord with a preset display relationship.
5. The method of claim 4, wherein generating a first instruction according to the first analysis result and executing the first instruction in the first electronic device comprises:
generating a first instruction for changing the display direction of the first object according to the first analysis result;
executing the first instruction to alter a display orientation of the first object in the first projection area.
6. The method of claim 5, further comprising: acquiring the first object and the second object;
and judging whether the first object and the second object have an association relation according to a preset object association relation, and if so, adjusting the posture of the first object according to the association relation.
7. The method of claim 1, wherein when the positional relationship indicates that the first projection area and the second projection area overlap, analyzing the first projection content and the second projection content based on the positional relationship to obtain a first analysis result comprises:
acquiring a first position of the first projection content in the first projection area and a second position of the second projection content in the second projection area;
converting the second position to a third position of the second projection content in the first projection area based on the positional relationship;
and comparing the first position with the third position to judge whether the first projection content and the second projection content are overlapped or not, so as to obtain a first analysis result for indicating whether the first projection content and the second projection content are overlapped or not.
8. The method of claim 7, wherein generating a first instruction according to the first analysis result and executing the first instruction in the first electronic device comprises:
when the first analysis result shows that the first projection content and the second projection content are overlapped, generating a typesetting instruction;
executing the typesetting instruction, and typesetting the first projection content and the second projection content based on a preset typesetting mode;
and projecting the first projection content and the second projection content after typesetting into the first projection area through the first projection unit.
9. The method of claim 7, wherein the first projected content comprises first textual content and first option content, the first option content comprises at least one operation on the first textual content, and the second projected content is a first graphic;
generating a first instruction according to the first analysis result, and executing the first instruction in the first electronic device, wherein the first instruction comprises:
when the first analysis result shows that the first projection content and the second projection content are overlapped, obtaining the operation selected by the first graph according to the third position;
based on the selected operation, a first instruction is generated, and the first instruction is executed on the first projection content.
10. The method of claim 1, wherein the first projected content is a first image, and the first image is a half image of a third image, and the second projected content is a second image;
generating a first instruction according to the first analysis result, and executing the first instruction in the first electronic device, wherein the first instruction comprises:
when the first analysis result shows that the first image and the second image are overlapped or separated, generating a projection adjusting instruction, wherein the projection adjusting instruction is at least used for adjusting display parameters of an image projected by any electronic equipment;
adjusting the image projected by any electronic equipment according to the display parameters so that the adjusted first image and the adjusted second image are spliced into a fourth image on a projection surface;
and matching the fourth image and the third image, wherein when the fourth image and the third image are matched, the first image and the second image are spliced into the third image.
11. The method of claim 10, wherein the third image is a frame of image in a video played by the first electronic device, the method further comprising:
when the first image and the second image are spliced into the third image, acquiring a video corresponding to the third image in the first electronic equipment;
selecting a playing application corresponding to the video from the first electronic equipment, and indicating the first electronic equipment to call the playing application;
and loading the video into the playing application, and displaying the played video in the first projection area.
12. The method of claim 11, wherein the light projected by the first projection unit is structured light, the method further comprising:
acquiring depth information of an operating body by detecting a first deformation amount of the structured light;
when the depth information of the operation body is the same as the depth information of the projection surface, detecting a second deformation and a third deformation of the structured light, wherein the depth information of the projection surface is acquired by a first acquisition unit in the first electronic device;
determining a trigger operation of the operation body through the second deformation quantity and the third deformation quantity, and generating a second instruction corresponding to the trigger operation;
when the position of the trigger operation is located in an overlapping area of the first projection area and the second projection area, sending an execution prohibition instruction to the second electronic device through the data transmission channel, and instructing the second electronic device to suspend executing the second instruction;
and when the position of the trigger operation is located in a region where the first projection region is not overlapped with the second projection region, executing the second instruction by the first electronic equipment.
13. The method of claim 11, wherein the first image comprises a first key for securely authenticating a device, wherein the second image comprises a second key for securely authenticating a device, and wherein a third key matching the first key is included in the other half of the third image;
when the fourth image and the third image are matched, it is indicated that the second key is the same as the third key, the first electronic device and the second electronic device are security devices, and the security devices are used for indicating that the first electronic device and the second electronic device are devices authenticated by keys.
14. A data processing apparatus, configured to process data between a first electronic device and a second electronic device, where the first electronic device includes a first projection unit, the first electronic device projects a first projection area on a projection surface through the first projection unit, and projects first projection content in the first projection area through the first projection unit, the second electronic device includes a second projection unit, the second electronic device projects a second projection area on the projection surface through the second projection unit, and projects second projection content in the second projection area through the second projection unit, and a data transmission channel is established between the first electronic device and the second electronic device;
the data processing device is applied to the first electronic equipment and comprises:
a first acquisition unit configured to acquire a positional relationship between the first projection area and the second projection area;
a second obtaining unit, configured to obtain the second projection content from the second electronic device through the data transmission channel;
an analysis unit, configured to analyze the first projection content and the second projection content based on the position relationship to obtain a first analysis result, where the first analysis result is used to represent a relationship between the first projection content and the second projection content;
a generating unit, configured to generate a first instruction according to the first analysis result, where the first instruction is used to instruct the first electronic device to process the first projection content and the second projection content based on the position relationship;
an execution unit to execute the first instruction in the first electronic device.
15. The apparatus of claim 14, wherein the first projected content is data generated by a first object at a first time, wherein the second projected content is data generated by the first object at a second time, and wherein the first time is earlier than the second time;
the analysis unit analyzes the first projection content and the second projection content based on the position relationship to obtain a first analysis result, and the analysis unit includes:
when the position relationship indicates that the second projection area is located in the first projection area, matching the first projection content with the second projection content, and when the first projection content is not matched with the second projection content, obtaining a first analysis result indicating that the first projection content is different from the second projection content.
16. The apparatus of claim 15, wherein the generating unit generates a first instruction according to the first analysis result, and comprises: generating an updating instruction according to the first analysis result;
the execution unit executes the first instruction in the first electronic device, including: executing the updating instruction, updating the first projection content into the second projection content, deleting the first projection content projected to the first projection area, and projecting the second projection content to the first projection area through the first projection unit.
17. The apparatus of claim 14, wherein the first projected content is a first image and the second projected content is a second image;
the analysis unit analyzes the first projection content and the second projection content based on the position relationship to obtain a first analysis result, and the analysis unit includes:
judging whether the display of the first object in the first image and the second object in the second image accords with a preset display relation or not based on the position relation; and if not, obtaining a first analysis result indicating that the first object and the second object do not accord with a preset display relationship.
18. The apparatus of claim 17, wherein the generating unit generates a first instruction according to the first analysis result, comprising: generating a first instruction for changing the display direction of the first object according to the first analysis result;
the execution unit executes the first instruction in the first electronic device, including: executing the first instruction to alter a display orientation of the first object in the first projection area.
19. The apparatus of claim 18, further comprising: a third acquisition unit configured to acquire the first object and the second object;
and the adjusting unit is used for adjusting the posture of the first object according to a preset object association relationship under the condition that the first object and the second object are judged to have the association relationship according to the preset object association relationship.
20. The apparatus according to claim 14, wherein when the positional relationship indicates that the first projection area and the second projection area overlap, the analysis unit includes: a fetch subunit, a convert subunit, and a compare subunit, wherein,
an obtaining subunit, configured to obtain a first position of the first projection content in the first projection area and a second position of the second projection content in the second projection area;
a conversion subunit, configured to convert, based on the positional relationship, the second position into a third position of the second projection content in the first projection area;
a comparison subunit, configured to compare the first position and the third position to determine whether the first projection content and the second projection content are overlapped, so as to obtain the first analysis result indicating whether the first projection content and the second projection content are overlapped.
21. The apparatus of claim 20, wherein the generating unit generates a first instruction according to the first analysis result, comprising: generating a typesetting instruction when the first analysis result indicates that the first projection content and the second projection content overlap;
the execution unit executes the first instruction in the first electronic device, including: executing the typesetting instruction, typesetting the first projection content and the second projection content based on a preset typesetting mode, and projecting the typeset first projection content and second projection content into the first projection area through the first projection unit.
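(Illustrative sketch, not part of the claims.) Claim 21 does not fix the "preset typesetting mode"; assuming the simplest such mode, a vertical stack scaled to fit the first projection area, the re-layout step might look like this (names are ours):

```python
# Hypothetical sketch of claim 21's typesetting step: give each content
# a non-overlapping slot by stacking top-to-bottom, scaling heights down
# if the combined content is taller than the projection area.

def typeset_stacked(area_h, contents):
    """Return an (x, y, w, h) slot for each (w, h) content, stacked
    from the top of the area, uniformly scaled to fit its height."""
    total_h = sum(h for _, h in contents)
    scale = min(1.0, area_h / total_h)
    layout, y = [], 0
    for w, h in contents:
        h_scaled = h * scale
        layout.append((0, y, w, h_scaled))
        y += h_scaled
    return layout

# Two contents of heights 80 and 60 in a 100-high area are scaled to fit.
layout = typeset_stacked(100, [(120, 80), (120, 60)])
```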
22. The apparatus of claim 20, wherein the first projected content comprises first textual content and first option content, the first option content comprises at least one operation on the first textual content, and the second projected content is a first graphic;
the generating unit generates a first instruction according to the first analysis result, comprising: when the first analysis result indicates that the first projection content and the second projection content overlap, obtaining the operation selected by the first graphic according to the third position, and generating the first instruction based on the selected operation.
23. The apparatus of claim 14, wherein the first projected content is a first image, and the first image is a half image of a third image, and the second projected content is a second image;
the generating unit generates a first instruction according to the first analysis result, comprising: generating a projection adjustment instruction when the first analysis result indicates that the first image and the second image overlap or are separated, wherein the projection adjustment instruction is at least used for adjusting display parameters of the image projected by either electronic device;
the execution unit executes the first instruction in the first electronic device, including: adjusting the image projected by either electronic device according to the display parameters, splicing the adjusted first image and the adjusted second image on the projection surface into a fourth image, and matching the fourth image against the third image; when the fourth image matches the third image, it indicates that the first image and the second image have been spliced into the third image.
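(Illustrative sketch, not part of the claims.) The splice-and-match check of claim 23 can be shown with a toy model in which images are 2-D lists of pixels and each device projects one lateral half of the reference third image; a real system would compare images captured from the projection surface rather than pixel lists. Names are ours:

```python
# Hypothetical sketch of claim 23: join the two projected halves into a
# fourth image and test it against the reference third image.

def splice(left_half, right_half):
    """Join two half images row-by-row into a fourth image."""
    return [l + r for l, r in zip(left_half, right_half)]

def matches(fourth, third):
    """An exact match indicates the halves spliced into the third image."""
    return fourth == third

third = [[1, 2, 3, 4], [5, 6, 7, 8]]
first = [row[:2] for row in third]    # half projected by the first device
second = [row[2:] for row in third]   # half projected by the second device
assert matches(splice(first, second), third)
```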
24. The apparatus of claim 23, wherein the third image is a frame of image in a video played by the first electronic device, the apparatus further comprising:
a fourth obtaining unit, configured to obtain, when the first image and the second image are spliced into the third image, a video corresponding to the third image in the first electronic device;
a selecting unit configured to select, from the first electronic device, a playing application corresponding to the video, and to instruct the first electronic device to invoke the playing application;
a loading unit, configured to load the video into the playback application;
and a display unit configured to display the played video in the first projection area.
25. The apparatus of claim 24, wherein the light projected by the first projection unit is structured light, the apparatus further comprising:
a fifth acquisition unit configured to acquire depth information of the operation body by detecting the first deformation amount of the structured light;
a detection unit configured to detect a second deformation amount and a third deformation amount of the structured light when depth information of the operation body is the same as depth information of the projection plane, the depth information of the projection plane being acquired by a first acquisition unit in the first electronic device;
an instruction generating unit configured to determine a trigger operation of the operation body from the second and third deformation amounts and to generate a second instruction corresponding to the trigger operation;
a sixth obtaining unit, configured to obtain a position of the trigger operation, and to trigger the execution unit to execute the second instruction when the position of the trigger operation is located in a region of the first projection area that does not overlap the second projection area;
and an indicating unit configured to, when the position of the trigger operation is located in the overlapping area of the first projection area and the second projection area, send an execution prohibition instruction to the second electronic device through the data transmission channel, instructing the second electronic device to suspend execution of the second instruction.
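(Illustrative sketch, not part of the claims.) Claim 25's routing decision, execute locally outside the overlap, ask the second device to hold off inside it, can be sketched with projection areas as (x, y, w, h) rectangles; the return strings and function names are ours:

```python
# Hypothetical sketch of claim 25: dispatch a trigger operation based on
# whether its position falls inside the overlap of the two areas.

def rect_intersection(a, b):
    """Overlap rectangle of two (x, y, w, h) rects, or None."""
    ax, ay, aw, ah = a
    bx, by, bw, bh = b
    x1, y1 = max(ax, bx), max(ay, by)
    x2, y2 = min(ax + aw, bx + bw), min(ay + ah, by + bh)
    if x1 < x2 and y1 < y2:
        return (x1, y1, x2 - x1, y2 - y1)
    return None

def contains(rect, point):
    """Is a point inside an (x, y, w, h) rectangle?"""
    x, y, w, h = rect
    px, py = point
    return x <= px < x + w and y <= py < y + h

def dispatch(trigger_pos, first_area, second_area):
    """In the overlap region: prohibit the second device (indicating
    unit); elsewhere: execute the second instruction (execution unit)."""
    overlap = rect_intersection(first_area, second_area)
    if overlap and contains(overlap, trigger_pos):
        return "send_prohibit_to_second_device"
    return "execute_second_instruction"
```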
26. The apparatus of claim 23, wherein the first image comprises a first key for securely authenticating a device, wherein the second image comprises a second key for securely authenticating a device, and wherein a third key matching the first key is included in the other half of the third image;
when the fourth image matches the third image, it indicates that the second key is the same as the third key, and that the first electronic device and the second electronic device are security devices, the security devices indicating that the first electronic device and the second electronic device have been authenticated by keys.
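(Illustrative sketch, not part of the claims.) Once the keys have been extracted from the images, the claim-26 check reduces to a key comparison; a sketch using Python's constant-time comparison, with function names of our choosing:

```python
# Hypothetical sketch of claim 26's key check: the devices count as
# authenticated when the key carried in the second image equals the key
# expected in the other half of the third image.
import hmac

def devices_authenticated(second_key: bytes, third_key: bytes) -> bool:
    """Constant-time equality of the two keys; True means both devices
    pass the key-based security authentication."""
    return hmac.compare_digest(second_key, third_key)

assert devices_authenticated(b"shared-secret", b"shared-secret")
assert not devices_authenticated(b"shared-secret", b"wrong")
```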
27. An electronic device including a first projection unit, the electronic device projecting a first projection area on a projection surface through the first projection unit and projecting first projection content in the first projection area through the first projection unit, the electronic device being characterized by further comprising: a data processing apparatus as claimed in any one of claims 14 to 26 connected to the first projection unit.
CN201410141683.2A 2014-04-10 2014-04-10 A kind of data processing method, device and electronic equipment Active CN104980722B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201410141683.2A CN104980722B (en) 2014-04-10 2014-04-10 A kind of data processing method, device and electronic equipment

Publications (2)

Publication Number Publication Date
CN104980722A CN104980722A (en) 2015-10-14
CN104980722B true CN104980722B (en) 2017-08-29

Family

ID=54276760

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201410141683.2A Active CN104980722B (en) 2014-04-10 2014-04-10 A kind of data processing method, device and electronic equipment

Country Status (1)

Country Link
CN (1) CN104980722B (en)

Families Citing this family (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2017191307A (en) * 2016-04-11 2017-10-19 セイコーエプソン株式会社 Projection system, projector, and control method for projector
CN106791741B (en) * 2016-12-07 2018-09-21 重庆杰夫与友文化创意有限公司 Multi-screen marching method and device
CN109587458B (en) * 2017-09-29 2021-10-15 中强光电股份有限公司 Projection system and automatic setting method thereof
US10571863B2 (en) * 2017-12-21 2020-02-25 International Business Machines Corporation Determine and project holographic object path and object movement with multi-device collaboration
CN110471525A (en) * 2019-08-05 2019-11-19 薄涛 Projected image processing method, device, equipment and its storage medium
CN112822468B (en) * 2020-12-31 2023-02-17 成都极米科技股份有限公司 Projection control method and device, projection equipment and laser controller

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102169367A (en) * 2010-02-24 2011-08-31 英特尔公司 Interactive projected displays
JP2011250206A (en) * 2010-05-27 2011-12-08 Kyocera Corp Portable electronic apparatus and image projection unit
CN103259996A (en) * 2012-02-21 2013-08-21 佳能株式会社 Display system, display apparatus, and method for controlling display system

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US8514234B2 (en) * 2010-07-14 2013-08-20 Seiko Epson Corporation Method of displaying an operating system's graphical user interface on a large multi-projector display

Similar Documents

Publication Publication Date Title
CN104980722B (en) A kind of data processing method, device and electronic equipment
US11678004B2 (en) Recording remote expert sessions
US9195345B2 (en) Position aware gestures with visual feedback as input method
US8984406B2 (en) Method and system for annotating video content
CN111178191B (en) Information playing method and device, computer readable storage medium and electronic equipment
US20130057642A1 (en) Video conferencing system, method, and computer program storage device
JP6754968B2 (en) A computer-readable storage medium that stores a video playback method, video playback device, and video playback program.
US20130055143A1 (en) Method for manipulating a graphical user interface and interactive input system employing the same
US20150185825A1 (en) Assigning a virtual user interface to a physical object
JP2001125738A (en) Presentation control system and method
US11528535B2 (en) Video file playing method and apparatus, and storage medium
US20210192751A1 (en) Device and method for generating image
US20200117908A1 (en) Methods, systems, and media for detecting two-dimensional videos placed on a sphere in abusive spherical video content by tiling the sphere
US20130249788A1 (en) Information processing apparatus, computer program product, and projection system
CN109561240A (en) System and method for generating media asset
US20190155465A1 (en) Augmented media
US11758217B2 (en) Integrating overlaid digital content into displayed data via graphics processing circuitry
US20140229823A1 (en) Display apparatus and control method thereof
JP6677160B2 (en) Information processing apparatus, information processing system, information processing method and program
CN115516867B (en) Method and system for reducing latency on collaboration platforms
WO2022231703A1 (en) Integrating overlaid digital content into displayed data via processing circuitry using a computing memory and an operating system memory
US20170351415A1 (en) System and interfaces for an interactive system
US11682101B2 (en) Overlaying displayed digital content transmitted over a communication network via graphics processing circuitry using a frame buffer
US20230326108A1 (en) Overlaying displayed digital content transmitted over a communication network via processing circuitry using a frame buffer
US20240098213A1 (en) Modifying digital content transmitted to devices in real time via processing circuitry

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
GR01 Patent grant