CN106843790B - Information display system and method - Google Patents

Information display system and method

Info

Publication number
CN106843790B
CN106843790B (application CN201710061210.5A; also published as CN106843790A)
Authority
CN
China
Prior art keywords
information
scene
dimensional
server
display
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201710061210.5A
Other languages
Chinese (zh)
Other versions
CN106843790A (en)
Inventor
侯朝阳
肖洪波
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Senscape Technologies Beijing Co ltd
Original Assignee
Senscape Technologies Beijing Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Senscape Technologies Beijing Co ltd filed Critical Senscape Technologies Beijing Co ltd
Priority to CN201710061210.5A priority Critical patent/CN106843790B/en
Publication of CN106843790A publication Critical patent/CN106843790A/en
Application granted granted Critical
Publication of CN106843790B publication Critical patent/CN106843790B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical


Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00 Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/14 Digital output to display device; Cooperation and interconnection of the display device with other functional units
    • G06F3/1454 Digital output to display device; Cooperation and interconnection of the display device with other functional units involving copying of the display data of a local workstation or window to a remote workstation or window so that an actual copy of the data is displayed simultaneously on two or more displays, e.g. teledisplay

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Human Computer Interaction (AREA)
  • Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Processing Or Creating Images (AREA)
  • User Interface Of Digital Computer (AREA)

Abstract

The invention provides an information display system and method. The system comprises: a camera device, configured to acquire first scene information of the scene in which it is located and transmit the first scene information to a server; a smart device, configured to display three-dimensional virtual display information, acquire second scene information of the scene in which it is located, and transmit the second scene information and the three-dimensional virtual display information to the server; the server, configured to process the three-dimensional virtual display information according to the received first and second scene information to obtain three-dimensional scene display information and transmit it to a display device; and the display device, configured to display the three-dimensional scene display information. The invention also provides a corresponding information display method.

Description

Information display system and method
Technical Field
The invention relates to an information display system and method.
Background
At present, traditional information display technology serves teaching, exhibitions, product launches, lectures, and other occasions requiring information presentation. It mainly combines designed elements such as graphics, text, and animation, outputs them to a display device, and presents them to the audience via projection or a display screen. This display mode, however, is one-directional and lacks spatial interaction.
Therefore, a system is needed that combines technologies such as augmented reality and holographic display to provide spatial interaction and present information more effectively.
Disclosure of Invention
In view of the above, it is an object of the present invention to provide an information presentation system and method that seek to solve, or at least alleviate, the above-identified problems.
In a first aspect, an embodiment of the present invention provides an information display system, comprising a camera device, a display device, a smart device, and a server, wherein:
the camera device is configured to acquire first scene information of the scene in which it is located and transmit the first scene information to the server;
the smart device is configured to display three-dimensional virtual display information, acquire second scene information of the scene in which it is located, and transmit the second scene information and the three-dimensional virtual display information to the server;
the server is configured to process the three-dimensional virtual display information according to the received first scene information and second scene information to obtain three-dimensional scene display information, and to transmit the three-dimensional scene display information to the display device;
and the display device is configured to display the three-dimensional scene display information.
Optionally, in the system according to the present invention, the server is further configured to:
set the coordinate system of the received first scene information transmitted by the camera device as a target coordinate system.
Optionally, in the system according to the present invention, the server is further configured to:
after receiving the three-dimensional virtual display information, convert its coordinate system into the target coordinate system to obtain three-dimensional target virtual display information;
and process the first scene information and the three-dimensional target virtual display information to obtain the three-dimensional scene display information.
Optionally, in the system according to the present invention, the server is further configured to:
calculate conversion information between the coordinate system of the second scene information and the target coordinate system using an iterative closest point algorithm, according to the first scene information and the second scene information;
and convert the three-dimensional virtual display information according to the conversion information to obtain the three-dimensional target virtual display information.
Optionally, in the system according to the present invention,
the smart device is further configured to acquire external operation information and transmit the operation information to the server;
and the server is further configured to adjust the display position of the three-dimensional scene display information in the display device according to the three-dimensional scene display information and the operation information.
In a second aspect, an embodiment of the present invention provides an information display method employing the information display system described above, the method comprising:
acquiring, by the camera device, first scene information of the scene in which the camera device is located, and transmitting the first scene information to the server;
displaying, by the smart device, three-dimensional virtual display information, acquiring second scene information of the scene in which the smart device is located, and transmitting the second scene information and the three-dimensional virtual display information to the server;
processing, by the server, the three-dimensional virtual display information according to the received first scene information and second scene information to obtain three-dimensional scene display information, and transmitting the three-dimensional scene display information to the display device;
and displaying, by the display device, the three-dimensional scene display information.
Optionally, in the method according to the present invention, the processing, by the server, of the three-dimensional virtual display information according to the received first scene information and second scene information comprises:
calibrating, by the server, the coordinate system of the received first scene information transmitted by the camera device as a target coordinate system.
Optionally, in the method according to the present invention, the processing further comprises:
after the server receives the three-dimensional virtual display information, converting its coordinate system into the target coordinate system to obtain three-dimensional target virtual display information;
and processing, by the server, the first scene information and the three-dimensional target virtual display information to obtain the three-dimensional scene display information.
Optionally, in the method according to the present invention, the converting, by the server, of the coordinate system of the three-dimensional virtual display information into the target coordinate system to obtain the three-dimensional target virtual display information comprises:
calculating, by the server, conversion information between the coordinate system of the second scene information and the target coordinate system using an iterative closest point algorithm, according to the first scene information and the second scene information;
and converting, by the server, the three-dimensional virtual display information according to the conversion information to obtain the three-dimensional target virtual display information.
Optionally, the method according to the present invention further comprises:
acquiring external operation information through the smart device, and transmitting the operation information to the server;
and adjusting, by the server, the display position of the three-dimensional scene display information in the display device according to the three-dimensional scene display information and the operation information.
According to the above technical scheme, the camera device and the smart device acquire scene information of the same scene from different angles, and this information is processed together with the three-dimensional virtual display information preset in the smart device to obtain the three-dimensional scene display information. This widens the user's viewing angle, improves the accuracy of the three-dimensional scene display information, and makes the presentation more engaging.
In order to make the aforementioned and other objects, features and advantages of the present invention comprehensible, preferred embodiments accompanied with figures are described in detail below.
Drawings
In order to more clearly illustrate the technical solutions of the embodiments of the present invention, the drawings needed to be used in the embodiments will be briefly described below, it should be understood that the following drawings only illustrate some embodiments of the present invention and therefore should not be considered as limiting the scope, and for those skilled in the art, other related drawings can be obtained according to the drawings without inventive efforts.
FIG. 1 is a block diagram illustrating an information presentation system provided by an embodiment of the present invention;
FIG. 2 is a schematic diagram illustrating a depth camera capture field of view provided by an embodiment of the invention;
FIG. 3 is a schematic diagram illustrating an information presentation scenario in a smart device according to an embodiment of the present invention;
FIG. 4 shows a flowchart of an information presentation method according to an embodiment of the present invention.
Detailed Description
In order to make the objects, technical solutions and advantages of the embodiments of the present invention clearer, the technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the drawings in the embodiments of the present invention, and it is obvious that the described embodiments are only a part of the embodiments of the present invention, and not all of the embodiments. The components of embodiments of the present invention generally described and illustrated in the figures herein may be arranged and designed in a wide variety of different configurations. Thus, the following detailed description of the embodiments of the present invention, presented in the figures, is not intended to limit the scope of the invention, as claimed, but is merely representative of selected embodiments of the invention. All other embodiments, which can be derived by a person skilled in the art from the embodiments of the present invention without making any creative effort, shall fall within the protection scope of the present invention.
FIG. 1 shows a block diagram of an information presentation system according to one embodiment of the present invention. As shown in fig. 1, the system includes: the camera device 110, the display device 120, the smart device 130, and the server 140.
The camera device 110 is configured to acquire first scene information of the scene in which it is located and transmit it to the server 140. The first scene information generally includes image information of the first scene, depth information, and data output by an inertial measurement unit (IMU) in the camera device (IMU data). The first scene generally refers to the scene within the camera device's field of view; the IMU data generally includes the device's attitude angle, acceleration, velocity, and the like; the depth information is three-dimensional spatial coordinate information obtained through a three-dimensional reconstruction function, and it captures the spatial contour of the scene. For example, a picture by itself provides only two dimensions; adding depth along the direction perpendicular to the picture yields three-dimensional information, and a large number of such three-dimensional coordinate points, viewed macroscopically, form point cloud information that reflects the spatial contour.
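For illustration only (this sketch is not part of the patent text), depth information of this kind can be turned into a point cloud by back-projecting a depth image through the pinhole camera model; the intrinsics fx, fy, cx, cy are assumed calibration parameters of the camera device:

```python
import numpy as np

def depth_to_point_cloud(depth, fx, fy, cx, cy):
    """Back-project a depth image (in meters) into an N x 3 point cloud.

    fx, fy, cx, cy are the camera intrinsics, assumed known from calibration.
    """
    h, w = depth.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))
    z = depth
    x = (u - cx) * z / fx   # pixel column -> metric X
    y = (v - cy) * z / fy   # pixel row    -> metric Y
    pts = np.stack([x, y, z], axis=-1).reshape(-1, 3)
    return pts[pts[:, 2] > 0]  # drop pixels with no depth reading
```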
In one embodiment, at a new product launch event, because the field of view of depth cameras such as HoloLens or Kinect is limited (refer to FIG. 2), the venue may be equipped with multiple depth cameras, for example one at each corner, so as to capture the entire venue.
The smart device 130 includes a display screen, a depth camera, a photometric sensor, an environment-sensing camera, a 2-megapixel camera, a storage unit, and the like. The storage unit stores three-dimensional virtual display information in advance, and the depth camera, photometric sensor, and the like detect and capture the wearer's gestures. The smart device 130 can display three-dimensional virtual display information, such as a virtual football and pitch, in a virtual scene, acquire second scene information of the scene in which it is located, and transmit the second scene information and the three-dimensional virtual display information to the server 140. The second scene information includes image information, depth information, and IMU data of the second scene; the second scene generally refers to the scene within the smart device's field of view, and the first and second scene information are generally information about the same scene acquired from different angles. The IMU data likewise includes the device's attitude angle, acceleration, velocity, and the like. The smart device 130 may be, but is not limited to, HoloLens holographic glasses or another wearable or portable device that has three-dimensional reconstruction and tracking functions and can display virtual information.
HoloLens holographic glasses are special glasses with a CPU, a GPU, and a holographic processing unit. Through images and sound, a wearer can enter an augmented-reality world indoors and have a holographic experience carried by the surrounding environment: HoloLens can overlay various virtual information on the actual surroundings, letting the wearer view the surface of Mars, a model of the human heart, or even famous scenic spots without leaving the room. For example, after a lecturer puts on the HoloLens holographic glasses, teaching content such as a human-body model or a water-molecule model can be presented holographically, and the wearer can move, zoom, and rotate the content to give students a detailed explanation.
The server 140 is configured to process the three-dimensional virtual display information according to the received first and second scene information to obtain three-dimensional scene display information, and to transmit it to the display device 120, which displays it. The display device may be, but is not limited to, a projector, a display screen, an LED screen, or any other screen with a display function.
In one embodiment, the server 140 may determine the coordinate system of the received second scene information transmitted by the smart device as the target coordinate system, or the coordinate system of the received first scene information transmitted by the camera device 110 as the target coordinate system, or may directly use the world coordinate system as the target coordinate system. The target coordinate system serves to unify all data received by the server into the same coordinate system. It should be understood that these choices of target coordinate system are merely illustrative; any coordinate system that can unify the data into one coordinate system falls within the scope of the present invention.
The three-dimensional scene display information to be displayed can be obtained in any one of the following ways.
When the server 140 specifies the coordinate system of the first scene information as the target coordinate system, the server 140 converts the coordinate system of the three-dimensional virtual display information into the target coordinate system after receiving the three-dimensional virtual display information, so as to obtain the three-dimensional target virtual display information. And processing the first scene information and the three-dimensional target virtual display information to obtain three-dimensional scene display information.
When the server 140 specifies the coordinate system of the second scene information as the target coordinate system, the server 140 converts the coordinate system of the first scene information into the target coordinate system after receiving the first scene information to obtain the first target scene information, and converts the coordinate system of the three-dimensional virtual display information into the target coordinate system after receiving the three-dimensional virtual display information to obtain the three-dimensional target virtual display information. And processing the first target scene information and the three-dimensional target virtual display information to obtain three-dimensional scene display information.
When the server converts the coordinate system of the acquired first scene information, second scene information, or three-dimensional virtual display information into the target coordinate system, the same algorithm is used in each case. The following description takes setting the coordinate system of the first scene information as the target coordinate system as an example.
In one embodiment, the server 140 calculates transformation information between the coordinate system of the second scene information and the target coordinate system using an iterative closest point algorithm based on the first scene information and the second scene information. The server 140 obtains a rotation angle between the coordinate system of the first scene information and the coordinate system of the second scene information according to the IMU data in the first scene information and the IMU data in the second scene information, and uses the rotation angle as a rotation initial value. The server 140 randomly selects a translation distance within a reasonable range as an initial value of translation between the coordinate system of the first scene information and the coordinate system of the second scene information. The server 140 calculates conversion information between the coordinate system of the second scene information and the target coordinate system by using an iterative closest point algorithm in combination with the rotation initial value and the translation initial value, and if an iteration result is smaller than a preset error threshold, the obtained conversion information is valid. The server 140 performs conversion processing on the three-dimensional virtual display information according to the obtained conversion information, so as to obtain three-dimensional target virtual display information.
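As a minimal sketch of how such an initial value might be assembled (assuming roll/pitch/yaw attitude angles from each device's IMU; the function name and parameters are illustrative, not taken from the patent):

```python
import numpy as np
from scipy.spatial.transform import Rotation as R

def icp_initial_guess(rpy_camera, rpy_smart, max_translation=5.0, rng=None):
    """Build a 4x4 initial transform for ICP: the rotation comes from the
    difference between the two devices' IMU attitude angles (radians), and the
    translation is drawn at random within a plausible range, as described above."""
    rng = rng or np.random.default_rng()
    r_cam = R.from_euler("xyz", rpy_camera)
    r_smart = R.from_euler("xyz", rpy_smart)
    T0 = np.eye(4)
    T0[:3, :3] = (r_cam * r_smart.inv()).as_matrix()  # rotation initial value
    T0[:3, 3] = rng.uniform(-max_translation, max_translation, size=3)
    return T0
```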
In one embodiment, the camera device is responsible for acquiring the first scene information, including a picture and depth information of the first scene. The camera device has the capabilities of a smart or computing device, for example three-dimensional reconstruction and tracking, and a high-definition camera can be mounted on it to capture the scene picture; the high-definition camera transmits the shot high-definition picture to the camera device.
The user places three-dimensional virtual information in the smart device in advance. At the same time, the smart device acquires the second scene information, and the three-dimensional virtual information has position and orientation data in the coordinate system of the second scene information. The three-dimensional virtual information and the second scene information are sent to the server, which calculates conversion information from the first and second scene information, and then converts the three-dimensional virtual information into the coordinate system of the first scene information according to the conversion information, obtaining the three-dimensional virtual display information, i.e. the position and orientation data of the three-dimensional virtual information in the coordinate system of the first scene information. The server renders the three-dimensional virtual display information into a two-dimensional picture and composites it with the scene picture shot by the high-definition camera (i.e. the first scene information), for example using the picture of the first scene as the background and superimposing the rendered virtual information on it, to obtain the final picture, namely the three-dimensional scene display information. This is then output to a display device for display or to a projection device for projection.
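A simplified sketch of the final compositing step (assumptions: the virtual content is a colored point set, T is the 4x4 transform into the first scene's coordinate system, and K is the high-definition camera's 3x3 intrinsic matrix; a real renderer would rasterize full models rather than splat points):

```python
import numpy as np

def composite_frame(background, virtual_pts, colors, T, K):
    """Project virtual 3-D points into the high-definition camera image and
    overlay them on the background picture of the first scene."""
    pts_h = np.c_[virtual_pts, np.ones(len(virtual_pts))]
    cam = (T @ pts_h.T).T[:, :3]                   # into the first scene's frame
    front = cam[:, 2] > 0                          # keep points in front of the camera
    uvw = (K @ cam[front].T).T
    uv = (uvw[:, :2] / uvw[:, 2:3]).astype(int)    # perspective division
    out = background.copy()
    h, w = out.shape[:2]
    ok = (uv[:, 0] >= 0) & (uv[:, 0] < w) & (uv[:, 1] >= 0) & (uv[:, 1] < h)
    out[uv[ok, 1], uv[ok, 0]] = colors[front][ok]  # point-splat overlay
    return out
```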
The detailed computation of the ICP algorithm is available in the prior art; it is briefly described below.

Assume the input data are a first scene point cloud data set A = {a_i} and a second scene point cloud data set B = {b_i}. The conversion relationship between A and B is T, a matrix containing the rotation angle and the translation distance. At the start of iteration, T is preset to an initial value T_0: the rotation angle in the initial value is obtained as described above, while the translation distance can only be chosen at random within a reasonable range; the two together form the initial value.

Step 1: preset a maximum distance value D_max.

Step 2: check whether the maximum number of iterations has been reached; if so, end the iteration, otherwise start the next iteration from Step 3. Then check for convergence, and if converged, stop; the convergence criterion is that f(T) in Step 4 is close to its minimum.

Step 3: traverse the points in set B. For each point b_i in B, convert it once with the current T to obtain the converted point x_i = T·b_i, then find the point m_i in set A closest to x_i and check whether the distance between x_i and m_i is smaller than the preset maximum distance D_max. If it is, the point is given weight w_i = 1; otherwise w_i = 0. At the end of the traversal, a set of m_i and a set of w_i are obtained.

Step 4: let f(T) = Σ_i w_i·‖T·b_i − m_i‖². The goal is to find a matrix T that minimizes f(T). Here, the gradient descent method is used to obtain a new matrix T′ that reduces f(T); Step 2 is then performed with this new matrix T′.

After the iteration ends: if the maximum number of iterations was reached, no suitable matrix T was found; if the iteration ended because of convergence, check whether f(T) is smaller than a preset threshold. If it is, the obtained matrix T is considered valid; otherwise T is invalid.

Once a suitable matrix T has been obtained through multiple iterations, the conversion relationship between A and B is determined, and so is the conversion information between the two scenes.
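A compact runnable sketch of the four steps above, with one substitution flagged: instead of the gradient descent named in Step 4, this version uses the closed-form SVD (Kabsch) solution for the per-iteration minimizer of f(T), which is the more common choice in practice:

```python
import numpy as np
from scipy.spatial import cKDTree

def icp(A, B, T0, d_max=0.5, max_iters=50, tol=1e-6):
    """Weighted point-to-point ICP per Steps 1-4: gate correspondences by
    d_max (weights w_i in {0, 1}), then minimize f(T) = sum w_i*||T*b_i - m_i||^2."""
    tree = cKDTree(A)                        # nearest-neighbor search over set A
    T, prev = T0.copy(), np.inf
    for _ in range(max_iters):               # Step 2: iteration / convergence control
        X = (T[:3, :3] @ B.T).T + T[:3, 3]   # Step 3: x_i = T * b_i
        dist, idx = tree.query(X)            # closest m_i for every x_i
        w = dist < d_max                     # Steps 1 and 3: distance gate, w_i in {0,1}
        if w.sum() < 3:
            break                            # too few correspondences to solve
        src, dst = B[w], A[idx[w]]
        # Step 4 (closed form instead of gradient descent): Kabsch alignment
        mu_s, mu_d = src.mean(0), dst.mean(0)
        U, _, Vt = np.linalg.svd((src - mu_s).T @ (dst - mu_d))
        if np.linalg.det(Vt.T @ U.T) < 0:    # guard against reflections
            Vt[-1] *= -1
        Rm = Vt.T @ U.T
        T = np.eye(4)
        T[:3, :3], T[:3, 3] = Rm, mu_d - Rm @ mu_s
        err = float((dist[w] ** 2).mean())   # value of f(T), normalized by point count
        if abs(prev - err) < tol:            # converged: f close to its minimum
            return T, err
        prev = err
    return T, prev                           # caller compares err to the preset threshold
```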
It should be understood that converting the coordinate system of the second scene information or of the three-dimensional virtual display information follows the same process as above and is not repeated here. The ICP algorithm itself is prior art and is not described in further detail.
In one embodiment, the smart device 130 may also acquire external operation information, such as gesture operations or voice commands, for example through its depth camera, and transmit the operation information to the server 140. The server 140 adjusts the display position of the three-dimensional scene display information in the display device according to the three-dimensional scene display information and the operation information. The operation information may be gesture or voice information from the user wearing the smart device, or operations performed with a keyboard, a wearable device, a handle, or similar equipment. For example, when the smart device is a pair of HoloLens holographic glasses, the wearer can view holographic content such as charts and pictures on the display screen in the glasses; when a chart or picture needs to be adjusted, the wearer can enlarge, move, or delete it with gestures, or adjust it by voice, for example "please enlarge the chart twofold", and the corresponding content shown on the display device is adjusted accordingly (for this scenario, refer to FIG. 3), thereby implementing interaction between the user and the virtual information.
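As a hedged illustration of how a recognized operation might be mapped onto the displayed content (the operation schema — "scale", "move", "delete" — is hypothetical, not defined by the patent):

```python
import numpy as np

def apply_operation(model_matrix, op):
    """Apply a recognized gesture/voice operation to the 4x4 model matrix of
    the content shown on the display device; returns None when deleted."""
    T = model_matrix.copy()
    if op["type"] == "scale":        # e.g. "please enlarge the chart twofold" -> 2.0
        T[:3, :3] *= op["factor"]
    elif op["type"] == "move":       # translate by a 3-vector offset
        T[:3, 3] += np.asarray(op["offset"], dtype=float)
    elif op["type"] == "delete":
        return None
    return T
```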
It should be noted that the first scene information and the second scene information described above are scene information of the same scene, acquired from different angles. The number of camera devices may be set according to actual conditions; the present invention imposes no limitation on it.
The three-dimensional reconstruction function described in the present invention has various implementations, such as image-based or Time-of-Flight (TOF) based, all aimed at establishing spatial three-dimensional point cloud data so that, for example, a depth camera can sense the surrounding space. The tracking function refers to tracking the user's movement and rotation: when a user wearing a device with three-dimensional reconstruction and tracking functions walks, the tracking technology accurately tracks the movement distance and rotation angle and synchronously adjusts the display position and angle of the virtual three-dimensional information, so that the virtual content appears fixed in space and does not drift as the user walks.
According to the above technical scheme, the camera device and the smart device acquire scene information of the same scene from different angles, and this information is processed together with the three-dimensional virtual display information preset in the smart device to obtain the three-dimensional scene display information, which widens the user's viewing angle, improves the accuracy of the three-dimensional scene display information, and makes the presentation more engaging. In addition, the smart device can recognize the user's gestures, voice, and other information, enabling real-time interaction between the user, the surrounding environment, and the virtual information, which makes the solution more user-friendly.
FIG. 4 is a flowchart illustrating an information presentation method according to an embodiment of the present invention. As shown in FIG. 4, the method begins at step S410.
In step S410, first scene information of a scene in which the image capturing apparatus is located is acquired by the image capturing apparatus, and the first scene information is transmitted to the server.
In step S420, the three-dimensional virtual display information is displayed by the smart device, the second scene information of the scene where the smart device is located is obtained, and the second scene information and the three-dimensional virtual display information are transmitted to the server.
In step S430, the server processes the three-dimensional virtual display information according to the received first scene information and the second scene information, so as to obtain three-dimensional scene display information, and transmits the three-dimensional scene display information to the display device.
In one embodiment, the server determines the coordinate system of the received first scene information transmitted by the camera device as the target coordinate system. After receiving the three-dimensional virtual display information, the server converts its coordinate system into the target coordinate system to obtain three-dimensional target virtual display information, and then processes the first scene information and the three-dimensional target virtual display information to obtain the three-dimensional scene display information. The server calculates the conversion information between the coordinate system of the second scene information and the target coordinate system using an iterative closest point algorithm, according to the first scene information and the second scene information, and converts the three-dimensional virtual display information according to the conversion information to obtain the three-dimensional target virtual display information.
The manner of calibrating the target coordinate system has been described in detail above and is not repeated here.
In step S440, three-dimensional scene representation information is displayed by the display device.
In one embodiment, the smart device acquires external operation information and transmits it to the server, and the server adjusts the display position of the three-dimensional scene display information in the display device according to the three-dimensional scene display information and the operation information.
The information presentation system provided by the embodiment of the present invention may be specific hardware on a device, or software or firmware installed on a device. The system provided by the embodiment of the present invention has the same implementation principle and technical effect as the method embodiments; for brevity, where the system embodiments are silent, reference may be made to the corresponding contents of the method embodiments. It is clear to those skilled in the art that, for convenience and brevity of description, the specific working processes of the systems, apparatuses, and units described above may refer to the corresponding processes in the method embodiments and are not repeated here.
In the embodiments provided in the present invention, it should be understood that the disclosed apparatus and method may be implemented in other ways. The above-described embodiments of the apparatus are merely illustrative, and for example, the division of the units is only one logical division, and there may be other divisions when actually implemented, and for example, a plurality of units or components may be combined or integrated into another system, or some features may be omitted, or not executed. In addition, the shown or discussed mutual coupling or direct coupling or communication connection may be an indirect coupling or communication connection of devices or units through some communication interfaces, and may be in an electrical, mechanical or other form.
The units described as separate parts may or may not be physically separate, and parts displayed as units may or may not be physical units, may be located in one place, or may be distributed on a plurality of network units. Some or all of the units can be selected according to actual needs to achieve the purpose of the solution of the embodiment.
In addition, functional units in the embodiments provided by the present invention may be integrated into one processing unit, or each unit may exist alone physically, or two or more units are integrated into one unit.
The functions, if implemented in the form of software functional units and sold or used as a stand-alone product, may be stored in a computer readable storage medium. Based on such understanding, the technical solution of the present invention may be embodied in the form of a software product, which is stored in a storage medium and includes instructions for causing a computer device (which may be a personal computer, a server, or a network device) to execute all or part of the steps of the method according to the embodiments of the present invention. And the aforementioned storage medium includes: a U-disk, a removable hard disk, a Read-Only Memory (ROM), a Random Access Memory (RAM), a magnetic disk or an optical disk, and other various media capable of storing program codes.
It should be noted that: like reference numbers and letters refer to like items in the following figures, and thus once an item is defined in one figure, it need not be further defined and explained in subsequent figures, and moreover, the terms "first", "second", "third", etc. are used merely to distinguish one description from another and are not to be construed as indicating or implying relative importance.
Finally, it should be noted that the above-mentioned embodiments are only specific embodiments of the present invention, used to illustrate its technical solutions rather than limit them, and the protection scope of the present invention is not limited thereto. Although the present invention has been described in detail with reference to the foregoing embodiments, those skilled in the art should understand that anyone familiar with the art may still modify or easily conceive of changes to the technical solutions described in the foregoing embodiments, or make equivalent substitutions for some of their technical features, within the technical scope of the present disclosure; such modifications, changes, or substitutions do not depart from the spirit and scope of the technical solutions of the embodiments and are intended to be covered by the protection scope of the present invention. Therefore, the protection scope of the present invention shall be subject to the protection scope of the claims.

Claims (10)

1. An information display system, the system comprising: a camera device, a display device, a smart device, and a server, wherein
the camera device is configured to acquire first scene information of the scene in which it is located and transmit the first scene information to the server, the first scene information comprising image information, depth information, and attitude information output by an inertial measurement unit in the camera device;
the smart device is configured to display three-dimensional virtual display information, acquire second scene information of the scene in which it is located, and transmit the second scene information and the three-dimensional virtual display information to the server, the second scene information comprising image information, depth information, and attitude information output by an inertial measurement unit in the smart device;
the server is configured to process the three-dimensional virtual display information according to the received first scene information and second scene information to obtain three-dimensional scene display information, and to transmit the three-dimensional scene display information to the display device;
and the display device is configured to display the three-dimensional scene display information.
2. The system of claim 1, wherein the server is further configured to:
set the coordinate system of the received first scene information transmitted by the camera device as a target coordinate system.
3. The system of claim 2, wherein the server is further configured to:
after receiving the three-dimensional virtual display information, convert its coordinate system into the target coordinate system to obtain three-dimensional target virtual display information;
and process the first scene information and the three-dimensional target virtual display information to obtain the three-dimensional scene display information.
4. The system of claim 3, wherein the server is further configured to:
calculate conversion information between the coordinate system of the second scene information and the target coordinate system using an iterative closest point algorithm, according to the first scene information and the second scene information;
and convert the three-dimensional virtual display information according to the conversion information to obtain the three-dimensional target virtual display information.
5. The system of claim 1, wherein
the smart device is further configured to acquire external operation information and transmit the operation information to the server;
and the server is further configured to adjust the display position of the three-dimensional scene display information in the display device according to the three-dimensional scene display information and the operation information.
6. An information display method using the information display system according to any one of claims 1 to 5, the method comprising:
acquiring, by the camera device, first scene information of the scene in which the camera device is located, and transmitting the first scene information to the server, the first scene information comprising image information, depth information, and attitude information output by an inertial measurement unit in the camera device;
displaying, by the smart device, three-dimensional virtual display information, acquiring second scene information of the scene in which the smart device is located, and transmitting the second scene information and the three-dimensional virtual display information to the server, the second scene information comprising image information, depth information, and attitude information output by an inertial measurement unit in the smart device;
processing, by the server, the three-dimensional virtual display information according to the received first scene information and second scene information to obtain three-dimensional scene display information, and transmitting the three-dimensional scene display information to the display device;
and displaying, by the display device, the three-dimensional scene display information.
7. The method of claim 6, wherein the processing, by the server, of the three-dimensional virtual display information according to the received first scene information and second scene information comprises:
calibrating, by the server, the coordinate system of the received first scene information transmitted by the camera device as a target coordinate system.
8. The method of claim 7, wherein the processing, by the server, of the three-dimensional virtual display information according to the received first scene information and second scene information further comprises:
after the server receives the three-dimensional virtual display information, converting its coordinate system into the target coordinate system to obtain three-dimensional target virtual display information;
and processing, by the server, the first scene information and the three-dimensional target virtual display information to obtain the three-dimensional scene display information.
9. The method of claim 8, wherein the converting, by the server, of the coordinate system of the three-dimensional virtual display information into the target coordinate system to obtain the three-dimensional target virtual display information comprises:
calculating, by the server, conversion information between the coordinate system of the second scene information and the target coordinate system using an iterative closest point algorithm, according to the first scene information and the second scene information;
and converting, by the server, the three-dimensional virtual display information according to the conversion information to obtain the three-dimensional target virtual display information.
10. The method of claim 6, further comprising:
acquiring external operation information through the smart device, and transmitting the operation information to the server;
and adjusting, by the server, the display position of the three-dimensional scene display information in the display device according to the three-dimensional scene display information and the operation information.
CN201710061210.5A 2017-01-25 2017-01-25 Information display system and method Active CN106843790B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201710061210.5A CN106843790B (en) 2017-01-25 2017-01-25 Information display system and method

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201710061210.5A CN106843790B (en) 2017-01-25 2017-01-25 Information display system and method

Publications (2)

Publication Number Publication Date
CN106843790A CN106843790A (en) 2017-06-13
CN106843790B true CN106843790B (en) 2020-08-04

Family

ID=59121858

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201710061210.5A Active CN106843790B (en) 2017-01-25 2017-01-25 Information display system and method

Country Status (1)

Country Link
CN (1) CN106843790B (en)

Families Citing this family (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107393446A (en) * 2017-09-14 2017-11-24 广州米文化传媒有限公司 Laser projection interactive exhibition system and methods of exhibiting
CN108255291B (en) * 2017-12-05 2021-09-10 腾讯科技(深圳)有限公司 Virtual scene data transmission method and device, storage medium and electronic device
CN108537878B (en) * 2018-03-26 2020-04-21 Oppo广东移动通信有限公司 Environment model generation method and device, storage medium and electronic equipment
CN112579029A (en) * 2020-12-11 2021-03-30 上海影创信息科技有限公司 Display control method and system of VR glasses
CN114928619A (en) * 2022-04-29 2022-08-19 厦门图扑软件科技有限公司 Information synchronization method and device, electronic equipment and storage medium

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102157011A (en) * 2010-12-10 2011-08-17 北京大学 Method for carrying out dynamic texture acquisition and virtuality-reality fusion by using mobile shooting equipment
CN103700128A (en) * 2013-12-30 2014-04-02 无锡触角科技有限公司 Mobile equipment and enhanced display method thereof
CN104134229A (en) * 2014-08-08 2014-11-05 李成 Real-time interaction reality augmenting system and method
CN104143212A (en) * 2014-07-02 2014-11-12 惠州Tcl移动通信有限公司 Reality augmenting method and system based on wearable device
US20150356770A1 (en) * 2013-03-04 2015-12-10 Tencent Technology (Shenzhen) Company Limited Street view map display method and system
CN106097435A (en) * 2016-06-07 2016-11-09 北京圣威特科技有限公司 A kind of augmented reality camera system and method


Also Published As

Publication number Publication date
CN106843790A (en) 2017-06-13

Similar Documents

Publication Publication Date Title
CN106843790B (en) Information display system and method
US9613463B2 (en) Augmented reality extrapolation techniques
US8878846B1 (en) Superimposing virtual views of 3D objects with live images
JP6258953B2 (en) Fast initialization for monocular visual SLAM
JP6340017B2 (en) An imaging system that synthesizes a subject and a three-dimensional virtual space in real time
CN102938844B (en) Three-dimensional imaging is utilized to generate free viewpoint video
WO2018214697A1 (en) Graphics processing method, processor, and virtual reality system
JP7073481B2 (en) Image display system
GB2481366A (en) 3D interactive display and pointer control
US9955120B2 (en) Multiuser telepresence interaction
JP7353782B2 (en) Information processing device, information processing method, and program
US11302023B2 (en) Planar surface detection
US11710273B2 (en) Image processing
KR102148103B1 (en) Method and apparatus for generating mixed reality environment using a drone equipped with a stereo camera
US11847735B2 (en) Information processing apparatus, information processing method, and recording medium
Pietroszek et al. Volumetric capture for narrative films
US10582190B2 (en) Virtual training system
US20240070973A1 (en) Augmented reality wall with combined viewer and camera tracking
US10902554B2 (en) Method and system for providing at least a portion of content having six degrees of freedom motion
US11128836B2 (en) Multi-camera display
JP2023171298A (en) Adaptation of space and content for augmented reality and composite reality
WO2019241712A1 (en) Augmented reality wall with combined viewer and camera tracking
KR101741149B1 (en) Method and device for controlling a virtual camera's orientation
TWI794512B (en) System and apparatus for augmented reality and method for enabling filming using a real-time display
WO2023195301A1 (en) Display control device, display control method, and display control program

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant