CN107452067B - Demonstration method and device based on augmented reality and terminal equipment - Google Patents
Demonstration method and device based on augmented reality and terminal equipment
- Publication number
- CN107452067B (application CN201710633384.4A)
- Authority
- CN
- China
- Prior art keywords
- demonstration
- augmented reality
- reality model
- picture
- model
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Active
Images
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T19/00—Manipulating 3D models or images for computer graphics
- G06T19/006—Mixed reality
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F3/00—Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
- G06F3/01—Input arrangements or combined input and output arrangements for interaction between user and computer
- G06F3/011—Arrangements for interaction with the human body, e.g. for user immersion in virtual reality
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06Q—INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES; SYSTEMS OR METHODS SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES, NOT OTHERWISE PROVIDED FOR
- G06Q50/00—Information and communication technology [ICT] specially adapted for implementation of business processes of specific business sectors, e.g. utilities or tourism
- G06Q50/10—Services
- G06Q50/20—Education
Landscapes
- Engineering & Computer Science (AREA)
- Theoretical Computer Science (AREA)
- Business, Economics & Management (AREA)
- General Engineering & Computer Science (AREA)
- Physics & Mathematics (AREA)
- General Physics & Mathematics (AREA)
- Tourism & Hospitality (AREA)
- Health & Medical Sciences (AREA)
- Marketing (AREA)
- Educational Administration (AREA)
- Human Computer Interaction (AREA)
- Economics (AREA)
- General Health & Medical Sciences (AREA)
- Human Resources & Organizations (AREA)
- Educational Technology (AREA)
- Primary Health Care (AREA)
- Strategic Management (AREA)
- General Business, Economics & Management (AREA)
- Computer Graphics (AREA)
- Computer Hardware Design (AREA)
- Software Systems (AREA)
- Processing Or Creating Images (AREA)
Abstract
The invention is applicable to the technical field of augmented reality and provides a demonstration method, a demonstration device and terminal equipment based on augmented reality. The method comprises the following steps: determining the current demonstration surface of the demonstration real object while the demonstrator is presenting, and searching for the augmented reality model corresponding to the current demonstration surface; extracting the augmented reality model corresponding to the current demonstration surface and superimposing it onto the original video to form a target video; and performing video picture segmentation on the target video superimposed with the augmented reality model, thereby realizing a picture effect that combines the virtual and the real. By the method, the audience can watch the demonstration clearly, and the demonstration effect is improved.
Description
Technical Field
The invention belongs to the technical field of augmented reality, and particularly relates to a demonstration method and device based on augmented reality and terminal equipment.
Background
Augmented Reality (AR) has in recent years been a research hotspot at many well-known universities and research institutes. The core of AR technology is to fuse virtual content with real content in real time, forming interaction between the virtual and the real and thereby creating a new experience. At present, AR technology is applied in more and more cases in industries such as education and medical treatment. By combining a demonstration with AR technology, the characteristics of the target can be shown more comprehensively and vividly.
In current teaching demonstration, most presenters still use traditional projection, blackboards, whiteboards, video teaching materials and other demonstration modes. Although these modes are diverse, when the presenter wants to demonstrate an object held in the hand, only the audience at the front of the room can see it properly; people far from the presenter, or those outside the classroom watching through video, can see the demonstration object but not clearly, so the demonstration effect is poor.
Disclosure of Invention
In view of this, embodiments of the present invention provide a demonstration method, an apparatus and a terminal device based on augmented reality, so as to solve the problem in the prior art that, because the demonstration object is small, the audience cannot see it clearly and the demonstration effect is poor.
The embodiment of the invention provides a demonstration method based on augmented reality in a first aspect, which comprises the following steps:
determining a current demonstration surface for demonstrating a real object when a demonstrator demonstrates, and searching an augmented reality model corresponding to the current demonstration surface according to the current demonstration surface;
extracting an augmented reality model corresponding to the current demonstration surface, and overlaying the augmented reality model into an original video to form a target video;
and carrying out video picture segmentation on the target video superimposed with the augmented reality model, thereby realizing a picture effect that combines the virtual and the real.
Preferably, based on the first aspect of the embodiments of the present invention, in a first possible implementation manner, the determining a current demonstration surface of the demonstration real object when a demonstrator performs demonstration, and searching an augmented reality model corresponding to the current demonstration surface according to the current demonstration surface, specifically includes:
determining a mark of a current demonstration surface of the demonstration object;
and searching an augmented reality model corresponding to the mark of the current demonstration surface of the demonstration real object in the augmented reality file of the demonstration real object which is uploaded to a recording and broadcasting system in advance.
Preferably, based on the first aspect of the embodiment of the present invention, in a second possible implementation manner, the extracting an augmented reality model corresponding to the current presentation surface, and superimposing the augmented reality model onto an original video to form a target video specifically includes:
acquiring a video frame with an original background image of the demonstration object in an original video;
and superposing the augmented reality model on the position of the demonstration object on the video frame to form a target video.
Preferably, based on the first aspect of the embodiment of the present invention, in a third possible implementation manner, the performing video picture segmentation on the target video superimposed with the augmented reality model so as to achieve a picture effect of virtual-real combination specifically includes:
dividing a target video picture superposed with the augmented reality model into a first split screen picture and a second split screen picture, wherein the first split screen picture and the second split screen picture are displayed in parallel;
the first split screen picture displays a target video overlaid with the augmented reality model, and the second split screen picture only displays the augmented reality model.
Preferably, based on the first aspect of the embodiment of the present invention, in a fourth possible implementation manner, the target video frame on which the augmented reality model is superimposed is divided into a first display frame and a second display frame, where the second display frame is in the first display frame;
the first display picture displays a target video overlaid with the augmented reality model, and the second display picture displays only the augmented reality model, or the second display picture displays the target video overlaid with the augmented reality model, and the first display picture displays only the augmented reality model.
A second aspect of the present invention provides an augmented reality-based presentation apparatus comprising:
the model determining unit is used for determining a current demonstration surface for demonstrating a real object when a demonstrator demonstrates the real object, and searching an augmented reality model corresponding to the current demonstration surface according to the current demonstration surface;
the model superposition unit is used for extracting an augmented reality model corresponding to the current demonstration surface and superposing the augmented reality model to an original video to form a target video;
and the picture segmentation unit is used for carrying out video picture segmentation on the target video superposed with the augmented reality model so as to realize a picture effect of combining virtuality and reality.
Preferably, based on the second aspect of the embodiment of the present invention, in a first possible implementation manner, the model determining unit specifically includes:
the mark determining module is used for determining the mark of the current demonstration surface of the demonstration object;
and the model searching module is used for searching the augmented reality model corresponding to the mark of the current demonstration surface of the demonstration object in the augmented reality file of the demonstration object which is uploaded to the recording and broadcasting system in advance.
Preferably, based on the second aspect of the embodiment of the present invention, in a second possible implementation manner, the model superimposing unit specifically includes:
the video frame acquisition module is used for acquiring a video frame with an original background image of the demonstration object in an original video;
and the model superposition module is used for superposing the augmented reality model on the position of the demonstration object on the video frame to form a target video.
Preferably, based on the second aspect of the embodiment of the present invention, in a third possible implementation manner, the picture segmentation unit specifically includes:
the first division module is used for dividing a target video picture superposed with the augmented reality model into a first split screen picture and a second split screen picture, the first split screen picture and the second split screen picture are displayed in parallel, the first split screen picture displays a target video superposed with the augmented reality model, and the second split screen picture only displays the augmented reality model.
Preferably, based on the second aspect of the embodiment of the present invention, in a fourth possible implementation manner, the picture segmentation unit specifically includes:
the second segmentation module is configured to segment a target video frame on which the augmented reality model is superimposed into a first display frame and a second display frame, where the second display frame is in the first display frame, the first display frame displays a target video on which the augmented reality model is superimposed, and the second display frame only displays the augmented reality model, or the second display frame displays a target video on which the augmented reality model is superimposed, and the first display frame only displays the augmented reality model.
A third aspect of the present invention provides a terminal device, including: a memory, a processor and a computer program stored in the memory and executable on the processor, the processor implementing the steps of the demonstration method as described above when executing the computer program.
A fourth aspect of the invention provides a computer-readable storage medium having stored thereon a computer program which, when executed by a processor, carries out the steps of the demonstration method as described above.
Compared with the prior art, the embodiment of the invention has the following beneficial effects: the embodiment of the invention determines the current demonstration surface of the demonstration object when a demonstrator performs demonstration, searches the augmented reality model corresponding to the current demonstration surface according to the current demonstration surface, further extracts the augmented reality model corresponding to the current demonstration surface, superimposes the augmented reality model on the original video to form the target video, and finally performs video picture segmentation on the target video superimposed with the augmented reality model, thereby realizing the picture effect of virtual-real combination, increasing interactivity, facilitating clear watching of audiences and improving the demonstration effect.
Drawings
In order to more clearly illustrate the technical solutions in the embodiments of the present invention, the drawings needed in the description of the embodiments or the prior art are briefly introduced below. Obviously, the drawings in the following description are only some embodiments of the present invention, and other drawings can be obtained by those skilled in the art based on these drawings without creative effort.
Fig. 1 is a flowchart illustrating an implementation of an augmented reality-based presentation method according to an embodiment of the present invention;
fig. 1A is a flowchart of an implementation of another augmented reality-based presentation method provided in an embodiment of the present invention;
fig. 1B is a flowchart of an implementation of another augmented reality-based presentation method according to an embodiment of the present invention;
fig. 2 is a block diagram illustrating an augmented reality-based presentation apparatus according to an embodiment of the present invention;
fig. 3 is a schematic diagram of a terminal device according to an embodiment of the present invention.
Detailed Description
In the following description, for purposes of explanation and not limitation, specific details are set forth, such as particular system structures, techniques, etc. in order to provide a thorough understanding of the embodiments of the invention. It will be apparent, however, to one skilled in the art that the present invention may be practiced in other embodiments that depart from these specific details. In other instances, detailed descriptions of well-known systems, devices, circuits, and methods are omitted so as not to obscure the description of the present invention with unnecessary detail.
In order to solve the problem that the demonstration object is small and the audience cannot see it clearly, the embodiment of the invention provides a demonstration method, a demonstration device and terminal equipment based on augmented reality. The method mainly comprises: determining the current demonstration surface of the demonstration real object when a demonstrator performs demonstration, and searching for the augmented reality model corresponding to the current demonstration surface; extracting the augmented reality model corresponding to the current demonstration surface, and superimposing the augmented reality model onto the original video to form a target video; and carrying out video picture segmentation on the target video superimposed with the augmented reality model, thereby realizing a picture effect that combines the virtual and the real.
The scheme of the invention not only can conveniently replace the augmented reality model, but also can control the state of the augmented reality model in the divided video picture, thereby meeting the requirements of various demonstrations, facilitating the watching of users and simultaneously improving the demonstration effect.
In order to explain the technical means of the present invention, the following description will be given by way of specific examples.
Example one:
Fig. 1 shows a flowchart of an augmented reality-based presentation method according to an embodiment of the present invention, which is detailed as follows:
step S101, determining a current demonstration surface for demonstrating a real object when a demonstrator demonstrates, and searching an augmented reality model corresponding to the current demonstration surface according to the current demonstration surface.
The demonstration real object is a polyhedron, and each surface of the demonstration real object corresponds to an augmented reality model respectively.
Specifically, in order to accurately find the augmented reality model corresponding to the current presentation surface of the presentation object, the step S101 specifically includes:
and A1, determining the mark of the current demonstration surface of the demonstration real object.
A2, searching for an augmented reality model corresponding to the mark of the current demonstration surface of the demonstration object in the augmented reality file of the demonstration object which is uploaded to a recording and broadcasting system in advance.
Specifically, before the demonstration, the augmented reality file of the demonstration real object is uploaded to the recording and broadcasting system in advance. Each face of the demonstration real object has a mark, and each augmented reality model in the augmented reality file also has a corresponding mark; the augmented reality model corresponding to a face of the demonstration real object is found in the augmented reality file by determining the mark of the current demonstration face. Because different faces of the demonstration real object correspond to different augmented reality models, when the demonstrator rotates the demonstration real object to a different face, the augmented reality model changes synchronously.
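As an illustration only, the lookup in steps A1 and A2 can be sketched in Python, assuming the augmented reality file reduces to a simple mapping from face marks to model resources; the load_ar_file and find_model_for_face helpers, the JSON layout and the marker names are hypothetical and not taken from the disclosed recording and broadcasting system:

```python
import json

def load_ar_file(path):
    """Load a hypothetical augmented reality file that maps each face
    mark of the demonstration real object to a model resource."""
    with open(path, "r", encoding="utf-8") as f:
        return json.load(f)  # assumed shape: {"marker_id": "model_path", ...}

def find_model_for_face(ar_file, current_face_marker):
    """Return the augmented reality model registered for the mark of the
    currently presented face (step A2), or None if no model matches."""
    return ar_file.get(current_face_marker)

if __name__ == "__main__":
    ar_file = {"face_1": "engine_block.glb", "face_2": "gearbox.glb"}
    print(find_model_for_face(ar_file, "face_1"))  # -> engine_block.glb
```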
Further, in the embodiment of the invention, the presenter uses a general model as the presentation object; the general model can stand in for different presentation objects in different presentation scenes, so the presentation method suits different scenes. For example, at a product launch the general model may stand for a mobile phone, while in a teaching classroom it may stand for an automobile model. Different demonstration objects correspond to different augmented reality files, that is, the augmented reality files differ across demonstration scenes. Each demonstration scene has a unique scene number that identifies it; the augmented reality file corresponding to a demonstration scene carries the same scene number, and demonstration scenes correspond to augmented reality files one to one. At this time, as shown in fig. 1A, the step A2 specifically includes:
and A21, acquiring the scene number of the demonstration scene.
A22, searching for an augmented reality file corresponding to the scene number of the demonstration scene from the augmented reality file of the demonstration real object which is uploaded to a recording and broadcasting system in advance.
And A23, searching the augmented reality file corresponding to the scene number of the demonstration scene for the augmented reality model corresponding to the mark of the current demonstration surface.
In the embodiment of the invention, the universal model is used as the demonstration object, the universal model can be any demonstration object in different scenes, the corresponding augmented reality file can be searched through the scene number of the demonstration scene, and the corresponding augmented reality model in the corresponding augmented reality file is determined by determining the mark of the current demonstration surface of the demonstration object when a demonstrator demonstrates the object, so that the augmented reality effect can be accurately realized.
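A minimal sketch of the two-level lookup in steps A21 to A23, assuming the recording and broadcasting system exposes the uploaded augmented reality files as a dictionary keyed by scene number; all names and data shapes below are assumptions for illustration:

```python
def find_model(uploaded_ar_files, scene_number, face_marker):
    """Two-level lookup: first select the augmented reality file whose scene
    number matches the current demonstration scene (step A22), then select
    the model registered for the current face mark (step A23)."""
    ar_file = uploaded_ar_files.get(scene_number)
    if ar_file is None:
        return None
    return ar_file.get(face_marker)

if __name__ == "__main__":
    uploads = {
        "scene_001": {"face_1": "phone_exploded_view.glb"},
        "scene_002": {"face_1": "car_engine.glb"},
    }
    print(find_model(uploads, "scene_002", "face_1"))  # -> car_engine.glb
```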
And S102, extracting an augmented reality model corresponding to the current demonstration surface, and overlaying the augmented reality model into an original video to form a target video.
Wherein the original video comprises live video or recorded video. When the original video is a live video, in step S101, a current demonstration surface of a demonstration object is determined in real time when a demonstrator performs demonstration, and an augmented reality model corresponding to the current demonstration surface is searched for in real time according to the current demonstration surface.
In the embodiment of the present invention, the augmented reality model corresponding to the current demonstration surface found in step S101 is extracted, and the augmented reality model is superimposed at the position of the demonstration object in the original video. Further, the size of the augmented reality model superimposed at the position of the demonstration object in the original video is adjustable.
Further, in the embodiment of the present invention, to accurately superimpose the augmented reality model, as shown in fig. 1B, the step S102 specifically includes:
and B1, acquiring a video frame with the original background image of the demonstration real object in the original video.
And B2, overlaying the augmented reality model on the position of the demonstration object on the video frame to form a target video.
Specifically, a video frame with an original background image of the demonstration real object in the original video is collected, and an augmented reality model corresponding to the demonstration real object in the video frame is enlarged or reduced and then is superimposed on the position of the demonstration real object in the video frame, so that a target video is formed.
Furthermore, in order to clearly show the rotation and other operations the demonstrator performs on the demonstration object during the demonstration, and to control the augmented reality playing effect, in the embodiment of the invention each surface of the demonstration object is provided with a plurality of markers of the same size. The markers may be stereo markers or plane markers, and on the same demonstration surface they are arranged from top to bottom in numbered order, so that the inclination angle of the current demonstration surface can be determined from the apparent sizes of its markers. For example, when the current demonstration surface is tilted forward, the marker near the top of the surface appears larger in the image than the marker near the bottom, and the tilt angle of the current demonstration surface is determined from this size difference. The display angle of the augmented reality model superimposed in the target video is then adjusted according to the inclination angle of the current demonstration surface, thereby better presenting the demonstration effect.
Further, in the embodiment of the present invention, in different demonstration scenes, the inclination angle of the demonstration surface corresponds to different display contents, and the display angle of the augmented reality model superimposed in the target video is determined according to the inclination angle of the current demonstration surface and the scene number.
Optionally, in the embodiment of the present invention, multiple types of markers are set on each surface of the real object to be demonstrated, and a display angle of the augmented reality model superimposed in the target video is determined according to the types of the markers and the inclination angle of the current demonstration surface.
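As a rough illustration of how marker sizes might drive the display angle, the sketch below maps the apparent-size ratio of the top and bottom markers to a tilt angle. The arccos mapping is a simplified pinhole-camera assumption and the function estimate_tilt_degrees is hypothetical; the embodiment does not give a formula:

```python
import math

def estimate_tilt_degrees(top_marker_px, bottom_marker_px):
    """Heuristic (assumed): markers of equal physical size appear equally
    large when the face is upright; when the face tilts toward the camera
    the upper marker appears larger, so the size ratio hints at the angle."""
    ratio = min(top_marker_px, bottom_marker_px) / max(top_marker_px, bottom_marker_px)
    tilt = math.degrees(math.acos(ratio))
    # Sign convention: positive means tilted toward the camera (top marker larger).
    return tilt if top_marker_px >= bottom_marker_px else -tilt

if __name__ == "__main__":
    print(round(estimate_tilt_degrees(48.0, 40.0), 1))  # forward tilt, ~33.6 degrees
```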
And S103, carrying out video picture segmentation on the target video superimposed with the augmented reality model, thereby realizing a picture effect that combines the virtual and the real.
Specifically, in the embodiment of the present invention, when the demonstration object of the presenter is small and its details are emphasized during the demonstration, the target video on which the augmented reality model is superimposed is subjected to video picture segmentation in order to help the audience see clearly: the picture containing the demonstration object with the superimposed augmented reality model is split out, so as to realize the picture effect of virtual-real combination.
Optionally, the S103 specifically includes:
and dividing the target video picture superposed with the augmented reality model into a first split screen picture and a second split screen picture, wherein the first split screen picture and the second split screen picture are displayed in parallel.
The first split screen picture displays a target video overlaid with the augmented reality model, and the second split screen picture only displays the augmented reality model.
In the embodiment of the present invention, because the demonstration object itself may be small, the model superimposed in the video may be too small to be seen clearly. The video picture of the target video superimposed with the augmented reality model is therefore divided into a first split screen picture and a second split screen picture, whose ratio is adjustable, for example a 5:5 split or a 3:7 split between the first split screen picture and the second split screen picture. The first split screen picture displays the target video superimposed with the augmented reality model, the second split screen picture displays only the augmented reality model, and the split screen ratio may be selected and set by the viewer, which is not limited herein. In the first split screen picture and the second split screen picture after splitting, the displays of the demonstration object superimposed with the augmented reality model, such as rotation and translation, are synchronized. Highlighting the demonstration object superimposed with the augmented reality model through the split picture increases interactivity and makes it easy for the audience to watch clearly, thereby improving the demonstration effect.
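A minimal sketch of the parallel split-screen composition, assuming both pictures are available as frames of equal height; the split_screen function and its left_ratio parameter are illustrative stand-ins for the viewer-selectable split ratio (0.5 for a 5:5 split, 0.3 for a 3:7 split):

```python
import numpy as np
import cv2

def split_screen(target_video_frame, model_only_frame, left_ratio=0.5):
    """Compose the first split screen picture (target video with the superimposed
    model) and the second split screen picture (model only) side by side."""
    h, w = target_video_frame.shape[:2]
    left_w = int(w * left_ratio)
    right_w = w - left_w
    left = cv2.resize(target_video_frame, (left_w, h))
    right = cv2.resize(model_only_frame, (right_w, h))
    return np.hstack([left, right])

if __name__ == "__main__":
    target = np.zeros((480, 640, 3), np.uint8)           # target video frame
    model_only = np.full((480, 640, 3), 128, np.uint8)   # model-only rendering
    print(split_screen(target, model_only, left_ratio=0.3).shape)  # (480, 640, 3)
```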
Optionally, the step S103 includes:
and dividing the target video picture superposed with the augmented reality model into a first display picture and a second display picture, wherein the second display picture is in the first display picture.
The first display picture displays a target video overlaid with the augmented reality model, and the second display picture displays only the augmented reality model, or the second display picture displays the target video overlaid with the augmented reality model, and the first display picture displays only the augmented reality model.
In the embodiment of the present invention, the target video picture overlaid with the augmented reality model is divided into a picture-in-picture form: the second display picture lies inside the first display picture, that is, the second display picture is smaller than the first. The two display pictures show different contents, one displaying the target video overlaid with the augmented reality model and the other displaying only the augmented reality model; which of the two the first display picture shows can be switched by a switching instruction, which is not limited herein. In the first display picture and the second display picture after division, the displays of the demonstration object overlaid with the augmented reality model, such as rotation and translation, are synchronized. Highlighting the demonstration object overlaid with the augmented reality model through the divided picture increases interactivity and lets the audience watch clearly, thereby improving the demonstration effect.
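A minimal sketch of the picture-in-picture division, assuming the second display picture is scaled down and placed in a corner of the first; the corner position, scale and margin are illustrative choices, and swapping the two inputs corresponds to the switching instruction mentioned above:

```python
import numpy as np
import cv2

def picture_in_picture(first_frame, second_frame, scale=0.3, margin=16):
    """Place the second display picture inside the first display picture
    (bottom-right corner here; the embodiment does not fix a position)."""
    h, w = first_frame.shape[:2]
    small_w, small_h = int(w * scale), int(h * scale)
    small = cv2.resize(second_frame, (small_w, small_h))
    out = first_frame.copy()
    y, x = h - small_h - margin, w - small_w - margin
    out[y:y + small_h, x:x + small_w] = small
    return out

if __name__ == "__main__":
    target = np.zeros((480, 640, 3), np.uint8)            # target video with model
    model_only = np.full((480, 640, 3), 200, np.uint8)    # model-only rendering
    print(picture_in_picture(target, model_only).shape)   # (480, 640, 3)
```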
In the first embodiment of the present invention, the mark of the current demonstration surface of the demonstration real object is determined, and the augmented reality model corresponding to that mark is searched for in the augmented reality file of the demonstration real object uploaded to the recording and broadcasting system in advance. The augmented reality model corresponding to the current demonstration surface is then extracted and superimposed onto the original video to form a target video. Finally, the target video superimposed with the augmented reality model undergoes video picture segmentation: the target video picture is split either into a first split screen picture and a second split screen picture displayed in parallel, or into a first display picture and a second display picture with the second display picture inside the first. The virtual-real combined picture effect is thus realized, augmented reality increases interactivity, and the audience can watch clearly and conveniently, thereby improving the demonstration effect.
It should be understood that, the sequence numbers of the steps in the foregoing embodiments do not imply an execution sequence, and the execution sequence of each process should be determined by its function and inherent logic, and should not constitute any limitation to the implementation process of the embodiments of the present invention.
Example two:
Corresponding to the augmented reality-based presentation method described in the foregoing embodiment, fig. 2 shows a block diagram of an augmented reality-based presentation apparatus provided by an embodiment of the present invention. The apparatus is applicable to a smart terminal, which may include user equipment such as a mobile phone (also referred to as a "cellular" phone) or a computer having a mobile terminal, communicating with one or more core networks via a radio access network (RAN). For convenience of explanation, only the portions related to the embodiments of the present invention are shown.
Referring to fig. 2, the augmented reality-based presentation apparatus includes: a model determining unit, a model superposing unit and a picture dividing unit, wherein:
the model determining unit is used for determining a current demonstration surface for demonstrating a real object when a demonstrator demonstrates the real object, and searching an augmented reality model corresponding to the current demonstration surface according to the current demonstration surface;
the model superposition unit is used for extracting an augmented reality model corresponding to the current demonstration surface and superposing the augmented reality model to an original video to form a target video;
and the picture segmentation unit is used for carrying out video picture segmentation on the target video superposed with the augmented reality model so as to realize a picture effect of combining virtuality and reality.
Further, the model determining unit specifically includes:
the mark determining module is used for determining the mark of the current demonstration surface of the demonstration object;
and the model searching module is used for searching the augmented reality model corresponding to the mark of the current demonstration surface of the demonstration object in the augmented reality file of the demonstration object which is uploaded to the recording and broadcasting system in advance.
Further, the model superimposing unit specifically includes:
the video frame acquisition module is used for acquiring a video frame with an original background image of the demonstration object in an original video;
and the model superposition module is used for superposing the augmented reality model on the position of the demonstration object on the video frame to form a target video.
Optionally, the picture segmentation unit specifically includes:
the first division module is used for dividing a target video picture superposed with the augmented reality model into a first split screen picture and a second split screen picture, the first split screen picture and the second split screen picture are displayed in parallel, the first split screen picture displays a target video superposed with the augmented reality model, and the second split screen picture only displays the augmented reality model.
Optionally, the picture segmentation unit specifically includes:
the second segmentation module is configured to segment a target video frame on which the augmented reality model is superimposed into a first display frame and a second display frame, where the second display frame is in the first display frame, the first display frame displays a target video on which the augmented reality model is superimposed, and the second display frame only displays the augmented reality model, or the second display frame displays a target video on which the augmented reality model is superimposed, and the first display frame only displays the augmented reality model.
In the second embodiment of the invention, the current demonstration surface of the demonstration object is determined when the demonstrator performs demonstration, the augmented reality model corresponding to the current demonstration surface is searched according to the current demonstration surface, the augmented reality model corresponding to the current demonstration surface is further extracted, the augmented reality model is superposed into the original video to form the target video, and finally the target video superposed with the augmented reality model is subjected to video picture segmentation, so that the picture effect of virtual-real combination is realized, the interactivity is increased through augmented reality, and meanwhile, the audience is facilitated to clearly watch the augmented reality model, so that the demonstration effect is improved.
Example three:
fig. 3 is a schematic diagram of a terminal device according to an embodiment of the present invention. As shown in fig. 3, the terminal device 3 of this embodiment includes: a processor 30, a memory 31 and a computer program 32, such as an augmented reality based presentation program, stored in said memory 31 and executable on said processor 30. The processor 30, when executing the computer program 32, implements the steps in the various embodiments of the augmented reality based presentation method described above, such as the steps 101 to 103 shown in fig. 1. Alternatively, the processor 30, when executing the computer program 32, implements the functions of the modules/units in the above-mentioned device embodiments, such as the functions of the units 21 to 23 shown in fig. 2.
Illustratively, the computer program 32 may be partitioned into one or more modules/units that are stored in the memory 31 and executed by the processor 30 to implement the present invention. The one or more modules/units may be a series of computer program instruction segments capable of performing specific functions, which are used to describe the execution process of the computer program 32 in the terminal device 3. For example, the computer program 32 may be divided into a model determination unit, a model superimposition unit, and a screen division unit, and each unit specifically functions as follows:
the model determining unit is used for determining a current demonstration surface for demonstrating a real object when a demonstrator demonstrates the real object, and searching an augmented reality model corresponding to the current demonstration surface according to the current demonstration surface;
the model superposition unit is used for extracting an augmented reality model corresponding to the current demonstration surface and superposing the augmented reality model to an original video to form a target video;
and the picture segmentation unit is used for carrying out video picture segmentation on the target video superposed with the augmented reality model so as to realize a picture effect of combining virtuality and reality.
The terminal device 3 may be a desktop computer, a notebook, a palm computer, a cloud server, or other computing devices. The terminal device 3 may include, but is not limited to, a processor 30 and a memory 31. It will be understood by those skilled in the art that fig. 3 is only an example of the terminal device 3, and does not constitute a limitation to the terminal device 3, and may include more or less components than those shown, or combine some components, or different components, for example, the terminal device may also include an input-output device, a network access device, a bus, etc.
The Processor 30 may be a Central Processing Unit (CPU), another general purpose processor, a Digital Signal Processor (DSP), an Application Specific Integrated Circuit (ASIC), a Field-Programmable Gate Array (FPGA) or other programmable logic device, a discrete gate or transistor logic device, discrete hardware components, etc. A general purpose processor may be a microprocessor, or the processor may be any conventional processor or the like.
The memory 31 may be an internal storage unit of the terminal device 3, such as a hard disk or a memory of the terminal device 3. The memory 31 may also be an external storage device of the terminal device 3, such as a plug-in hard disk, a Smart Media Card (SMC), a Secure Digital (SD) Card, a Flash memory Card (Flash Card), and the like, which are provided on the terminal device 3. Further, the memory 31 may also include both an internal storage unit and an external storage device of the terminal device 3. The memory 31 is used for storing the computer program and other programs and data required by the terminal device. The memory 31 may also be used to temporarily store data that has been output or is to be output.
It will be apparent to those skilled in the art that, for convenience and brevity of description, only the above-mentioned division of the functional units and modules is illustrated, and in practical applications, the above-mentioned function distribution may be performed by different functional units and modules according to needs, that is, the internal structure of the apparatus is divided into different functional units or modules to perform all or part of the above-mentioned functions. Each functional unit and module in the embodiments may be integrated in one processing unit, or each unit may exist alone physically, or two or more units are integrated in one unit, and the integrated unit may be implemented in a form of hardware, or in a form of software functional unit. In addition, specific names of the functional units and modules are only for convenience of distinguishing from each other, and are not used for limiting the protection scope of the present application. The specific working processes of the units and modules in the system may refer to the corresponding processes in the foregoing method embodiments, and are not described herein again.
In the above embodiments, the descriptions of the respective embodiments have respective emphasis, and reference may be made to the related descriptions of other embodiments for parts that are not described or illustrated in a certain embodiment.
Those of ordinary skill in the art will appreciate that the various illustrative elements and algorithm steps described in connection with the embodiments disclosed herein may be implemented as electronic hardware or combinations of computer software and electronic hardware. Whether such functionality is implemented as hardware or software depends upon the particular application and design constraints imposed on the implementation. Skilled artisans may implement the described functionality in varying ways for each particular application, but such implementation decisions should not be interpreted as causing a departure from the scope of the present invention.
In the embodiments provided in the present invention, it should be understood that the disclosed apparatus and method may be implemented in other ways. For example, the above-described system embodiments are merely illustrative, and for example, the division of the modules or units is only one logical division, and there may be other divisions when actually implemented, for example, a plurality of units or components may be combined or may be integrated into another system, or some features may be omitted, or not executed. In addition, the shown or discussed mutual coupling or direct coupling or communication connection may be an indirect coupling or communication connection through some interfaces, devices or units, and may be in an electrical, mechanical or other form.
The units described as separate parts may or may not be physically separate, and parts displayed as units may or may not be physical units, may be located in one place, or may be distributed on a plurality of network units. Some or all of the units can be selected according to actual needs to achieve the purpose of the solution of the embodiment.
In addition, functional units in the embodiments of the present invention may be integrated into one processing unit, or each unit may exist alone physically, or two or more units are integrated into one unit. The integrated unit can be realized in a form of hardware, and can also be realized in a form of a software functional unit.
The integrated unit, if implemented in the form of a software functional unit and sold or used as a stand-alone product, may be stored in a computer readable storage medium. Based on such understanding, all or part of the flow of the method according to the embodiments of the present invention may also be implemented by a computer program, which may be stored in a computer-readable storage medium, and when the computer program is executed by a processor, the steps of the method embodiments may be implemented. The computer program comprises computer program code, which may be in the form of source code, object code, an executable file or some intermediate form. The computer-readable medium may include: any entity or device capable of carrying the computer program code, a recording medium, a USB disk, a removable hard disk, a magnetic disk, an optical disk, a computer memory, a Read-Only Memory (ROM), a Random Access Memory (RAM), electrical carrier signals, telecommunications signals, a software distribution medium, and the like. It should be noted that the content contained in the computer readable medium may be appropriately increased or decreased as required by legislation and patent practice in different jurisdictions; for example, in some jurisdictions, computer readable media do not include electrical carrier signals and telecommunications signals.
The above-mentioned embodiments are only used for illustrating the technical solutions of the present invention, and not for limiting the same; although the present invention has been described in detail with reference to the foregoing embodiments, it will be understood by those of ordinary skill in the art that: the technical solutions described in the foregoing embodiments may still be modified, or some technical features may be equivalently replaced; such modifications and substitutions do not substantially depart from the spirit and scope of the embodiments of the present invention, and are intended to be included within the scope of the present invention.
Claims (10)
1. An augmented reality-based presentation method, comprising:
determining a current demonstration surface of a demonstration object when a demonstrator performs demonstration, and searching an augmented reality model corresponding to the current demonstration surface according to the current demonstration surface, wherein the demonstration object is a polyhedron, each surface of the demonstration object is provided with multiple types of markers, specifically, a universal model is used as the demonstration object, and the universal model can replace different demonstration objects in different demonstration scenes;
extracting an augmented reality model corresponding to the current demonstration surface, and overlaying the augmented reality model into an original video to form a target video; determining a display angle of an augmented reality model superposed in a target video according to the type of the marker and the inclination angle of the current demonstration surface; specifically, the markers are stereo markers or plane markers, the markers are arranged on the same demonstration surface of the demonstration real object from top to bottom according to the number, the inclination angle of the current demonstration surface is determined according to the size of the marker of the current demonstration surface in the image, and then the display angle of the augmented reality model superposed in the target video is adjusted according to the inclination angle of the current demonstration surface;
and performing video picture segmentation on the target video superposed with the augmented reality model, specifically, segmenting the picture superposed with the demonstration real object of the augmented reality model, thereby realizing the picture effect of virtual and real combination.
2. The demonstration method according to claim 1, wherein the determining of the current demonstration surface of the demonstration object when the demonstrator performs the demonstration, and searching for the augmented reality model corresponding to the current demonstration surface according to the current demonstration surface specifically comprises:
determining a mark of a current demonstration surface of the demonstration object;
and searching an augmented reality model corresponding to the mark of the current demonstration surface of the demonstration real object in the augmented reality file of the demonstration real object which is uploaded to a recording and broadcasting system in advance.
3. The presentation method according to claim 1, wherein the extracting an augmented reality model corresponding to the current presentation surface and superimposing the augmented reality model onto an original video to form a target video specifically comprises:
acquiring a video frame with an original background image of the demonstration object in an original video;
and superposing the augmented reality model at the position of the demonstration object in the video frame to form a target video.
4. The presentation method according to claim 1, wherein the video picture segmentation is performed on the target video on which the augmented reality model is superimposed, so as to achieve a picture effect of virtual-real combination, and specifically comprises:
dividing a target video picture superposed with the augmented reality model into a first split screen picture and a second split screen picture, wherein the first split screen picture and the second split screen picture are displayed in parallel, the first split screen picture displays the target video superposed with the augmented reality model, and the second split screen picture only displays the augmented reality model.
5. The presentation method according to claim 1, wherein the video picture segmentation is performed on the target video on which the augmented reality model is superimposed, so as to achieve a picture effect of virtual-real combination, and specifically comprises:
dividing a target video picture overlaid with the augmented reality model into a first display picture and a second display picture, wherein the second display picture is in the first display picture, the first display picture displays the target video overlaid with the augmented reality model, the second display picture only displays the augmented reality model, or the second display picture displays the target video overlaid with the augmented reality model, and the first display picture only displays the augmented reality model.
6. An augmented reality-based presentation device, the presentation device comprising:
the model determining unit is used for determining a current demonstration surface of a demonstration object when a demonstrator performs demonstration, and searching an augmented reality model corresponding to the current demonstration surface according to the current demonstration surface, wherein the demonstration object is a polyhedron, each surface of the demonstration object is provided with multiple types of markers, specifically, a universal model is used as the demonstration object, and the universal model can replace different demonstration objects in different demonstration scenes;
the model superposition unit is used for extracting an augmented reality model corresponding to the current demonstration surface and superposing the augmented reality model to an original video to form a target video; determining a display angle of an augmented reality model superposed in a target video according to the type of the marker and the inclination angle of the current demonstration surface; specifically, the markers are stereo markers or plane markers, the markers are arranged on the same demonstration surface of the demonstration real object from top to bottom according to the number, the inclination angle of the current demonstration surface is determined according to the size of the marker of the current demonstration surface in the image, and then the display angle of the augmented reality model superposed in the target video is adjusted according to the inclination angle of the current demonstration surface;
and the picture segmentation unit is used for carrying out video picture segmentation on the target video superposed with the augmented reality model, specifically, segmenting the picture superposed with the demonstration real object of the augmented reality model, thereby realizing the picture effect of virtual and real combination.
7. The presentation device as claimed in claim 6, wherein said model determination unit comprises in particular:
the mark determining module is used for determining the mark of the current demonstration surface of the demonstration object;
and the model searching module is used for searching the augmented reality model corresponding to the mark of the current demonstration surface of the demonstration object in the augmented reality file of the demonstration object which is uploaded to the recording and broadcasting system in advance.
8. The presentation device as claimed in claim 6, wherein said model superimposing unit comprises in particular:
the video frame acquisition module is used for acquiring a video frame with an original background image of the demonstration object in an original video;
and the model superposition module is used for superposing the augmented reality model on the position of the demonstration object on the video frame to form a target video.
9. A terminal device comprising a memory, a processor and a computer program stored in said memory and executable on said processor, characterized in that said processor realizes the steps of the presentation method according to any one of claims 1 to 5 when executing said computer program.
10. A computer-readable storage medium, in which a computer program is stored which, when being executed by a processor, carries out the steps of the presentation method according to any one of claims 1 to 5.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201710633384.4A CN107452067B (en) | 2017-07-28 | 2017-07-28 | Demonstration method and device based on augmented reality and terminal equipment |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201710633384.4A CN107452067B (en) | 2017-07-28 | 2017-07-28 | Demonstration method and device based on augmented reality and terminal equipment |
Publications (2)
Publication Number | Publication Date |
---|---|
CN107452067A CN107452067A (en) | 2017-12-08 |
CN107452067B true CN107452067B (en) | 2021-02-05 |
Family
ID=60489645
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201710633384.4A Active CN107452067B (en) | 2017-07-28 | 2017-07-28 | Demonstration method and device based on augmented reality and terminal equipment |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN107452067B (en) |
Families Citing this family (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
TWI744536B (en) * | 2018-06-19 | 2021-11-01 | 宏正自動科技股份有限公司 | Live streaming system and method for live streaming |
CN114257875B (en) * | 2021-12-16 | 2024-04-09 | 广州博冠信息科技有限公司 | Data transmission method, device, electronic equipment and storage medium |
Citations (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN102893327A (en) * | 2010-03-19 | 2013-01-23 | 数字标记公司 | Intuitive computing methods and systems |
CN103488711A (en) * | 2013-09-09 | 2014-01-01 | 北京大学 | Method and system for fast making vector font library |
JP2014160226A (en) * | 2013-01-24 | 2014-09-04 | Panasonic Corp | Imaging apparatus |
Family Cites Families (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US10702216B2 (en) * | 2010-06-08 | 2020-07-07 | Styku, LLC | Method and system for body scanning and display of biometric data |
WO2016164355A1 (en) * | 2015-04-06 | 2016-10-13 | Scope Technologies Us Inc. | Method and apparatus for sharing augmented reality applications to multiple clients |
CN105405168A (en) * | 2015-11-19 | 2016-03-16 | 青岛黑晶信息技术有限公司 | Method and apparatus for implementing three-dimensional augmented reality |
CN106873768B (en) * | 2016-12-30 | 2020-05-05 | 中兴通讯股份有限公司 | Augmented reality method, device and system |
- 2017-07-28 CN CN201710633384.4A patent/CN107452067B/en active Active
Patent Citations (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN102893327A (en) * | 2010-03-19 | 2013-01-23 | 数字标记公司 | Intuitive computing methods and systems |
JP2014160226A (en) * | 2013-01-24 | 2014-09-04 | Panasonic Corp | Imaging apparatus |
CN103488711A (en) * | 2013-09-09 | 2014-01-01 | 北京大学 | Method and system for fast making vector font library |
Also Published As
Publication number | Publication date |
---|---|
CN107452067A (en) | 2017-12-08 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
US10499035B2 (en) | Method and system of displaying a popping-screen | |
WO2017193576A1 (en) | Video resolution adaptation method and apparatus, and virtual reality terminal | |
CN112135158B (en) | Live broadcasting method based on mixed reality and related equipment | |
US20140368495A1 (en) | Method and system for displaying multi-viewpoint images and non-transitory computer readable storage medium thereof | |
TWI556639B (en) | Techniques for adding interactive features to videos | |
EP3913924B1 (en) | 360-degree panoramic video playing method, apparatus, and system | |
CN105447898A (en) | Method and device for displaying 2D application interface in virtual real device | |
CN107493228A (en) | A kind of social interaction method and system based on augmented reality | |
CN106445437A (en) | Terminal and view angle switching method thereof | |
US20170186243A1 (en) | Video Image Processing Method and Electronic Device Based on the Virtual Reality | |
US20170150212A1 (en) | Method and electronic device for adjusting video | |
WO2017185761A1 (en) | Method and device for playing back 2d video | |
CN112929627A (en) | Virtual reality scene implementation method and device, storage medium and electronic equipment | |
JP7511026B2 (en) | Image data encoding method and device, display method and device, and electronic device | |
CN108881873B (en) | Method, device and system for fusing high-resolution images | |
CN107452067B (en) | Demonstration method and device based on augmented reality and terminal equipment | |
WO2018000620A1 (en) | Method and apparatus for data presentation, virtual reality device, and play controller | |
CN113253842A (en) | Scene editing method and related device and equipment | |
CN113269781A (en) | Data generation method and device and electronic equipment | |
CN205281405U (en) | Image recognition system based on augmented reality | |
CN112017264B (en) | Display control method and device for virtual studio, storage medium and electronic equipment | |
CN114007098A (en) | Method and device for generating 3D holographic video in intelligent classroom | |
CN113115108A (en) | Video processing method and computing device | |
CN105049825A (en) | Image display processing method and terminal | |
CN106454480A (en) | Video playing control method and apparatus |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||
CP02 | Change in the address of a patent holder |
Address after: 518000 north of 6th floor and north of 7th floor, building a, tefa infoport building, No.2 Kefeng Road, Science Park community, Yuehai street, Nanshan District, Shenzhen City, Guangdong Province Patentee after: SZ REACH TECH Co.,Ltd. Address before: 518000 Room 601, building B, Kingdee Software Park, No.2, Keji South 12th Road, Nanshan District, Shenzhen City, Guangdong Province Patentee before: SZ REACH TECH Co.,Ltd. |