CN107918956A - Augmented reality processing method, apparatus, and electronic device - Google Patents
Augmented reality processing method, apparatus, and electronic device Download PDF Info
- Publication number
- CN107918956A (application CN201711250261.9A)
- Authority
- CN
- China
- Prior art keywords
- user
- current motion
- motion mode
- mode
- virtual scene
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06Q—INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES; SYSTEMS OR METHODS SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES, NOT OTHERWISE PROVIDED FOR
- G06Q50/00—Information and communication technology [ICT] specially adapted for implementation of business processes of specific business sectors, e.g. utilities or tourism
- G06Q50/01—Social networking
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T19/00—Manipulating 3D models or images for computer graphics
- G06T19/006—Mixed reality
Abstract
An embodiment of the present application provides an augmented reality processing method, apparatus, and electronic device. The augmented reality processing method includes: detecting a marker arranged in multimedia content and determining the marker's real position attribute information in the world coordinate system, where the multimedia content includes a single-frame picture or a video stream formed from multiple frames of pictures; and performing augmented reality processing according to the marker's real position attribute information in the world coordinate system and a virtual scene attached to the multimedia content, thereby providing augmented reality processing based on an object in the multimedia content and offering a richer user experience.
Description
Technical field
The embodiments of the present application relate to the field of software technology, and in particular to an augmented reality processing method, apparatus, and electronic device.
Background technology
With the rapid development of the Internet, Internet-based social functions have become rich and varied, for example instant messaging. At the same time, the rise of intelligent hardware devices has added new means of realizing social functions. Yet in some settings, such as sports or physical training, users mostly exercise on their own, which makes the exercise or workout process monotonous and lacking in real-time, immersive interaction with other users.
Summary of the invention
In view of this, one of the technical problems addressed by the embodiments of the present invention is to provide an augmented reality processing method, apparatus, and electronic device so as to overcome or alleviate the defects of the prior art.
An embodiment of the present application provides an augmented reality processing method, which includes:
determining a current motion mode of a user, and creating a virtual scene adapted to the user's current motion mode; and
obtaining the user's exercise data in the current motion mode and associating it with the virtual scene.
Optionally, in any embodiment of the present invention, determining the current motion mode of the user includes:
determining a motion mode configuration item of a third-party application local to the electronic terminal; and
determining the user's current motion mode from the motion mode configuration item.
Optionally, in any embodiment of the present invention, determining the current motion mode of the user includes:
performing trace analysis on the user's actions in a video stream from a camera, and determining the user's current motion mode from the analysis.
Optionally, in any embodiment of the present invention, the camera is started by a web page or by a third-party application installed locally on the electronic terminal in order to capture the video stream.
Optionally, in any embodiment of the present invention, if the camera is started by a web page, creating the virtual scene adapted to the user's current motion mode includes: creating the virtual scene adapted to the user's current motion mode via WebGL;
alternatively, if the camera is started by a third-party application installed locally on the electronic terminal, the virtual scene adapted to the user's current motion mode is created via OpenGL.
Optionally, in any embodiment of the present invention, determining the current motion mode of the user and creating the virtual scene adapted to the user's current motion mode includes:
determining the current motion modes of multiple different users, and creating a single virtual scene adapted to the current motion modes of the multiple different users.
Optionally, in any embodiment of the present invention, obtaining the user's exercise data in the current motion mode and associating it with the virtual scene includes:
obtaining the user's exercise data in the current motion mode and loading the user onto a model in the virtual scene.
Optionally, in any embodiment of the present invention, obtaining the user's exercise data in the current motion mode and associating it with the virtual scene includes:
determining a dynamic change of the user's current motion mode and the resulting change of exercise data, or determining a dynamic change of the user's exercise data within the same current motion mode, and updating the exercise data associated with the virtual scene in real time.
Optionally, in any embodiment of the present invention, determining the change of exercise data includes: selecting feature points and tracking the user's current motion state using feature-point similarity measures.
Optionally, in any embodiment of the present invention, tracking the user's current motion state with the selected feature points and feature-point similarity measures includes: predicting the positions of the selected feature points, and tracking the user's current motion state using the feature-point similarity measures.
An embodiment of the present invention also provides an augmented reality processing apparatus, which includes:
a first module for determining the current motion mode of a user and creating a virtual scene adapted to the user's current motion mode; and
a second module for obtaining the user's exercise data in the current motion mode and associating it with the virtual scene.
Optionally, in any embodiment of the present invention, the first module is further configured to: determine the motion mode configuration item of the third-party application local to the electronic terminal; and determine the user's current motion mode from the motion mode configuration item.
Optionally, in any embodiment of the present invention, the first module is further configured to: perform trace analysis on the user's actions in a video stream from a camera, and determine the user's current motion mode from the analysis.
Optionally, in any embodiment of the present invention, the camera is started by a web page or by a third-party application installed locally on the electronic terminal in order to capture the video stream.
Optionally, in any embodiment of the present invention, if the camera is started by a web page, the first module is further configured to create the virtual scene adapted to the user's current motion mode via WebGL; alternatively, if the camera is started by a third-party application installed locally on the electronic terminal, the first module is further configured to create the virtual scene adapted to the user's current motion mode via OpenGL.
Optionally, in any embodiment of the present invention, the first module is further configured to: determine the current motion modes of multiple different users, and create a single virtual scene adapted to the current motion modes of the multiple different users.
Optionally, in any embodiment of the present invention, the second module is further configured to obtain the user's exercise data in the current motion mode and load the user onto a model in the virtual scene.
Optionally, in any embodiment of the present invention, the second module is further configured to: determine a dynamic change of the user's current motion mode and the resulting change of exercise data, or determine a dynamic change of the user's exercise data within the same current motion mode, and update the exercise data associated with the virtual scene in real time.
Optionally, in any embodiment of the present invention, the second module is further configured to select feature points and track the user's current motion state using feature-point similarity measures.
Optionally, in any embodiment of the present invention, the second module is further configured to predict the positions of the selected feature points and track the user's current motion state using the feature-point similarity measures.
An embodiment of the present invention also provides an electronic device, which includes a processor configured with modules that perform the following processing:
determining the current motion mode of a user and creating a virtual scene adapted to the user's current motion mode; and
obtaining the user's exercise data in the current motion mode and associating it with the virtual scene.
In the following embodiments of the present invention, the user's current motion mode is determined and a virtual scene adapted to the user's current motion mode is created; the user's exercise data in the current motion mode is then obtained and associated with the virtual scene, thereby realizing augmented reality processing of the motion scene and improving the user experience.
Brief description of the drawings
Some specific embodiments of the present application are described in detail below, by way of example and not by way of limitation, with reference to the accompanying drawings. Identical reference numerals denote the same or similar components or parts in the drawings. Those skilled in the art should understand that these drawings are not necessarily drawn to scale. In the drawings:
Fig. 1 is a schematic flowchart of augmented reality processing in Embodiment 1 of the present invention;
Fig. 2 is a schematic structural diagram of an augmented reality processing apparatus in Embodiment 2 of the present invention;
Fig. 3 is a schematic structural diagram of an electronic device in Embodiment 3 of the present invention.
Detailed description of the embodiments
Implementing any technical solution of the embodiments of the present invention does not necessarily achieve all of the above advantages at the same time.
To help those skilled in the art better understand the technical solutions in the embodiments of the present invention, the technical solutions are described clearly and completely below with reference to the accompanying drawings of the embodiments of the present invention. Obviously, the described embodiments are only some of the embodiments of the present invention rather than all of them. All other embodiments obtained by persons of ordinary skill in the art based on the embodiments of the present invention shall fall within the protection scope of the embodiments of the present invention.
In the following embodiments of the present invention, the user's current motion mode is determined and a virtual scene adapted to the user's current motion mode is created; the user's exercise data in the current motion mode is then obtained and associated with the virtual scene, thereby realizing augmented reality processing of the motion scene and improving the user experience.
Fig. 1 is a schematic flowchart of augmented reality processing in Embodiment 1 of the present invention. As shown in Fig. 1, the processing includes the following steps:
S101: determine the motion mode configuration item of the third-party application local to the electronic terminal.
In this embodiment, the third-party application local to the electronic terminal is, for example, a sports application on a smartphone. The motion mode configuration item selected by the user in the sports application is read, thereby determining the motion mode configuration item. Different motion mode configuration items have different numerical definitions, and the different motion configuration items are distinguished by these values. Motion modes include, for example, running, fitness exercise, and brisk walking.
Alternatively, in other embodiments, if the electronic terminal is equipped with an acceleration sensor, the exercise data produced by the acceleration sensor under different motion modes can be statistically determined, namely the amplitude range and frequency range of the acceleration sensor's output data. For a wearable electronic terminal equipped with an acceleration sensor, its local third-party application reads the data output by the acceleration sensor and matches it against the previously collected amplitude and frequency ranges; if they match, the corresponding motion configuration item is determined.
Specifically, noise in the acceleration sensor output data can be removed by median filtering, high-pass filtering, or the like, and feature values, such as the peak-interval feature and the maximum composite-vector feature, can then be extracted, so that the motion mode is determined from the extracted feature values and the motion mode configuration item is determined in turn.
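The filter-extract-match pipeline just described can be sketched as follows. This is a minimal illustration, not the patent's implementation; the window size, the feature definitions, and the per-mode amplitude/frequency ranges in `MODE_PROFILES` are assumed example values.

```javascript
// Simple median filter to suppress spikes in the sensor output.
function medianFilter(samples, window = 3) {
  return samples.map((_, i) => {
    const lo = Math.max(0, i - Math.floor(window / 2));
    const slice = samples.slice(lo, lo + window).sort((a, b) => a - b);
    return slice[Math.floor(slice.length / 2)];
  });
}

// Extract an amplitude feature (peak-to-peak range) and a crude frequency
// feature (zero crossings of the mean-removed signal per second).
function extractFeatures(samples, sampleRateHz) {
  const amp = Math.max(...samples) - Math.min(...samples);
  const mean = samples.reduce((a, b) => a + b, 0) / samples.length;
  let crossings = 0;
  for (let i = 1; i < samples.length; i++) {
    if ((samples[i - 1] - mean) * (samples[i] - mean) < 0) crossings++;
  }
  const freqHz = crossings / 2 / (samples.length / sampleRateHz);
  return { amp, freqHz };
}

// Hypothetical per-mode statistics collected beforehand, as the text describes.
const MODE_PROFILES = [
  { mode: 'walking', amp: [0.5, 2.0], freq: [1.0, 2.5] },
  { mode: 'running', amp: [2.0, 8.0], freq: [2.5, 4.5] },
];

// Match the extracted features against the per-mode ranges.
function classifyMode(samples, sampleRateHz) {
  const { amp, freqHz } = extractFeatures(medianFilter(samples), sampleRateHz);
  const hit = MODE_PROFILES.find(
    p => amp >= p.amp[0] && amp <= p.amp[1] && freqHz >= p.freq[0] && freqHz <= p.freq[1]
  );
  return hit ? hit.mode : 'unknown';
}
```

A classifier along these lines would be run over a sliding window of recent sensor samples, with the profiles coming from statistics gathered in advance.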
Similar to the above method of determining the motion mode from exercise data, in other embodiments the motion mode can also be determined from the heart-rate ranges under different motion modes. The electronic device used for heart-rate-based motion mode determination can be a smartphone, a smart earphone, or a smart bracelet. Of course, the above acceleration-sensor-based scheme also applies to smart bracelets, smart earphones, and the like; in the same way, it applies to any electronic device equipped with an acceleration sensor or similar component that can capture the user's motion data, including smart clothing.
S102: perform trace analysis on the user's actions in a video stream from a camera, and determine the user's current motion mode.
In this embodiment, the camera can specifically be the camera already present on the intelligent terminal, or an ordinary camera or smart camera attached externally through a wired or wireless data interface (such as a USB interface, Wi-Fi, or Bluetooth). As described above, the intelligent terminal can be, but is not limited to, a smartphone, a smart bracelet, a smart earphone, and the like.
In this embodiment, the augmented reality processing is realized on the page side, so the camera is started by the page to capture the video stream.
Specifically, when starting the camera from the page side to capture the video stream, the camera can be pulled up through the getUserMedia() API of WebRTC (Web Real-Time Communication), after which it starts capturing the video stream. The camera can also be pulled up through the navigator.getUserMedia() API. When pulling up the camera, its parameters, such as the resolution, can also be configured.
In this embodiment, when the video stream is captured in step S102, it is cut into multiple static frames. Each frame is then compared with a background model established for the given motion scene; if the pixel difference between the two exceeds a set threshold, the corresponding pixels belong to the moving target, namely the user in motion. Specifically, the background model can be the per-pixel average of the first several static frames of the video stream.
Alternatively, in other embodiments, the moving target can also be determined from the pixel differences between adjacent frames followed by thresholding: if a pixel difference exceeds the set threshold, a designated variable is assigned the value 1, otherwise 0, thereby accomplishing the thresholding.
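The two segmentation variants above, comparison against an averaged background model and adjacent-frame differencing with thresholding, can be sketched over flat grayscale pixel arrays as follows. The grayscale representation and the threshold value are assumptions for illustration.

```javascript
// Adjacent-frame differencing with thresholding: pixels whose difference
// exceeds the threshold are assigned 1 (moving target), otherwise 0.
function frameDiffMask(prevFrame, currFrame, threshold) {
  // prevFrame/currFrame: flat arrays of grayscale pixel values (0-255)
  return currFrame.map((v, i) =>
    Math.abs(v - prevFrame[i]) > threshold ? 1 : 0
  );
}

// A background model can likewise be the per-pixel average of the first
// few static frames, compared against each new frame in the same way.
function averageBackground(frames) {
  const n = frames.length;
  return frames[0].map((_, i) =>
    frames.reduce((sum, f) => sum + f[i], 0) / n
  );
}
```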
After the moving target (i.e. the user) has been determined from the above background model or the inter-frame pixel differences, trace analysis is performed on the actions of the moving target. Specifically, multiple feature points can be chosen on the moving target located in the image. These feature points can correspond to the arms and legs of the user's body, so tracking these feature points realizes the tracking and analysis of the user's actions. Further, the feature points are matched based on color histograms, contour lines, or templates, thereby accomplishing the tracking and analysis of the user's actions.
In this embodiment, in order to analyze the user's actions, action templates for the different motion modes, such as running and fitness exercise, are established in advance, for example by analyzing the user's limb actions while running or exercising. When running, for instance, the left and right arms swing back and forth alternately, which is reflected in the image as corresponding changes of the feature-point position coordinates; the analysis of the athletic action is therefore realized by matching against the same feature points in the action template.
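The template-matching step above can be sketched as follows, comparing the tracked trajectory of one feature point (e.g. the vertical coordinate of an arm point over a few frames) against pre-established action templates. The mean-squared-distance similarity measure and the template values are assumptions; the text does not fix a particular measure.

```javascript
// Similarity between a tracked trajectory and a template trajectory
// (assumed measure: mean squared distance, lower is more similar).
function meanSquaredDistance(a, b) {
  let sum = 0;
  for (let i = 0; i < a.length; i++) sum += (a[i] - b[i]) ** 2;
  return sum / a.length;
}

// Pick the pre-established action template that best matches the
// observed feature-point trajectory.
function matchActionTemplate(trajectory, templates) {
  let best = { name: 'unknown', score: Infinity };
  for (const [name, tpl] of Object.entries(templates)) {
    const score = meanSquaredDistance(trajectory, tpl);
    if (score < best.score) best = { name, score };
  }
  return best.name;
}
```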
S103: create a virtual scene adapted to the user's current motion mode via WebGL.
In this embodiment, suppose for example that there are two users A and B. User A starts the camera on his or her intelligent terminal to shoot the above video stream, while user B exercises in another place, and user B needs to be composited into user A's motion scene.
In this embodiment, specifically, the image of user B and his or her athletic actions are added to user A's corresponding live video stream and presented locally on user A's side, so as to give users A and B the experience of exercising in the same motion scene. Conversely, the image of user A and his or her athletic actions can also be added to user B's corresponding live video stream and presented locally on user B's side.
To this end, to realize the creation of the above virtual scene, in this embodiment the three-dimensional model can specifically be built and rendered through WebGL. Model building and rendering based on WebGL includes: creating an HTML5 canvas to define the drawing region, then embedding GLSL ES in the canvas through JavaScript to draw the three-dimensional model, which in detail includes setting up the shaders and attaching the shaders to shader objects, and then completing the rendering.
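The shader setup described above can be sketched with the standard WebGL API as below. The GLSL sources are minimal illustrative examples, not the patent's shaders, and buildProgram() needs a real WebGL context obtained from an HTML5 canvas, so it only runs in a browser.

```javascript
// Minimal illustrative GLSL ES sources (assumed, not from the patent).
const VERTEX_SHADER_SRC = `
attribute vec4 a_Position;
void main() {
  gl_Position = a_Position;
}`;

const FRAGMENT_SHADER_SRC = `
precision mediump float;
void main() {
  gl_FragColor = vec4(1.0, 0.0, 0.0, 1.0);
}`;

// Standard WebGL sequence: create shader objects, attach the GLSL
// sources, compile, attach to a program, and link before rendering.
function buildProgram(gl) {
  const program = gl.createProgram();
  for (const [type, src] of [
    [gl.VERTEX_SHADER, VERTEX_SHADER_SRC],
    [gl.FRAGMENT_SHADER, FRAGMENT_SHADER_SRC],
  ]) {
    const shader = gl.createShader(type);
    gl.shaderSource(shader, src);
    gl.compileShader(shader);
    gl.attachShader(program, shader);
  }
  gl.linkProgram(program);
  return program;
}
```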
S104: obtain the user's exercise data in the current motion mode and associate it with the virtual scene.
In this embodiment, the creation of the virtual scene is completed by the above step S103, and a virtual object model has been created in the virtual scene; therefore, in this step, it suffices to associate the user's exercise data in the current motion mode with the virtual object model in the virtual scene. In addition, while the exercise data is associated with the virtual scene, the exercise data associated with the scene can be updated in real time, synchronously with the continuous real-time change of the user's exercise data.
It should be noted that in other embodiments, the exercise data of multiple users can be associated with the same virtual scene. That virtual scene can be the actual scene in which one of the users is exercising, which is then effectively a virtual scene for the other users, or it can be entirely virtual for all users.
Corresponding to the above step S102, the current motion modes of multiple different users are determined, and a single virtual scene adapted to the current motion modes of the multiple different users is created.
In this embodiment, when the exercise data is associated with the virtual scene, each piece of exercise data is associated with the corresponding position of the virtual object model; for example, an arm action is associated with the arm of the virtual object model. To realize this association of action data with the corresponding positions of the virtual object model, when the user's actual exercise data is obtained, an association between the actual exercise data and the moving parts of the user's body is established, and the moving parts of the user's body are further associated with the moving parts of the virtual object model, thereby realizing the dynamic association of the actual exercise data with the corresponding positions of the virtual object model. This is equivalent to determining a dynamic change of the user's current motion mode and the resulting change of exercise data, or determining a dynamic change of the user's exercise data within the same current motion mode, and updating the exercise data associated with the virtual scene in real time.
It should be noted that the associations between the above exercise data and the moving parts can be distinguished by establishing identifiers, with the exercise data of different body parts carrying different identifiers.
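The identifier-based routing of exercise data to model parts can be sketched as follows. The part identifiers and the flat model representation are hypothetical; the text only requires that data for different body parts carry different identifiers.

```javascript
// Hypothetical mapping from body-part identifiers carried by the exercise
// data to the corresponding parts of the virtual object model.
const PART_MAP = { leftArm: 'model.leftArm', rightLeg: 'model.rightLeg' };

// Route each data sample to the matching model part, realizing the
// real-time update of the virtual object model described above.
function applyExerciseData(model, samples) {
  // samples: [{ partId: 'leftArm', rotation: 30 }, ...]
  for (const s of samples) {
    const target = PART_MAP[s.partId];
    if (target) model[target] = s.rotation; // unknown identifiers are ignored
  }
  return model;
}
```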
It should also be noted that when determining the change of exercise data, the user's current motion state can be tracked using only the selected feature points and feature-point similarity measures. Specifically, tracking the user's current motion state with the selected feature points and feature-point similarity measures includes: predicting the positions of the selected feature points, and tracking the user's current motion state using the feature-point similarity measures.
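The predict-then-match tracking step can be sketched with a constant-velocity position prediction and a nearest-candidate match. Plain squared distance stands in here for the feature-point similarity measure, which the text leaves unspecified.

```javascript
// Constant-velocity prediction of a feature point's next position
// from its two most recent observed positions.
function predictPosition(prev, curr) {
  return { x: 2 * curr.x - prev.x, y: 2 * curr.y - prev.y };
}

// Among the candidate detections in the new frame, take the one most
// similar to the prediction (assumed measure: squared distance).
function trackFeaturePoint(prev, curr, candidates) {
  const pred = predictPosition(prev, curr);
  let best = null, bestDist = Infinity;
  for (const c of candidates) {
    const d = (c.x - pred.x) ** 2 + (c.y - pred.y) ** 2;
    if (d < bestDist) { bestDist = d; best = c; }
  }
  return best;
}
```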
In another embodiment, if the camera is started by a third-party application installed locally on the electronic terminal, the virtual scene adapted to the user's current motion mode is created via OpenGL.
Fig. 2 is a schematic structural diagram of an augmented reality processing apparatus in Embodiment 2 of the present invention. As shown in Fig. 2, the apparatus includes:
a first module 201 for determining the current motion mode of a user and creating a virtual scene adapted to the user's current motion mode; and
a second module 202 for obtaining the user's exercise data in the current motion mode and associating it with the virtual scene.
Optionally, in any embodiment of the present invention, the first module 201 is further configured to: determine the motion mode configuration item of the third-party application local to the electronic terminal; and determine the user's current motion mode from the motion mode configuration item.
Optionally, in any embodiment of the present invention, the first module 201 is further configured to: perform trace analysis on the user's actions in a video stream from a camera, and determine the user's current motion mode from the analysis.
Optionally, in any embodiment of the present invention, the camera is started by a web page or by a third-party application installed locally on the electronic terminal in order to capture the video stream.
Optionally, in any embodiment of the present invention, if the camera is started by a web page, the first module 201 is further configured to create the virtual scene adapted to the user's current motion mode via WebGL; alternatively, if the camera is started by a third-party application installed locally on the electronic terminal, the first module 201 is further configured to create the virtual scene adapted to the user's current motion mode via OpenGL.
Optionally, in any embodiment of the present invention, the first module 201 is further configured to: determine the current motion modes of multiple different users, and create a single virtual scene adapted to the current motion modes of the multiple different users.
Optionally, in any embodiment of the present invention, the second module 202 is further configured to obtain the user's exercise data in the current motion mode and load the user onto a model in the virtual scene.
Optionally, in any embodiment of the present invention, the second module 202 is further configured to: determine a dynamic change of the user's current motion mode and the resulting change of exercise data, or determine a dynamic change of the user's exercise data within the same current motion mode, and update the exercise data associated with the virtual scene in real time.
Optionally, in any embodiment of the present invention, the second module 202 is further configured to select feature points and track the user's current motion state using feature-point similarity measures.
Optionally, in any embodiment of the present invention, the second module 202 is further configured to predict the positions of the selected feature points and track the user's current motion state using the feature-point similarity measures.
It should be noted that in the above embodiment of Fig. 2, the first module and the second module can be multiplexed with each other, so in the Fig. 2 embodiment the actual number of modules may be less than two.
In addition, the above first and second modules can also be deployed in a distributed fashion, for example with some modules located at the front end and some at the back end.
Fig. 3 is a schematic structural diagram of an electronic device in Embodiment 3 of the present invention. As shown in Fig. 3, the device includes a processor 301 configured with a program module 302 that performs the following processing:
determining the current motion mode of a user and creating a virtual scene adapted to the user's current motion mode; and
obtaining the user's exercise data in the current motion mode and associating it with the virtual scene.
In the present embodiment, program module 302 can include above-mentioned first module, the second module performs relatively independent work(
Can, can also there was only a module, perform above-mentioned all functions.
In this embodiment, the electronic device may include, but is not limited to, a smartphone, a smart wristband, smart earphones, and the like.
An embodiment of the present application further provides a storage medium storing instructions for performing the functions of the above program module 302.
Reference throughout this specification to "one embodiment" means that a particular feature, structure, configuration, or characteristic described in connection with the embodiment is included in at least one embodiment of the invention. Therefore, appearances of the phrase "in one embodiment" in various places throughout this specification do not necessarily all refer to the same embodiment. Furthermore, the particular features, structures, configurations, or characteristics may be combined in any suitable manner in one or more embodiments.
The terms "over", "on", "to", "between", and "under" as used herein may refer to a relative position of one layer with respect to another layer. One layer "over" or "on" another layer, or bonded "to" another layer, may be directly in contact with the other layer or may have one or more intervening layers. One layer "between" layers may be directly in contact with the layers or may have one or more intervening layers.
Before undertaking the detailed description below, it may be advantageous to set forth definitions of certain words and phrases used throughout this patent document. The terms "include" and "comprise", as well as derivatives thereof, mean inclusion without limitation. The term "or" is inclusive, meaning and/or. The phrases "associated with" and "associated therewith", as well as derivatives thereof, may mean to include, be included within, interconnect with, contain, be contained within, connect to or with, couple to or with, be communicable with, cooperate with, interleave, juxtapose, be proximate to, be bound to or with, have, have a property of, or the like. The term "controller" means any device, system, or part thereof that controls at least one operation; such a device may be implemented in hardware, firmware, or software, or in some combination of at least two of the same. It should be noted that the functionality associated with any particular controller may be centralized or distributed, whether locally or remotely. Definitions for certain words and phrases are provided throughout this patent document, and those of ordinary skill in the art should understand that in many, if not most, instances such definitions apply to prior uses, as well as future uses, of such defined words and phrases.
In the present disclosure, the expression "include" or "may include" refers to the presence of a corresponding function, operation, or element, and does not limit one or more additional functions, operations, or elements. In the present disclosure, terms such as "include" and/or "have" may be construed to denote certain characteristics, numbers, steps, operations, constituent elements, components, or combinations thereof, but are not to be construed as excluding the presence or possible addition of one or more other characteristics, numbers, steps, operations, constituent elements, components, or combinations thereof.
In the present disclosure, the expression "A or B", "at least one of A or/and B", or "one or more of A or/and B" may include all possible combinations of the listed items. For example, the expression "A or B", "at least one of A and B", or "at least one of A or B" may include: (1) at least one A, (2) at least one B, or (3) both at least one A and at least one B.
The expressions "first", "second", "the first", or "the second" used in various embodiments of the present disclosure may modify various components regardless of order and/or importance, but do not limit the corresponding components. The above expressions are used merely for the purpose of distinguishing an element from other elements. For example, a first user device and a second user device indicate different user devices, although both of them are user devices. For example, without departing from the scope of the present disclosure, a first element may be termed a second element, and similarly, a second element may be termed a first element.
When an element (for example, a first element) is referred to as being "(operatively or communicatively) coupled with/to" or "connected to" another element (for example, a second element), it should be construed that the one element is either directly connected to the other element or indirectly connected to the other element via yet another element (for example, a third element). In contrast, when an element (for example, a first element) is referred to as being "directly connected" or "directly coupled" to another element (a second element), no element (for example, a third element) is interposed between the two.
The expression "configured to" as used herein may be used interchangeably with, for example, "suitable for", "having the capacity to", "designed to", "adapted to", "made to", or "capable of". The term "configured to" does not necessarily mean "specifically designed to" in terms of hardware. Instead, in some situations, the expression "a device configured to" may mean that the device, together with other devices or components, "is able to". For example, the phrase "a processor adapted to (or configured to) perform A, B, and C" may mean a dedicated processor (for example, an embedded processor) used only for performing the corresponding operations, or a general-purpose processor (for example, a central processing unit (CPU) or an application processor (AP)) that can perform the corresponding operations by executing one or more software programs stored in a memory device.
The device embodiments described above are merely illustrative: the modules described as separate components may or may not be physically separate, and the components shown as modules may or may not be physical modules; that is, they may be located in one place or distributed over multiple network nodes. Some or all of the modules may be selected according to actual needs to achieve the purpose of the solution of this embodiment. Those of ordinary skill in the art can understand and implement the embodiments without creative effort.
Through the above description of the embodiments, those skilled in the art can clearly understand that each embodiment can be implemented by software plus a necessary general-purpose hardware platform, and certainly can also be implemented by hardware. Based on such an understanding, the above technical solutions, or the part thereof contributing to the prior art, can be embodied in the form of a software product. The computer software product can be stored in a computer-readable storage medium, which includes any mechanism that stores or transmits information in a form readable by a machine (such as a computer). For example, a machine-readable medium includes read-only memory (ROM), random access memory (RAM), magnetic disk storage media, optical storage media, flash media, and propagated signals in electrical, optical, acoustic, or other forms (for example, carrier waves, infrared signals, digital signals, and the like). The software product includes a number of instructions for enabling a computer device (which may be a personal computer, a server, a network device, or the like) to perform the methods described in the various embodiments or in certain parts of the embodiments.
Finally, it should be noted that the above embodiments are merely intended to illustrate the technical solutions of the embodiments of the present application, rather than to limit them. Although the present application has been described in detail with reference to the foregoing embodiments, those of ordinary skill in the art should understand that they may still modify the technical solutions described in the foregoing embodiments, or make equivalent replacements of some of the technical features therein; and such modifications or replacements do not cause the essence of the corresponding technical solutions to depart from the spirit and scope of the technical solutions of the embodiments of the present application.
Those skilled in the art should understand that the embodiments of the present invention may be provided as a method, an apparatus (device), or a computer program product. Therefore, the embodiments of the present invention may take the form of a complete hardware embodiment, a complete software embodiment, or an embodiment combining software and hardware aspects. Moreover, the embodiments of the present invention may take the form of a computer program product implemented on one or more computer-usable storage media (including but not limited to disk storage, CD-ROM, optical storage, and the like) containing computer-usable program code.
The embodiments of the present invention are described with reference to flowcharts and/or block diagrams of the method, apparatus (device), and computer program product according to the embodiments of the present invention. It should be understood that each flow and/or block in the flowcharts and/or block diagrams, and combinations of flows and/or blocks in the flowcharts and/or block diagrams, can be implemented by computer program instructions. These computer program instructions can be provided to a processor of a general-purpose computer, a special-purpose computer, an embedded processor, or another programmable data processing device to produce a machine, so that the instructions executed by the computer or the processor of the other programmable data processing device produce an apparatus for implementing the functions specified in one or more flows of the flowcharts and/or one or more blocks of the block diagrams.
These computer program instructions may also be stored in a computer-readable memory that can direct a computer or another programmable data processing device to work in a particular manner, so that the instructions stored in the computer-readable memory produce an article of manufacture including an instruction apparatus that implements the functions specified in one or more flows of the flowcharts and/or one or more blocks of the block diagrams.
These computer program instructions may also be loaded onto a computer or another programmable data processing device, so that a series of operational steps are performed on the computer or other programmable device to produce computer-implemented processing; the instructions executed on the computer or other programmable device thereby provide steps for implementing the functions specified in one or more flows of the flowcharts and/or one or more blocks of the block diagrams.
Claims (21)
- 1. A processing method for augmented reality, characterized by comprising: determining a current motion mode of a user, and creating a virtual scene adapted to the current motion mode of the user; and obtaining motion data of the user under the current motion mode and associating the motion data with the virtual scene.
- 2. The method according to claim 1, characterized in that determining the current motion mode of the user comprises: determining a motion mode configuration item of a third-party application local to an electronic terminal; and determining the current motion mode of the user from the motion mode configuration item.
- 3. The method according to claim 1, characterized in that determining the current motion mode of the user comprises: performing tracking analysis on a video stream from a camera and on the user's actions in the video stream, and determining the current motion mode of the user therefrom.
- 4. The method according to claim 3, characterized in that the camera is started by a web page, or is started as needed by a third-party application installed locally on the electronic terminal, to capture the video stream.
- 5. The method according to claim 3, characterized in that, if the camera is started by a web page, creating the virtual scene adapted to the current motion mode of the user comprises: creating the virtual scene adapted to the current motion mode of the user through WEBGL; or, if the camera is started by a third-party application installed locally on the electronic terminal, creating the virtual scene adapted to the current motion mode of the user through OPENGL.
- 6. The method according to claim 1, characterized in that determining the current motion mode of the user and creating the virtual scene adapted to the current motion mode of the user comprises: determining the current motion modes of a plurality of different users, and creating a same virtual scene adapted to the current motion modes of the plurality of different users.
- 7. The method according to claim 1, characterized in that obtaining the motion data of the user under the current motion mode and associating the motion data with the virtual scene comprises: obtaining the motion data of the user under the current motion mode and loading the motion data onto a model of the user in the virtual scene.
- 8. The method according to claim 7, characterized in that obtaining the motion data of the user under the current motion mode and associating the motion data with the virtual scene comprises: determining a dynamic change of the current motion mode of the user and the resulting change in motion data, or determining a dynamic change of the motion data of the user under the same current motion mode, and updating the motion data associated with the virtual scene in real time.
- 9. The method according to claim 8, characterized in that determining the change in motion data comprises: selecting feature points and tracking the current motion state of the user using a feature-point similarity measure.
- 10. The method according to claim 9, characterized in that selecting feature points and tracking the current motion state of the user using a feature-point similarity measure comprises: performing position prediction on the selected feature points, and tracking the current motion state of the user using the feature-point similarity measure.
- 11. A processing apparatus for augmented reality, characterized by comprising: a first module, configured to determine a current motion mode of a user and create a virtual scene adapted to the current motion mode of the user; and a second module, configured to obtain motion data of the user under the current motion mode and associate the motion data with the virtual scene.
- 12. The apparatus according to claim 11, characterized in that the first module is further configured to: determine a motion mode configuration item of a third-party application local to an electronic terminal; and determine the current motion mode of the user from the motion mode configuration item.
- 13. The apparatus according to claim 11, characterized in that the first module is further configured to: perform tracking analysis on a video stream from a camera and on the user's actions in the video stream, and determine the current motion mode of the user therefrom.
- 14. The apparatus according to claim 13, characterized in that the camera is started by a web page, or is started as needed by a third-party application installed locally on the electronic terminal, to capture the video stream.
- 15. The apparatus according to claim 13, characterized in that, if the camera is started by a web page, the first module is further configured to create the virtual scene adapted to the current motion mode of the user through WEBGL; or, if the camera is started by a third-party application installed locally on the electronic terminal, the first module is further configured to create the virtual scene adapted to the current motion mode of the user through OPENGL.
- 16. The apparatus according to claim 11, characterized in that the first module is further configured to: determine the current motion modes of a plurality of different users, and create a same virtual scene adapted to the current motion modes of the plurality of different users.
- 17. The apparatus according to claim 11, characterized in that the second module is further configured to obtain the motion data of the user under the current motion mode and load the motion data onto a model of the user in the virtual scene.
- 18. The apparatus according to claim 17, characterized in that the second module is further configured to: determine a dynamic change of the current motion mode of the user and the resulting change in motion data, or determine a dynamic change of the motion data of the user under the same current motion mode, and update the motion data associated with the virtual scene in real time.
- 19. The apparatus according to claim 18, characterized in that the second module is further configured to select feature points and track the current motion state of the user using a feature-point similarity measure.
- 20. The apparatus according to claim 19, characterized in that the second module is further configured to perform position prediction on the selected feature points and track the current motion state of the user using the feature-point similarity measure.
- 21. An electronic device, characterized by comprising a processor, wherein the processor is configured with a module that performs the following processing: determining a current motion mode of a user, and creating a virtual scene adapted to the current motion mode of the user; and obtaining motion data of the user under the current motion mode and associating the motion data with the virtual scene.
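Claims 5 and 15 select the rendering API by how the camera was started: WebGL when launched from a web page, OpenGL when launched by a locally installed third-party application. A hedged sketch of that dispatch follows; the backend values are plain strings and the function names are invented, since real code would call into the respective graphics APIs rather than return labels.

```python
# Illustrative dispatch of scene creation by launch context, as described
# in claims 5 and 15: web page -> WebGL, locally installed app -> OpenGL.
def choose_backend(launched_by):
    """Map the camera's launch context to a rendering backend label."""
    if launched_by == "page":
        return "WebGL"       # browser context -> WebGL scene creation
    if launched_by == "local_app":
        return "OpenGL"      # native app context -> OpenGL scene creation
    raise ValueError("unknown launch context: %s" % launched_by)

def create_adapted_scene(launched_by, motion_mode):
    """Create a (stub) virtual scene adapted to the current motion mode."""
    return {"backend": choose_backend(launched_by), "adapted_to": motion_mode}

print(create_adapted_scene("page", "running")["backend"])       # WebGL
print(create_adapted_scene("local_app", "running")["backend"])  # OpenGL
```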
Priority Applications (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201711250261.9A CN107918956A (en) | 2017-12-01 | 2017-12-01 | Processing method, device and the electronic equipment of augmented reality |
PCT/CN2018/105116 WO2019105100A1 (en) | 2017-12-01 | 2018-09-11 | Augmented reality processing method and apparatus, and electronic device |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201711250261.9A CN107918956A (en) | 2017-12-01 | 2017-12-01 | Processing method, device and the electronic equipment of augmented reality |
Publications (1)
Publication Number | Publication Date |
---|---|
CN107918956A true CN107918956A (en) | 2018-04-17 |
Family
ID=61898203
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201711250261.9A Pending CN107918956A (en) | 2017-12-01 | 2017-12-01 | Processing method, device and the electronic equipment of augmented reality |
Country Status (2)
Country | Link |
---|---|
CN (1) | CN107918956A (en) |
WO (1) | WO2019105100A1 (en) |
Cited By (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
WO2019105100A1 (en) * | 2017-12-01 | 2019-06-06 | 广州市动景计算机科技有限公司 | Augmented reality processing method and apparatus, and electronic device |
CN113515187A (en) * | 2020-04-10 | 2021-10-19 | 咪咕视讯科技有限公司 | Virtual reality scene generation method and network side equipment |
Citations (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN104883556A (en) * | 2015-05-25 | 2015-09-02 | 深圳市虚拟现实科技有限公司 | Three dimensional display method based on augmented reality and augmented reality glasses |
CN106355153A (en) * | 2016-08-31 | 2017-01-25 | 上海新镜科技有限公司 | Virtual object display method, device and system based on augmented reality |
CN107168532A (en) * | 2017-05-05 | 2017-09-15 | 武汉秀宝软件有限公司 | A kind of virtual synchronous display methods and system based on augmented reality |
Family Cites Families (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US9741169B1 (en) * | 2014-05-20 | 2017-08-22 | Leap Motion, Inc. | Wearable augmented reality devices with object detection and tracking |
US9990689B2 (en) * | 2015-12-16 | 2018-06-05 | WorldViz, Inc. | Multi-user virtual reality processing |
CN106125903B (en) * | 2016-04-24 | 2021-11-16 | 林云帆 | Multi-person interaction system and method |
CN106843532A (en) * | 2017-02-08 | 2017-06-13 | 北京小鸟看看科技有限公司 | The implementation method and device of a kind of virtual reality scenario |
CN107918956A (en) * | 2017-12-01 | 2018-04-17 | 广州市动景计算机科技有限公司 | Processing method, device and the electronic equipment of augmented reality |
- 2017-12-01: CN CN201711250261.9A patent/CN107918956A/en active Pending
- 2018-09-11: WO PCT/CN2018/105116 patent/WO2019105100A1/en active Application Filing
Patent Citations (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN104883556A (en) * | 2015-05-25 | 2015-09-02 | 深圳市虚拟现实科技有限公司 | Three dimensional display method based on augmented reality and augmented reality glasses |
CN106355153A (en) * | 2016-08-31 | 2017-01-25 | 上海新镜科技有限公司 | Virtual object display method, device and system based on augmented reality |
CN107168532A (en) * | 2017-05-05 | 2017-09-15 | 武汉秀宝软件有限公司 | A kind of virtual synchronous display methods and system based on augmented reality |
Non-Patent Citations (1)
Title |
---|
HUANG Xiaoli: "Finger Joint Angle Measurement Method and Implementation Based on Visual Images", China Master's Theses Full-text Database (Electronic Journal), Information Science and Technology Series * |
Cited By (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
WO2019105100A1 (en) * | 2017-12-01 | 2019-06-06 | 广州市动景计算机科技有限公司 | Augmented reality processing method and apparatus, and electronic device |
CN113515187A (en) * | 2020-04-10 | 2021-10-19 | 咪咕视讯科技有限公司 | Virtual reality scene generation method and network side equipment |
CN113515187B (en) * | 2020-04-10 | 2024-02-13 | 咪咕视讯科技有限公司 | Virtual reality scene generation method and network side equipment |
Also Published As
Publication number | Publication date |
---|---|
WO2019105100A1 (en) | 2019-06-06 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
KR102649272B1 (en) | Body posture estimation | |
US9747495B2 (en) | Systems and methods for creating and distributing modifiable animated video messages | |
CN108525305B (en) | Image processing method, image processing device, storage medium and electronic equipment | |
CN110390704A (en) | Image processing method, device, terminal device and storage medium | |
CN109325450A (en) | Image processing method, device, storage medium and electronic equipment | |
CN110119700B (en) | Avatar control method, avatar control device and electronic equipment | |
US20220327709A1 (en) | Garment segmentation | |
CN109729426A (en) | A kind of generation method and device of video cover image | |
US11734866B2 (en) | Controlling interactive fashion based on voice | |
CN114930399A (en) | Image generation using surface-based neurosynthesis | |
CN102567716B (en) | Face synthetic system and implementation method | |
CN109035415B (en) | Virtual model processing method, device, equipment and computer readable storage medium | |
CN106156237B (en) | Information processing method, information processing unit and user equipment | |
CN108345385A (en) | Virtual accompany runs the method and device that personage establishes and interacts | |
KR20220108812A (en) | Skeletal tracking using previous frames | |
CN107333086A (en) | A kind of method and device that video communication is carried out in virtual scene | |
CN110058699A (en) | A kind of user behavior recognition method based on Intelligent mobile equipment sensor | |
WO2023279713A1 (en) | Special effect display method and apparatus, computer device, storage medium, computer program, and computer program product | |
KR20240066263A (en) | Control interactive fashion based on facial expressions | |
CN107623622A (en) | A kind of method and electronic equipment for sending speech animation | |
CN107918956A (en) | Processing method, device and the electronic equipment of augmented reality | |
CN107204026A (en) | A kind of method and apparatus for showing animation | |
CN108320331A (en) | A kind of method and apparatus for the augmented reality video information generating user's scene | |
CN105893011A (en) | Application interface display method and apparatus | |
CN104751454B (en) | A kind of method and apparatus for being used to determine the character contour in image |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
SE01 | Entry into force of request for substantive examination | ||
TA01 | Transfer of patent application right |
Effective date of registration: 20200522 Address after: Room 508, 5th floor, Building 4, No. 699 Wangshang Road, Changhe Street, Binjiang District, Hangzhou, Zhejiang 310051 Applicant after: Alibaba (China) Co., Ltd. Address before: 14th floor, Tower B, Yunping Plaza, No. 163 Pingyun Road, Huangpu Avenue West, Tianhe District, Guangzhou, Guangdong 510627 Applicant before: Guangzhou Dongjing Computer Technology Co., Ltd. |
|
TA01 | Transfer of patent application right | ||
RJ01 | Rejection of invention patent application after publication |
Application publication date: 20180417 |
|
RJ01 | Rejection of invention patent application after publication |