WO2019105100A1 - Augmented reality processing method and apparatus, and electronic device - Google Patents

Augmented reality processing method and apparatus, and electronic device

Info

Publication number
WO2019105100A1
WO2019105100A1 PCT/CN2018/105116 CN2018105116W
Authority
WO
WIPO (PCT)
Prior art keywords
user
current motion
motion mode
virtual scene
mode
Prior art date
Application number
PCT/CN2018/105116
Other languages
French (fr)
Chinese (zh)
Inventor
汤锦鹏
查俊莉
Original Assignee
广州市动景计算机科技有限公司
Priority date
Filing date
Publication date
Application filed by 广州市动景计算机科技有限公司 filed Critical 广州市动景计算机科技有限公司
Publication of WO2019105100A1 publication Critical patent/WO2019105100A1/en

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T19/00Manipulating 3D models or images for computer graphics
    • G06T19/006Mixed reality
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06QINFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES; SYSTEMS OR METHODS SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES, NOT OTHERWISE PROVIDED FOR
    • G06Q50/00Systems or methods specially adapted for specific business sectors, e.g. utilities or tourism
    • G06Q50/01Social networking

Abstract

An augmented reality processing method and apparatus, and an electronic device. The method comprises: determining a current motion mode of a user and creating a virtual scene adapted to the current motion mode of the user; and acquiring motion data of the user in the current motion mode and associating the motion data with the virtual scene. The method, the apparatus, and the electronic device can provide a richer user experience.

Description

Augmented reality processing method, apparatus, and electronic device
Technical Field
The embodiments of the present application relate to the field of software technology, and in particular to an augmented reality processing method, apparatus, and electronic device.
Background
With the rapid development of the Internet, Internet-based social functions have become richer and richer, such as instant messaging. At the same time, the rise of smart hardware devices has expanded the means of implementing social functions. However, in some scenarios, such as sports or physical exercise, users mostly act alone, which makes the exercise process rather dull and lacking in real-time, immersive interaction with other users.
Summary of the Invention
In view of this, one of the technical problems solved by the embodiments of the present invention is to provide an augmented reality processing method, apparatus, and electronic device, so as to overcome or mitigate the defects of the prior art.
An embodiment of the present application provides an augmented reality processing method, which includes:
determining a current motion mode of a user, and creating a virtual scene adapted to the current motion mode of the user; and
acquiring motion data of the user in the current motion mode and associating the motion data with the virtual scene.
Optionally, in any embodiment of the present invention, determining the current motion mode of the user includes:
determining a motion mode configuration item of a third-party application local to an electronic terminal; and
determining the current motion mode of the user according to the motion mode configuration item.
Optionally, in any embodiment of the present invention, determining the current motion mode of the user includes:
determining the current motion mode of the user by tracking and analyzing the user's actions in a video stream captured by a camera.
Optionally, in any embodiment of the present invention, the camera is started through a web page, or through a third-party application installed locally on the electronic terminal, to capture the video stream.
Optionally, in any embodiment of the present invention, if the camera is started through a web page, creating the virtual scene adapted to the current motion mode of the user includes: creating the virtual scene adapted to the current motion mode of the user through WebGL;
or, if the camera is started by a third-party application installed locally on the electronic terminal, creating the virtual scene adapted to the current motion mode of the user through OpenGL.
Optionally, in any embodiment of the present invention, determining the current motion mode of the user and creating the virtual scene adapted to the current motion mode of the user includes:
determining current motion modes of a plurality of different users, and creating a same virtual scene adapted to the current motion modes of the plurality of different users.
Optionally, in any embodiment of the present invention, acquiring the motion data of the user in the current motion mode and associating the motion data with the virtual scene includes:
acquiring the motion data of the user in the current motion mode and loading it onto a model of the user in the virtual scene.
Optionally, in any embodiment of the present invention, acquiring the motion data of the user in the current motion mode and associating the motion data with the virtual scene includes:
determining a dynamic change of the current motion mode of the user and the resulting change in the motion data, or determining a dynamic change of the motion data of the user within the same current motion mode, and updating the motion data associated with the virtual scene in real time.
Optionally, in any embodiment of the present invention, determining the change in the motion data includes: tracking the current motion state of the user on selected feature points by using a feature point similarity measure.
Optionally, in any embodiment of the present invention, tracking the current motion state of the user on the selected feature points by using the feature point similarity measure includes: performing position prediction on the selected feature points, and tracking the current motion state of the user by using the feature point similarity measure.
An embodiment of the present invention further provides an augmented reality processing apparatus, which includes:
a first module, configured to determine a current motion mode of a user and create a virtual scene adapted to the current motion mode of the user; and
a second module, configured to acquire motion data of the user in the current motion mode and associate the motion data with the virtual scene.
Optionally, in any embodiment of the present invention, the first module is further configured to:
determine a motion mode configuration item of a third-party application local to the electronic terminal; and
determine the current motion mode of the user according to the motion mode configuration item.
Optionally, in any embodiment of the present invention, the first module is further configured to: determine the current motion mode of the user by tracking and analyzing the user's actions in a video stream captured by a camera.
Optionally, in any embodiment of the present invention, the camera is started through a web page, or through a third-party application installed locally on the electronic terminal, to capture the video stream.
Optionally, in any embodiment of the present invention, if the camera is started through a web page, the first module is further configured to create the virtual scene adapted to the current motion mode of the user through WebGL;
or, if the camera is started by a third-party application installed locally on the electronic terminal, the first module is further configured to create the virtual scene adapted to the current motion mode of the user through OpenGL.
Optionally, in any embodiment of the present invention, the first module is further configured to: determine current motion modes of a plurality of different users, and create a same virtual scene adapted to the current motion modes of the plurality of different users.
Optionally, in any embodiment of the present invention, the second module is further configured to acquire the motion data of the user in the current motion mode and load it onto a model of the user in the virtual scene.
Optionally, in any embodiment of the present invention, the second module is further configured to: determine a dynamic change of the current motion mode of the user and the resulting change in the motion data, or determine a dynamic change of the motion data of the user within the same current motion mode, and update the motion data associated with the virtual scene in real time.
Optionally, in any embodiment of the present invention, the second module is further configured to track the current motion state of the user on selected feature points by using a feature point similarity measure.
Optionally, in any embodiment of the present invention, the second module is further configured to perform position prediction on the selected feature points and track the current motion state of the user by using the feature point similarity measure.
An embodiment of the present invention further provides an electronic device, which includes a processor configured with a module that performs the following processing:
determining a current motion mode of a user, and creating a virtual scene adapted to the current motion mode of the user; and
acquiring motion data of the user in the current motion mode and associating the motion data with the virtual scene.
In the following embodiments of the present invention, a virtual scene adapted to the current motion mode of a user is created by determining the current motion mode of the user, and the motion data of the user in the current motion mode is then acquired and associated with the virtual scene. This enables augmented reality processing of sports scenes and also improves the user experience.
Brief Description of the Drawings
Some specific embodiments of the present application will be described in detail below, by way of example and not limitation, with reference to the accompanying drawings. The same reference numerals in the drawings denote the same or similar components or parts. Those skilled in the art should understand that the drawings are not necessarily drawn to scale. In the drawings:
FIG. 1 is a schematic flowchart of an augmented reality processing method according to Embodiment 1 of the present invention;
FIG. 2 is a schematic structural diagram of an augmented reality processing apparatus according to Embodiment 2 of the present invention;
FIG. 3 is a schematic structural diagram of an electronic device according to Embodiment 3 of the present invention.
Detailed Description of the Embodiments
Implementing any one of the technical solutions of the embodiments of the present invention does not necessarily achieve all of the above advantages at the same time.
To enable those skilled in the art to better understand the technical solutions in the embodiments of the present invention, the technical solutions in the embodiments of the present invention will be described clearly and completely below with reference to the accompanying drawings. Obviously, the described embodiments are only some, rather than all, of the embodiments of the present invention. All other embodiments obtained by those of ordinary skill in the art based on the embodiments of the present invention shall fall within the protection scope of the embodiments of the present invention.
In the following embodiments of the present invention, a virtual scene adapted to the current motion mode of a user is created by determining the current motion mode of the user, and the motion data of the user in the current motion mode is then acquired and associated with the virtual scene, thereby realizing augmented reality processing of sports scenes and improving the user experience.
FIG. 1 is a schematic flowchart of an augmented reality processing method according to Embodiment 1 of the present invention. As shown in FIG. 1, the method includes the following steps S101 to S104:
S101: Determine a motion mode configuration item of a third-party application local to an electronic terminal.
In this embodiment, the third-party application local to the electronic terminal may be, for example, a sports application installed on a smartphone, which reads the motion mode configuration item selected by the user in the sports application and thereby determines the motion mode configuration item. Different motion mode configuration items have different numerical definitions, and different motion configuration items are distinguished from one another by these numerical definitions. Motion modes include, for example, running, aerobics, and brisk walking.
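By way of illustration only, the mapping from numeric configuration values to motion modes could be sketched in JavaScript as follows; the numeric codes and the readMotionModeConfig accessor are hypothetical and not part of the original disclosure.

```javascript
// Hypothetical numeric definitions for motion mode configuration items.
const MOTION_MODES = {
  1: 'running',
  2: 'aerobics',
  3: 'brisk-walking',
};

// readMotionModeConfig is an assumed accessor for the third-party
// application's locally stored configuration item.
function determineCurrentMotionMode(readMotionModeConfig) {
  const configValue = readMotionModeConfig();
  return MOTION_MODES[configValue] || 'unknown';
}
```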
Alternatively, in other embodiments, if the electronic terminal is equipped with an acceleration sensor, the motion data generated by the acceleration sensor in different motion modes may be statistically analyzed to determine the amplitude range and frequency range of the sensor's output data for each mode. For an electronic terminal worn by the user and equipped with an acceleration sensor, its local third-party application reads the data output by the acceleration sensor and matches it against the previously determined amplitude ranges and frequency ranges; if a match is found, the motion configuration item is defined accordingly.
Specifically, the noise in the acceleration sensor's output data can be filtered out by median filtering or high-pass filtering, and feature values such as peak-interval features or maximum resultant-vector features can then be extracted, so that the motion mode is determined from the extracted feature values and the motion mode configuration item is determined in turn.
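A minimal sketch of this idea is given below, assuming a plain array of acceleration magnitude samples; the per-mode ranges and thresholds are illustrative placeholders rather than measured statistics from the disclosure.

```javascript
// Median filter: replace each sample with the median of a small window
// to suppress impulsive sensor noise.
function medianFilter(samples, window = 5) {
  const half = Math.floor(window / 2);
  return samples.map((_, i) => {
    const slice = samples.slice(Math.max(0, i - half), i + half + 1);
    return slice.slice().sort((a, b) => a - b)[Math.floor(slice.length / 2)];
  });
}

// Peak-interval feature: average number of samples between successive peaks,
// which roughly reflects the step frequency of the motion.
function peakIntervalFeature(samples, threshold) {
  const peaks = [];
  for (let i = 1; i < samples.length - 1; i++) {
    if (samples[i] > threshold && samples[i] >= samples[i - 1] && samples[i] >= samples[i + 1]) {
      peaks.push(i);
    }
  }
  if (peaks.length < 2) return Infinity;
  let sum = 0;
  for (let i = 1; i < peaks.length; i++) sum += peaks[i] - peaks[i - 1];
  return sum / (peaks.length - 1);
}

// Match the extracted features against pre-computed per-mode ranges
// (placeholder numbers, not values from the disclosure).
const MODE_RANGES = [
  { mode: 'running',       interval: [20, 40], amplitude: [1.5, 4.0] },
  { mode: 'brisk-walking', interval: [40, 80], amplitude: [0.8, 1.8] },
];

function classifyMotionMode(rawSamples) {
  const filtered = medianFilter(rawSamples);
  const interval = peakIntervalFeature(filtered, 1.0);
  const amplitude = Math.max(...filtered) - Math.min(...filtered);
  const hit = MODE_RANGES.find(r =>
    interval >= r.interval[0] && interval <= r.interval[1] &&
    amplitude >= r.amplitude[0] && amplitude <= r.amplitude[1]);
  return hit ? hit.mode : 'unknown';
}
```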
Similar to the above method of determining the motion mode from motion data, in other embodiments the motion mode may also be determined from the heart rate ranges corresponding to different motion modes. The electronic device capable of determining the motion mode from heart rate may be a smartphone, a smart earphone, or a smart wristband. Of course, the above acceleration-sensor-based solution is also applicable to smart wristbands, smart earphones, and the like. More generally, any electronic device equipped with an acceleration sensor or another component capable of capturing the user's motion data may be used, including smart clothing.
S102: Determine the current motion mode of the user by tracking and analyzing the user's actions in a video stream captured by a camera.
In this embodiment, the camera may specifically be a camera already present on the smart terminal, or an ordinary camera or smart camera connected externally through a wired or wireless data interface (such as a USB interface, Wi-Fi, or Bluetooth). As described above, the smart terminal may be, but is not limited to, a smartphone, a smart wristband, a smart earphone, and the like.
In this embodiment, when the augmented reality processing is implemented on the web page side, the camera is started through the web page to capture the video stream.
Specifically, when the camera is started through the web page to capture the video stream, the getUserMedia() API of WebRTC (Web Real-Time Communication) may be called to bring up the camera and start capturing the video stream. Alternatively, the camera may be brought up through another API, navigator.getUserMedia. When bringing up the camera API, camera parameters such as resolution may also be configured.
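As a minimal sketch of this step, the standard navigator.mediaDevices.getUserMedia entry point can be used to start the camera with a resolution constraint (the legacy navigator.getUserMedia form is deprecated in modern browsers); the element names below are illustrative.

```javascript
// Minimal sketch: start the camera from a web page and attach the
// captured video stream to a <video> element for later frame analysis.
async function startCamera(videoElement) {
  const constraints = {
    video: { width: { ideal: 1280 }, height: { ideal: 720 } }, // resolution configuration
    audio: false,
  };
  const stream = await navigator.mediaDevices.getUserMedia(constraints);
  videoElement.srcObject = stream;
  await videoElement.play();
  return stream;
}
```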
In this embodiment, when the video stream is captured in step S102, the video stream is split into multiple frames of static images, and each frame is compared against a background model established for the different motion scenes. If the pixel difference between the frame and the background model is greater than a set threshold, the corresponding pixel belongs to the moving target, i.e. the user in motion. Specifically, the background model may be the average of the pixel values of the first several static frames of the video stream.
Alternatively, in other embodiments, the moving target may also be determined by thresholding the pixel differences between adjacent frames: for example, if the pixel difference is greater than a set threshold, a designated variable is assigned the value 1, otherwise 0, thereby implementing the thresholding.
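The following is a minimal JavaScript sketch of this background-averaging and thresholding scheme operating on ImageData frames; the threshold value and frame layout assumptions are illustrative only.

```javascript
// Build a background model as the per-pixel average of the first N frames.
function buildBackgroundModel(frames) {
  // frames: array of ImageData objects of equal size
  const bg = new Float32Array(frames[0].data.length);
  for (const frame of frames) {
    for (let i = 0; i < frame.data.length; i++) bg[i] += frame.data[i];
  }
  for (let i = 0; i < bg.length; i++) bg[i] /= frames.length;
  return bg;
}

// Mark pixels whose difference from the background exceeds a threshold
// as belonging to the moving target (the user).
function foregroundMask(frame, background, threshold = 30) {
  const mask = new Uint8Array(frame.width * frame.height);
  for (let p = 0; p < mask.length; p++) {
    const i = p * 4; // RGBA layout of ImageData
    const diff =
      Math.abs(frame.data[i] - background[i]) +
      Math.abs(frame.data[i + 1] - background[i + 1]) +
      Math.abs(frame.data[i + 2] - background[i + 2]);
    mask[p] = diff > threshold ? 1 : 0; // thresholding: 1 = moving target
  }
  return mask;
}
```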
After the moving target (i.e. the user) is determined from the background model or from the pixel differences between adjacent frames, the actions of the moving target are tracked and analyzed. Specifically, a plurality of feature points may be selected on the moving target located in the image; these feature points may correspond to the arms and legs of the user's body, so that the user's actions can be tracked and analyzed by tracking these feature points. Further, the feature points are matched based on color histograms, contours, or templates, thereby realizing the tracking and analysis of the user's actions.
In this embodiment, in order to analyze the user's actions, action templates for different motion modes, such as running and aerobics, are established in advance. For example, an action template is built by analyzing the user's limb movements during running or aerobics: when running, the left and right arms swing back and forth alternately, which appears in the image as coordinate changes of the corresponding feature points. The motion action is therefore analyzed by matching against the same feature points in the action template.
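A simple way to compare an observed feature-point trajectory against such pre-built templates is a mean squared distance score, as in the sketch below; the template data structure is an assumption for illustration, not the disclosed format.

```javascript
// Compare an observed feature-point trajectory against an action template
// using a mean squared distance over corresponding positions.
function trajectoryDistance(observed, template) {
  // observed/template: arrays of {x, y} positions of the same feature point
  const n = Math.min(observed.length, template.length);
  let sum = 0;
  for (let i = 0; i < n; i++) {
    const dx = observed[i].x - template[i].x;
    const dy = observed[i].y - template[i].y;
    sum += dx * dx + dy * dy;
  }
  return sum / n;
}

// Pick the best-matching motion mode among the pre-built templates,
// e.g. { running: [...], aerobics: [...] }.
function matchActionTemplate(observed, templates) {
  let best = { mode: 'unknown', score: Infinity };
  for (const [mode, template] of Object.entries(templates)) {
    const score = trajectoryDistance(observed, template);
    if (score < best.score) best = { mode, score };
  }
  return best;
}
```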
S103: Create, through WebGL, a virtual scene adapted to the current motion mode of the user.
In this embodiment, suppose for example that there are two users A and B. User A starts the camera on his or her smart terminal to capture the above video stream, while user B is exercising in another place and needs to be composited into the scene in which A is exercising.
In this embodiment, specifically, the image of user B and user B's motion actions are added to the real-time video stream corresponding to user A and presented locally on user A's side, thereby giving users A and B the experience of exercising in the same sports scene. As another example, the image of user A and user A's motion actions may also be added to the real-time video stream corresponding to user B and presented locally on user B's side.
To this end, in order to create the above virtual scene, in this embodiment the three-dimensional model may specifically be built and rendered through WebGL. WebGL-based model building and rendering includes: creating an HTML5 canvas to define the drawing area, and then drawing the three-dimensional model in the canvas through JavaScript with embedded GLSL ES, which in detail includes creating the shaders, attaching the shaders to the program object, and completing the rendering.
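The following is a minimal sketch of the WebGL steps named above: a canvas is created, a vertex and a fragment shader written in GLSL ES are compiled, attached to a program object, and a simple primitive is rendered. It draws a single triangle rather than a full character model; all sizes, colors, and vertex data are placeholders.

```javascript
function initWebGL() {
  const canvas = document.createElement('canvas');   // HTML5 canvas drawing area
  canvas.width = 640;
  canvas.height = 480;
  document.body.appendChild(canvas);
  const gl = canvas.getContext('webgl');

  const vsSource = `
    attribute vec2 aPosition;
    void main() { gl_Position = vec4(aPosition, 0.0, 1.0); }`;
  const fsSource = `
    precision mediump float;
    void main() { gl_FragColor = vec4(0.2, 0.7, 0.3, 1.0); }`;

  // Compile a GLSL ES shader of the given type.
  function compile(type, source) {
    const shader = gl.createShader(type);
    gl.shaderSource(shader, source);
    gl.compileShader(shader);
    return shader;
  }

  const program = gl.createProgram();
  gl.attachShader(program, compile(gl.VERTEX_SHADER, vsSource));   // attach shaders
  gl.attachShader(program, compile(gl.FRAGMENT_SHADER, fsSource)); // to the program object
  gl.linkProgram(program);
  gl.useProgram(program);

  // Upload triangle vertices and draw.
  const buffer = gl.createBuffer();
  gl.bindBuffer(gl.ARRAY_BUFFER, buffer);
  gl.bufferData(gl.ARRAY_BUFFER,
    new Float32Array([-0.5, -0.5, 0.5, -0.5, 0.0, 0.5]), gl.STATIC_DRAW);
  const loc = gl.getAttribLocation(program, 'aPosition');
  gl.enableVertexAttribArray(loc);
  gl.vertexAttribPointer(loc, 2, gl.FLOAT, false, 0, 0);
  gl.drawArrays(gl.TRIANGLES, 0, 3);
}
```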
S104: Acquire the motion data of the user in the current motion mode and associate it with the virtual scene.
In this embodiment, the creation of the virtual scene has been completed through step S103 above, and a virtual object model has been created in the virtual scene. Therefore, in this step, the association is achieved by associating the motion data of the user in the current motion mode with the virtual object model in the virtual scene. In addition, when associating the motion data with the virtual scene, the motion data associated with the virtual scene may be updated in real time according to the continuous real-time changes in the user's motion data.
It should be noted that, in other embodiments, the motion data of multiple users may be associated with the same virtual scene. This virtual scene may be a scene in which one of the users is actually exercising while being a virtual scene for the other users, or it may be a completely virtual scene for all users.
Corresponding to the above step S102, the current motion modes of a plurality of different users are determined, and a same virtual scene adapted to the current motion modes of the plurality of different users is created.
In this embodiment, when the motion data is associated with the virtual scene, the corresponding motion data is associated with the corresponding part of the virtual object model; for example, an arm action is associated with the arm of the virtual object model. In order to associate the action data with the corresponding parts of the virtual object model, when the user's actual motion data is acquired, an association between the actual motion data and the moving parts of the user's body is established, and the moving parts of the user's body are in turn associated with the moving parts of the virtual object model, thereby realizing the dynamic association of the actual motion data with the corresponding parts of the virtual object model. This is equivalent to determining the dynamic change of the current motion mode of the user and the resulting change in the motion data, or determining the dynamic change of the motion data of the user within the same current motion mode, and updating the motion data associated with the virtual scene in real time.
It should be noted that the above association between the motion data and the moving parts may be distinguished by establishing identifiers: motion data of different body parts carries different header identifiers.
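As a sketch of how such header identifiers might route motion data to the matching part of the virtual object model, consider the following; the identifier values, sample shape, and model API are assumptions made for illustration only.

```javascript
// Hypothetical header identifiers for motion data of different body parts.
const PART_BY_HEADER = {
  0x01: 'leftArm',
  0x02: 'rightArm',
  0x03: 'leftLeg',
  0x04: 'rightLeg',
};

// Route a motion data sample to the corresponding part of the virtual
// object model and update that part in real time.
function applyMotionSample(model, sample) {
  // sample: { header: number, rotation: { x, y, z } }
  const part = PART_BY_HEADER[sample.header];
  if (!part || !model[part]) return;
  model[part].rotation = sample.rotation;
}
```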
It should be noted that, when determining the change in the motion data, the current motion state of the user may also be tracked only on selected feature points by using a feature point similarity measure. Specifically, tracking the current motion state of the user on the selected feature points by using the feature point similarity measure includes: performing position prediction on the selected feature points, and tracking the current motion state of the user by using the feature point similarity measure.
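One common realization of "position prediction plus similarity matching" is a constant-velocity prediction followed by a nearest-descriptor search, sketched below; the descriptor format and search radius are illustrative assumptions rather than the disclosed method.

```javascript
// Predict a feature point's next position from its recent velocity.
function predictPosition(point) {
  // point: { x, y, vx, vy, descriptor: number[] }
  return { x: point.x + point.vx, y: point.y + point.vy };
}

// Euclidean distance between two feature descriptors (smaller = more similar).
function descriptorDistance(a, b) {
  let sum = 0;
  for (let i = 0; i < a.length; i++) sum += (a[i] - b[i]) ** 2;
  return Math.sqrt(sum);
}

// Track one feature point: search candidate detections near the predicted
// position and keep the one with the most similar descriptor.
function trackFeaturePoint(point, candidates, searchRadius = 20) {
  const predicted = predictPosition(point);
  let best = null;
  let bestScore = Infinity;
  for (const c of candidates) {
    const dx = c.x - predicted.x;
    const dy = c.y - predicted.y;
    if (dx * dx + dy * dy > searchRadius * searchRadius) continue; // outside search window
    const score = descriptorDistance(point.descriptor, c.descriptor);
    if (score < bestScore) { bestScore = score; best = c; }
  }
  return best; // null if the point was lost in this frame
}
```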
In another embodiment, if the camera is started by a third-party application installed locally on the electronic terminal, the virtual scene adapted to the current motion mode of the user is created through OpenGL.
FIG. 2 is a schematic structural diagram of an augmented reality processing apparatus according to Embodiment 2 of the present invention. As shown in FIG. 2, the apparatus includes:
a first module 201, configured to determine a current motion mode of a user and create a virtual scene adapted to the current motion mode of the user; and
a second module 202, configured to acquire motion data of the user in the current motion mode and associate the motion data with the virtual scene.
Optionally, in any embodiment of the present invention, the first module 201 is further configured to:
determine a motion mode configuration item of a third-party application local to the electronic terminal; and
determine the current motion mode of the user according to the motion mode configuration item.
Optionally, in any embodiment of the present invention, the first module 201 is further configured to: determine the current motion mode of the user by tracking and analyzing the user's actions in a video stream captured by a camera.
Optionally, in any embodiment of the present invention, the camera is started through a web page, or through a third-party application installed locally on the electronic terminal, to capture the video stream.
Optionally, in any embodiment of the present invention, if the camera is started through a web page, the first module 201 is further configured to create the virtual scene adapted to the current motion mode of the user through WebGL;
or, if the camera is started by a third-party application installed locally on the electronic terminal, the first module 201 is further configured to create the virtual scene adapted to the current motion mode of the user through OpenGL.
Optionally, in any embodiment of the present invention, the first module 201 is further configured to: determine current motion modes of a plurality of different users, and create a same virtual scene adapted to the current motion modes of the plurality of different users.
Optionally, in any embodiment of the present invention, the second module 202 is further configured to acquire the motion data of the user in the current motion mode and load it onto a model of the user in the virtual scene.
Optionally, in any embodiment of the present invention, the second module 202 is further configured to: determine a dynamic change of the current motion mode of the user and the resulting change in the motion data, or determine a dynamic change of the motion data of the user within the same current motion mode, and update the motion data associated with the virtual scene in real time.
Optionally, in any embodiment of the present invention, the second module 202 is further configured to track the current motion state of the user on selected feature points by using a feature point similarity measure.
Optionally, in any embodiment of the present invention, the second module 202 is further configured to perform position prediction on the selected feature points and track the current motion state of the user by using the feature point similarity measure.
It should be noted that, in the above embodiment of FIG. 2, the first module and the second module may be multiplexed with each other; therefore, in the embodiment of FIG. 2, the actual number of modules may be fewer than two.
In addition, the above first module and second module may be deployed in a distributed manner; for example, some modules may be located at the front end and some at the back end.
FIG. 3 is a schematic structural diagram of an electronic device according to Embodiment 3 of the present invention. As shown in FIG. 3, the electronic device includes a processor 301, and the processor is configured with a program module 302 that performs the following processing:
determining a current motion mode of a user, and creating a virtual scene adapted to the current motion mode of the user; and
acquiring motion data of the user in the current motion mode and associating the motion data with the virtual scene.
In this embodiment, the program module 302 may include the above first module and second module performing relatively independent functions, or may consist of only one module performing all of the above functions.
In this embodiment, the electronic device may include, but is not limited to, a smartphone, a smart wristband, a smart earphone, and the like.
An embodiment of the present application further provides a storage medium storing instructions for performing the functions of the above program module 302.
Throughout this specification, "one embodiment" means that a particular feature, structure, configuration, or characteristic described in connection with the embodiment is included in at least one embodiment of the present invention. Thus, appearances of the phrase "in one embodiment" in various places in this specification do not necessarily all refer to the same embodiment. Furthermore, the particular features, structures, configurations, or characteristics may be combined in any suitable manner in one or more embodiments.
The terms "formed on", "over", "on", and "bonded to", as used herein, may refer to the relative position of one layer with respect to another layer. A layer that is "formed on", "over", or "on" another layer, or bonded "to" another layer, may be in direct contact with that other layer or may have one or more intervening layers. A layer that is "on" a layer may be in direct contact with that layer or may have one or more intervening layers.
Before undertaking the detailed description below, it may be advantageous to set forth definitions of certain words and phrases used throughout this patent document: the terms "include" and "comprise", as well as derivatives thereof, mean inclusion without limitation; the term "or" is inclusive, meaning and/or; the phrases "associated with" and "associated therewith", as well as derivatives thereof, may mean to include, be included within, interconnect with, contain, be contained within, connect to or with, couple to or with, be communicable with, cooperate with, interleave, juxtapose, be proximate to, be bound to or with, have, have a property of, or the like; and the term "controller" means any device, system, or part thereof that controls at least one operation, and such a device may be implemented in hardware, firmware, or software, or some combination of at least two of the same. It should be noted that the functionality associated with any particular controller may be centralized or distributed, whether locally or remotely. Definitions for certain words and phrases are provided throughout this patent document, and those skilled in the art should understand that in many, if not most, instances, such definitions apply to prior as well as future uses of such defined words and phrases.
In the present disclosure, the expression "include" or "may include" refers to the existence of a corresponding function, operation, or element, and does not limit one or more additional functions, operations, or elements. In the present disclosure, terms such as "include" and/or "have" may be construed to denote certain characteristics, numbers, steps, operations, constituent elements, components, or combinations thereof, but may not be construed to exclude the existence of, or the possibility of adding, one or more other characteristics, numbers, steps, operations, constituent elements, components, or combinations thereof.
In the present disclosure, the expression "A or B", "at least one of A or/and B", or "one or more of A or/and B" may include all possible combinations of the listed items. For example, the expression "A or B", "at least one of A and B", or "at least one of A or B" may include: (1) at least one A, (2) at least one B, or (3) both at least one A and at least one B.
The expressions "first", "second", "the first", or "the second" used in the various embodiments of the present disclosure may modify various components regardless of order and/or importance, but do not limit the corresponding components. The above expressions are used merely for the purpose of distinguishing an element from other elements. For example, a first user device and a second user device indicate different user devices, although both are user devices. For example, a first element may be termed a second element and, similarly, a second element may be termed a first element, without departing from the scope of the present disclosure.
When an element (e.g., a first element) is referred to as being "(operatively or communicatively) coupled with/to" or "connected to" another element (e.g., a second element), it should be understood that the one element is directly connected to the other element or is indirectly connected to the other element via yet another element (e.g., a third element). In contrast, it may be understood that when an element (e.g., a first element) is referred to as being "directly connected" or "directly coupled" to another element (e.g., a second element), no element (e.g., a third element) is interposed between the two.
The expression "configured to" used herein may be used interchangeably with, for example, "suitable for", "having the capacity to", "designed to", "adapted to", "made to", or "capable of". The term "configured to" does not necessarily mean "specifically designed to" in hardware. Alternatively, in some situations, the expression "device configured to" may mean that the device, together with other devices or components, "is able to". For example, the phrase "processor adapted (or configured) to perform A, B, and C" may mean a dedicated processor (e.g., an embedded processor) for performing only the corresponding operations, or a general-purpose processor (e.g., a central processing unit (CPU) or an application processor (AP)) that can perform the corresponding operations by executing one or more software programs stored in a storage device.
The apparatus embodiments described above are merely illustrative. The modules described as separate components may or may not be physically separated, and the components displayed as modules may or may not be physical modules; that is, they may be located in one place or distributed over multiple network modules. Some or all of the modules may be selected according to actual needs to achieve the purpose of the solutions of the embodiments. Those of ordinary skill in the art can understand and implement them without creative effort.
Through the description of the above embodiments, those skilled in the art can clearly understand that each embodiment can be implemented by means of software plus a necessary general hardware platform, or of course by hardware. Based on such an understanding, the above technical solutions, in essence or with respect to the part contributing to the prior art, may be embodied in the form of a software product. The computer software product may be stored in a computer-readable storage medium, and the computer-readable recording medium includes any mechanism for storing or transmitting information in a form readable by a computer (e.g., a computer). For example, a machine-readable medium includes read-only memory (ROM), random access memory (RAM), magnetic disk storage media, optical storage media, flash storage media, and electrical, optical, acoustic, or other forms of propagated signals (e.g., carrier waves, infrared signals, digital signals, etc.). The computer software product includes a number of instructions for causing a computer device (which may be a personal computer, a server, a network device, or the like) to perform the methods described in the various embodiments or in certain parts of the embodiments.
Finally, it should be noted that the above embodiments are merely intended to illustrate, rather than limit, the technical solutions of the embodiments of the present application. Although the present application has been described in detail with reference to the foregoing embodiments, those of ordinary skill in the art should understand that they can still modify the technical solutions described in the foregoing embodiments or make equivalent substitutions for some of the technical features therein, and such modifications or substitutions do not cause the essence of the corresponding technical solutions to depart from the spirit and scope of the technical solutions of the embodiments of the present application.
Those skilled in the art should understand that the embodiments of the present invention may be provided as a method, an apparatus (device), or a computer program product. Therefore, the embodiments of the present invention may take the form of an entirely hardware embodiment, an entirely software embodiment, or an embodiment combining software and hardware aspects. Moreover, the embodiments of the present invention may take the form of a computer program product implemented on one or more computer-usable storage media (including but not limited to disk storage, CD-ROM, optical storage, etc.) containing computer-usable program code.
The embodiments of the present invention are described with reference to flowcharts and/or block diagrams of the method, the apparatus (device), and the computer program product according to the embodiments of the present invention. It should be understood that each flow and/or block in the flowcharts and/or block diagrams, and combinations of flows and/or blocks in the flowcharts and/or block diagrams, can be implemented by computer program instructions. These computer program instructions may be provided to a processor of a general-purpose computer, a special-purpose computer, an embedded processor, or another programmable data processing device to produce a machine, so that the instructions executed by the processor of the computer or other programmable data processing device produce an apparatus for implementing the functions specified in one or more flows of the flowcharts and/or one or more blocks of the block diagrams.
These computer program instructions may also be stored in a computer-readable memory capable of directing a computer or other programmable data processing device to operate in a particular manner, so that the instructions stored in the computer-readable memory produce an article of manufacture including an instruction apparatus that implements the functions specified in one or more flows of the flowcharts and/or one or more blocks of the block diagrams.
These computer program instructions may also be loaded onto a computer or other programmable data processing device, so that a series of operation steps are performed on the computer or other programmable device to produce computer-implemented processing, whereby the instructions executed on the computer or other programmable device provide steps for implementing the functions specified in one or more flows of the flowcharts and/or one or more blocks of the block diagrams.

Claims (21)

  1. An augmented reality processing method, comprising:
    determining a current motion mode of a user, and creating a virtual scene adapted to the current motion mode of the user; and
    acquiring motion data of the user in the current motion mode and associating the motion data with the virtual scene.
  2. The method according to claim 1, wherein determining the current motion mode of the user comprises:
    determining a motion mode configuration item of a third-party application local to an electronic terminal; and
    determining the current motion mode of the user according to the motion mode configuration item.
  3. The method according to claim 1 or 2, wherein determining the current motion mode of the user comprises:
    determining the current motion mode of the user by tracking and analyzing the user's actions in a video stream captured by a camera.
  4. The method according to any one of claims 1 to 3, wherein the camera is started through a web page, or through a third-party application installed locally on the electronic terminal, to capture the video stream.
  5. The method according to any one of claims 1 to 4, wherein, if the camera is started through a web page, creating the virtual scene adapted to the current motion mode of the user comprises: creating the virtual scene adapted to the current motion mode of the user through WebGL;
    or, if the camera is started by a third-party application installed locally on the electronic terminal, creating the virtual scene adapted to the current motion mode of the user through OpenGL.
  6. The method according to any one of claims 1 to 5, wherein determining the current motion mode of the user and creating the virtual scene adapted to the current motion mode of the user comprises:
    determining current motion modes of a plurality of different users, and creating a same virtual scene adapted to the current motion modes of the plurality of different users.
  7. The method according to any one of claims 1 to 6, wherein acquiring the motion data of the user in the current motion mode and associating the motion data with the virtual scene comprises:
    acquiring the motion data of the user in the current motion mode and loading it onto a model of the user in the virtual scene.
  8. The method according to any one of claims 1 to 7, wherein acquiring the motion data of the user in the current motion mode and associating the motion data with the virtual scene comprises:
    determining a dynamic change of the current motion mode of the user and the resulting change in the motion data, or determining a dynamic change of the motion data of the user within the same current motion mode, and updating the motion data associated with the virtual scene in real time.
  9. The method according to any one of claims 1 to 8, wherein determining the change in the motion data comprises: tracking the current motion state of the user on selected feature points by using a feature point similarity measure.
  10. The method according to any one of claims 1 to 9, wherein tracking the current motion state of the user on the selected feature points by using the feature point similarity measure comprises: performing position prediction on the selected feature points, and tracking the current motion state of the user by using the feature point similarity measure.
  11. An augmented reality processing apparatus, comprising:
    a first module, configured to determine a current motion mode of a user and create a virtual scene adapted to the current motion mode of the user; and
    a second module, configured to acquire motion data of the user in the current motion mode and associate the motion data with the virtual scene.
  12. The apparatus according to claim 11, wherein the first module is further configured to:
    determine a motion mode configuration item of a third-party application local to the electronic terminal; and
    determine the current motion mode of the user according to the motion mode configuration item.
  13. The apparatus according to claim 11 or 12, wherein the first module is further configured to: determine the current motion mode of the user by tracking and analyzing the user's actions in a video stream captured by a camera.
  14. The apparatus according to any one of claims 11 to 13, wherein the camera is started through a web page, or through a third-party application installed locally on the electronic terminal, to capture the video stream.
  15. The apparatus according to any one of claims 11 to 14, wherein, if the camera is started through a web page, the first module is further configured to create the virtual scene adapted to the current motion mode of the user through WebGL;
    or, if the camera is started by a third-party application installed locally on the electronic terminal, the first module is further configured to create the virtual scene adapted to the current motion mode of the user through OpenGL.
  16. The apparatus according to any one of claims 11 to 15, wherein the first module is further configured to: determine current motion modes of a plurality of different users, and create a same virtual scene adapted to the current motion modes of the plurality of different users.
  17. The apparatus according to any one of claims 11 to 16, wherein the second module is further configured to acquire the motion data of the user in the current motion mode and load it onto a model of the user in the virtual scene.
  18. The apparatus according to any one of claims 11 to 17, wherein the second module is further configured to: determine a dynamic change of the current motion mode of the user and the resulting change in the motion data, or determine a dynamic change of the motion data of the user within the same current motion mode, and update the motion data associated with the virtual scene in real time.
  19. The apparatus according to any one of claims 11 to 18, wherein the second module is further configured to track the current motion state of the user on selected feature points by using a feature point similarity measure.
  20. The apparatus according to any one of claims 11 to 19, wherein the second module is further configured to perform position prediction on the selected feature points and track the current motion state of the user by using the feature point similarity measure.
  21. An electronic device, comprising a processor, the processor being configured with a module that performs the following processing:
    determining a current motion mode of a user, and creating a virtual scene adapted to the current motion mode of the user; and
    acquiring motion data of the user in the current motion mode and associating the motion data with the virtual scene.
PCT/CN2018/105116 2017-12-01 2018-09-11 Augmented reality processing method and apparatus, and electronic device WO2019105100A1 (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN201711250261.9 2017-12-01
CN201711250261.9A CN107918956A (en) 2017-12-01 2017-12-01 Processing method, device and the electronic equipment of augmented reality

Publications (1)

Publication Number Publication Date
WO2019105100A1 true WO2019105100A1 (en) 2019-06-06

Family

ID=61898203

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2018/105116 WO2019105100A1 (en) 2017-12-01 2018-09-11 Augmented reality processing method and apparatus, and electronic device

Country Status (2)

Country Link
CN (1) CN107918956A (en)
WO (1) WO2019105100A1 (en)

Families Citing this family (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107918956A (en) * 2017-12-01 2018-04-17 广州市动景计算机科技有限公司 Processing method, device and the electronic equipment of augmented reality
CN113515187B (en) * 2020-04-10 2024-02-13 咪咕视讯科技有限公司 Virtual reality scene generation method and network side equipment

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106125903A (en) * 2016-04-24 2016-11-16 林云帆 Many people interactive system and method
CN106843532A (en) * 2017-02-08 2017-06-13 北京小鸟看看科技有限公司 The implementation method and device of a kind of virtual reality scenario
US20170178272A1 (en) * 2015-12-16 2017-06-22 WorldViz LLC Multi-user virtual reality processing
US20170345219A1 (en) * 2014-05-20 2017-11-30 Leap Motion, Inc. Wearable augmented reality devices with object detection and tracking
CN107918956A (en) * 2017-12-01 2018-04-17 广州市动景计算机科技有限公司 Processing method, device and the electronic equipment of augmented reality

Family Cites Families (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104883556B (en) * 2015-05-25 2017-08-29 深圳市虚拟现实科技有限公司 3 D displaying method and augmented reality glasses based on augmented reality
CN106355153B (en) * 2016-08-31 2019-10-18 上海星视度科技有限公司 A kind of virtual objects display methods, device and system based on augmented reality
CN107168532B (en) * 2017-05-05 2020-09-11 武汉秀宝软件有限公司 Virtual synchronous display method and system based on augmented reality

Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20170345219A1 (en) * 2014-05-20 2017-11-30 Leap Motion, Inc. Wearable augmented reality devices with object detection and tracking
US20170178272A1 (en) * 2015-12-16 2017-06-22 WorldViz LLC Multi-user virtual reality processing
CN106125903A (en) * 2016-04-24 2016-11-16 林云帆 Many people interactive system and method
CN106843532A (en) * 2017-02-08 2017-06-13 北京小鸟看看科技有限公司 The implementation method and device of a kind of virtual reality scenario
CN107918956A (en) * 2017-12-01 2018-04-17 广州市动景计算机科技有限公司 Processing method, device and the electronic equipment of augmented reality

Also Published As

Publication number Publication date
CN107918956A (en) 2018-04-17

Similar Documents

Publication Publication Date Title
US10880495B2 (en) Video recording method and apparatus, electronic device and readable storage medium
US20230283748A1 (en) Communication using interactive avatars
US11657557B2 (en) Method and system for generating data to provide an animated visual representation
US20240062444A1 (en) Virtual clothing try-on
US9640218B2 (en) Physiological cue processing
US10187690B1 (en) Systems and methods to detect and correlate user responses to media content
KR102112743B1 (en) Display apparatus, server and control method thereof
CN110050461A (en) The system and method for real-time composite video are provided from the multi-source equipment characterized by augmented reality element
CN112199016B (en) Image processing method, image processing device, electronic equipment and computer readable storage medium
JP2020039029A (en) Video distribution system, video distribution method, and video distribution program
JP2015184689A (en) Moving image generation device and program
CN114930399A (en) Image generation using surface-based neurosynthesis
CN115412743A (en) Apparatus, system, and method for automatically delaying a video presentation
CN109242940B (en) Method and device for generating three-dimensional dynamic image
EP4200745A1 (en) Cross-domain neural networks for synthesizing image with fake hair combined with real image
CN108876878B (en) Head portrait generation method and device
KR20230098244A (en) Adaptive skeletal joint facilitation
KR20230170722A (en) Garment segmentation
JP2022003797A (en) Static video recognition
US9161012B2 (en) Video compression using virtual skeleton
WO2019105100A1 (en) Augmented reality processing method and apparatus, and electronic device
KR20230148239A (en) Robust facial animation from video using neural networks
US11880947B2 (en) Real-time upper-body garment exchange
JP6563580B1 (en) Communication system and program
CN116917938A (en) Visual effect of whole body

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 18884375

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 18884375

Country of ref document: EP

Kind code of ref document: A1