WO2019127325A1 - Information processing method and apparatus, cloud processing device, and computer program product - Google Patents


Info

Publication number
WO2019127325A1
Authority
WO
WIPO (PCT)
Prior art keywords
information
touch screen
virtual touch
user
model
Prior art date
Application number
PCT/CN2017/119720
Other languages
French (fr)
Chinese (zh)
Inventor
杨文超
王恺
廉士国
Original Assignee
深圳前海达闼云端智能科技有限公司
Priority date
Filing date
Publication date
Application filed by 深圳前海达闼云端智能科技有限公司
Priority to PCT/CN2017/119720 (WO2019127325A1)
Priority to CN201780002728.XA (CN109643182B)
Publication of WO2019127325A1


Classifications

    • G — PHYSICS
    • G06 — COMPUTING; CALCULATING OR COUNTING
    • G06F — ELECTRIC DIGITAL DATA PROCESSING
    • G06F 3/00 — Input arrangements for transferring data to be processed into a form capable of being handled by the computer; output arrangements for transferring data from the processing unit to the output unit, e.g. interface arrangements
    • G06F 3/01 — Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F 3/03 — Arrangements for converting the position or the displacement of a member into a coded form
    • G06F 3/041 — Digitisers, e.g. for touch screens or touch pads, characterised by the transducing means
    • G06F 3/0416 — Control or interface arrangements specially adapted for digitisers

Definitions

  • the present application relates to the field of data processing technologies, and in particular, to an information processing method, apparatus, cloud processing device, and computer program product.
  • Computer vision is a representative field. Computer vision is the science of how to make a machine "see": it uses equipment in place of the human eye to identify, track, and measure targets, and further processes the captured images so that they are better suited for human observation or for transmission to instruments.
  • Some AR (Augmented Reality) glasses, such as HoloLens, are used for displaying virtual reality scenes and are representative wearable devices in computer vision.
  • Such glasses use a camera to obtain depth maps from different angles in real time and accumulate the different depth maps, thereby calculating an accurate three-dimensional model of the scene and the target objects within it by means of stereoscopic vision and similar techniques, and presenting the corresponding images to the user. By viewing the image information, users can also interact with the glasses through clicking and sliding gestures.
  • The embodiment of the present invention provides an information processing method, an apparatus, a cloud processing device, and a computer program product, so that a user can operate a virtual touch screen on the surface of a real object, thereby enhancing tactile feedback and realism and improving detection precision.
  • an embodiment of the present application provides an information processing method, including:
  • the embodiment of the present application further provides an information processing apparatus, including:
  • the receiving unit is configured to receive current environment information sent by the first device, and perform modeling according to the current environment information to obtain model information, and locate the first device to obtain positioning information.
  • a generating unit configured to generate a virtual touch screen on the surface of the developing body in the model according to the model information and the positioning information.
  • a sending unit configured to send the virtual touch screen to the first device.
  • the embodiment of the present application further provides a cloud processing device, where the device includes an input and output interface, a processor, and a memory;
  • the memory is for storing instructions that, when executed by the processor, cause the device to perform any of the methods of the first aspect.
  • The embodiment of the present application further provides a computer program product, which can be directly loaded into the internal memory of a computer and contains software code; after being loaded and executed by the computer, the computer program can implement any one of the methods of the first aspect.
  • In the information processing method, apparatus, cloud processing device, and computer program product provided by the embodiments of the present application, a device such as a cloud computing center models and locates the current environment information sent by the first device and, according to the model information and the positioning information, generates a virtual touch screen on the surface of a developing body in the model, which is then sent to the first device for display. Because the developing body in the model corresponds to a physical object in the actual scene, the user can operate on the surface of the real object to control the virtual touch screen, which enhances tactile feedback and realism; operating on a real object surface also improves detection precision. Furthermore, through the interaction between the first device and the third device, the more complicated operations of modeling and locating the current environment are completed by the third device, which reduces the load on the first device and solves the prior-art problems that the arm fatigues easily due to lack of force feedback and that operations with high precision requirements are difficult to complete.
  • FIG. 1 is a flowchart of an embodiment of an information processing method according to an embodiment of the present application
  • FIG. 2 is another flowchart of an embodiment of an information processing method according to an embodiment of the present disclosure
  • FIG. 3 is a schematic diagram of a first scenario provided by an embodiment of the present application.
  • FIG. 4 is a schematic diagram of a second scenario provided by an embodiment of the present application.
  • FIG. 5 is another flowchart of an embodiment of an information processing method according to an embodiment of the present disclosure.
  • FIG. 6 is another flowchart of an embodiment of an information processing method according to an embodiment of the present disclosure.
  • FIG. 7 is another flowchart of an embodiment of an information processing method according to an embodiment of the present disclosure.
  • FIG. 8 is a schematic structural diagram of an embodiment of an information processing apparatus according to an embodiment of the present disclosure.
  • FIG. 9 is another schematic structural diagram of an embodiment of an information processing apparatus according to an embodiment of the present disclosure.
  • FIG. 10 is another schematic structural diagram of an embodiment of an information processing apparatus according to an embodiment of the present disclosure.
  • the word “if” as used herein may be interpreted as “when” or “when” or “in response to determining” or “in response to detecting.”
  • The phrase "if determined" or "if detected (stated condition or event)" may be interpreted as "when determined", "in response to determining", "when (the stated condition or event) is detected", or "in response to detecting (the stated condition or event)".
  • a wearable device is a portable device that is worn directly on the body or integrated into the user's clothing or accessories.
  • Wearable devices are more than just a hardware device. They also implement powerful functions through software support, data interaction, and cloud interaction.
  • eye-related devices such as smart glasses and helmets are devices that can directly interact with the user by using vision.
  • smart glasses and helmets When smart glasses and helmets are worn on the user's head, they can create virtual scenes in three-dimensional space in front of the user's eyes. It allows users to view not only the scene, but also interact with the scene, such as clicking, dragging, sliding, and so on.
  • FIG. 1 is a flowchart of an embodiment of an information processing method according to an embodiment of the present disclosure, which is applied to a third device.
  • the information processing method provided by the embodiment of the present application may specifically include the following steps:
  • the first device acquires current environment information, and sends current environment information to the third device.
  • The first device refers to the wearable device, and includes at least a display unit, a basic operation unit, a wireless transmission unit, an environment sensing unit, an interaction detecting unit, and a power unit.
  • The third device refers to a device with strong computing power that includes at least an arithmetic unit and a wireless transmission unit, for example, a local computer or a cloud processing center.
  • the first device and the third device can communicate with each other, and the communication mode can use wireless communication methods such as 2G, 3G, 4G, and WiFi.
  • the first device obtains the current environment information through the environment sensing unit.
  • The environment sensing unit needs to include at least an IMU (Inertial Measurement Unit) and an image capturing module (preferably a binocular camera). In practical applications, SLAM (Simultaneous Localization and Mapping), a real-time positioning and map construction technology, is used to acquire the current environment information. Specifically, the current environment information includes the positioning of the first device itself, images of the surrounding environment, and three-dimensional information of object surfaces.
  • the current environment information is sent to the third device by using the wireless transmission unit, so that the third device can perform subsequent processing on the current environment information.
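The application does not fix the on-wire structure of the current environment information; a minimal sketch of how the first device might bundle the environment sensing unit's output (pose from IMU/SLAM, depth and RGB frames from the binocular camera) for the wireless transmission unit follows. All field names are illustrative assumptions, not taken from the application.

```python
import json

def package_environment_info(device_id, pose, depth_map, rgb_frame):
    """Bundle what the environment sensing unit collects into a JSON
    payload for the wireless transmission unit. Field names are
    hypothetical; the application only specifies what is sensed."""
    return json.dumps({
        'device': device_id,
        'pose': pose,          # e.g. [x, y, z, roll, pitch, yaw]
        'depth': depth_map,    # nested lists, metres
        'rgb': rgb_frame,      # nested lists of [r, g, b]
    })
```

In practice the depth and RGB frames would be compressed binary buffers rather than JSON lists; JSON is used here only to keep the sketch self-contained.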
  • In order to speed up processing and increase the transmission speed, it is preferable to use 4G, WiFi, or even faster wireless communication between the first device and the third device.
  • the third device receives the current environment information sent by the first device, and performs modeling according to the current environment information to obtain model information, and locates the first device to obtain positioning information.
  • The third device parses the current environment information to construct a virtual scene. The process may include: first, acquiring parameters such as the horizontal angle, zenith distance, slant range, and reflection intensity of each physical object in the current environment information, which are automatically stored and processed to obtain point cloud data; then editing the point cloud data, splicing and merging the scanned data, performing three-dimensional measurement of image data points, visualizing the point cloud, carrying out three-dimensional modeling of the spatial data, and performing texture analysis and data conversion, thereby constructing the virtual scene and obtaining the model information.
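The horizontal angle, zenith distance, and slant range named above are spherical scanner coordinates; converting each measurement to a Cartesian point is the first step toward the point cloud data. A minimal sketch, assuming a conventional spherical-to-Cartesian mapping (the axis convention is an assumption, not stated in the application):

```python
import math

def scan_to_point(horizontal_angle_deg, zenith_deg, slant_range):
    """Convert one scan measurement (angles in degrees, range in metres)
    into a Cartesian (x, y, z) point. Assumed convention: the zenith
    distance is measured from the vertical axis, the horizontal angle
    from the x-axis."""
    h = math.radians(horizontal_angle_deg)
    zen = math.radians(zenith_deg)
    x = slant_range * math.sin(zen) * math.cos(h)
    y = slant_range * math.sin(zen) * math.sin(h)
    z = slant_range * math.cos(zen)
    return (x, y, z)

def build_point_cloud(measurements):
    """Each measurement is (horizontal angle, zenith distance,
    slant range, reflection intensity); intensity is carried along."""
    return [scan_to_point(h, zen, r) + (refl,)
            for (h, zen, r, refl) in measurements]
```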
  • the third device parses the current environment information, extracts the positioning information of the first device, and performs positioning on the first device to obtain positioning information.
  • The third device generates a virtual touch screen on the surface of the developing body in the model according to the model information and the positioning information.
  • The developing body refers to anything on whose surface a virtual touch screen can be generated. Because the things in the model are virtual images of real objects in the real scene, all real objects in the real scene, such as tables, walls, water dispensers, water heaters, and windows, can serve as developing bodies in the model. Therefore, the third device can generate a virtual touch screen on the surface of any developing body in the model according to the model information and the positioning information.
  • the virtual touch screen can be automatically generated on the surface of the developing body or generated after interacting with the user.
  • When the user operates the virtual touch screen, the user simultaneously performs the same operation on the corresponding real object in the real scene.
  • the third device sends the virtual touch screen to the first device.
  • the virtual touch screen is sent to the first device by using the wireless transmission unit.
  • the first device receives and displays a virtual touch screen.
  • the virtual touch screen fits the surface of the developing body in the model.
  • For example, if the developing body is a water bucket, the virtual touch screen is attached to the surface of the bucket; that is, the curvature of the virtual touch screen is consistent with the curvature of the bucket. Similarly, a virtual touch screen generated on a table fits over the table surface. The purpose is that the user can operate on the surface of a real object in the real scene and obtain real tactile feedback.
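Matching the screen's curvature to a curved developing body such as the water bucket can be illustrated by wrapping a flat screen onto a vertical cylinder so that arc length on the cylinder equals the flat screen width. This is only a sketch under that simplifying assumption; the cylinder model, sampling grid, and axis convention are illustrative, not from the application.

```python
import math

def fit_screen_to_cylinder(screen_w, screen_h, radius, center,
                           rows=3, cols=4):
    """Sample a screen_w x screen_h virtual screen as a rows x cols grid
    of 3-D points wrapped onto a vertical cylinder of the given radius
    (e.g. a water bucket), so the screen's curvature matches the surface."""
    cx, cy, cz = center
    points = []
    for i in range(rows):
        v = i / (rows - 1)            # 0..1 across the screen height
        for j in range(cols):
            u = j / (cols - 1)        # 0..1 across the screen width
            # arc length along the cylinder equals the flat screen width
            theta = (u - 0.5) * (screen_w / radius)
            x = cx + radius * math.sin(theta)
            y = cy + radius * math.cos(theta)  # depth axis (assumed)
            z = cz + (v - 0.5) * screen_h
            points.append((x, y, z))
    return points
```

Every sampled point lies at exactly the cylinder radius from the axis, which is the stated requirement that the screen's curvature be consistent with the bucket's.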
  • The information processing method provided by the embodiment of the present application uses a device such as a cloud computing center to model and locate the current environment information sent by the first device and, based on the model information and the positioning information, generates a virtual touch screen on the surface of the developing body in the model, which is then sent to the first device for display. The developing body in the model corresponds to a physical object in the actual scene, so the user can operate on the surface of the physical object to control the virtual touch screen, which enhances the tactile sensation and realism; operating on the surface of a real object also improves the detection accuracy. The more complicated operations of modeling and locating the current environment are completed by the third device, which reduces the load on the first device and solves the prior-art problems that the arm fatigues easily due to lack of force feedback and that high-precision operations are difficult to complete.
  • the third device may automatically generate a virtual touch screen.
  • The user may decide when to generate a virtual touch screen according to his or her needs. Specifically, FIG. 2 is another flowchart of an embodiment of an information processing method according to an embodiment of the present disclosure, which is applied to a third device.
  • the information processing method provided by the embodiment of the present application is further It can include the following steps:
  • the first device receives a startup instruction of the first user.
  • the first device sends a startup command to the third device.
  • the third device receives a startup command sent by the first device.
  • Step 103 is executed as: "the third device generates a virtual touch screen on the surface of the developing body in the model according to the startup instruction, the model information, and the positioning information."
  • The user's startup instruction covers two cases: in the first, the first device has a physical button whose function is to automatically generate the screen, and the user operates this auto-generate button; in the second, the user frame-selects a display area.
  • FIG. 3 is a schematic diagram of a first scenario according to an embodiment of the present disclosure.
  • The first device receives the user's operation of the auto-generate button, triggers the startup instruction, and sends the startup instruction to the third device.
  • Upon receiving the startup instruction, the third device starts to generate the virtual touch screen.
  • The third device determines the location of the marker information in the current environment information according to the startup instruction, and then, according to the model information and the positioning information, generates a virtual touch screen of a specified size at the location of the marker information.
  • The physical button is set on the first device so that the user can operate the first device by touching the button.
  • the user's operation of automatically generating a button may be a click, a double click, or the like.
  • At least one piece of marker information is preset in the current environment in which the user is located, and the marker information is located on the surface of a specified object.
  • When the user operates the auto-generate button, the third device first acquires the location of the marker information in the current environment. It then calculates the three-dimensional coordinate information of the marker (comprising the x, y, and z dimensions) and, using these coordinates together with the position of each object and the current positioning information of the first device, generates a virtual touch screen of a specified size at the location of the marker information. For example, if the user needs to generate a tablet-sized screen on a wall, then when the user clicks the auto-generate button, the location of the marker information on the wall is first obtained, and a virtual touch screen of the same size as a tablet is then generated at the marker.
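Once the three-dimensional coordinates of the marker are known, placing a screen of a specified size reduces to computing four corner points around the marker. A sketch, assuming the surface orientation is available from the model as two unit vectors spanning the wall plane (the vector names and the corner ordering are illustrative assumptions):

```python
def screen_corners_at_marker(marker_xyz, right, up, width, height):
    """Return the four corner coordinates of a width x height virtual
    screen centred on the marker. `right` and `up` are unit vectors
    spanning the surface plane, assumed known from the model."""
    mx, my, mz = marker_xyz
    rx, ry, rz = right
    ux, uy, uz = up
    corners = []
    for su, sv in ((-1, -1), (1, -1), (1, 1), (-1, 1)):
        hw, hh = su * width / 2, sv * height / 2
        corners.append((mx + hw * rx + hh * ux,
                        my + hw * ry + hh * uy,
                        mz + hw * rz + hh * uz))
    return corners
```

For the wall example above, a tablet-sized screen (say 0.24 m by 0.16 m) centred on the wall marker is obtained by passing the marker coordinates and the wall's in-plane axes.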
  • FIG. 4 is a schematic diagram of a second scenario according to an embodiment of the present disclosure.
  • The first device acquires a display area frame-selected by the first user;
  • the display area is converted into a startup command, and the startup command is sent to the third device, and the third device generates a virtual touch screen in the display area selected by the first user according to the model information and the positioning information.
  • The user may generate a virtual touch screen at any position according to his or her own needs by frame-selecting a display area with a finger on the surface of an object in the current environment. The first device captures the display area selected by the first user on the surface of the specified object, converts it into a startup instruction, and sends it to the third device, so that the third device can generate a virtual touch screen within the display area according to the instruction.
  • The marker information includes at least one of a two-dimensional code, a graphic, a pattern, a picture, a text, a letter, or a number.
  • FIG. 5 is another flowchart of the information processing method according to the embodiment of the present application. As shown in FIG. 5, the information processing method provided by the embodiment of the present application may further include the following steps:
  • the first device acquires account information of the first user.
  • The first user can log in by entering his or her account name and password in the virtual touch screen, so that the first device can acquire the first user's account information.
  • the first device sends the account information of the first user to the third device.
  • the account information of the first user is sent to the third device by using the wireless transmission unit.
  • the third device updates the display content of the virtual touch screen according to the account information of the first user and the current environment information.
  • The third device stores a large amount of user information, including the account information of the first user and the account content corresponding to it. The account content may include device information for all devices associated with the first user's account, for example, tablets, washing machines, air conditioners, water dispensers, and water purifiers.
  • the third device stores the system desktop information of the tablet.
  • the third device stores information such as the current water storage capacity of the water purifier, the cleanliness level of the water, and whether the filter cartridge needs to be replaced.
  • The third device may provide the first user with at least one item of display content for the user to select, and the first user can drag and slide left and right to replace the content currently shown in the virtual touch screen.
  • The third device may provide the first user with appliance information corresponding to an electrical device, so that the first user can view the current status of that device.
  • the third device sends the updated virtual touch screen to the first device.
  • the updated virtual touch screen is sent to the first device by using the wireless transmission unit.
  • the first device receives the updated virtual touch screen and displays the same.
  • The information processing method provided by the embodiment of the present application thus further provides a way to improve operability, allowing the user to use the first device according to his or her own usage habits and thereby improving usage efficiency.
  • FIG. 6 is another flowchart of the embodiment of the information processing method provided by the embodiment of the present application.
  • the information processing method provided by the embodiment of the present application may further include the following steps:
  • the first device detects an action of the first user on the virtual touch screen.
  • The first device has an interaction detecting unit that detects the user's action based on computer vision; specifically, it uses the binocular camera in the first device to detect the position or action of the user's fingertip on the virtual touch screen. The detection may include: first, selecting key points of the hand to establish a skeleton model of the hand; then tracking the hand to obtain the coordinates of its key points and optimizing the skeleton model; extracting the fingertip position from the skeleton model of the hand; and finally tracking the position change of the fingertip from the initial point to the end point and determining the action according to this position change information.
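The final step, determining the action from the fingertip's position change between the initial point and the end point, can be sketched as a simple classifier. The threshold and the action names below are illustrative assumptions, not values from the application:

```python
def classify_action(trajectory, click_radius=0.01):
    """Classify a fingertip trajectory (list of (x, y) points on the
    screen plane, in metres) as a 'click' or a directional slide.
    A net displacement within click_radius counts as a click."""
    (x0, y0), (x1, y1) = trajectory[0], trajectory[-1]
    dx, dy = x1 - x0, y1 - y0
    if dx * dx + dy * dy <= click_radius ** 2:
        return 'click'
    if abs(dx) >= abs(dy):
        return 'slide_right' if dx > 0 else 'slide_left'
    return 'slide_down' if dy > 0 else 'slide_up'
```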
  • the first device matches the corresponding operation instruction to the action and sends the operation instruction to the third device.
  • The correspondence between actions and operation instructions is preset in the first device; after the interaction detecting unit determines the action of the first user, it matches the action with the corresponding operation instruction according to this preset correspondence. For example, when the image of a tablet is presented in the virtual touch screen and the first user clicks an icon in it, the action of the first user is detected as a click; since there is an application icon at the first user's fingertip position, the click action is matched with the operation instruction for opening that application.
  • In another example, the first user slides from the left side of the virtual touch screen to the right side; the action of the first user is detected as a slide, and the slide action is matched with the operation instruction for switching pages.
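The preset correspondence between actions and operation instructions, including the hit test against the icon under the fingertip, might look like the following sketch. The action names, instruction tuples, and icon layout are illustrative assumptions:

```python
def match_instruction(action, fingertip_xy, icons):
    """Map a detected action to an operation instruction. `icons` maps
    an app name to its (x, y, w, h) rectangle on the virtual screen.
    A click on an icon opens that app; a horizontal slide switches pages."""
    if action == 'click':
        x, y = fingertip_xy
        for app, (ix, iy, w, h) in icons.items():
            if ix <= x <= ix + w and iy <= y <= iy + h:
                return ('open_app', app)
        return ('tap', fingertip_xy)
    if action in ('slide_left', 'slide_right'):
        return ('switch_page', action)
    return ('ignore', action)
```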
  • the operation command is transmitted to the third device by using the wireless transmission unit.
  • An auxiliary detecting device may be set in advance on the surface of an object in the current environment, for example an infrared laser emitting device or a radar scanning device installed near the marker information, and the position of the finger is then determined through the interaction between the auxiliary detecting device and the finger. For example, an infrared laser emitting device is installed near the marker; when the virtual touch screen is generated at the marker and the first user clicks on it, the infrared rays are blocked by the finger and a bright spot is formed at the fingertip, so that the interaction detecting unit can quickly locate the fingertip according to the position of the bright spot.
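Locating the fingertip from the infrared bright spot amounts to finding the brightest pixel in the camera frame. A minimal sketch follows; a real implementation would threshold and filter the image first, which is omitted here:

```python
def locate_bright_spot(gray):
    """Return the (row, col) of the brightest pixel in a grayscale image
    given as a list of lists. With the infrared emitter near the marker,
    the beam blocked by the finger forms a bright spot at the fingertip,
    so the global maximum is a fast fingertip estimate."""
    best, best_rc = -1, None
    for r, row in enumerate(gray):
        for c, val in enumerate(row):
            if val > best:
                best, best_rc = val, (r, c)
    return best_rc
```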
  • the third device processes the operation instruction and updates the display content of the virtual touch screen according to the current environment information.
  • After receiving the operation instruction sent by the first device, the third device determines the content corresponding to the operation instruction in response to it. Furthermore, since the position of the first user is not fixed and may change at any time, the third device also updates the display content of the virtual touch screen according to the current environment information combined with the content of the operation instruction.
  • the third device sends the updated virtual touch screen to the first device.
  • the updated virtual touch screen is sent to the first device by using the wireless transmission unit.
  • the first device receives the updated virtual touch screen and displays the same.
  • The user can perform operations such as clicking and sliding on the surface of a real object; the touch is real and the user can feel force feedback, and detecting user actions on a real surface improves both detection accuracy and detection efficiency.
  • FIG. 7 is an implementation of the information processing method provided by the embodiment of the present application. Another flowchart of the example, as shown in FIG. 7, the information processing method provided by the embodiment of the present application may further include the following steps:
  • the second device is connected to the third device.
  • The second device is of the same kind as the first device, that is, a wearable device; the second device is used by a second user, and the first device by the first user.
  • The first user logs in with the first user's account information through the first device, and the display content of the first device is sent by the third device; if the second user wants to obtain the same content as the first user, the second user first needs to connect to the third device using the second device.
  • the manner in which the second device is connected to the third device is the same as the manner in which the first device is connected to the third device.
  • After the second device is connected to the third device, the third device sends the virtual touch screen to the second device through the wireless transmission unit.
  • the second device receives the virtual touch screen and displays the same.
  • The second user may also operate the content displayed on the screen in the same manner as the first user. Specifically: first, the second device detects an action of the second user on the virtual touch screen; the second device matches the action with the corresponding operation instruction and sends the operation instruction to the third device; then the third device processes the operation instruction and updates the display content of the virtual touch screen in combination with the current environment information; finally, the third device sends the updated virtual touch screen to both the first device and the second device, so that each of them can receive and display it.
  • When the same operation instruction is received from multiple devices, the third device deduplicates the instructions and executes the instruction received first.
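The first-received-wins deduplication described above can be sketched as follows; the instruction record format and its `id` field are illustrative assumptions:

```python
def first_wins(instructions):
    """Keep only the first occurrence of each instruction id, in arrival
    order: when two devices send the same instruction, the one received
    first is the one executed."""
    seen, kept = set(), []
    for ins in instructions:
        if ins['id'] not in seen:
            seen.add(ins['id'])
            kept.append(ins)
    return kept
```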
  • FIG. 8 is a schematic structural diagram of an embodiment of an information processing apparatus according to an embodiment of the present disclosure.
  • the information processing apparatus provided by the embodiment of the present application may include: a receiving unit 11, a generating unit 12, and a sending unit 13.
  • the receiving unit 11 is configured to receive current environment information sent by the first device, and perform modeling to obtain model information according to current environment information, and locate the first device to obtain positioning information.
  • the generating unit 12 is configured to generate a virtual touch screen on the surface of the developing body in the model according to the model information and the positioning information.
  • the sending unit 13 is configured to send the virtual touch screen to the first device.
  • FIG. 9 is another schematic structural diagram of an embodiment of an information processing apparatus according to an embodiment of the present disclosure. As shown in FIG. 9, the information processing apparatus provided by the embodiment of the present application may further include: an updating unit 14.
  • the receiving unit 11 is further configured to:
  • the updating unit 14 is configured to update the display content of the virtual touch screen according to the account information of the first user and the current environment information;
  • the sending unit 13 is further configured to:
  • the receiving unit 11 is further configured to:
  • the updating unit 14 is further configured to:
  • the sending unit 13 is further configured to:
  • the receiving unit 11 is further configured to:
  • the generating unit 12 is specifically configured to:
  • a virtual touch screen is generated on the surface of the developing body in the model according to the startup instruction, the model information, and the positioning information.
  • the generating unit 12 is specifically configured to:
  • a virtual touch screen having a specified size is generated at the position of the marker information.
  • the generating unit 12 is specifically configured to:
  • a virtual touch screen is generated in the display area selected by the first user.
  • the updating unit 14 is specifically configured to:
  • the display content of the virtual touch screen is updated according to the action and the current environment information.
  • the virtual touch screen fits the surface of the developing body in the model.
  • the marking information includes:
  • FIG. 10 is another schematic structural diagram of an embodiment of an information processing apparatus according to an embodiment of the present disclosure. As shown in FIG. 10, the information processing apparatus provided by the embodiment of the present application may further include: a connection unit 15.
  • the receiving unit 11 is further configured to: receive a connection request sent by the second device.
  • the connecting unit 15 is configured to connect the second device and send the virtual touch screen to the second device.
  • the information processing apparatus of this embodiment may be used to implement the technical solutions of the method embodiments shown in FIG. 1 to FIG. 7; the implementation principles and technical effects are similar and are not described here again.
  • the embodiment of the present application further provides a cloud processing device, where the device includes an input and output interface, a processor, and a memory;
  • the memory is configured to store instructions that, when executed by the processor, cause the device to perform the method of any one of FIG. 1 to FIG. 7.
  • the cloud processing device provided by the embodiment of the present application may be used to implement the technical solutions of the method embodiments shown in FIG. 1 to FIG. 7; the implementation principles and technical effects are similar and are not described here again.
  • the embodiment of the present application further provides a computer program product, which can be directly loaded into the internal memory of a computer and contains software code; after the computer program is loaded and executed by the computer, the method of any one of FIG. 1 to FIG. 7 can be implemented.
  • the computer program product of this embodiment can be used to implement the technical solutions of the method embodiments shown in FIG. 1 to FIG. 7; the implementation principles and technical effects are similar and are not described here again.
  • the device embodiments described above are merely illustrative. Units described as separate components may or may not be physically separate, and components displayed as units may or may not be physical units; that is, they may be located in one place or distributed across at least two network elements. Some or all of the modules may be selected according to actual needs to achieve the purpose of the solution of this embodiment, which those of ordinary skill in the art can understand and implement without creative effort.

Abstract

Embodiments of the present application relate to the technical field of data processing and provide an information processing method and apparatus, a cloud processing device, and a computer program product, enabling a user to operate on the surface of a physical object to control a virtual touch screen, which enhances tactile feedback and realism; operating on the surface of a real object also improves detection precision. The information processing method provided by the embodiments of the present application comprises: receiving current environment information sent by a first device, performing modeling according to the current environment information to obtain model information, and locating the first device to obtain positioning information; generating the virtual touch screen on the surface of a developing body in a model according to the model information and the positioning information; and sending the virtual touch screen to the first device.

Description

Information processing method and apparatus, cloud processing device, and computer program product

Technical Field

The present application relates to the field of data processing technologies, and in particular, to an information processing method and apparatus, a cloud processing device, and a computer program product.
Background Art

With the rapid development of Internet of Things technology, new data computing modes such as ubiquitous computing, holographic computing, and cloud computing are gradually entering people's daily lives and can be applied in many fields, among which computer vision is a representative one. Computer vision is a science that studies how to make machines "see"; more specifically, it refers to machine vision in which devices replace the human eye to identify, track, and measure targets, with further image processing performed so that the processed images are more suitable for observation by human eyes or for transmission to instruments for detection.

At present, some AR (Augmented Reality) glasses, such as HoloLens glasses, are used to display virtual reality scenes; they are representative wearable devices in computer vision. Such glasses use cameras to acquire depth maps from different angles in real time and accumulate these depth maps, so that an accurate three-dimensional model of the scene and the target objects within it can be computed with the aid of techniques such as stereoscopic vision, and the corresponding images are presented to the user. The user can view the image information and interact with the glasses through gestures such as clicking and sliding.

However, in three-dimensional space, interacting with a device through mid-air gestures easily fatigues the user's arm due to the lack of force feedback, and it is difficult to complete operations that require high precision.
Summary of the Invention

Embodiments of the present application provide an information processing method and apparatus, a cloud processing device, and a computer program product, so that a user can operate a virtual touch screen on the surface of a real object, which enhances tactile feedback and realism and improves detection precision.

In a first aspect, an embodiment of the present application provides an information processing method, including:

receiving current environment information sent by a first device, performing modeling according to the current environment information to obtain model information, and locating the first device to obtain positioning information;

generating a virtual touch screen on the surface of a developing body in the model according to the model information and the positioning information; and

sending the virtual touch screen to the first device.
In a second aspect, an embodiment of the present application further provides an information processing apparatus, including:

a receiving unit, configured to receive current environment information sent by a first device, perform modeling according to the current environment information to obtain model information, and locate the first device to obtain positioning information;

a generating unit, configured to generate a virtual touch screen on the surface of a developing body in the model according to the model information and the positioning information; and

a sending unit, configured to send the virtual touch screen to the first device.
In a third aspect, an embodiment of the present application further provides a cloud processing device, including an input/output interface, a processor, and a memory;

the memory is configured to store instructions that, when executed by the processor, cause the device to perform any one of the methods of the first aspect.

In a fourth aspect, an embodiment of the present application further provides a computer program product, which can be directly loaded into the internal memory of a computer and contains software code; after being loaded and executed by the computer, the computer program can implement any one of the methods of the first aspect.

In the information processing method and apparatus, cloud processing device, and computer program product provided by the embodiments of the present application, a device such as a cloud computing center models the current environment information sent by the first device and locates the first device, generates a virtual touch screen on the surface of a developing body in the model according to the model information and the positioning information, and then sends the virtual touch screen to the first device for display. The developing bodies in the model all correspond to physical objects in the actual scene, so the user can operate on the surface of a physical object to control the virtual touch screen, which enhances tactile feedback and realism; operating on a real surface also improves detection precision. In addition, through the interaction between the first device and the second device, the complex computations required for modeling the current environment and for positioning are completed by the second device, which reduces the load on the first device and solves the prior-art problems that the lack of force feedback easily fatigues the user's arm and that operations requiring high precision are difficult to complete.
Brief Description of the Drawings

In order to more clearly illustrate the technical solutions in the embodiments of the present application or in the prior art, the drawings required in the description of the embodiments or the prior art are briefly introduced below. Obviously, the drawings in the following description are some embodiments of the present application, and those of ordinary skill in the art can obtain other drawings based on them without creative effort.
FIG. 1 is a flowchart of an embodiment of an information processing method according to an embodiment of the present application;

FIG. 2 is another flowchart of an embodiment of an information processing method according to an embodiment of the present application;

FIG. 3 is a schematic diagram of a first scenario according to an embodiment of the present application;

FIG. 4 is a schematic diagram of a second scenario according to an embodiment of the present application;

FIG. 5 is another flowchart of an embodiment of an information processing method according to an embodiment of the present application;

FIG. 6 is another flowchart of an embodiment of an information processing method according to an embodiment of the present application;

FIG. 7 is another flowchart of an embodiment of an information processing method according to an embodiment of the present application;

FIG. 8 is a schematic structural diagram of an embodiment of an information processing apparatus according to an embodiment of the present application;

FIG. 9 is another schematic structural diagram of an embodiment of an information processing apparatus according to an embodiment of the present application;

FIG. 10 is another schematic structural diagram of an embodiment of an information processing apparatus according to an embodiment of the present application.
Detailed Description of the Embodiments

To make the objectives, technical solutions, and advantages of the embodiments of the present application clearer, the technical solutions in the embodiments of the present application are described clearly and completely below with reference to the accompanying drawings in the embodiments of the present application. Obviously, the described embodiments are some rather than all of the embodiments of the present application. All other embodiments obtained by those of ordinary skill in the art based on the embodiments of the present application without creative effort fall within the protection scope of the present application.

The terms used in the embodiments of the present application are for the purpose of describing particular embodiments only and are not intended to limit the application. The singular forms "a", "said", and "the" used in the embodiments of the present application and the appended claims are also intended to include the plural forms, unless the context clearly indicates otherwise.

It should be understood that the term "and/or" used herein merely describes an association relationship between associated objects, indicating that three relationships may exist; for example, "A and/or B" may indicate the three cases of A alone, both A and B, and B alone. In addition, the character "/" herein generally indicates an "or" relationship between the associated objects.

Depending on the context, the word "if" as used herein may be interpreted as "when", "upon", "in response to determining", or "in response to detecting". Similarly, depending on the context, the phrase "if it is determined" or "if (a stated condition or event) is detected" may be interpreted as "when it is determined", "in response to determining", "when (the stated condition or event) is detected", or "in response to detecting (the stated condition or event)".
A wearable device is a portable device worn directly on the body or integrated into the user's clothing or accessories. A wearable device is more than a hardware device; it implements powerful functions through software support, data interaction, and cloud interaction, for example, smart watches, smart shoes, smart bracelets, glasses, and helmets. Among these, eye-related devices such as smart glasses and helmets can interact with the user directly through vision: when worn on the user's head, they can present virtual scenes in three-dimensional space in front of the user's eyes, so that the user can not only view the scenes but also interact with them, for example, by clicking, dragging, or sliding. However, when interacting with the scenes, the user mostly interacts with the device through mid-air gestures, which easily fatigues the user's arm due to the lack of force feedback; moreover, because the human body is unstable and the hands and body change position at any time, it is difficult to complete operations that require high precision. In addition, generating a virtual scene in three-dimensional space requires a large amount of computation and places high demands on hardware and software, while the design of a wearable device limits its size and weight, so the processing speed is slow. Therefore, in the embodiments of the present application, the computation speed is increased by separating the devices, and the virtual scene is generated on the surface of a real object, so that the user can obtain enhanced tactile feedback and realism by operating on the surface of the real object. Specifically, FIG. 1 is a flowchart of an embodiment of an information processing method according to an embodiment of the present application, applied to a third device. As shown in FIG. 1, the information processing method provided by the embodiment of the present application may specifically include the following steps:
101. The first device acquires current environment information and sends the current environment information to a third device.
In the embodiments of the present application, the first device refers to a wearable device that includes at least a display unit, a basic computing unit, a wireless transmission unit, an environment sensing unit, an interaction detection unit, and a power supply unit, for example, smart glasses or a helmet. The third device refers to a device with strong computing power that includes at least a computing unit and a wireless transmission unit, for example, a local computer or a cloud processing center. The first device and the third device can communicate with each other using wireless communication methods such as 2G, 3G, 4G, or WiFi.
The first device acquires the current environment information through its environment sensing unit. Specifically, the environment sensing unit needs to include at least an IMU (Inertial Measurement Unit) and an image capture module (preferably a binocular camera). In practical applications, algorithms from SLAM (Simultaneous Localization and Mapping) technology are used for the computation, so as to acquire the current environment information. Specifically, the current environment information includes the first device's own positioning, images of the surrounding environment, three-dimensional information of object surfaces, and the like.

After the first device acquires the current environment information, it sends the current environment information to the third device through the wireless transmission unit, so that the third device can perform subsequent processing on it. In a specific implementation, in order to speed up processing and increase transmission speed, 4G, WiFi, or even faster wireless communication methods are preferably used.
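The "current environment information" described above bundles the device's self-localization, surrounding imagery, and surface geometry into one payload. A minimal sketch of such a payload follows; all field names and the overall layout are illustrative assumptions, not structures defined by the application itself:

```python
from dataclasses import dataclass, field
from typing import List, Tuple


@dataclass
class Pose:
    """Device pose as estimated by the SLAM front end (IMU + stereo camera)."""
    position: Tuple[float, float, float]             # x, y, z in metres
    orientation: Tuple[float, float, float, float]   # quaternion (w, x, y, z)


@dataclass
class EnvironmentInfo:
    """Payload the first device sends to the third device over the wireless link."""
    device_pose: Pose                                            # the first device's own positioning
    stereo_frames: List[bytes] = field(default_factory=list)     # encoded camera images
    surface_points: List[Tuple[float, float, float]] = field(    # 3-D surface samples
        default_factory=list)


def make_payload() -> EnvironmentInfo:
    # Placeholder values; a real system would fill these from the sensing unit.
    return EnvironmentInfo(device_pose=Pose((0.0, 0.0, 1.6), (1.0, 0.0, 0.0, 0.0)))


payload = make_payload()
```

Keeping the pose, images, and surface samples in one message mirrors the single upload step described above, so the third device can model and locate from the same snapshot.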
102. The third device receives the current environment information sent by the first device, performs modeling according to the current environment information to obtain model information, and locates the first device to obtain positioning information.
After receiving the current environment information sent by the first device, the third device parses the current environment information and constructs a virtual scene. Specifically, the construction process may include: first, acquiring parameters such as the horizontal direction, zenith distance, slant range, and reflection intensity of each physical object in the current environment information, and automatically storing and computing them to obtain point cloud data; and then editing the point cloud data, splicing and merging the scan data, performing three-dimensional spatial measurement of image data points, visualizing the point cloud images, performing three-dimensional modeling of the spatial data, texture analysis, and data conversion, thereby constructing the virtual scene and obtaining the model information.
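The scan parameters listed above (horizontal direction, zenith distance, slant range) describe each return in spherical coordinates; converting them to Cartesian points is the first step toward the point cloud. A minimal sketch of that conversion, with the reflection intensity simply carried along (the tuple layout is an illustrative assumption):

```python
import math
from typing import List, Tuple


def scan_to_point_cloud(
    returns: List[Tuple[float, float, float, float]],
) -> List[Tuple[float, float, float, float]]:
    """Convert (horizontal_direction, zenith_distance, slant_range, intensity)
    tuples, with angles in radians, into (x, y, z, intensity) points."""
    cloud = []
    for azimuth, zenith, r, intensity in returns:
        x = r * math.sin(zenith) * math.cos(azimuth)
        y = r * math.sin(zenith) * math.sin(azimuth)
        z = r * math.cos(zenith)   # zenith distance is measured down from vertical
        cloud.append((x, y, z, intensity))
    return cloud


# A return straight ahead (azimuth 0), 45 degrees off the zenith, 2 m away:
pt = scan_to_point_cloud([(0.0, math.pi / 4, 2.0, 0.8)])[0]
```

Later stages (splicing, measurement, texturing) then operate on these Cartesian points.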
In addition, the third device parses the current environment information, extracts the first device's own positioning information, and locates the first device to obtain the positioning information.
103. The third device generates a virtual touch screen on the surface of a developing body in the model according to the model information and the positioning information.
In the embodiments of the present application, a developing body refers to anything on whose surface a virtual touch screen can be generated. Because the things in the model are all virtual images of real objects in the real scene, every real object in the real scene can serve as a developing body in the model, for example, a table, a wall, a water dispenser, a water heater, or a window. Therefore, the third device can generate a virtual touch screen on the surface of any developing body in the model according to the model information and the positioning information.
In the embodiments of the present application, the virtual touch screen can either be generated automatically on the surface of a developing body or be generated after interaction with the user.

Correspondingly, when the user operates the virtual touch screen, this corresponds to the user performing the same operation on the real object in the real scene.
104. The third device sends the virtual touch screen to the first device.

In the embodiments of the present application, after the third device generates the virtual touch screen, it sends the virtual touch screen to the first device through the wireless transmission unit.

105. The first device receives and displays the virtual touch screen.
It should be noted that, in the embodiments of the present application, the virtual touch screen conforms to the surface of the developing body in the model. For example, a virtual touch screen attached to the surface of a bucket curves with the same arc as the bucket; as another example, a virtual touch screen attached to the surface of a table lies flat against it. The purpose is that the user can operate on the surface of a real object in the real scene and obtain a real sense of touch.
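For a curved developing body such as the bucket example, conforming the screen means bending its flat two-dimensional coordinates around the object's surface. A minimal sketch follows, assuming the surface is approximated by a vertical cylinder of known radius; this particular surface model is an illustrative assumption, not something the application prescribes:

```python
import math
from typing import Tuple


def wrap_on_cylinder(u: float, v: float, radius: float) -> Tuple[float, float, float]:
    """Map a flat screen coordinate (u, v), in metres, onto a vertical cylinder.

    u is the horizontal offset along the screen; on the cylinder it becomes an
    arc length, so the screen's curvature matches the object's curvature.
    """
    theta = u / radius                      # arc length -> angle subtended
    x = radius * math.sin(theta)            # lateral position on the cylinder
    z = radius * (1.0 - math.cos(theta))    # depth offset from the tangent plane
    return (x, v, z)


# The screen centre (u = 0) stays on the tangent plane touching the object:
centre = wrap_on_cylinder(0.0, 0.0, 0.15)
```

A flat developing body such as a table is the limiting case: as the radius grows, the depth offset vanishes and the mapping reduces to the identity on (u, v).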
In the information processing method provided by the embodiments of the present application, a device such as a cloud computing center models the current environment information sent by the first device and locates the first device, generates a virtual touch screen on the surface of a developing body in the model according to the model information and the positioning information, and then sends it to the first device for display. The developing bodies in the model all correspond to physical objects in the actual scene, so the user can operate on the surface of a physical object to control the virtual touch screen, which enhances tactile feedback and realism; operating on a real surface also improves detection precision. In addition, through the interaction between the first device and the second device, the complex computations required for modeling the current environment and for positioning are completed by the second device, which reduces the load on the first device and solves the prior-art problems that the lack of force feedback easily fatigues the user's arm and that operations requiring high precision are difficult to complete.
In the foregoing, the third device can generate the virtual touch screen automatically. In practical applications, optionally, in order to enhance the user's operability and participation, the user may decide when to generate the virtual touch screen as needed. Specifically, FIG. 2 is another flowchart of an embodiment of an information processing method according to an embodiment of the present application, applied to a third device. As shown in FIG. 2, before step 103, the information processing method provided by the embodiment of the present application may further include the following steps:
106. The first device receives a startup instruction from a first user.

107. The first device sends the startup instruction to the third device.

108. The third device receives the startup instruction sent by the first device.

Correspondingly, step 103 is performed as: the third device generates the virtual touch screen on the surface of the developing body in the model according to the startup instruction, the model information, and the positioning information.
Specifically, in the embodiments of the present application, the user's startup instruction covers two cases: in the first case, the first device has a physical button whose function is an auto-generate button, and the user operates the auto-generate button; in the second case, the user frame-selects a display area.
FIG. 3 is a schematic diagram of a first scenario according to an embodiment of the present application. As shown in FIG. 3, in the first case, when the user operates the auto-generate button, the first device receives the user's operation on the auto-generate button, triggers it as a startup instruction, and sends the startup instruction to the third device. Upon receiving the startup instruction, the third device starts to generate the virtual touch screen: first, the third device determines the position of the marking information in the current environment information according to the startup instruction; then, according to the model information and the positioning information, the third device generates a virtual touch screen of a specified size at the position of the marking information. Specifically, in the embodiments of the present application, because the user wears the first device on the head and it blocks at least part of the user's view, a physical button is provided on the first device for convenience of operation, so that the user can operate the first device simply by touching the button. In a specific implementation, the user's operation on the auto-generate button may be a single click, a double click, or the like.

Moreover, in the embodiments of the present application, at least one piece of marking information is preset in the user's current environment, and the marking information is located on the surface of a specified object. Therefore, when the user operates the auto-generate button, the third device first acquires the position of the marking information in the current environment, and then generates a virtual touch screen of a specified size at the marking information according to the user's operation on the auto-generate button, the model information, and the positioning information. In a specific implementation, after acquiring the image of the marking information, the third device calculates its three-dimensional coordinate information (comprising the three dimensions x, y, and z), and then uses the three-dimensional coordinate information of the marking information, the positions of the objects in the modeled environment, and the current positioning information of the first device to generate a virtual touch screen of the specified size at the position of the marking information. For example, if the user wants to generate a tablet-computer screen on a wall, then when the user clicks the auto-generate button, the position of the marking information on the wall is acquired first, and a virtual touch screen of the same size as a tablet computer is then generated at the marking information.
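Given the marking information's three-dimensional coordinate computed above, placing a screen of a specified size reduces to computing the screen's corner points around that coordinate. A minimal sketch, assuming the screen lies in a plane spanned by known unit "right" and "up" vectors on the object surface (this parameterisation is an illustrative assumption, not one fixed by the application):

```python
from typing import List, Tuple

Vec3 = Tuple[float, float, float]


def screen_corners(marker: Vec3, right: Vec3, up: Vec3,
                   width: float, height: float) -> List[Vec3]:
    """Four corners of a width x height virtual screen centred on the marker.

    `right` and `up` are unit vectors spanning the object surface at the marker.
    Corners are returned counter-clockwise starting from the bottom-left.
    """
    def offset(sa: float, sb: float) -> Vec3:
        return tuple(right[i] * sa + up[i] * sb + marker[i] for i in range(3))

    hw, hh = width / 2.0, height / 2.0
    return [offset(-hw, -hh), offset(hw, -hh), offset(hw, hh), offset(-hw, hh)]


# A tablet-sized 0.24 m x 0.17 m screen on a wall, marker at (1.0, 2.0, 1.5),
# with the wall's surface directions along the y- and z-axes:
corners = screen_corners((1.0, 2.0, 1.5), (0.0, 1.0, 0.0), (0.0, 0.0, 1.0), 0.24, 0.17)
```

The third device can then render these corner points into the first device's view using the device's current positioning information.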
FIG. 4 is a schematic diagram of a second scenario according to an embodiment of the present application. As shown in FIG. 4, in the second case, the first device acquires a display area frame-selected by the first user, converts the frame-selected display area into a startup instruction, and sends the startup instruction to the third device; the third device then generates the virtual touch screen within the display area frame-selected by the first user according to the model information and the positioning information. Specifically, in the embodiments of the present application, in order to improve interactivity and personalization, the user can generate a virtual touch screen at any position as needed: the user frame-selects a display area with a finger on the surface of an object in the current environment, for example, by drawing a rectangle. After collecting the display area frame-selected by the first user on the surface of the specified object, the first device converts it into a startup instruction and sends it to the third device, so that the third device can generate the virtual touch screen within the display area according to the instruction and related information.
Furthermore, in the embodiments of the present application, the marking information includes at least one of a two-dimensional code, a graphic, a pattern, a picture, text, letters, or numbers.
As users' demand for personalization grows, different users have different usage habits, generate different historical data, and install different software on their devices. Therefore, in order to meet personalized requirements and improve the user experience, on the basis of the foregoing, the embodiments of the present application further provide the following implementation. Specifically, FIG. 5 is another flowchart of an embodiment of an information processing method according to an embodiment of the present application. As shown in FIG. 5, the information processing method provided by the embodiment of the present application may further include the following steps:
108、第一设备获取第一用户的账户信息。108. The first device acquires account information of the first user.
由于在步骤103中生成了虚拟触控屏幕,因此,在本申请实施例中,第一用户可以在虚拟触控屏幕中输入自己的账户名和密码进行登录,使得第一设备可以获取第一用户的账户信息。Since the virtual touch screen is generated in step 103, in the embodiment of the present application, the first user can input his/her account name and password in the virtual touch screen to log in, so that the first device can acquire the first user. account information.
109、第一设备将第一用户的账户信息发送至第三设备。109. The first device sends the account information of the first user to the third device.
在本申请实施例中,当第一设备获取到第一用户的账户信息后,利用无线传输单元将第一用户的账户信息发送至第三设备。In the embodiment of the present application, after the first device acquires the account information of the first user, the account information of the first user is sent to the third device by using the wireless transmission unit.
110、第三设备根据第一用户的账户信息以及当前环境信息更新虚拟触控屏幕的显示内容。110. The third device updates the display content of the virtual touch screen according to the account information of the first user and the current environment information.
In this embodiment, the third device stores a large amount of user information, including the account information of the first user and the account content corresponding to that account information. The account content may include device information for all devices associated with the first user's account (for example, a tablet computer, a washing machine, an air conditioner, a water dispenser, or a water purifier). For example, if the first user has associated a tablet computer with the account, the third device stores the system desktop information of that tablet. As another example, if the first user has associated a water purifier with the account, the third device stores information such as the purifier's current water storage, the cleanliness level of the water, and whether the filter cartridge needs to be replaced.
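A minimal sketch of the account-content store described above: the third (cloud) device maps a user's account to the devices associated with it and chooses display content per device type. All class, field, and device names here are illustrative assumptions, not part of the disclosed implementation.

```python
# Hypothetical account-to-device-content lookup for the cloud-side store.
# Names and fields are assumptions for illustration only.
from dataclasses import dataclass, field

@dataclass
class DeviceInfo:
    device_type: str                 # e.g. "tablet", "water_purifier"
    status: dict = field(default_factory=dict)

@dataclass
class AccountContent:
    devices: dict = field(default_factory=dict)  # device_id -> DeviceInfo

    def display_content_for(self, device_id: str) -> dict:
        """Return the stored state to render on the virtual touch screen."""
        info = self.devices[device_id]
        if info.device_type == "tablet":
            return {"view": "system_desktop", **info.status}
        return {"view": "device_status", **info.status}

account = AccountContent()
account.devices["purifier-1"] = DeviceInfo(
    "water_purifier",
    {"water_level_l": 3.2, "filter_ok": True},
)
print(account.display_content_for("purifier-1")["view"])  # device_status
```

In this sketch, a tablet account yields its system desktop while any appliance yields its status view, mirroring the two examples in the paragraph above.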
In one specific implementation, when the first user generates a virtual touch screen on a non-appliance surface such as a wall or a tabletop, the third device may offer the first user at least one choice of display content, and the first user can drag or swipe left and right to change the content shown on the current virtual touch screen.
In another specific implementation, when the first user generates a virtual touch screen on the surface of an appliance, the third device may provide appliance information corresponding to that appliance, so that the first user can view the appliance's current status.
111. The third device sends the updated virtual touch screen to the first device. In this embodiment, after the third device updates the display content of the virtual touch screen, it sends the updated virtual touch screen to the first device through the wireless transmission unit.
112. The first device receives the updated virtual touch screen and displays it.
As described above, the information processing method provided by this embodiment further improves operability: the user can operate the first device according to his or her own usage habits, which improves efficiency of use.
On the basis of the foregoing, the information processing method provided by this embodiment collects the user's interactive operations in the following manner. Specifically, FIG. 6 is another flowchart of an embodiment of the information processing method provided by this application. As shown in FIG. 6, the method may further include the following steps:
113. The first device detects an action of the first user on the virtual touch screen.
In this embodiment, the first device has an interaction detection unit that detects user actions based on computer vision; specifically, the binocular camera in the first device is used to detect the position or motion of the user's fingertip on the virtual touch screen. In one specific implementation, the detection flow of the interaction detection unit may include: first, selecting key points of the hand and building a skeleton model of the hand; then tracking the hand to obtain the coordinates of the hand key points and refining the skeleton model; extracting the fingertip position from the skeleton model; and finally tracking the change in the fingertip's position from an initial point to an end point and determining the action from that position-change information.
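The final step of the flow above (determining the action from the fingertip's start and end positions) can be sketched as follows. Real systems would obtain the fingertip track from a hand-pose model on the binocular images; here the track is assumed given, and the motion thresholds are illustrative assumptions.

```python
# Sketch of classifying an action from a tracked fingertip trajectory.
# Thresholds are assumed values in normalized screen coordinates.
def classify_action(fingertip_track):
    """fingertip_track: list of (x, y) fingertip positions over time."""
    (x0, y0), (x1, y1) = fingertip_track[0], fingertip_track[-1]
    dx, dy = x1 - x0, y1 - y0
    if abs(dx) < 0.02 and abs(dy) < 0.02:   # barely moved -> treat as a click
        return "click"
    if abs(dx) > abs(dy):                    # dominant horizontal motion
        return "swipe_right" if dx > 0 else "swipe_left"
    return "swipe_down" if dy > 0 else "swipe_up"

print(classify_action([(0.10, 0.50), (0.11, 0.50)]))  # click
print(classify_action([(0.10, 0.50), (0.80, 0.52)]))  # swipe_right
```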
114. The first device matches the action to a corresponding operation instruction and sends it to the third device.
In this embodiment, the first device is preconfigured with a correspondence between actions and operation instructions. After the interaction detection unit determines the first user's action, the first device matches the action to its operation instruction according to this preset correspondence. For example, when the virtual touch screen presents the image of a tablet and the first user taps an icon on the virtual touch screen, the detected action is a click; if there is an application icon at the first user's fingertip position, the click is matched to an instruction to open that application. As another example, when the virtual touch screen presents the image of a tablet and the first user swipes from the left side of the virtual touch screen to the right, the detected action is a swipe, and the swipe is matched to an instruction to switch pages.
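The preset action-to-instruction matching, combined with a hit test on the fingertip position (is there an icon under the finger?), can be sketched as below. The icon layout and instruction encodings are assumptions for illustration.

```python
# Hypothetical action -> operation-instruction matching with icon hit-testing.
ICONS = {"mail": (0.1, 0.1, 0.2, 0.2)}  # icon name -> (x0, y0, x1, y1) bounds

def icon_at(x, y):
    """Return the name of the icon under (x, y), or None."""
    for name, (x0, y0, x1, y1) in ICONS.items():
        if x0 <= x <= x1 and y0 <= y <= y1:
            return name
    return None

def match_instruction(action, fingertip):
    if action == "click":
        icon = icon_at(*fingertip)
        return {"op": "open_app", "app": icon} if icon else {"op": "noop"}
    if action in ("swipe_left", "swipe_right"):
        return {"op": "switch_page", "direction": action.split("_")[1]}
    return {"op": "noop"}

print(match_instruction("click", (0.15, 0.15)))      # open the "mail" app
print(match_instruction("swipe_right", (0.5, 0.5)))  # switch page right
```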
After the first device determines the operation instruction, it sends the instruction to the third device through the wireless transmission unit.
In this embodiment, to further improve the efficiency and accuracy of determining the first user's action, auxiliary detection devices may be installed in advance on object surfaces in the current environment, for example an infrared laser emitter or a radar scanner installed near the marker information. The position of the finger is then determined from the interaction between the auxiliary detection device and the finger. For example, with an infrared laser emitter installed near the marker, once a virtual touch screen has been generated at the marker, the infrared light is blocked by the first user's finger when the user taps the virtual touch screen and forms a bright spot at the fingertip, so that the interaction detection unit can quickly locate the fingertip from the position of the bright spot.
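Locating the fingertip from the infrared bright spot can be sketched as a simple threshold-and-centroid step on the camera image, as below. The threshold value and the plain-list image format are assumptions; a real implementation would operate on camera frames.

```python
# Sketch: find the fingertip as the centroid of the bright (IR-lit) pixels.
# Threshold of 200 on 0-255 grayscale intensities is an assumed value.
def bright_spot_centroid(image, threshold=200):
    """image: 2D list of grayscale intensities. Returns (row, col) or None."""
    pts = [(r, c)
           for r, row in enumerate(image)
           for c, v in enumerate(row)
           if v >= threshold]
    if not pts:
        return None
    rs, cs = zip(*pts)
    return (sum(rs) / len(pts), sum(cs) / len(pts))

img = [[0,   0,   0, 0],
       [0, 250, 240, 0],
       [0,   0,   0, 0]]
print(bright_spot_centroid(img))  # (1.0, 1.5)
```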
115. The third device processes the operation instruction and, in combination with the current environment information, updates the display content of the virtual touch screen.
After receiving the operation instruction sent by the first device, the third device responds to the instruction and determines the content corresponding to it. In addition, because the position of the first user is not controllable and may change at any time, the third device also updates the display content of the virtual touch screen according to the current environment information combined with the content of the operation instruction.
116. The third device sends the updated virtual touch screen to the first device.
In this embodiment, after the third device updates the virtual touch screen, it sends the updated virtual touch screen to the first device through the wireless transmission unit.
117. The first device receives the updated virtual touch screen and displays it.
With the information processing method provided by this embodiment, the user can operate on the surface of a real object, for example by tapping or swiping; the touch feels real and the user can feel force feedback. In detecting user actions, the method improves both detection accuracy and detection efficiency.
In the foregoing, a single user uses a single device to log in to an account for viewing, interaction, and other operations. In real life, however, multiple people may view and operate the same terminal, for example two people playing a game on one tablet at the same time, or two people watching a movie on one tablet at the same time. Therefore, to enhance interactivity between users, this embodiment of the present application further provides an information processing method that, on the basis of the foregoing, enables multi-user interaction. Specifically, FIG. 7 is another flowchart of an embodiment of the information processing method provided by this application. As shown in FIG. 7, the method may further include the following steps:
118. The second device connects to the third device.
In this embodiment, the second device is the same kind of wearable device as the first device; the second device is used by the second user, and the first device by the first user.
In the foregoing, the first user used the first device to log in with the first user's account information, and the first display content of the first device was sent by the third device. If the second user wants to obtain the same content as the first user, the second device must first connect to the third device.
In this embodiment, the second device connects to the third device in the same manner as the first device does.
119. After the second device connects to the third device, the third device sends the virtual touch screen to the second device.
In this embodiment, once the connection with the second device is established, the third device sends the virtual touch screen to the second device through the wireless transmission unit.
120. The second device receives the virtual touch screen and displays it.
In addition, to further improve interactivity, in this embodiment the second user can also operate on the content shown on the display screen, in the same manner as the first user. Specifically, the second device first detects the second user's action on the virtual touch screen; the second device matches the action to a corresponding operation instruction and sends it to the third device; the third device then processes the operation instruction and, in combination with the current environment information, updates the display content of the virtual touch screen; finally, the third device sends the updated virtual touch screen to both the first device and the second device, so that each device receives the updated virtual touch screen and displays it.
Because the first user and the second user see the same virtual touch screen and both can operate on it, when the first user and the second user issue the same instruction simultaneously or within a specified time window, the third device deduplicates the instructions and executes only the instruction received first.
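The deduplication rule above can be sketched as follows: identical instructions arriving within a given time window are collapsed, and only the first is executed. The window length and instruction encoding are assumptions.

```python
# Sketch of cloud-side deduplication of identical instructions within a window.
def dedupe(instructions, window=1.0):
    """instructions: list of (timestamp, payload), sorted by timestamp."""
    executed, last_seen = [], {}
    for ts, payload in instructions:
        prev = last_seen.get(payload)
        if prev is None or ts - prev > window:
            executed.append((ts, payload))   # first occurrence -> execute
        last_seen[payload] = ts              # duplicates inside window dropped
    return executed

incoming = [(0.00, "open_app:mail"),
            (0.40, "open_app:mail"),         # same instruction, within window
            (2.00, "switch_page:right")]
print([p for _, p in dedupe(incoming)])  # ['open_app:mail', 'switch_page:right']
```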
FIG. 8 is a schematic structural diagram of an embodiment of an information processing apparatus provided by an embodiment of the present application. As shown in FIG. 8, the information processing apparatus may include a receiving unit 11, a generating unit 12, and a sending unit 13.
The receiving unit 11 is configured to receive current environment information sent by the first device, perform modeling according to the current environment information to obtain model information, and position the first device to obtain positioning information.
The generating unit 12 is configured to generate a virtual touch screen on the surface of a developing body in the model according to the model information and the positioning information.
The sending unit 13 is configured to send the virtual touch screen to the first device.
FIG. 9 is another schematic structural diagram of an embodiment of the information processing apparatus provided by an embodiment of the present application. As shown in FIG. 9, the apparatus may further include an updating unit 14.
In this embodiment, the receiving unit 11 is further configured to:
receive account information of the first user sent by the first device;
the updating unit 14 is configured to update the display content of the virtual touch screen according to the account information of the first user and the current environment information;
the sending unit 13 is further configured to:
send the updated virtual touch screen to the first device.
The receiving unit 11 is further configured to:
receive an operation instruction sent by the first device;
the updating unit 14 is further configured to:
process the operation instruction and update the display content of the virtual touch screen in combination with the current environment information;
the sending unit 13 is further configured to:
send the updated virtual touch screen to the first device.
In one specific implementation, the receiving unit 11 is further configured to:
receive a startup instruction sent by the first device;
and the generating unit 12 is specifically configured to:
generate a virtual touch screen on the surface of the developing body in the model according to the startup instruction, the model information, and the positioning information.
In one specific implementation, the generating unit 12 is specifically configured to:
determine the position of the marker information in the current environment information according to the startup instruction; and
generate a virtual touch screen of a specified size at the position of the marker information according to the model information and the positioning information.
In one specific implementation, the generating unit 12 is specifically configured to:
determine the display area frame-selected by the first user according to the startup instruction; and
generate a virtual touch screen within the display area frame-selected by the first user according to the model information and the positioning information.
In one specific implementation, the updating unit 14 is specifically configured to:
determine the first user's action on the virtual touch screen according to the operation instruction; and
update the display content of the virtual touch screen according to the action and in combination with the current environment information.
In this embodiment, the virtual touch screen conforms to the surface of the developing body in the model.
In this embodiment, the marker information includes:
at least one of a two-dimensional code, a graphic, a pattern, a picture, text, letters, or numbers.
FIG. 10 is another schematic structural diagram of an embodiment of the information processing apparatus provided by an embodiment of the present application. As shown in FIG. 10, the apparatus may further include a connecting unit 15.
In this embodiment, the receiving unit 11 is further configured to receive a connection request sent by the second device.
The connecting unit 15 is configured to connect to the second device and send the virtual touch screen to the second device.
The information processing apparatus of this embodiment may be used to execute the technical solutions of the method embodiments shown in FIG. 1 to FIG. 7; its implementation principles and technical effects are similar and are not repeated here.
An embodiment of the present application further provides a cloud processing device, which includes an input/output interface, a processor, and a memory;
the memory is configured to store instructions that, when executed by the processor, cause the device to perform the method of any one of FIG. 1 to FIG. 7.
The cloud processing device provided by this embodiment may be used to execute the technical solutions of the method embodiments shown in FIG. 1 to FIG. 7; its implementation principles and technical effects are similar and are not repeated here.
An embodiment of the present application further provides a computer program product that can be directly loaded into the internal memory of a computer and contains software code; after the computer program is loaded and executed by a computer, the method of any one of FIG. 1 to FIG. 7 can be implemented.
The computer program product of this embodiment may be used to execute the technical solutions of the method embodiments shown in FIG. 1 to FIG. 7; its implementation principles and technical effects are similar and are not repeated here.
The apparatus embodiments described above are merely illustrative. Units described as separate components may or may not be physically separate, and components shown as units may or may not be physical units; that is, they may be located in one place or distributed over at least two network units. Some or all of the modules may be selected according to actual needs to achieve the purpose of the solution of this embodiment. A person of ordinary skill in the art can understand and implement the embodiments without creative effort.
Finally, it should be noted that the above embodiments are intended only to illustrate the technical solutions of the present application, not to limit them. Although the present application has been described in detail with reference to the foregoing embodiments, a person of ordinary skill in the art should understand that the technical solutions described in the foregoing embodiments may still be modified, or some or all of their technical features may be replaced with equivalents; such modifications or replacements do not cause the essence of the corresponding technical solutions to depart from the scope of the technical solutions of the embodiments of the present application.

Claims (22)

  1. An information processing method, comprising:
    receiving current environment information sent by a first device, performing modeling according to the current environment information to obtain model information, and positioning the first device to obtain positioning information;
    generating a virtual touch screen on a surface of a developing body in the model according to the model information and the positioning information; and
    sending the virtual touch screen to the first device.
  2. The method according to claim 1, further comprising:
    receiving account information of a first user sent by the first device;
    updating display content of the virtual touch screen according to the account information of the first user and the current environment information; and
    sending the updated virtual touch screen to the first device.
  3. The method according to claim 1 or 2, further comprising:
    receiving an operation instruction sent by the first device;
    processing the operation instruction and updating the display content of the virtual touch screen in combination with the current environment information; and
    sending the updated virtual touch screen to the first device.
  4. The method according to claim 1, wherein before the generating a virtual touch screen on the surface of the developing body in the model according to the model information and the positioning information, the method further comprises:
    receiving a startup instruction sent by the first device;
    and the generating a virtual touch screen on the surface of the developing body in the model according to the model information and the positioning information comprises:
    generating a virtual touch screen on the surface of the developing body in the model according to the startup instruction, the model information, and the positioning information.
  5. The method according to claim 4, wherein the generating a virtual touch screen on the surface of the developing body in the model according to the startup instruction, the model information, and the positioning information comprises:
    determining a position of marker information in the current environment information according to the startup instruction; and
    generating a virtual touch screen of a specified size at the position of the marker information according to the model information and the positioning information.
  6. The method according to claim 4, wherein the generating a virtual touch screen on the surface of the developing body in the model according to the startup instruction, the model information, and the positioning information comprises:
    determining a display area frame-selected by the first user according to the startup instruction; and
    generating a virtual touch screen within the display area frame-selected by the first user according to the model information and the positioning information.
  7. The method according to claim 3, wherein the processing the operation instruction and updating the display content of the virtual touch screen in combination with the current environment information comprises:
    determining the first user's action on the virtual touch screen according to the operation instruction; and
    updating the display content of the virtual touch screen according to the action and in combination with the current environment information.
  8. The method according to claim 1, wherein the virtual touch screen conforms to the surface of the developing body in the model.
  9. The method according to claim 5, wherein the marker information comprises:
    at least one of a two-dimensional code, a graphic, a pattern, a picture, text, letters, or numbers.
  10. The method according to claim 1, further comprising:
    receiving a connection request sent by a second device; and
    connecting to the second device and sending the virtual touch screen to the second device.
  11. 一种信息处理装置,其特征在于,包括:An information processing apparatus, comprising:
    接收单元,用于接收第一设备发送的当前环境信息,并根据所述当前环境信息进行建模得到模型信息,以及对所述第一设备进行定位得到定位信息。The receiving unit is configured to receive current environment information sent by the first device, and perform modeling according to the current environment information to obtain model information, and locate the first device to obtain positioning information.
    生成单元,用于根据所述模型信息以及所述定位信息,在模型内的显影体表面生成虚拟触控屏幕。And a generating unit, configured to generate a virtual touch screen on the surface of the developer body in the model according to the model information and the positioning information.
    发送单元,用于发送所述虚拟触控屏幕至所述第一设备。a sending unit, configured to send the virtual touch screen to the first device.
  12. 根据权利要求11所述的装置,其特征在于,The device of claim 11 wherein:
    所述接收单元,还用于:The receiving unit is further configured to:
    接收第一设备发送的第一用户的账户信息;Receiving account information of the first user sent by the first device;
    所述装置还包括:The device also includes:
    更新单元,用于根据所述第一用户的账户信息以及当前环境信息更新所述虚拟触控屏幕的显示内容;An update unit, configured to update display content of the virtual touch screen according to account information of the first user and current environment information;
    所述发送单元,还用于:The sending unit is further configured to:
    发送更新后的所述虚拟触控屏幕至所述第一设备。Sending the updated virtual touch screen to the first device.
  13. 根据权利要求11或12所述的装置,其特征在于,Device according to claim 11 or 12, characterized in that
    所述接收单元,还用于:The receiving unit is further configured to:
    接收所述第一设备发送的操作指令;Receiving an operation instruction sent by the first device;
    所述更新单元,还用于:The update unit is further configured to:
    对所述操作指令进行处理,以及结合当前环境信息更新所述虚拟触控屏幕的显示内容;Processing the operation instruction, and updating display content of the virtual touch screen in combination with current environment information;
    所述发送单元,还用于:The sending unit is further configured to:
    发送更新后的所述虚拟触控屏幕至所述第一设备。Sending the updated virtual touch screen to the first device.
  14. 根据权利要求11所述的装置,其特征在于,The device of claim 11 wherein:
    所述接收单元,还用于:The receiving unit is further configured to:
    接收所述第一设备发送的启动指令;Receiving a startup instruction sent by the first device;
    所述生成单元,具体用于:The generating unit is specifically configured to:
    根据所述启动指令、所述模型信息以及所述定位信息,在模型内的显影体表面生成虚拟触控屏幕。A virtual touch screen is generated on the surface of the developer in the model according to the startup command, the model information, and the positioning information.
  15. 根据权利要求14所述的装置,其特征在于,所述生成单元,具体用于:The device according to claim 14, wherein the generating unit is specifically configured to:
    根据所述启动指令在当前环境信息中确定标记信息的位置;Determining a location of the tag information in the current environment information according to the startup instruction;
    根据所述模型信息以及所述定位信息,在所述标记信息的位置生成具有指定大小的虚拟触控屏幕。And generating, according to the model information and the positioning information, a virtual touch screen having a specified size at a position of the mark information.
  16. 根据权利要求14所述的装置,其特征在于,所述生成单元,具体用于:The device according to claim 14, wherein the generating unit is specifically configured to:
    根据所述启动指令确定所述第一用户框选的显示区域;Determining, according to the startup instruction, a display area selected by the first user;
    根据所述模型信息以及所述定位信息,在所述第一用户框选的显示区域内生成虚拟触控屏幕。And generating, according to the model information and the positioning information, a virtual touch screen in a display area selected by the first user.
  17. 根据权利要求13所述的装置,其特征在于,所述更新单元,具体用于:The device according to claim 13, wherein the updating unit is specifically configured to:
    根据所述操作指令确定所述第一用户在所述虚拟触控屏幕的动作;Determining, according to the operation instruction, an action of the first user on the virtual touch screen;
    根据所述动作以及结合所述当前环境信息更新所述虚拟触控屏幕的显示内容。Updating the display content of the virtual touch screen according to the action and in combination with the current environment information.
  18. The device according to claim 11, wherein the virtual touch screen conforms to a surface of the developing body within the model.
  19. The device according to claim 15, wherein the marker information comprises:
    at least one of a QR code, a graphic, a pattern, a picture, text, a letter, or a number.
  20. The device according to claim 11, wherein:
    the receiving unit is further configured to receive a connection request sent by a second device; and
    the device further comprises:
    a connecting unit configured to connect to the second device and send the virtual touch screen to the second device.
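Claim 20's connecting unit admits a second device and shares the existing virtual touch screen with it. The sketch below shows only this accept-and-send flow; the class name, the device-identifier scheme, and the idea of returning the screen as the "send" are all illustrative assumptions, not the claimed implementation.

```python
class ScreenSharer:
    """Illustrative connecting unit: accept a connection request from a
    second device and deliver the current virtual touch screen to it."""

    def __init__(self, screen: dict):
        self.screen = screen          # the virtual touch screen being shared
        self.connected: list[str] = []  # identifiers of connected devices

    def handle_connection_request(self, device_id: str) -> dict:
        """Connect the requesting device and send it the screen
        (modeled here as simply returning the screen state)."""
        self.connected.append(device_id)
        return self.screen

sharer = ScreenSharer({"content": "menu"})
shared = sharer.handle_connection_request("device-2")
print(shared, sharer.connected)
```

In practice the send would go over a network transport, and subsequent updates from claim 13 would be pushed to every device in `connected`.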
  21. A cloud processing device, comprising an input/output interface, a processor, and a memory, wherein
    the memory is configured to store instructions which, when executed by the processor, cause the device to perform the method according to any one of claims 1 to 10.
  22. A computer program product, which can be directly loaded into an internal memory of a computer and contains software code, wherein the computer program, after being loaded and executed by the computer, implements the method according to any one of claims 1 to 10.
PCT/CN2017/119720 2017-12-29 2017-12-29 Information processing method and apparatus, cloud processing device, and computer program product WO2019127325A1 (en)

Priority Applications (2)

Application Number Priority Date Filing Date Title
PCT/CN2017/119720 WO2019127325A1 (en) 2017-12-29 2017-12-29 Information processing method and apparatus, cloud processing device, and computer program product
CN201780002728.XA CN109643182B (en) 2017-12-29 2017-12-29 Information processing method and device, cloud processing equipment and computer program product

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
PCT/CN2017/119720 WO2019127325A1 (en) 2017-12-29 2017-12-29 Information processing method and apparatus, cloud processing device, and computer program product

Publications (1)

Publication Number Publication Date
WO2019127325A1 true WO2019127325A1 (en) 2019-07-04

Family

ID=66052329

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2017/119720 WO2019127325A1 (en) 2017-12-29 2017-12-29 Information processing method and apparatus, cloud processing device, and computer program product

Country Status (2)

Country Link
CN (1) CN109643182B (en)
WO (1) WO2019127325A1 (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2024027481A1 (en) * 2022-08-03 2024-02-08 华为技术有限公司 Device control method, and devices

Families Citing this family (1)

Publication number Priority date Publication date Assignee Title
CN110555798B (en) * 2019-08-26 2023-10-17 北京字节跳动网络技术有限公司 Image deformation method, device, electronic equipment and computer readable storage medium

Citations (4)

Publication number Priority date Publication date Assignee Title
CN105096311A (en) * 2014-07-01 2015-11-25 中国科学院科学传播研究中心 Technology for restoring depth image and combining virtual and real scenes based on GPU (Graphic Processing Unit)
CN106055113A (en) * 2016-07-06 2016-10-26 北京华如科技股份有限公司 Reality-mixed helmet display system and control method
CN106582016A (en) * 2016-12-05 2017-04-26 湖南简成信息技术有限公司 Augmented reality-based motion game control method and control apparatus
US20170295360A1 (en) * 2016-04-07 2017-10-12 Seiko Epson Corporation Head-mounted display device and computer program

Family Cites Families (7)

Publication number Priority date Publication date Assignee Title
JP2013186691A (en) * 2012-03-08 2013-09-19 Casio Comput Co Ltd Image processing device, image processing method, and program
JP6362391B2 (en) * 2014-04-10 2018-07-25 キヤノン株式会社 Information processing terminal, information processing method, and computer program
DE102016200225B4 (en) * 2016-01-12 2017-10-19 Siemens Healthcare Gmbh Perspective showing a virtual scene component
CN105843479A (en) * 2016-03-29 2016-08-10 禾穗(北京)教育科技有限公司 Content interaction method and system
CN105844714A (en) * 2016-04-12 2016-08-10 广州凡拓数字创意科技股份有限公司 Augmented reality based scenario display method and system
US10852913B2 (en) * 2016-06-21 2020-12-01 Samsung Electronics Co., Ltd. Remote hover touch system and method
CN106951153B (en) * 2017-02-21 2020-11-20 联想(北京)有限公司 Display method and electronic equipment

Patent Citations (4)

Publication number Priority date Publication date Assignee Title
CN105096311A (en) * 2014-07-01 2015-11-25 中国科学院科学传播研究中心 Technology for restoring depth image and combining virtual and real scenes based on GPU (Graphic Processing Unit)
US20170295360A1 (en) * 2016-04-07 2017-10-12 Seiko Epson Corporation Head-mounted display device and computer program
CN106055113A (en) * 2016-07-06 2016-10-26 北京华如科技股份有限公司 Reality-mixed helmet display system and control method
CN106582016A (en) * 2016-12-05 2017-04-26 湖南简成信息技术有限公司 Augmented reality-based motion game control method and control apparatus

Cited By (1)

Publication number Priority date Publication date Assignee Title
WO2024027481A1 (en) * 2022-08-03 2024-02-08 华为技术有限公司 Device control method, and devices

Also Published As

Publication number Publication date
CN109643182A (en) 2019-04-16
CN109643182B (en) 2022-01-07

Similar Documents

Publication Publication Date Title
CN105637559B (en) Use the structural modeling of depth transducer
CN106687886B (en) Three-dimensional hybrid reality viewport
KR101453815B1 (en) Device and method for providing user interface which recognizes a user's motion considering the user's viewpoint
CN108762482B (en) Data interaction method and system between large screen and augmented reality glasses
KR101890459B1 (en) Method and system for responding to user's selection gesture of object displayed in three dimensions
WO2013035758A1 (en) Information display system, information display method, and storage medium
CN103365411A (en) Information input apparatus, information input method, and computer program
CN103793060A (en) User interaction system and method
US9013396B2 (en) System and method for controlling a virtual reality environment by an actor in the virtual reality environment
JPWO2014141504A1 (en) 3D user interface device and 3D operation processing method
Bai et al. 3D gesture interaction for handheld augmented reality
Jimeno-Morenilla et al. Augmented and virtual reality techniques for footwear
CN109313510A (en) Integrated free space and surface input device
WO2018102615A1 (en) A system for importing user interface devices into virtual/augmented reality
Shim et al. Gesture-based interactive augmented reality content authoring system using HMD
JP2016122392A (en) Information processing apparatus, information processing system, control method and program of the same
JP2004265222A (en) Interface method, system, and program
WO2019127325A1 (en) Information processing method and apparatus, cloud processing device, and computer program product
Lee et al. Tunnelslice: Freehand subspace acquisition using an egocentric tunnel for wearable augmented reality
TW201832049A (en) Input method, device, apparatus, system, and computer storage medium
Halim et al. Designing ray-pointing using real hand and touch-based in handheld augmented reality for object selection
CN114167997A (en) Model display method, device, equipment and storage medium
Zhang et al. A hybrid 2D–3D tangible interface combining a smartphone and controller for virtual reality
Bai Mobile augmented reality: Free-hand gesture-based interaction
CN114647304A (en) Mixed reality interaction method, device, equipment and storage medium

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 17936817

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

32PN Ep: public notification in the ep bulletin as address of the adressee cannot be established

Free format text: NOTING OF LOSS OF RIGHTS PURSUANT TO RULE 112(1) EPC (EPO FORM 1205A DATED 17.11.2020)

122 Ep: pct application non-entry in european phase

Ref document number: 17936817

Country of ref document: EP

Kind code of ref document: A1