WO2017092597A1 - Method and apparatus for displaying a presentation object according to real-time information - Google Patents

Method and apparatus for displaying a presentation object according to real-time information

Info

Publication number
WO2017092597A1 (PCT/CN2016/107014)
Authority
WO
WIPO (PCT)
Prior art keywords
display, point, real, sliding, cycle
Prior art date
2015-12-04
Application number
PCT/CN2016/107014
Other languages
English (en)
French (fr)
Inventor
曾岳伟
Original Assignee
阿里巴巴集团控股有限公司 (Alibaba Group Holding Limited)
Priority date
2015-12-04
Filing date
2016-11-24
Application filed by 阿里巴巴集团控股有限公司 (Alibaba Group Holding Limited)
Priority to EP16869907.2A (EP3367228B1)
Priority to SG11201804351PA
Priority to KR1020187018757A (KR102148583B1)
Priority to JP2018528716A (JP6640356B2)
Priority to AU2016363434A (AU2016363434B2)
Publication of WO2017092597A1
Priority to US15/993,071 (US10551912B2)
Priority to PH12018501175A (PH12018501175A1)
Priority to AU2019101558A (AU2019101558A4)

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06F: ELECTRIC DIGITAL DATA PROCESSING
    • G06F 3/00: Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F 3/01: Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F 3/011: Arrangements for interaction with the human body, e.g. for user immersion in virtual reality
    • G06F 3/012: Head tracking input arrangements
    • G06F 3/03: Arrangements for converting the position or the displacement of a member into a coded form
    • G06F 3/041: Digitisers, e.g. for touch screens or touch pads, characterised by the transducing means
    • G06F 3/0416: Control or interface arrangements specially adapted for digitisers
    • G06F 3/048: Interaction techniques based on graphical user interfaces [GUI]
    • G06F 3/0487: Interaction techniques based on GUIs using specific features provided by the input device, e.g. functions controlled by the rotation of a mouse with dual sensing arrangements, or of the nature of the input device, e.g. tap gestures based on pressure sensed by a digitiser
    • G06F 3/0488: Interaction techniques based on GUIs using a touch-screen or digitiser, e.g. input of commands through traced gestures
    • G06F 3/0484: Interaction techniques based on GUIs for the control of specific functions or operations, e.g. selecting or manipulating an object, an image or a displayed text element, setting a parameter value or selecting a range
    • G06F 3/04847: Interaction techniques to control parameter settings, e.g. interaction with sliders or dials
    • G06N: COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 5/00: Computing arrangements using knowledge-based models
    • G06N 5/04: Inference or reasoning models
    • G06N 5/046: Forward inferencing; production systems
    • G06V: IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 10/00: Arrangements for image or video recognition or understanding
    • G06V 10/70: Arrangements using pattern recognition or machine learning
    • G06V 10/82: Arrangements using neural networks
    • G06V 40/00: Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V 40/10: Human or animal bodies, e.g. vehicle occupants or pedestrians; body parts, e.g. hands
    • G06V 40/16: Human faces, e.g. facial parts, sketches or expressions
    • G06V 40/161: Detection; localisation; normalisation
    • G06V 40/172: Classification, e.g. identification
    • G06V 40/20: Movements or behaviour, e.g. gesture recognition

Definitions

  • The present application relates to the field of computer and network technologies, and in particular to a method and apparatus for displaying a presentation object according to real-time information.
  • With the popularity of cameras, gyroscopes, accelerometers, distance sensors, and other components on terminals, applications can use these devices, which provide real-time information, to implement new functions or to implement existing functions in entirely new ways.
  • In some implementations, a presentation object displayed on the screen can be made to change correspondingly when the real-time information changes, thereby interacting with the user and providing an intuitive and novel experience.
  • Besides changing the presentation object itself according to the real-time information, the display position of the presentation object on the screen can also be changed according to the real-time information. For example, the position of the user's hand in a live video can determine where on the screen a ball is displayed, as if the user were holding the ball in their hand.
  • In the prior art, the application extracts, in a certain cycle, the real-time information output from a device such as a sensor, maps the real-time information to a corresponding screen display position according to a preset algorithm, and displays the presentation object at that screen display position. Thus, as the real-time information changes each cycle, the presentation object is displayed at a new position on the screen, moving once per cycle, which looks like gradual movement.
  • With this implementation, when the real-time information changes sharply, the position of the presentation object on the screen also changes sharply, so the presentation object appears to jump across the screen; and when the real-time information changes only slightly for several consecutive cycles, the presentation object appears to jitter on the screen. In both cases the display effect is not ideal.
  • In view of this, the present application provides a method for displaying a presentation object according to real-time information, including: acquiring, in a certain cycle, position information corresponding to real-time information; determining a start point according to the position information acquired in the previous cycle and an end point according to the position information acquired in the current cycle; and sliding the presentation object between the start point and the end point.
  • The present application also provides an apparatus for displaying a presentation object according to real-time information, including:
  • a position information acquiring unit, configured to acquire, in a certain cycle, position information corresponding to real-time information;
  • a start and end point determining unit, configured to determine a start point according to the position information acquired in the previous cycle and an end point according to the position information acquired in the current cycle;
  • a sliding display unit, configured to slide the presentation object between the start point and the end point.
  • As can be seen from the above technical solutions, in the embodiments of the present application the start point and the end point are determined from the position information corresponding to the real-time information of two adjacent cycles, and the presentation object slides between them, so that the displayed object moves smoothly from the start point to the end point. This avoids the abrupt jumps that occur when the real-time information changes sharply and the jitter that occurs when it changes only slightly, improving the display effect of the presentation object.
  • FIG. 1 is a flowchart of a method for displaying a presentation object according to real-time information in an embodiment of the present application;
  • FIG. 2 is a schematic diagram of the hash point distribution of the average hashing mode in an embodiment of the present application;
  • FIG. 3 is a schematic diagram of the hash point distribution of the accelerated hashing mode in an embodiment of the present application;
  • FIG. 4 is a schematic diagram of the hash point distribution of the decelerated hashing mode in an embodiment of the present application;
  • FIG. 5 is a schematic diagram of a face login interface in an application example of the present application;
  • FIG. 6 is a flowchart of displaying a user's real-time avatar in an application example of the present application;
  • FIG. 7 is a hardware structure diagram of a terminal or a server;
  • FIG. 8 is a logical structure diagram of an apparatus for displaying a presentation object according to real-time information in an embodiment of the present application.
  • The embodiments of the present application propose a new method for displaying a presentation object according to real-time information: between the screen display positions corresponding to the real-time information of two adjacent cycles, the presentation object is slid in an animated manner from the previous cycle's screen display position to the current cycle's screen display position. A presentation object displayed this way does not jump or jitter when the real-time information changes too much or too little, giving a better display effect and thereby solving the problems in the prior art.
  • The embodiments of the present application can be applied on any device that can obtain the real-time information used to determine the display position of the presentation object and that has computing and storage capabilities, including mobile phones, tablet computers, PCs (Personal Computers), notebooks, servers, and the like.
  • The device can read real-time information from its own sensors or camera components, or continuously obtain real-time information about a target object from other devices over the network.
  • Real-time information includes any time-varying parameter that the device can collect, such as the movement speed of an object in a real-time video, the grayscale of a real-time image, the various real-time variables of an industrial control process, the real-time positioning data of a terminal, and so on.
  • The presentation object may be a non-real-time picture, video, etc., or a real-time picture and/or real-time video.
  • The embodiments of the present application do not limit the type or source of the real-time information, nor the specific type of the presentation object.
  • The body running the flow may be an application, a process or thread in an application, a service, or a process or thread in a service.
  • Step 110: Acquire, in a certain cycle, position information corresponding to real-time information.
  • Depending on the specific application scenario, the position information may be screen position coordinates matched to the size of the screen displaying the presentation object, or other parameters (such as screen scale coordinates) that can be used to determine the screen position coordinates. The body running this embodiment can read real-time information in a certain cycle from the sensors or camera components of its device, or obtain it over the network, and then use the real-time information as the input of a preset algorithm to get the corresponding position information; it can also obtain the position information corresponding to the real-time information directly from the network. The embodiments of the present application place no limitation here.
  • The method for generating the corresponding position information from the real-time information can be implemented with reference to the prior art and is not described in detail here.
  • Step 120: Determine a start point according to the position information acquired in the previous cycle, and an end point according to the position information acquired in the current cycle.
  • If the acquired position information is already screen position coordinates matched to the size of the screen displaying the presentation object, the previous cycle's position information is used directly as the start point and the current cycle's position information as the end point; otherwise, based on the relationship between the position information and screen position coordinates, the previous cycle's position information is converted into screen position coordinates to serve as the start point, and the current cycle's position information is converted into screen position coordinates to serve as the end point.
  • The way position information is converted into screen position coordinates usually differs by application scenario. For example, the position information acquired each cycle may be screen scale coordinates (sX, sY), where sX is the X-axis proportional coordinate with value range [0, 1), representing the ratio of the point's X-axis screen coordinate to the maximum value of the horizontal axis X; and sY is the Y-axis proportional coordinate with value range [0, 1), representing the ratio of the point's Y-axis screen coordinate to the maximum value of the vertical axis Y. Based on the width and length of the screen displaying the presentation object, sX and sY can be converted into the horizontal coordinate x and vertical coordinate y of the screen position coordinates. Specifically, if the total width of the current screen is W and the total length is H, the screen position coordinates converted from (sX, sY) are (sX*W, sY*H), in the same units as W and H, such as pixels or millimeters.
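To make this conversion concrete, here is a minimal sketch in TypeScript; the type and function names are illustrative, not from the patent:

```typescript
// Screen scale coordinates: sX, sY in [0, 1), relative to the screen extents.
interface ScaleCoord { sX: number; sY: number; }
// Screen position coordinates, in the same units as W and H (e.g. pixels).
interface ScreenCoord { x: number; y: number; }

// Convert scale coordinates to screen position coordinates for a screen
// of total width W and total length H: (sX, sY) -> (sX*W, sY*H).
function toScreenCoord(p: ScaleCoord, W: number, H: number): ScreenCoord {
  return { x: p.sX * W, y: p.sY * H };
}

// Example: (0.5, 0.5) on a 1080 x 1920 px screen maps to the screen center.
const center = toScreenCoord({ sX: 0.5, sY: 0.5 }, 1080, 1920); // x: 540, y: 960
```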
  • Step 130: Slide the presentation object between the start point and the end point.
  • After the start point and end point of the current cycle are determined, the sliding display of this cycle can be performed: the presentation object is moved gradually from the start point to the end point in an animated manner, which looks as if the presentation object slides from the start point to the end point. The specific way of performing the sliding display can be implemented with reference to prior-art techniques for dynamic images, animations, and the like, and is not limited in the embodiments of the present application.
  • In one example, the sliding trajectory of the presentation object between the start point and the end point may be determined first, and N (N is a natural number) hash position points are selected on the sliding trajectory; the presentation object is then displayed in turn, each for a certain single-point display duration, at the start point, the N hash position points, and the end point, forming a sliding effect.
  • The sliding trajectory may be a straight line or a curve, and the hash points may be selected in whatever manner gives the desired sliding effect, such as average, accelerated, or decelerated spacing.
  • Let A be the start point, B the end point, and the sliding trajectory between A and B a straight line. In the average hashing mode the hash points are evenly distributed between A and B, as shown in FIG. 2; in the accelerated hashing mode the distance between hash points increases gradually along the direction from A to B, as shown in FIG. 3; in the decelerated hashing mode the distance between hash points decreases gradually along the direction from A to B, as shown in FIG. 4.
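One way to realize the three hashing modes on a straight-line trajectory is to place the i-th hash point at a fraction t of the way from A to B and warp t with a power curve. The TypeScript sketch below (reusing ScreenCoord from the previous example) illustrates the idea; the specific easing curves are one possible choice, not mandated by the patent:

```typescript
type HashMode = "average" | "accelerate" | "decelerate";

// Select N hash position points strictly between start point A and end
// point B on a straight-line trajectory. "average" spaces them evenly;
// "accelerate" makes the gaps grow from A to B; "decelerate" shrinks them.
function hashPoints(A: ScreenCoord, B: ScreenCoord, N: number,
                    mode: HashMode): ScreenCoord[] {
  const points: ScreenCoord[] = [];
  for (let i = 1; i <= N; i++) {
    let t = i / (N + 1);                                  // even spacing
    if (mode === "accelerate") t = t * t;                 // gaps widen toward B
    if (mode === "decelerate") t = 1 - (1 - t) * (1 - t); // gaps narrow toward B
    points.push({ x: A.x + (B.x - A.x) * t, y: A.y + (B.y - A.y) * t });
  }
  return points;
}
```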
  • In the above example, the sliding trajectory, the specific value of N, the way hash points are selected, and the single-point display duration may be the same or different from cycle to cycle. When they differ per cycle, each may be polled or randomly chosen from several preset options, or the specific sliding trajectory, value of N, hash point selection method, and/or single-point display duration may be determined from the position information or from changes in the position information. The embodiments of the present application place no limitation on any of these.
  • For example, the value of N and the single-point display duration can be selected according to the sliding distance of the presentation object in this cycle (determined by the previous cycle's position information, the current cycle's position information, and this cycle's sliding trajectory): if the sliding distance is long, increase N and reduce the single-point display duration; if the sliding distance is short, decrease N and increase the single-point display duration, thereby achieving a better display effect.
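As one hypothetical heuristic along these lines (the constants are made up for illustration and are not given by the patent), N and the single-point duration could be derived from the slide distance so that the perceived speed stays roughly stable:

```typescript
// Pick N and the single-point display duration from the slide distance:
// a long slide gets more, shorter-lived intermediate points; a short
// slide gets fewer, longer-lived ones. Constants are illustrative only.
function slideParams(distancePx: number): { N: number; pointMs: number } {
  const N = Math.max(2, Math.round(distancePx / 20)); // ~one point per 20 px
  const totalMs = 200;                                // assumed total slide time
  return { N, pointMs: totalMs / (N + 2) };           // +2 covers A and B
}
```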
  • A presentation object usually has its own shape and occupies a certain display area on the screen. Displaying the presentation object at the start point, end point, and hash points, as described in the embodiments of the present application, means using the start point, end point, or hash point to position some fixed point on the presentation object, such as its center point or upper-left boundary point.
  • It should be noted that the total sliding duration of one cycle, i.e., the time the presentation object needs to slide from the start point to the end point, may be greater than, less than, or equal to the cycle length. In other words, when the current cycle begins, the previous cycle's sliding display may have already ended, be just ending, or not yet have ended.
  • If the previous cycle's sliding display has not ended when the current cycle begins, the current cycle's start point can be modified to the display position of the presentation object at the beginning of the cycle, and the not-yet-completed portion of the previous cycle's slide is canceled. The presentation object then slides toward the end point from its display position at the start of this cycle (i.e., some intermediate position of the previous cycle's slide), so the display position reflects changes in the real-time information more promptly while still achieving a smooth sliding display.
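A per-cycle driver that handles an unfinished previous slide might look like the sketch below; `currentPosition` and `cancel` are assumed hooks into whatever animation facility is used, not APIs named by the patent:

```typescript
interface SlideAnimation {
  finished: boolean;
  currentPosition(): ScreenCoord; // where the object is drawn right now
  cancel(): void;                 // drop the not-yet-displayed remainder
}

let running: SlideAnimation | null = null;

// Called once per cycle: prevEnd is the previous cycle's end point, B is
// this cycle's end point, and startSlide launches the animated slide.
function onCycle(prevEnd: ScreenCoord, B: ScreenCoord,
                 startSlide: (A: ScreenCoord, B: ScreenCoord) => SlideAnimation): void {
  let A = prevEnd;
  if (running && !running.finished) {
    A = running.currentPosition(); // restart from the mid-slide position
    running.cancel();              // cancel the unfinished portion
  }
  running = startSlide(A, B);
}
```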
  • In some application scenarios, a machine learning algorithm is used in deriving the corresponding position information from the real-time information; for example, where position information is determined from certain feature values of real-time video recorded by the camera, a neural network algorithm is often used to recognize the video feature values. Because each cycle's collected real-time information and processing result are used to correct the algorithm itself, even when the previous cycle's real-time information is identical to the current cycle's, the position information output by the algorithm may differ slightly, which tends to make the user feel that the presentation object is unstable.
  • To avoid this, after the start point and end point of the current cycle are determined, it can be judged whether the distance between them is less than a predetermined threshold; if so, the presentation object stays displayed at its current position and the sliding display of this cycle is skipped. The predetermined threshold can be chosen according to the degree to which the machine learning algorithm's self-learning process may affect the position information.
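The dead-zone check itself is a one-liner; the threshold value is application-specific:

```typescript
// Slide only when A and B are at least `thresholdPx` apart, so that the
// learning algorithm's self-correction noise does not cause visible jitter.
function shouldSlide(A: ScreenCoord, B: ScreenCoord, thresholdPx: number): boolean {
  return Math.hypot(B.x - A.x, B.y - A.y) >= thresholdPx;
}
```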
  • As can be seen, in the embodiments of the present application, each cycle's change in the position of the presentation object is presented as a sliding display whose start and end points are determined by the position information corresponding to the real-time information of two adjacent cycles. The displayed presentation object thus moves smoothly from the start point to the end point, without jumping or jittering when the real-time information changes too much or too little, which improves the display effect.
  • In one application example of the present application, a terminal App uses face authentication for login; its login interface is shown in FIG. 5.
  • Using face recognition technology, the App extracts the user's avatar from the real-time video coming from the terminal's camera and displays the user's real-time avatar, as the presentation object, inside a circular frame.
  • The position of the user's real-time avatar on the screen is determined by the left-right deflection and up-down tilt of the real-time avatar.
  • Specifically, the App uses a neural network algorithm to recognize the rotation angle of the front of the user's real-time avatar in the horizontal direction and its rotation angle in the vertical direction; the horizontal rotation angle corresponds to the X-axis proportional coordinate sX, and the vertical rotation angle corresponds to the Y-axis proportional coordinate sY.
  • For example, when the user's real-time avatar is recognized as facing front both horizontally and vertically, the corresponding scale coordinates are (0.5, 0.5) and the avatar is displayed in the middle of the screen; when the recognized avatar shows the left profile horizontally and the front vertically, the corresponding scale coordinates are (0, 0.5) and the avatar is displayed in the middle of the screen's left border.
  • Thus, when the user changes the left-right deflection and up-down tilt of their head, the user's real-time avatar on the login interface shown in FIG. 5 changes its display position accordingly.
  • For example, the user can move the real-time avatar to the right on the screen by turning their head to the right, and move it down by lowering their head.
  • When the user's real-time avatar moves into the predetermined marked position on the screen, the App performs face verification on the real-time avatar and completes the user login once verification succeeds.
  • After the App recognizes that the real-time video includes a face, the flow shown in FIG. 6 is used to implement the sliding of the user's real-time avatar.
  • Step 610: Read the output of the neural network algorithm in a certain cycle to obtain the scale coordinates (sX, sY) corresponding to the current rotation angles of the user's real-time avatar in the horizontal and vertical directions.
  • Step 620: Convert the scale coordinates (sX, sY) obtained this cycle into this cycle's screen position coordinates (sX*W, sY*H). The previous cycle's screen position coordinates are the start point A, and this cycle's screen position coordinates are the end point B.
  • Step 630: Judge whether the distance between A and B is less than a predetermined threshold. If yes, keep displaying the user's real-time avatar at the current position, skip this cycle's sliding display, and go to step 610 to wait for the next cycle's position information; if no, proceed to step 640.
  • Step 640: Judge whether the previous cycle's sliding display has ended. If yes, go to step 660; if no, proceed to step 650.
  • Step 650: Set the start point A to the coordinates of the current display position of the user's real-time avatar, and cancel the portion of the previous cycle's sliding display beyond the current position that has not yet completed.
  • Step 660: Choose one of the average, accelerated, and decelerated hashing modes, and generate a predetermined number of hash points on the line segment AB according to the chosen mode.
  • Step 670: Display the user's real-time avatar, each for the predetermined single-point display duration, at point A, at each hash point in the direction from A to B, and at point B in turn, producing the effect of the user's real-time avatar sliding from point A to point B.
  • Step 680: Judge whether the user's real-time avatar has reached the predetermined marked position. If yes, go to step 690; if no, go to step 610 for the next cycle's processing.
  • Step 690: Perform face verification on the user's real-time avatar; if it succeeds, the user login completes, and if it fails, the user login fails.
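Read together, steps 610 to 670 amount to the per-cycle driver sketched below, building on toScreenCoord, shouldSlide, and hashPoints from the earlier sketches. The declared functions stand in for the App's camera, neural network, and rendering plumbing, none of which the patent specifies; the threshold, N, and timing values are placeholders, and the unfinished-slide handling of steps 640 and 650 (shown in the onCycle sketch above) is omitted for brevity:

```typescript
// Assumed plumbing, not from the patent: replace with the App's real hooks.
declare function readNeuralNetOutput(): Promise<{ sX: number; sY: number }>;
declare function displayAvatarAt(p: ScreenCoord): void;
const sleep = (ms: number) => new Promise<void>(resolve => setTimeout(resolve, ms));

// One cycle of the avatar display, per FIG. 6 (steps 610-670). Returns the
// position to use as the next cycle's start point.
async function avatarCycle(W: number, H: number,
                           prev: ScreenCoord | null): Promise<ScreenCoord> {
  const out = await readNeuralNetOutput();                   // step 610
  const B = toScreenCoord(out, W, H);                        // step 620
  const A = prev ?? B;
  if (!shouldSlide(A, B, 4)) return A;                       // step 630 (threshold assumed)
  const path = [A, ...hashPoints(A, B, 8, "decelerate"), B]; // step 660
  for (const p of path) {                                    // step 670
    displayAvatarAt(p);
    await sleep(16);                                         // single-point duration (assumed)
  }
  return B;
}
```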
  • Corresponding to the above flow, an embodiment of the present application also provides an apparatus for displaying a presentation object according to real-time information.
  • The apparatus can be implemented in software, in hardware, or in a combination of the two.
  • Taking software implementation as an example, as a logical apparatus it is formed by the CPU (Central Processing Unit) of the terminal or server reading the corresponding computer program instructions into memory and running them.
  • At the hardware level, in addition to the CPU, memory, and non-volatile storage shown in FIG. 7, the terminal hosting the apparatus usually also includes other hardware such as chips for transmitting and receiving wireless signals, and the server hosting the apparatus usually also includes other hardware such as boards for implementing network communication functions.
  • FIG. 8 shows an apparatus for displaying a presentation object according to real-time information provided by an embodiment of the present application, including a position information acquiring unit, a start and end point determining unit, and a sliding display unit, where: the position information acquiring unit is configured to acquire, in a certain cycle, position information corresponding to real-time information; the start and end point determining unit is configured to determine a start point according to the position information acquired in the previous cycle and an end point according to the position information acquired in the current cycle; and the sliding display unit is configured to slide the presentation object between the start point and the end point.
  • Optionally, the apparatus further includes a start point modifying unit, configured to, when the previous cycle's sliding display has not ended at the beginning of the current cycle, modify the start point to the display position of the presentation object at the beginning of the cycle and cancel the not-yet-completed portion of the previous cycle's sliding display.
  • Optionally, the apparatus further includes a slide canceling unit, configured to display the presentation object at the current position, without performing this cycle's sliding display, when the distance between the start point and the end point is less than a predetermined threshold.
  • In one example, the sliding display unit may include a hash point determining module and a hash point display module, where: the hash point determining module is configured to determine the sliding trajectory of the presentation object between the start point and the end point and select N hash position points on the sliding trajectory, N being a natural number; and the hash point display module is configured to display the presentation object in turn, each for a certain single-point display duration, at the start point, the N hash position points, and the end point, forming a sliding effect.
  • In the above example, the hash point determining module may be specifically configured to: determine the sliding trajectory of the presentation object between the start point and the end point, and select the N hash position points on the sliding trajectory with average, accelerated, or decelerated spacing.
  • Optionally, the presentation object includes a real-time picture and/or a real-time video.
  • In one implementation, the presentation object includes the user's real-time avatar video;
  • the position information corresponding to the real-time information includes an X-axis proportional coordinate corresponding to the rotation angle of the front of the user's real-time avatar in the horizontal direction, and a Y-axis proportional coordinate corresponding to the rotation angle of the front of the user's real-time avatar in the vertical direction;
  • and the start and end point determining unit is specifically configured to: convert the previous cycle's X-axis and Y-axis proportional coordinates into the position coordinates of the start point on the screen according to the width and length of the screen, and convert the current cycle's X-axis and Y-axis proportional coordinates into the position coordinates of the end point on the screen according to the width and length of the screen.
  • In this implementation, the apparatus may further include a verification login unit, configured to verify the user's real-time avatar when the presentation object slides to the predetermined marked position on the screen, and to complete the user login after the verification succeeds.
  • In a typical configuration, a computing device includes one or more processors (CPUs), input/output interfaces, network interfaces, and memory.
  • The memory may include non-persistent memory in computer readable media, random access memory (RAM), and/or non-volatile memory, such as read-only memory (ROM) or flash RAM. Memory is an example of a computer readable medium.
  • Computer readable media include permanent and non-permanent, removable and non-removable media, and can implement information storage by any method or technology. The information can be computer readable instructions, data structures, program modules, or other data.
  • Examples of computer storage media include, but are not limited to, phase-change memory (PRAM), static random access memory (SRAM), dynamic random access memory (DRAM), other types of random access memory (RAM), read-only memory (ROM), electrically erasable programmable read-only memory (EEPROM), flash memory or other memory technologies, compact disc read-only memory (CD-ROM), digital versatile discs (DVD) or other optical storage, magnetic cassettes, magnetic tape or disk storage or other magnetic storage devices, or any other non-transmission media, which can be used to store information accessible by a computing device.
  • As defined herein, computer readable media do not include transitory media, such as modulated data signals and carrier waves.
  • Those skilled in the art should understand that embodiments of the present application can be provided as a method, a system, or a computer program product.
  • Accordingly, the present application can take the form of an entirely hardware embodiment, an entirely software embodiment, or an embodiment combining software and hardware aspects.
  • Moreover, the present application can take the form of a computer program product implemented on one or more computer-usable storage media (including but not limited to disk storage, CD-ROM, optical storage, and so on) containing computer-usable program code.

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • General Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Human Computer Interaction (AREA)
  • Health & Medical Sciences (AREA)
  • Multimedia (AREA)
  • General Health & Medical Sciences (AREA)
  • Evolutionary Computation (AREA)
  • Oral & Maxillofacial Surgery (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Artificial Intelligence (AREA)
  • Software Systems (AREA)
  • Computing Systems (AREA)
  • Social Psychology (AREA)
  • Psychiatry (AREA)
  • Computational Linguistics (AREA)
  • Data Mining & Analysis (AREA)
  • Mathematical Physics (AREA)
  • Medical Informatics (AREA)
  • Databases & Information Systems (AREA)
  • User Interface Of Digital Computer (AREA)
  • Controls And Circuits For Display Device (AREA)
  • Processing Or Creating Images (AREA)
  • Navigation (AREA)
  • Instructional Devices (AREA)

Abstract

A method and apparatus for displaying a presentation object according to real-time information, including: acquiring, in a certain cycle, position information corresponding to real-time information (110); determining a start point according to the position information acquired in the previous cycle, and an end point according to the position information acquired in the current cycle (120); and sliding the presentation object between the start point and the end point (130). This avoids abrupt jumps when the real-time information changes sharply and jitter when it changes only slightly, improving the display effect of the presentation object.

Description

Method and apparatus for displaying a presentation object according to real-time information
This application claims priority to Chinese Patent Application No. 201510884339.7, filed on December 4, 2015 and entitled "Method and apparatus for displaying a presentation object according to real-time information", the entire contents of which are incorporated herein by reference.
Technical Field
The present application relates to the field of computer and network technologies, and in particular to a method and apparatus for displaying a presentation object according to real-time information.
Background
With the popularity of cameras, gyroscopes, accelerometers, distance sensors, and other components on terminals, applications can use these devices, which provide real-time information, to implement new functions or to implement existing functions in entirely new ways. In some implementations, a presentation object displayed on the screen can be made to change correspondingly when the real-time information changes, thereby interacting with the user and giving the user an intuitive and novel experience.
In these implementations, besides changing the presentation object itself according to the real-time information, the display position of the presentation object on the screen can also be changed according to the real-time information. For example, the position of the user's hand in a real-time video can determine where on the screen a ball is displayed, as if the user were holding the ball in their hand.
In the prior art, an application extracts, in a certain cycle, the real-time information output from a device such as a sensor, maps the real-time information to a corresponding screen display position according to a preset algorithm, and displays the presentation object at that screen display position. Thus, as the real-time information changes each cycle, the presentation object is displayed at a new position on the screen, moving once per cycle, which looks like gradual movement.
With this implementation, when the real-time information changes sharply, the position of the presentation object on the screen also changes sharply, making the presentation object appear to jump across the screen; and when the real-time information changes only slightly for several consecutive cycles, the presentation object appears to jitter on the screen. In neither case is the display effect ideal.
Summary
In view of this, the present application provides a method for displaying a presentation object according to real-time information, including:
acquiring, in a certain cycle, position information corresponding to real-time information;
determining a start point according to the position information acquired in the previous cycle, and an end point according to the position information acquired in the current cycle;
sliding the presentation object between the start point and the end point.
The present application also provides an apparatus for displaying a presentation object according to real-time information, including:
a position information acquiring unit, configured to acquire, in a certain cycle, position information corresponding to real-time information;
a start and end point determining unit, configured to determine a start point according to the position information acquired in the previous cycle and an end point according to the position information acquired in the current cycle;
a sliding display unit, configured to slide the presentation object between the start point and the end point.
As can be seen from the above technical solutions, in the embodiments of the present application the start point and the end point are determined from the position information corresponding to the real-time information of two adjacent cycles, and the presentation object slides between them, so that the displayed presentation object moves smoothly from the start point to the end point. This avoids abrupt jumps when the real-time information changes sharply and jitter when it changes only slightly, improving the display effect of the presentation object.
Brief Description of the Drawings
FIG. 1 is a flowchart of a method for displaying a presentation object according to real-time information in an embodiment of the present application;
FIG. 2 is a schematic diagram of the hash point distribution of the average hashing mode in an embodiment of the present application;
FIG. 3 is a schematic diagram of the hash point distribution of the accelerated hashing mode in an embodiment of the present application;
FIG. 4 is a schematic diagram of the hash point distribution of the decelerated hashing mode in an embodiment of the present application;
FIG. 5 is a schematic diagram of a face login interface in an application example of the present application;
FIG. 6 is a flowchart of displaying a user's real-time avatar in an application example of the present application;
FIG. 7 is a hardware structure diagram of a terminal or a server;
FIG. 8 is a logical structure diagram of an apparatus for displaying a presentation object according to real-time information in an embodiment of the present application.
Detailed Description
The embodiments of the present application propose a new method for displaying a presentation object according to real-time information: between the screen display positions corresponding to the real-time information of two adjacent cycles, the presentation object is slid in an animated manner from the previous cycle's screen display position to the current cycle's screen display position. A presentation object displayed this way does not jump or jitter when the real-time information changes too much or too little, giving a better display effect and thereby solving the problems in the prior art.
The embodiments of the present application can be applied on devices that can obtain the real-time information used to determine the display position of the presentation object and that have computing and storage capabilities, including mobile phones, tablet computers, PCs (Personal Computers), notebooks, servers, and the like. The device can read real-time information from its own sensors or camera components, or continuously obtain real-time information about a target object from other devices over the network. Real-time information includes any time-varying parameter that the device can collect, such as the movement speed of an object in a real-time video, the grayscale of a real-time image, the various real-time variables of an industrial control process, the real-time positioning data of a terminal, and so on. The presentation object may be a non-real-time picture, video, etc., or a real-time picture and/or real-time video. The embodiments of the present application do not limit the type or source of the real-time information, nor the specific type of the presentation object.
In the embodiments of the present application, the flow of the method for displaying a presentation object according to real-time information is shown in FIG. 1. The body running the flow may be an application, a process or thread in an application, a service, or a process or thread in a service.
Step 110: Acquire, in a certain cycle, position information corresponding to real-time information.
Depending on the specific application scenario, the position information may be screen position coordinates matched to the size of the screen displaying the presentation object, or other parameters (such as screen scale coordinates) that can be used to determine the screen position coordinates. The body running this embodiment can read real-time information in a certain cycle from the sensors or camera components of its device, or obtain it over the network, and then use the real-time information as the input of a preset algorithm to get the corresponding position information; it can also obtain the position information corresponding to the real-time information directly from the network. The embodiments of the present application place no limitation here.
The method for generating the corresponding position information from the real-time information can be implemented with reference to the prior art and is not described in detail here.
Step 120: Determine a start point according to the position information acquired in the previous cycle, and an end point according to the position information acquired in the current cycle.
If the acquired position information is screen position coordinates matched to the size of the screen displaying the presentation object, the previous cycle's position information is used directly as the start point and the current cycle's position information as the end point; otherwise, based on the relationship between the position information and screen position coordinates, the previous cycle's position information is converted into screen position coordinates to serve as the start point, and the current cycle's position information is converted into screen position coordinates to serve as the end point.
The way position information is converted into screen position coordinates usually differs by application scenario. For example, the position information acquired each cycle may be screen scale coordinates (sX, sY), where sX is the X-axis proportional coordinate with value range [0, 1), representing the ratio of the point's X-axis screen coordinate to the maximum value of the horizontal axis X; and sY is the Y-axis proportional coordinate with value range [0, 1), representing the ratio of the point's Y-axis screen coordinate to the maximum value of the vertical axis Y. Based on the width and length of the screen displaying the presentation object, sX and sY can be converted into the horizontal coordinate x and vertical coordinate y of the screen position coordinates. Specifically, if the total width of the current screen is W and the total length is H, the screen position coordinates converted from (sX, sY) are (sX*W, sY*H), in the same units as W and H, such as pixels or millimeters.
Step 130: Slide the presentation object between the start point and the end point.
After the start point and end point of the current cycle are determined, the sliding display of this cycle can be performed: the presentation object is moved gradually from the start point to the end point in an animated manner, which looks as if the presentation object slides from the start point to the end point. The specific way of performing the sliding display can be implemented with reference to prior-art techniques for dynamic images, animations, and the like, and is not limited in the embodiments of the present application.
In one example, the sliding trajectory of the presentation object between the start point and the end point may be determined first, and N (N is a natural number) hash position points are selected on the sliding trajectory; the presentation object is then displayed in turn, each for a certain single-point display duration, at the start point, the N hash position points, and the end point, forming a sliding effect. The sliding trajectory may be a straight line or a curve, and the hash points may be selected in whatever manner gives the desired sliding effect, such as average, accelerated, or decelerated spacing. Let A be the start point, B the end point, and the sliding trajectory between A and B a straight line: the hash points of the average hashing mode are evenly distributed between A and B, as shown in FIG. 2; in the accelerated hashing mode, the distance between hash points increases gradually along the direction from A to B, as shown in FIG. 3; in the decelerated hashing mode, the distance between hash points decreases gradually along the direction from A to B, as shown in FIG. 4.
In the above example, the sliding trajectory, the specific value of N, the way hash points are selected, and the single-point display duration may be the same or different from cycle to cycle. When they differ per cycle, each may be polled or randomly chosen from several preset options, or the specific sliding trajectory, value of N, hash point selection method, and/or single-point display duration may be determined from the position information or from changes in the position information. The embodiments of the present application place no limitation on any of these.
For example, the value of N and the single-point display duration can be selected according to the sliding distance of the presentation object in this cycle (determined by the previous cycle's position information, the current cycle's position information, and this cycle's sliding trajectory): if the sliding distance is long, increase N and reduce the single-point display duration; if the sliding distance is short, decrease N and increase the single-point display duration, thereby achieving a better display effect.
A presentation object usually has its own shape and occupies a certain display area on the screen. Displaying the presentation object at the start point, end point, and hash points, as described in the embodiments of the present application, means using the start point, end point, or hash point to position some fixed point on the presentation object, such as its center point or upper-left boundary point.
It should be noted that the total sliding duration of one cycle, i.e., the time the presentation object needs to slide from the start point to the end point, may be greater than, less than, or equal to the cycle length. In other words, when the current cycle begins, the previous cycle's sliding display may have already ended, be just ending, or not yet have ended.
If the previous cycle's sliding display has not ended when the current cycle begins, the current cycle's start point can be modified to the display position of the presentation object at the beginning of the cycle, and the not-yet-completed portion of the previous cycle's slide is canceled. The presentation object then slides toward the end point from its display position at the start of this cycle (i.e., some intermediate position of the previous cycle's slide), so the display position reflects changes in the real-time information more promptly while still achieving a smooth sliding display.
In some application scenarios, a machine learning algorithm is used in deriving the corresponding position information from the real-time information; for example, where position information is determined from certain feature values of real-time video recorded by the camera, a neural network algorithm is often used to recognize the video feature values. Because each cycle's collected real-time information and processing result are used to correct the algorithm itself, even when the previous cycle's real-time information is identical to the current cycle's, the position information output by the algorithm may differ slightly, which tends to make the user feel that the presentation object is unstable.
To avoid this, after the start point and end point of the current cycle's slide are determined, it can be judged whether the distance between them is less than a predetermined threshold; if so, the presentation object stays displayed at its current position and the sliding display of this cycle is skipped. The predetermined threshold can be chosen according to the degree to which the machine learning algorithm's self-learning process may affect the position information.
As can be seen, in the embodiments of the present application, each cycle's change in the position of the presentation object is presented as a sliding display whose start and end points are determined by the position information corresponding to the real-time information of two adjacent cycles. The displayed presentation object thus moves smoothly from the start point to the end point, without jumping or jittering when the real-time information changes too much or too little, which improves the display effect.
In one application example of the present application, a terminal App (application) uses face authentication for login; its login interface is shown in FIG. 5. Using face recognition technology, the App extracts the user's avatar from the real-time video coming from the terminal's camera and displays the user's real-time avatar, as the presentation object, inside a circular frame. The position of the user's real-time avatar on the screen is determined by the left-right deflection and up-down tilt of the real-time avatar.
Specifically, the App uses a neural network algorithm to recognize the rotation angle of the front of the user's real-time avatar in the horizontal direction and its rotation angle in the vertical direction; the horizontal rotation angle corresponds to the X-axis proportional coordinate sX, and the vertical rotation angle corresponds to the Y-axis proportional coordinate sY. For example, when the user's real-time avatar is recognized as facing front both horizontally and vertically, the corresponding scale coordinates are (0.5, 0.5) and the avatar is displayed in the middle of the screen; when the recognized avatar shows the left profile horizontally and the front vertically, the corresponding scale coordinates are (0, 0.5) and the avatar is displayed in the middle of the screen's left border.
Thus, when the user changes the left-right deflection and up-down tilt of their head, the user's real-time avatar on the login interface shown in FIG. 5 changes its display position accordingly. For example, the user can move the real-time avatar to the right on the screen by turning their head to the right, and move it down by lowering their head. When the user's real-time avatar moves into the predetermined marked position at the bottom of the screen, the App performs face verification on the real-time avatar and completes the user login once verification succeeds.
After the App recognizes that the real-time video includes a face, the flow shown in FIG. 6 is used to implement the sliding of the user's real-time avatar.
Step 610: Read the output of the neural network algorithm in a certain cycle to obtain the scale coordinates (sX, sY) corresponding to the current rotation angles of the user's real-time avatar in the horizontal and vertical directions.
Step 620: Convert the scale coordinates (sX, sY) obtained this cycle into this cycle's screen position coordinates (sX*W, sY*H). The previous cycle's screen position coordinates are the start point A, and this cycle's screen position coordinates are the end point B.
Step 630: Judge whether the distance between A and B is less than a predetermined threshold. If yes, keep displaying the user's real-time avatar at the current position, skip this cycle's sliding display, and return to step 610 to wait for the next cycle's position information; if no, proceed to step 640.
Step 640: Judge whether the previous cycle's sliding display has ended. If yes, go to step 660; if no, proceed to step 650.
Step 650: Set the start point A to the coordinates of the current display position of the user's real-time avatar, and cancel the portion of the previous cycle's sliding display beyond the current position that has not yet completed.
Step 660: Choose one of the average, accelerated, and decelerated hashing modes, and generate a predetermined number of hash points on the line segment AB according to the chosen mode.
Step 670: Display the user's real-time avatar, each for the predetermined single-point display duration, at point A, at each hash point in the direction from A to B, and at point B in turn, producing the effect of the user's real-time avatar sliding from point A to point B.
Step 680: Judge whether the user's real-time avatar has reached the predetermined marked position. If yes, go to step 690; if no, return to step 610 for the next cycle's processing.
Step 690: Perform face verification on the user's real-time avatar; if it succeeds, the user login completes, and if it fails, the user login fails.
Corresponding to the above flow, an embodiment of the present application also provides an apparatus for displaying a presentation object according to real-time information. The apparatus can be implemented in software, in hardware, or in a combination of the two. Taking software implementation as an example, as a logical apparatus it is formed by the CPU (Central Processing Unit) of the terminal or server reading the corresponding computer program instructions into memory and running them. At the hardware level, in addition to the CPU, memory, and non-volatile storage shown in FIG. 7, the terminal hosting the apparatus usually also includes other hardware such as chips for transmitting and receiving wireless signals, and the server hosting the apparatus usually also includes other hardware such as boards for implementing network communication functions.
FIG. 8 shows an apparatus for displaying a presentation object according to real-time information provided by an embodiment of the present application, including a position information acquiring unit, a start and end point determining unit, and a sliding display unit, where: the position information acquiring unit is configured to acquire, in a certain cycle, position information corresponding to real-time information; the start and end point determining unit is configured to determine a start point according to the position information acquired in the previous cycle and an end point according to the position information acquired in the current cycle; and the sliding display unit is configured to slide the presentation object between the start point and the end point.
Optionally, the apparatus further includes a start point modifying unit, configured to, when the previous cycle's sliding display has not ended at the beginning of the current cycle, modify the start point to the display position of the presentation object at the beginning of the cycle and cancel the not-yet-completed portion of the previous cycle's sliding display.
Optionally, the apparatus further includes a slide canceling unit, configured to display the presentation object at the current position, without performing this cycle's sliding display, when the distance between the start point and the end point is less than a predetermined threshold.
In one example, the sliding display unit may include a hash point determining module and a hash point display module, where: the hash point determining module is configured to determine the sliding trajectory of the presentation object between the start point and the end point and select N hash position points on the sliding trajectory, N being a natural number; and the hash point display module is configured to display the presentation object in turn, each for a certain single-point display duration, at the start point, the N hash position points, and the end point, forming a sliding effect.
In the above example, the hash point determining module may be specifically configured to: determine the sliding trajectory of the presentation object between the start point and the end point, and select the N hash position points on the sliding trajectory with average, accelerated, or decelerated spacing.
Optionally, the presentation object includes a real-time picture and/or a real-time video.
In one implementation, the presentation object includes the user's real-time avatar video; the position information corresponding to the real-time information includes an X-axis proportional coordinate corresponding to the rotation angle of the front of the user's real-time avatar in the horizontal direction, and a Y-axis proportional coordinate corresponding to the rotation angle of the front of the user's real-time avatar in the vertical direction; and the start and end point determining unit is specifically configured to: convert the previous cycle's X-axis and Y-axis proportional coordinates into the position coordinates of the start point on the screen according to the width and length of the screen, and convert the current cycle's X-axis and Y-axis proportional coordinates into the position coordinates of the end point on the screen according to the width and length of the screen.
In this implementation, the apparatus may further include a verification login unit, configured to verify the user's real-time avatar when the presentation object slides to the predetermined marked position on the screen, and to complete the user login after the verification succeeds.
The above descriptions are merely preferred embodiments of the present application and are not intended to limit it; any modification, equivalent replacement, or improvement made within the spirit and principles of the present application shall fall within its scope of protection.
In a typical configuration, a computing device includes one or more processors (CPUs), input/output interfaces, network interfaces, and memory.
The memory may include non-persistent memory in computer readable media, random access memory (RAM), and/or non-volatile memory, such as read-only memory (ROM) or flash RAM. Memory is an example of a computer readable medium.
Computer readable media include permanent and non-permanent, removable and non-removable media, and can implement information storage by any method or technology. The information can be computer readable instructions, data structures, program modules, or other data. Examples of computer storage media include, but are not limited to, phase-change memory (PRAM), static random access memory (SRAM), dynamic random access memory (DRAM), other types of random access memory (RAM), read-only memory (ROM), electrically erasable programmable read-only memory (EEPROM), flash memory or other memory technologies, compact disc read-only memory (CD-ROM), digital versatile discs (DVD) or other optical storage, magnetic cassettes, magnetic tape or disk storage or other magnetic storage devices, or any other non-transmission media, which can be used to store information accessible by a computing device. As defined herein, computer readable media do not include transitory media, such as modulated data signals and carrier waves.
It should also be noted that the terms "comprise", "include", and any other variants thereof are intended to cover non-exclusive inclusion, so that a process, method, article, or device that includes a series of elements includes not only those elements but also other elements not explicitly listed, or elements inherent to such a process, method, article, or device. Without further limitation, an element defined by the phrase "including a ..." does not exclude the existence of other identical elements in the process, method, article, or device that includes the element.
Those skilled in the art should understand that embodiments of the present application can be provided as a method, a system, or a computer program product. Accordingly, the present application can take the form of an entirely hardware embodiment, an entirely software embodiment, or an embodiment combining software and hardware aspects. Moreover, the present application can take the form of a computer program product implemented on one or more computer-usable storage media (including but not limited to disk storage, CD-ROM, optical storage, and so on) containing computer-usable program code.

Claims (16)

  1. A method for displaying a presentation object according to real-time information, comprising:
    acquiring, in a certain cycle, position information corresponding to real-time information;
    determining a start point according to the position information acquired in the previous cycle, and an end point according to the position information acquired in the current cycle;
    sliding the presentation object between the start point and the end point.
  2. The method according to claim 1, further comprising: if the previous cycle's sliding display has not ended when the current cycle begins, modifying the start point to the display position of the presentation object at the beginning of the current cycle, and canceling the not-yet-completed portion of the previous cycle's sliding display.
  3. The method according to claim 1 or 2, further comprising: if the distance between the start point and the end point is less than a predetermined threshold, displaying the presentation object at the current position and no longer performing the sliding display of the current cycle.
  4. The method according to claim 1, wherein sliding the presentation object between the start point and the end point comprises:
    determining a sliding trajectory of the presentation object between the start point and the end point, and selecting N hash position points on the sliding trajectory, N being a natural number;
    displaying the presentation object in turn, each for a certain single-point display duration, at the start point, the N hash position points, and the end point, forming a sliding effect.
  5. The method according to claim 4, wherein selecting N hash position points on the sliding trajectory comprises: selecting the N hash position points on the sliding trajectory with average, accelerated, or decelerated spacing.
  6. The method according to claim 1, wherein the presentation object comprises: a real-time picture and/or a real-time video.
  7. The method according to claim 1, wherein the presentation object comprises: a real-time avatar video of the user;
    the position information corresponding to the real-time information comprises: an X-axis proportional coordinate corresponding to the rotation angle of the front of the user's real-time avatar in the horizontal direction, and a Y-axis proportional coordinate corresponding to the rotation angle of the front of the user's real-time avatar in the vertical direction;
    determining the start point according to the position information acquired in the previous cycle and the end point according to the position information acquired in the current cycle comprises: converting the previous cycle's X-axis and Y-axis proportional coordinates into the position coordinates of the start point on the screen according to the width and length of the screen, and converting the current cycle's X-axis and Y-axis proportional coordinates into the position coordinates of the end point on the screen according to the width and length of the screen.
  8. The method according to claim 7, further comprising: when the presentation object slides to a predetermined marked position on the screen, verifying the user's real-time avatar, and completing the user login after the verification succeeds.
  9. An apparatus for displaying a presentation object according to real-time information, comprising:
    a position information acquiring unit, configured to acquire, in a certain cycle, position information corresponding to real-time information;
    a start and end point determining unit, configured to determine a start point according to the position information acquired in the previous cycle and an end point according to the position information acquired in the current cycle;
    a sliding display unit, configured to slide the presentation object between the start point and the end point.
  10. The apparatus according to claim 9, further comprising: a start point modifying unit, configured to, when the previous cycle's sliding display has not ended at the beginning of the current cycle, modify the start point to the display position of the presentation object at the beginning of the current cycle and cancel the not-yet-completed portion of the previous cycle's sliding display.
  11. The apparatus according to claim 9 or 10, further comprising: a slide canceling unit, configured to display the presentation object at the current position, without performing the sliding display of the current cycle, when the distance between the start point and the end point is less than a predetermined threshold.
  12. The apparatus according to claim 9, wherein the sliding display unit comprises:
    a hash point determining module, configured to determine a sliding trajectory of the presentation object between the start point and the end point, and select N hash position points on the sliding trajectory, N being a natural number;
    a hash point display module, configured to display the presentation object in turn, each for a certain single-point display duration, at the start point, the N hash position points, and the end point, forming a sliding effect.
  13. The apparatus according to claim 12, wherein the hash point determining module is specifically configured to: determine the sliding trajectory of the presentation object between the start point and the end point, and select the N hash position points on the sliding trajectory with average, accelerated, or decelerated spacing.
  14. The apparatus according to claim 9, wherein the presentation object comprises: a real-time picture and/or a real-time video.
  15. The apparatus according to claim 9, wherein the presentation object comprises: a real-time avatar video of the user;
    the position information corresponding to the real-time information comprises: an X-axis proportional coordinate corresponding to the rotation angle of the front of the user's real-time avatar in the horizontal direction, and a Y-axis proportional coordinate corresponding to the rotation angle of the front of the user's real-time avatar in the vertical direction;
    the start and end point determining unit is specifically configured to: convert the previous cycle's X-axis and Y-axis proportional coordinates into the position coordinates of the start point on the screen according to the width and length of the screen, and convert the current cycle's X-axis and Y-axis proportional coordinates into the position coordinates of the end point on the screen according to the width and length of the screen.
  16. The apparatus according to claim 15, further comprising: a verification login unit, configured to verify the user's real-time avatar when the presentation object slides to a predetermined marked position on the screen, and to complete the user login after the verification succeeds.
PCT/CN2016/107014 2015-12-04 2016-11-24 Method and apparatus for displaying a presentation object according to real-time information WO2017092597A1 (zh)

Priority Applications (8)

Application Number Priority Date Filing Date Title
EP16869907.2A EP3367228B1 (en) 2015-12-04 2016-11-24 Method and apparatus for displaying display object according to real-time information
SG11201804351PA SG11201804351PA (en) 2015-12-04 2016-11-24 Method and apparatus for displaying display object according to real-time information
KR1020187018757A KR102148583B1 (ko) 2015-12-04 2016-11-24 실시간 정보에 따라 디스플레이 객체를 디스플레이하는 방법 및 장치
JP2018528716A JP6640356B2 (ja) 2015-12-04 2016-11-24 リアルタイム情報に従って表示オブジェクトを表示する装置および方法
AU2016363434A AU2016363434B2 (en) 2015-12-04 2016-11-24 Method and apparatus for displaying display object according to real-time information
US15/993,071 US10551912B2 (en) 2015-12-04 2018-05-30 Method and apparatus for displaying display object according to real-time information
PH12018501175A PH12018501175A1 (en) 2015-12-04 2018-06-04 Method and apparatus for displaying display object according to real-time information
AU2019101558A AU2019101558A4 (en) 2015-12-04 2019-12-12 Method and apparatus for displaying display object according to real-time information

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN201510884339.7A 2015-12-04 2016-11-24 Method and apparatus for displaying a presentation object according to real-time information
CN201510884339.7 2015-12-04

Related Child Applications (1)

Application Number Title Priority Date Filing Date
US15/993,071 Continuation US10551912B2 (en) 2015-12-04 2018-05-30 Method and apparatus for displaying display object according to real-time information

Publications (1)

Publication Number Publication Date
WO2017092597A1 (zh)

Family

ID=58796282

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2016/107014 WO2017092597A1 (zh) 2015-12-04 2016-11-24 根据实时信息显示展现对象的方法和装置

Country Status (10)

Country Link
US (1) US10551912B2 (zh)
EP (1) EP3367228B1 (zh)
JP (1) JP6640356B2 (zh)
KR (1) KR102148583B1 (zh)
CN (1) CN106843709B (zh)
AU (2) AU2016363434B2 (zh)
MY (1) MY181211A (zh)
PH (1) PH12018501175A1 (zh)
SG (1) SG11201804351PA (zh)
WO (1) WO2017092597A1 (zh)

Families Citing this family (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107506025A * 2017-07-25 2017-12-22 北京小鸟看看科技有限公司 Information display method and apparatus for a head-mounted display device, and head-mounted display device
CN107952238B * 2017-11-23 2020-11-17 香港乐蜜有限公司 Video generation method, apparatus, and electronic device

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103425403A * 2012-05-14 2013-12-04 华为技术有限公司 Method, apparatus, and system for moving display content across screens
WO2014000513A1 * 2012-06-29 2014-01-03 北京汇冠新技术股份有限公司 Touch trajectory tracking method
CN104777984A * 2015-04-30 2015-07-15 青岛海信电器股份有限公司 Touch trajectory tracking method and apparatus, and touch-screen device
CN104915001A * 2015-06-03 2015-09-16 北京嘿哈科技有限公司 Screen control method and apparatus

Family Cites Families (49)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
EP0461577B1 (en) * 1990-06-11 1998-12-02 Hitachi, Ltd. Apparatus for generating object motion path
US5594469A (en) * 1995-02-21 1997-01-14 Mitsubishi Electric Information Technology Center America Inc. Hand gesture machine control system
FI117488B (fi) 2001-05-16 2006-10-31 Myorigo Sarl Browsing information on a display
US7173623B2 (en) * 2003-05-09 2007-02-06 Microsoft Corporation System supporting animation of graphical display elements through animation object instances
KR20070055420A (ko) * 2004-06-08 2007-05-30 쓰리-비 인터내셔날 리미티드 Display of graphic textures
JP5631535B2 (ja) 2005-02-08 2014-11-26 オブロング・インダストリーズ・インコーポレーテッド System and method for a gesture-based control system
JP4989052B2 (ja) * 2005-07-28 2012-08-01 任天堂株式会社 Input data processing program and information processing apparatus
WO2007022306A2 (en) 2005-08-17 2007-02-22 Hillcrest Laboratories, Inc. Hover-buttons for user interfaces
US8411913B2 (en) 2008-06-17 2013-04-02 The Hong Kong Polytechnic University Partial fingerprint recognition
JP2011134278A (ja) 2009-12-25 2011-07-07 Toshiba Corp Information processing apparatus and pointing control method
JP5382007B2 (ja) * 2010-02-22 2014-01-08 株式会社デンソー Movement trajectory display device
WO2011106797A1 (en) 2010-02-28 2011-09-01 Osterhout Group, Inc. Projection triggering through an external marker in an augmented reality eyepiece
KR101005719B1 (ko) 2010-05-18 2011-01-06 주식회사 슈프리마 Rotary fingerprint acquisition apparatus and method for automatically recognizing the start and end of registration and synthesis
JP2012008893A (ja) * 2010-06-25 2012-01-12 Hakko Denki Kk Drawing support system and support device in a drawing support system
BR112013001537B8 (pt) 2010-07-19 2021-08-24 Risst Ltd Fingerprint sensors and systems incorporating fingerprint sensors
WO2012030958A1 (en) 2010-08-31 2012-03-08 Activate Systems, Inc. Methods and apparatus for improved motion capture
US9316827B2 (en) 2010-09-20 2016-04-19 Kopin Corporation LifeBoard—series of home pages for head mounted displays (HMD) that respond to head tracking
JP5418532B2 (ja) * 2011-03-29 2014-02-19 アイシン・エィ・ダブリュ株式会社 Display device, display device control method, and program
US8643680B2 (en) * 2011-04-08 2014-02-04 Amazon Technologies, Inc. Gaze-based content display
KR101788006B1 (ko) * 2011-07-18 2017-10-19 엘지전자 주식회사 Remote control device and image display device controllable by the remote control device
US9082235B2 (en) * 2011-07-12 2015-07-14 Microsoft Technology Licensing, Llc Using facial data for device authentication or subject identification
US9305361B2 (en) * 2011-09-12 2016-04-05 Qualcomm Incorporated Resolving homography decomposition ambiguity based on orientation sensors
US8261090B1 (en) * 2011-09-28 2012-09-04 Google Inc. Login to a computing device based on facial recognition
US8743055B2 (en) * 2011-10-13 2014-06-03 Panasonic Corporation Hybrid pointing system and method
US8666719B2 (en) * 2011-10-25 2014-03-04 Livermore Software Technology Corp. Methods and systems for numerically simulating muscle movements along bones and around joints
CN103376994A (zh) * 2012-04-11 2013-10-30 宏碁股份有限公司 Electronic device and method for controlling the electronic device
US9417660B2 (en) 2012-04-25 2016-08-16 Kopin Corporation Collapsible head set computer
CN102799376A (zh) * 2012-07-11 2012-11-28 广东欧珀移动通信有限公司 Shortcut function setting method for a touch device
CN102830797B (zh) * 2012-07-26 2015-11-25 深圳先进技术研究院 Gaze-based human-computer interaction method and system
JP2014092940A (ja) * 2012-11-02 2014-05-19 Sony Corp Image display device, image display method, and computer program
US9569992B2 (en) * 2012-11-15 2017-02-14 Semiconductor Energy Laboratory Co., Ltd. Method for driving information processing device, program, and information processing device
JPWO2014084224A1 (ja) * 2012-11-27 2017-01-05 京セラ株式会社 Electronic device and gaze input method
US9058519B2 (en) 2012-12-17 2015-06-16 Qualcomm Incorporated System and method for passive live person verification using real-time eye reflection
US9134793B2 (en) 2013-01-04 2015-09-15 Kopin Corporation Headset computer with head tracking input used for inertial control
CN103941988A (zh) * 2013-01-20 2014-07-23 上海博路信息技术有限公司 Gesture unlocking method
US9208583B2 (en) * 2013-02-13 2015-12-08 Blackberry Limited Device with enhanced augmented reality functionality
KR102214503B1 (ko) * 2013-03-26 2021-02-09 삼성전자주식회사 Fingerprint recognition method and electronic device therefor
KR102121592B1 (ko) * 2013-05-31 2020-06-10 삼성전자주식회사 Method and apparatus for protecting eyesight
JP2015012304A (ja) * 2013-06-26 2015-01-19 ソニー株式会社 Image processing apparatus, image processing method, and program
US9857876B2 (en) * 2013-07-22 2018-01-02 Leap Motion, Inc. Non-linear motion capture using Frenet-Serret frames
US9898642B2 (en) 2013-09-09 2018-02-20 Apple Inc. Device, method, and graphical user interface for manipulating user interfaces based on fingerprint sensor inputs
JP5924555B2 (ja) * 2014-01-06 2016-05-25 コニカミノルタ株式会社 Object stop position control method, operation display device, and program
JP2015162071A (ja) * 2014-02-27 2015-09-07 株式会社ニコン Electronic apparatus
JP6057937B2 (ja) 2014-03-14 2017-01-11 株式会社コロプラ Game object control program and game object control method
CN104142807A (zh) * 2014-08-02 2014-11-12 合一网络技术(北京)有限公司 Method and system for drawing images with OpenGL based on Android controls
US9672415B2 (en) * 2014-11-13 2017-06-06 Intel Corporation Facial liveness detection in image biometrics
CN104915099A (zh) * 2015-06-16 2015-09-16 努比亚技术有限公司 Icon arrangement method and terminal device
CN104951773B (zh) * 2015-07-12 2018-10-02 上海微桥电子科技有限公司 Real-time face recognition monitoring system
CN105100934A (zh) * 2015-08-29 2015-11-25 天脉聚源(北京)科技有限公司 Method and apparatus for displaying information of users participating in interaction


Also Published As

Publication number Publication date
JP6640356B2 (ja) 2020-02-05
US10551912B2 (en) 2020-02-04
CN106843709B (zh) 2020-04-14
AU2016363434B2 (en) 2019-10-31
SG11201804351PA (en) 2018-06-28
EP3367228A4 (en) 2019-06-26
AU2019101558A4 (en) 2020-01-23
PH12018501175A1 (en) 2019-01-21
JP2018536243A (ja) 2018-12-06
US20180275750A1 (en) 2018-09-27
AU2016363434A1 (en) 2018-06-21
KR20180088450A (ko) 2018-08-03
EP3367228B1 (en) 2022-02-09
EP3367228A1 (en) 2018-08-29
MY181211A (en) 2020-12-21
CN106843709A (zh) 2017-06-13
KR102148583B1 (ko) 2020-08-27

Similar Documents

Publication Publication Date Title
US11893689B2 (en) Automated three dimensional model generation
CN110495166B (zh) 2021-01-12 Computer-implemented method, computing apparatus, and readable storage medium
US9789403B1 (en) System for interactive image based game
US20170371450A1 (en) Detecting tap-based user input on a mobile device based on motion sensor data
US8660362B2 (en) Combined depth filtering and super resolution
CN105242888B (zh) 2018-11-13 System control method and electronic device
WO2017092679A1 (zh) 2017-06-08 Eye tracking method, apparatus, and device
US9824723B1 (en) Direction indicators for panoramic images
WO2017173933A1 (zh) 2017-10-12 Method, apparatus, and system for displaying item images
US11200414B2 (en) Process for capturing content from a document
US8885878B2 (en) Interactive secret sharing
WO2017092597A1 (zh) 2017-06-08 Method and apparatus for displaying a presentation object according to real-time information
US10410425B1 (en) Pressure-based object placement for augmented reality applications
US10895913B1 (en) Input control for augmented reality applications
JP2015118577A5 (zh)
US9857869B1 (en) Data optimization
CN104978130B (zh) 2018-12-25 Information processing method and intelligent terminal
CN115756181A (zh) 2023-03-07 Control method, apparatus, device, and medium

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 16869907

Country of ref document: EP

Kind code of ref document: A1

WWE Wipo information: entry into national phase

Ref document number: 2016869907

Country of ref document: EP

WWE Wipo information: entry into national phase

Ref document number: 11201804351P

Country of ref document: SG

WWE Wipo information: entry into national phase

Ref document number: 2018528716

Country of ref document: JP

WWE Wipo information: entry into national phase

Ref document number: 12018501175

Country of ref document: PH

NENP Non-entry into the national phase

Ref country code: DE

ENP Entry into the national phase

Ref document number: 2016363434

Country of ref document: AU

Date of ref document: 20161124

Kind code of ref document: A

ENP Entry into the national phase

Ref document number: 20187018757

Country of ref document: KR

Kind code of ref document: A