WO2019041982A1 - Display content control method and apparatus, system, storage medium and electronic device - Google Patents


Info

Publication number
WO2019041982A1
WO2019041982A1 (PCT/CN2018/092232)
Authority
WO
WIPO (PCT)
Prior art keywords: display content, target, determining, instruction, smart device
Application number
PCT/CN2018/092232
Other languages
English (en)
French (fr)
Inventor
张尧
Original Assignee
北京京东金融科技控股有限公司
Application filed by 北京京东金融科技控股有限公司
Publication of WO2019041982A1

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00 Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/20 Movements or behaviour, e.g. gesture recognition
    • G06V40/28 Recognition of hand or arm movements, e.g. recognition of deaf sign language
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00 Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01 Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/017 Gesture based interaction, e.g. based on a set of recognized hand gestures
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00 Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/20 Movements or behaviour, e.g. gesture recognition

Definitions

  • the present disclosure relates to the field of display control technologies, and in particular, to a display content control method, a display content control device, a display content control system, a storage medium, and an electronic device.
  • the content and progress of the video playback can be controlled through the background.
  • the content of the video playback is fixed, that is, the customer cannot control the video content according to their own needs.
  • An object of the present disclosure is to provide a display content control method, a display content control device, a display content control system, a storage medium, and an electronic device, thereby at least partially overcoming one or more of the problems caused by the limitations and disadvantages of the related art.
  • a display content control method including:
  • the display content is controlled according to an instruction corresponding to the target gesture operation.
  • the target gesture operation is a target gesture action.
  • determining the target gesture operation based on the human motion trajectory comprises:
  • the target gesture action is determined based on the direction of motion of the hand.
  • determining the target gesture action based on the direction of motion of the hand comprises:
  • the gesture motion corresponding to the movement direction of the hand is determined as the target gesture motion.
  • controlling the display content according to the instruction corresponding to the target gesture operation comprises:
  • the target display content corresponding to the instruction is sent to the smart device to display the target display content.
  • sending the target display content corresponding to the instruction to the smart device to display the target display content comprises:
  • the target display content corresponding to the instruction is sent to the smart device to display the target display content.
  • a display content control apparatus including:
  • a coordinate receiving module configured to receive coordinates of human skeleton points captured by the somatosensory camera;
  • a motion trajectory determining module configured to determine a human motion trajectory according to the human skeleton point coordinates;
  • a gesture operation determining module configured to determine a target gesture operation based on the human motion trajectory;
  • the display content control module is configured to control the display content according to an instruction corresponding to the target gesture operation.
  • the target gesture operation is a target gesture action.
  • the gesture operation determining module comprises:
  • a coordinate system establishing sub-module for establishing a three-dimensional coordinate system and setting the position of the somatosensory camera as the initial coordinate point;
  • the motion direction determining sub-module is configured to determine the moving direction of the hand according to the human body motion trajectory centering on the elbow coordinate;
  • the gesture action determining sub-module is configured to determine the target gesture action based on the direction of motion of the hand.
  • the gesture action determining submodule comprises:
  • a deflection angle determining unit configured to determine whether a deflection angle of the hand is greater than a predetermined deflection angle
  • the gesture action determining unit is configured to determine a gesture action corresponding to the movement direction of the hand as the target gesture action when determining that the deflection angle of the hand is greater than the preset deflection angle.
  • the display content control module comprises:
  • displaying a content sending submodule configured to send the target display content corresponding to the instruction to the smart device to display the target display content.
  • the display content sending submodule comprises:
  • a command request sending unit configured to send a command request related to the instruction to the smart device
  • An acquisition request receiving unit configured to receive a target display content acquisition request sent by the smart device in response to the command request
  • the display content transmitting unit is configured to send the target display content corresponding to the instruction to the smart device to display the target display content.
  • a display content control system including:
  • a somatosensory camera for capturing the coordinates of human skeleton points and sending them;
  • a server configured to receive the human skeleton point coordinates and determine a human motion trajectory according to them; determine a target gesture operation based on the human motion trajectory; and determine an instruction corresponding to the target gesture operation;
  • a smart device that controls the display content according to the instructions.
  • the target gesture operation is a target gesture action.
  • the server, when determining the target gesture operation, is further configured to establish a three-dimensional coordinate system and set the position of the somatosensory camera as the initial coordinate point; determine the movement direction of the hand according to the human motion trajectory, centering on the elbow coordinate; and determine the target gesture action based on the direction of motion of the hand.
  • the server, when determining the target gesture action, is further configured to determine whether the deflection angle of the hand is greater than a preset deflection angle; and, when determining that the deflection angle of the hand is greater than the preset deflection angle, determine the gesture action corresponding to the movement direction of the hand as the target gesture action.
  • the server is further configured to send the target display content corresponding to the instruction to the smart device; the smart device is further configured to display the target display content.
  • the server is further configured to send a command request related to the instruction to the smart device, receive a target display content acquisition request sent by the smart device in response to the command request, and send the target display content corresponding to the instruction to the smart device in response to the acquisition request;
  • the smart device is further configured to send the target display content acquisition request to the server in response to the command request sent by the server, and receive the target display content sent by the server in response to the target display content acquisition request.
  • a storage medium having stored thereon a computer program that, when executed by a processor, implements the display content control method of any of the above.
  • an electronic device including:
  • a memory for storing executable instructions of the processor
  • the processor is configured to execute the display content control method of any of the above by executing executable instructions.
  • the gesture operation is determined by the human motion trajectory, and the display content is controlled according to the gesture operation.
  • On the one hand, the association between human body motion and display content is realized, and the display content can be controlled by human gesture operations; on the other hand, the solution shown in the present disclosure is simple and easy to implement, and the user can control the display content conveniently and quickly, thereby intuitively obtaining the desired information.
  • FIG. 1 schematically illustrates a flowchart of a display content control method according to an exemplary embodiment of the present disclosure
  • FIG. 2 schematically shows a block diagram of a display content control apparatus according to an exemplary embodiment of the present disclosure
  • FIG. 3 schematically shows a block diagram of a gesture operation determination module according to an exemplary embodiment of the present disclosure
  • FIG. 4 schematically illustrates a block diagram of a gesture action determination sub-module according to an exemplary embodiment of the present disclosure
  • FIG. 5 schematically shows a block diagram of a display content control module according to an exemplary embodiment of the present disclosure
  • FIG. 6 is a block diagram schematically showing a display content transmitting submodule according to an exemplary embodiment of the present disclosure
  • FIG. 7 shows a schematic diagram of a display content control system according to an exemplary embodiment of the present disclosure
  • FIG. 8 illustrates a schematic diagram of a storage medium according to an exemplary embodiment of the present disclosure
  • FIG. 9 schematically illustrates a block diagram of an electronic device in accordance with an exemplary embodiment of the present disclosure.
  • the display content control method of the present disclosure will be described below by taking switching a product introduction video in a shopping mall as an example.
  • the present disclosure is not limited thereto, and the content described below can also be applied to any scene in which display content is controlled.
  • the display content control method described in the present disclosure can be applied to an airport flight inquiry system to facilitate passengers to query flight information, and the like.
  • FIG. 1 schematically shows a display content control method of an exemplary embodiment of the present disclosure.
  • the display content control method may include the following steps:
  • a somatosensory camera can acquire bone point coordinate data of a human body through an SDK (Software Development Kit) provided by the somatosensory camera.
  • the somatosensory camera can acquire the skeleton point coordinate data of the human body movement or the posture motion through some function interfaces.
  • the somatosensory camera may be a Kinect device.
  • the somatosensory camera may be other image capturing devices such as a Leap Motion device, a RealSense device, or the like.
  • the developer can set the actual captured user position.
  • the user who is facing the somatosensory camera can be determined as the object actually captured by the somatosensory camera.
  • the human skeleton point coordinate data can be sent to the server.
  • the server can analyze the skeleton point coordinate data to determine the human body motion trajectory.
  • the analyzed human motion trajectory may be, for example, that the arm swings 90 degrees from left to right, that the human body moves from position A to position B, or that the hand swings through an angle of 5 degrees.
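As a rough illustration of the trajectory analysis above, the sketch below collects per-frame joint coordinates into a trajectory and measures the resulting displacement. The frame format, joint names, and sample values are hypothetical stand-ins for whatever the somatosensory camera's SDK actually delivers; they are not the patent's or any SDK's real API.

```python
import math
from dataclasses import dataclass

@dataclass
class SkeletonFrame:
    # Hypothetical format: one (x, y, z) coordinate per tracked joint,
    # expressed in the camera-centred coordinate system.
    joints: dict  # joint name -> (x, y, z)

def hand_trajectory(frames, joint="hand_left"):
    """Collect the per-frame positions of one joint into a motion trajectory."""
    return [f.joints[joint] for f in frames if joint in f.joints]

def total_displacement(trajectory):
    """Straight-line distance between the first and last sampled positions."""
    return math.dist(trajectory[0], trajectory[-1])

# Made-up sample: the left hand moves 0.6 units along the x-axis.
frames = [
    SkeletonFrame({"hand_left": (0.0, 1.0, 2.0)}),
    SkeletonFrame({"hand_left": (0.3, 1.0, 2.0)}),
    SkeletonFrame({"hand_left": (0.6, 1.0, 2.0)}),
]
traj = hand_trajectory(frames)
print(round(total_displacement(traj), 6))  # 0.6
```

A real server would of course receive these frames over the network and analyze them continuously rather than in a fixed list.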
  • the target gesture operation may be an action operation that the developer specifies itself that satisfies certain conditions.
  • the target gesture operation may be a target gesture action.
  • the target gesture operation may also be other motion operations, such as kicking, shaking the head, moving the human body, and the like.
  • the server can establish a three-dimensional coordinate system based on position information and set the position of the somatosensory camera as the initial coordinate point; next, the movement direction of the hand can be determined according to the human body motion trajectory, centering on the user's elbow coordinate; then, the target gesture action can be determined based on the determined direction of movement of the hand. Specifically, it can be determined whether the deflection angle of the hand is greater than a preset deflection angle; when the deflection angle of the hand is greater than the preset deflection angle, the gesture action corresponding to the direction of motion of the hand is determined as the target gesture action. By setting the preset deflection angle, the direction of motion of the hand can be better divided, thereby reducing the influence of uncertain factors on the control operation.
  • the server can recognize the coordinates of the user's left elbow, and when the user's left hand swings from left to right, the server can acquire the coordinate position of the left hand in real time. Assuming that the preset deflection angle is 60 degrees, when the deflection angle of the left-hand swing is 40 degrees, although the left hand undergoes a displacement, the server does not judge that the user has performed a left-to-right wave motion; when the swing deflection angle is greater than 60 degrees, the server can determine that the target gesture action is a left-to-right swing.
  • Thus, swinging the arm through 70 degrees and through 100 degrees has the same effect: both are determined as the left-to-right target gesture action.
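The elbow-centred deflection-angle check described above can be sketched as follows. The 60-degree threshold matches the example in the text; the coordinate values, function names, and the left/right classification by x-coordinate are illustrative assumptions, not the patent's actual implementation.

```python
import math

PRESET_DEFLECTION_ANGLE = 60.0  # degrees, as in the example above

def deflection_angle(elbow, start, end):
    """Angle (degrees) swept by the hand around the elbow between two samples.

    All points are (x, y, z) in a coordinate system whose origin is the
    somatosensory camera position.
    """
    v0 = [s - e for s, e in zip(start, elbow)]
    v1 = [s - e for s, e in zip(end, elbow)]
    dot = sum(a * b for a, b in zip(v0, v1))
    n0 = math.sqrt(sum(a * a for a in v0))
    n1 = math.sqrt(sum(a * a for a in v1))
    cos_theta = max(-1.0, min(1.0, dot / (n0 * n1)))  # clamp rounding error
    return math.degrees(math.acos(cos_theta))

def classify_swing(elbow, start, end):
    """Return a target gesture action only when the preset angle is exceeded."""
    if deflection_angle(elbow, start, end) <= PRESET_DEFLECTION_ANGLE:
        return None  # e.g. a 40-degree swing is ignored, as in the text
    return "left_to_right" if end[0] > start[0] else "right_to_left"

elbow = (0.0, 0.0, 0.0)
print(classify_swing(elbow, (-0.5, 0.3, 0.0), (0.5, 0.3, 0.0)))  # left_to_right
print(classify_swing(elbow, (-0.1, 0.5, 0.0), (0.1, 0.5, 0.0)))  # None
```

Both a 70-degree and a 100-degree swing clear the 60-degree threshold, so both map to the same gesture action, matching the behaviour described above.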
  • the display content control method of the present disclosure may further include the step of constructing a gesture operation and an instruction mapping relationship.
  • the gesture operation is a gesture action
  • the gesture motion of the left hand waving from left to right may be defined as "L", and so on.
  • each of the instructions may be in one-to-one correspondence with the display content.
  • the display content of the present disclosure may be one of a text, a picture, a video, or a combination thereof, which is not specifically limited in the exemplary embodiment.
  • the instructions may also correspond to other display control operations, such as fast forward, pause, switch sounds, switch adjacent stored display content, and the like.
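The gesture-to-instruction mapping described above can be sketched as two lookup tables. Apart from the "L" label mentioned in the text, every gesture label, instruction name, and content ID below is made up for illustration.

```python
# Hypothetical mapping tables. Per the text, the left-hand left-to-right
# wave is labelled "L"; the other labels are illustrative.
GESTURE_TO_INSTRUCTION = {
    "L": "SWITCH_PREVIOUS",  # left-to-right wave -> previous stored video
    "R": "SWITCH_NEXT",      # right-to-left wave -> next stored video
    "U": "PAUSE",            # instructions may also map to playback controls
}

# Each switch instruction corresponds one-to-one with a piece of display
# content (text, picture, or video); playback controls carry no content.
INSTRUCTION_TO_CONTENT = {
    "SWITCH_PREVIOUS": "video_041.mp4",
    "SWITCH_NEXT": "video_043.mp4",
}

def resolve(gesture):
    """Map a recognized target gesture action to (instruction, content)."""
    instruction = GESTURE_TO_INSTRUCTION.get(gesture)
    if instruction is None:
        return None, None
    return instruction, INSTRUCTION_TO_CONTENT.get(instruction)

print(resolve("L"))  # ('SWITCH_PREVIOUS', 'video_041.mp4')
print(resolve("U"))  # ('PAUSE', None) - playback control, no new content
```

Keeping the two tables separate mirrors the text's one-to-one correspondence between instructions and display content while still allowing instructions such as fast forward or pause that control the current content only.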
  • the server may send the target display content corresponding to the instruction to a smart device, and the smart device may be connected to the display screen for displaying the target display content, and the smart device may be a display control device having an information transceiving function.
  • the server stores the display content in the form of a message queue, and can establish a connection with the smart device through a WebSocket connection.
  • the server can actively send the target display content to the smart device, so that the smart device sends the target display content to the display screen for the user to understand the corresponding information.
  • the server may store the target gesture operations in Redis in chronological order, and may send a command request related to the instruction of the target gesture operation to the smart device; the smart device may generate a target display content acquisition request in response to the command request and send it to the server, where the acquisition request may include information related to the instruction; after receiving the target display content acquisition request, the server analyzes the request to determine the corresponding target display content, sends it to the smart device, and the smart device controls the display of the target display content.
  • the scheme for transmitting the target display content to the smart device can thus be divided into two cases: the server actively pushes the content to the smart device, or the smart device sends an acquisition request to the server. The processing method can be chosen according to the hardware conditions and the information processing state, which improves the flexibility of interaction between the smart device and the server.
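The two delivery modes above can be sketched with plain in-memory objects. A real deployment would use the WebSocket connection, message queue, and Redis store mentioned in the text; the class and method names here are purely illustrative stand-ins for that transport.

```python
from collections import deque

class SmartDevice:
    def __init__(self):
        self.displayed = None
        self.outbox = deque()  # acquisition requests sent back to the server

    def receive_push(self, content):
        """Mode 1: the server actively pushes the target display content."""
        self.displayed = content

    def receive_command_request(self, instruction):
        """Mode 2: respond to a command request with an acquisition request."""
        self.outbox.append({"instruction": instruction})

class Server:
    def __init__(self, content_by_instruction):
        self.content = content_by_instruction

    def push(self, device, instruction):
        device.receive_push(self.content[instruction])

    def notify_then_serve(self, device, instruction):
        device.receive_command_request(instruction)   # command request
        request = device.outbox.popleft()             # acquisition request
        device.receive_push(self.content[request["instruction"]])

server = Server({"SWITCH_NEXT": "video_043.mp4"})
device = SmartDevice()
server.push(device, "SWITCH_NEXT")               # mode 1: active push
print(device.displayed)                          # video_043.mp4
server.notify_then_serve(device, "SWITCH_NEXT")  # mode 2: request/pull
print(device.displayed)                          # video_043.mp4
```

Both modes end with the same content on the device; the difference is only whether the device gets a chance to ask before the payload is sent, which is what lets the system adapt to hardware and processing conditions.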
  • when the customer finds that the product introduction video played on the display screen is not about the product that he wants to know, he can walk up to the display screen and stand directly opposite it.
  • the somatosensory camera can be placed directly above the display screen to capture the skeleton point coordinates of the customer standing opposite the display screen; next, the customer can swing an arm, and the somatosensory camera can transmit the skeleton point coordinate data of the arm to the server.
  • when the deflection angle of the customer's arm meets the preset deflection angle requirement, the server analyzes the target gesture operation according to the motion track of the arm, determines an instruction corresponding to the target gesture operation, and sends the target video corresponding to the instruction to the smart device; then, the smart device sends the target video to the display screen to implement switching control of the display content.
  • in this solution, the gesture operation corresponds to a display control operation; for example, when the customer swings the arm from left to right, the display content may be switched from the current video to the previous video stored adjacent to it; when the customer swings the arm from right to left, the display content may be switched from the current video to the next video stored adjacent to it.
  • the gesture operation may correspond to operations such as fast forward, pause, switch sound, etc. to control the display content of the current video.
  • the gesture operation is determined by the human motion trajectory, and the display content is controlled according to the gesture operation.
  • the association between the human body motion and the display content is realized, and the display content can be controlled by the human body posture operation;
  • the solution shown in the present disclosure is simple and easy to implement, and the user can control the display content conveniently and quickly, thereby intuitively obtaining the information that is desired to be understood.
  • a display content control apparatus is further provided in the exemplary embodiment.
  • FIG. 2 schematically shows a block diagram of a display content control device of an exemplary embodiment of the present disclosure.
  • the display content control device 2 may include a coordinate receiving module 21, a motion trajectory determining module 23, a gesture operation determining module 25, and a display content control module 27, wherein:
  • the coordinate receiving module 21 can be configured to receive coordinates of a human skeleton point captured by the somatosensory camera;
  • the motion trajectory determining module 23 can be configured to determine a human motion trajectory according to the coordinates of the human skeleton point;
  • a gesture operation determining module 25 configured to determine a target gesture operation based on a human motion trajectory
  • the display content control module 27 can be configured to control the display content according to an instruction corresponding to the target gesture operation.
  • With the display content control device of the present disclosure, on the one hand, the association between human body motion and display content is realized, and the display content can be controlled by human gesture operations; on the other hand, the solution shown in the present disclosure is simple and easy to implement, and the user can control the display content conveniently and quickly, thereby intuitively obtaining the desired information.
  • the target gesture operation may be a target gesture action.
  • By setting the gesture operation as a gesture action, it is convenient for the user to implement the control process for the display content.
  • the gesture operation determining module 25 may include a coordinate system establishing sub-module 301, a motion direction determining sub-module 303, and a gesture action determining sub-module 305, where:
  • a coordinate system establishing sub-module 301 which can be used to establish a three-dimensional coordinate system and set the position of the somatosensory camera as an initial coordinate point;
  • the motion direction determining sub-module 303 can be configured to determine the moving direction of the hand according to the human body motion trajectory centering on the elbow coordinate;
  • the gesture action determining sub-module 305 can be configured to determine a target gesture action based on the direction of motion of the hand.
  • the gesture action determination sub-module 305 may include a deflection angle determination unit 4001 and a gesture action determination unit 4003, where:
  • the deflection angle determining unit 4001 can be configured to determine whether the deflection angle of the hand is greater than a predetermined deflection angle
  • the gesture action determining unit 4003 can be configured to determine a gesture action corresponding to the motion direction of the hand as the target gesture action when determining that the deflection angle of the hand is greater than the preset deflection angle.
  • the direction of motion of the hand can be better divided, thereby reducing the influence of uncertain factors on the control operation.
  • the display content control module 27 may include an instruction determination sub-module 501 and a display content transmission sub-module 503, wherein:
  • the instruction determining submodule 501 can be configured to determine an instruction that is mapped to the target gesture operation
  • the display content sending submodule 503 can be configured to send the target display content corresponding to the instruction to the smart device to display the target display content.
  • the display content transmitting sub-module 503 may include a command request transmitting unit 6001, an acquisition request receiving unit 6003, and a display content transmitting unit 6005, wherein:
  • the command request sending unit 6001 may be configured to send a command request related to the instruction to the smart device;
  • the obtaining request receiving unit 6003 is configured to receive a target display content obtaining request sent by the smart device in response to the command request;
  • the display content transmitting unit 6005 is configured to send the target display content corresponding to the instruction to the smart device to display the target display content.
  • the scheme for transmitting the target display content to the smart device can thus be divided into two cases: the server actively pushes the content to the smart device, or the smart device sends an acquisition request to the server. The processing method can be chosen according to the hardware conditions and the information processing state, which improves the flexibility of interaction between the smart device and the server.
  • Since the functional modules of the display content control device of the embodiment of the present invention are the same as those of the above-described method embodiment, they are not described herein again.
  • a display content control system is further provided in the exemplary embodiment.
  • FIG. 7 schematically shows a block diagram of a display content control system of an exemplary embodiment of the present disclosure.
  • a display content control system may include a somatosensory camera 71, a server 72, and a smart device 73, in which:
  • the somatosensory camera 71 can be used to capture the coordinates of human skeleton points and send them;
  • the server 72 may be configured to receive coordinates of a human bone point and determine a motion track of the human body according to coordinates of the human bone point; determine a target posture operation based on the motion track of the human body; and determine an instruction corresponding to the target posture operation;
  • the smart device 73 can be used to control the display content according to the command.
  • the display content control system shown in the present disclosure may further include a display screen 74.
  • the display content control system of the present disclosure on the one hand, the association between the human body action and the display content is realized, and the display content can be controlled by the human body posture operation; on the other hand, the solution shown in the present disclosure is simple and easy to implement, and the user can conveniently And quickly control the display content to intuitively get the information you want to know.
  • the target gesture operation is a target gesture action.
  • By setting the gesture operation as a gesture action, it is convenient for the user to implement the control process for the display content.
  • the server 72, when determining the target gesture operation, is further configured to establish a three-dimensional coordinate system and set the position of the somatosensory camera 71 as the initial coordinate point; determine the direction of motion of the hand according to the human body motion trajectory, centering on the elbow coordinate; and determine the target gesture action based on the direction of motion of the hand.
  • the server 72, when determining the target gesture action, is further configured to determine whether the deflection angle of the hand is greater than a preset deflection angle; and, when determining that the deflection angle of the hand is greater than the preset deflection angle, determine the gesture action corresponding to the movement direction of the hand as the target gesture action.
  • the direction of motion of the hand can be better divided, thereby reducing the influence of uncertain factors on the control operation.
  • the server 72 is further configured to transmit the target display content corresponding to the instruction to the smart device 73; the smart device 73 is further configured to display the target display content.
  • the server 72 is further configured to send a command request related to the instruction to the smart device 73, receive the target display content acquisition request sent by the smart device 73 in response to the command request, and send the target display content corresponding to the instruction to the smart device 73 in response to the acquisition request;
  • the smart device 73 is further configured to transmit a target display content acquisition request to the server 72 in response to a command request sent by the server 72, and receive the target display content transmitted by the server 72 in response to the target display content acquisition request.
  • the scheme for transmitting the target display content to the smart device can thus be divided into two cases: the server actively pushes the content to the smart device, or the smart device sends an acquisition request to the server. The processing method can be chosen according to the hardware conditions and the information processing state, which improves the flexibility of interaction between the smart device and the server.
  • a computer readable storage medium having stored thereon a program product capable of implementing the above method of the present specification.
  • aspects of the present invention may also be embodied in the form of a program product comprising program code; when the program product runs on a terminal device, the program code causes the terminal device to perform the steps according to various exemplary embodiments of the present invention described in the "Exemplary Method" section of this specification.
  • a program product 800 for implementing the above method is illustrated in accordance with an embodiment of the present invention; it may employ a portable compact disk read-only memory (CD-ROM), includes program code, and can run on a terminal device.
  • the program product of the present invention is not limited thereto; in this document, a readable storage medium may be any tangible medium containing or storing a program that can be used by or in connection with an instruction execution system, apparatus, or device.
  • the program product can employ any combination of one or more readable media.
  • the readable medium can be a readable signal medium or a readable storage medium.
  • the readable storage medium can be, for example, but not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any combination of the above. More specific examples (a non-exhaustive list) of readable storage media include: an electrical connection with one or more wires, a portable disk, a hard disk, random access memory (RAM), read-only memory (ROM), erasable programmable read-only memory (EPROM or flash memory), optical fiber, portable compact disk read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing.
  • the computer readable signal medium may include a data signal that is propagated in the baseband or as part of a carrier, carrying readable program code. Such propagated data signals can take a variety of forms including, but not limited to, electromagnetic signals, optical signals, or any suitable combination of the foregoing.
  • the readable signal medium can also be any readable medium other than a readable storage medium that can transmit, propagate, or transport a program for use by or in connection with an instruction execution system, apparatus, or device.
  • Program code embodied on a readable medium can be transmitted using any suitable medium, including but not limited to wireless, wireline, optical cable, RF, etc., or any suitable combination of the foregoing.
  • Program code for performing the operations of the present invention may be written in any combination of one or more programming languages, including object-oriented programming languages such as Java and C++, as well as conventional procedural programming languages such as the "C" language or similar programming languages.
  • the program code can execute entirely on the user computing device, partially on the user computing device as a stand-alone software package, partially on the user computing device and partially on a remote computing device, or entirely on the remote computing device or server.
  • The remote computing device may be connected to the user computing device through any kind of network, including a local area network (LAN) or a wide area network (WAN), or may be connected to an external computing device (for example, through the Internet using an Internet service provider).
  • an electronic device capable of implementing the above method is also provided.
  • An electronic device 900 according to such an embodiment of the present invention is described below with reference to FIG. 9. The electronic device 900 shown in FIG. 9 is merely an example and should not impose any limitation on the functions or scope of use of the embodiments of the present invention.
  • electronic device 900 is represented in the form of a general purpose computing device.
  • the components of the electronic device 900 may include, but are not limited to, the at least one processing unit 910, the at least one storage unit 920, the bus 930 connecting the different system components (including the storage unit 920 and the processing unit 910), and the display unit 940.
  • The storage unit stores program code, which can be executed by the processing unit 910, such that the processing unit 910 performs the steps according to the various exemplary embodiments of the present invention described in the "Exemplary Method" section of this specification.
  • The processing unit 910 may perform step S10 as shown in FIG. 1: receiving human skeleton point coordinates captured by a somatosensory camera; step S12: determining a human motion trajectory according to the human skeleton point coordinates; step S14: determining a target posture operation based on the human motion trajectory; and step S16: controlling display content according to an instruction corresponding to the target posture operation.
  • the storage unit 920 can include a readable medium in the form of a volatile storage unit, such as a random access storage unit (RAM) 9201 and/or a cache storage unit 9202, and can further include a read only storage unit (ROM) 9203.
  • The storage unit 920 may also include a program/utility 9204 having a set of (at least one) program modules 9205, such program modules including but not limited to: an operating system, one or more applications, other program modules, and program data; each or some combination of these examples may include an implementation of a network environment.
  • Bus 930 may represent one or more of several types of bus structures, including a storage unit bus or storage unit controller, a peripheral bus, a graphics acceleration port, a processing unit, or a local bus using any of a variety of bus structures.
  • The electronic device 900 may also communicate with one or more external devices 1000 (e.g., a keyboard, a pointing device, a Bluetooth device, etc.), with one or more devices that enable a user to interact with the electronic device 900, and/or with any device (e.g., a router, a modem, etc.) that enables the electronic device 900 to communicate with one or more other computing devices. Such communication may take place via an input/output (I/O) interface 950. Moreover, the electronic device 900 may communicate with one or more networks (e.g., a local area network (LAN), a wide area network (WAN), and/or a public network such as the Internet) through a network adapter 960.
  • The network adapter 960 communicates with the other modules of the electronic device 900 via the bus 930. It should be understood that, although not shown in the figures, other hardware and/or software modules may be used in conjunction with the electronic device 900, including but not limited to: microcode, device drivers, redundant processing units, external disk drive arrays, RAID systems, tape drives, data backup storage systems, and the like.
  • The technical solution according to an embodiment of the present disclosure may be embodied in the form of a software product, which may be stored in a non-volatile storage medium (which may be a CD-ROM, a USB flash drive, a removable hard disk, etc.) or on a network, and includes a number of instructions to cause a computing device (which may be a personal computer, a server, a terminal device, a network device, etc.) to perform the method according to an embodiment of the present disclosure.
  • Although several modules or units of the device for action execution are mentioned in the detailed description above, such division is not mandatory. Indeed, in accordance with embodiments of the present disclosure, the features and functions of two or more modules or units described above may be embodied in one module or unit. Conversely, the features and functions of one module or unit described above may be further divided into multiple modules or units.

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Human Computer Interaction (AREA)
  • General Physics & Mathematics (AREA)
  • Physics & Mathematics (AREA)
  • General Health & Medical Sciences (AREA)
  • Social Psychology (AREA)
  • Psychiatry (AREA)
  • Health & Medical Sciences (AREA)
  • Multimedia (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • General Engineering & Computer Science (AREA)
  • User Interface Of Digital Computer (AREA)

Abstract

Disclosed are a display content control method and apparatus, system, storage medium, and electronic device, relating to the technical field of display control. The display content control method comprises: receiving human skeleton point coordinates captured by a somatosensory camera; determining a human motion trajectory according to the human skeleton point coordinates; determining a target posture operation based on the human motion trajectory; and controlling display content according to an instruction corresponding to the target posture operation. The present disclosure enables control of display content through human posture operations.

Description

Display content control method and apparatus, system, storage medium, and electronic device
Technical Field
The present disclosure relates to the technical field of display control and, in particular, to a display content control method, a display content control apparatus, a display content control system, a storage medium, and an electronic device.
Background
With the development of display technology, shopping malls, stations, shops, supermarkets, and the like are all equipped with display devices (for example, display screens) so that people can conveniently obtain all kinds of information. Taking a shopping mall as an example, when a customer wants to learn about a product, the customer can ask a shopping guide or obtain some basic information from the product's label; to learn more detailed product information, the customer can choose to watch a product introduction video, which may contain the product's model, manufacturer, instructions for use, price, and other information so that the customer can gain a comprehensive understanding of the product.
The content and progress of video playback can be controlled from the backend. For the customer, however, the played content is fixed; that is, the customer cannot control the video content according to his or her own needs.
In view of this, a new display content control method is needed.
It should be noted that the information disclosed in the Background section above is only intended to enhance understanding of the background of the present disclosure and may therefore include information that does not constitute prior art already known to a person of ordinary skill in the art.
Summary
An object of the present disclosure is to provide a display content control method, a display content control apparatus, a display content control system, a storage medium, and an electronic device, thereby overcoming, at least to some extent, one or more problems caused by the limitations and drawbacks of the related art.
According to one aspect of the present disclosure, a display content control method is provided, comprising:
receiving human skeleton point coordinates captured by a somatosensory camera;
determining a human motion trajectory according to the human skeleton point coordinates;
determining a target posture operation based on the human motion trajectory; and
controlling display content according to an instruction corresponding to the target posture operation.
Preferably, the target posture operation is a target gesture action.
Preferably, determining the target posture operation based on the human motion trajectory comprises:
establishing a three-dimensional coordinate system and setting the position of the somatosensory camera as the initial coordinate point;
determining the direction of hand movement from the human motion trajectory, with the elbow coordinates as the center; and
determining the target gesture action based on the direction of hand movement.
Preferably, determining the target gesture action based on the direction of hand movement comprises:
determining whether the deflection angle of the hand is greater than a preset deflection angle; and
when the deflection angle of the hand is determined to be greater than the preset deflection angle, determining the gesture action corresponding to the direction of hand movement as the target gesture action.
Preferably, controlling the display content according to the instruction corresponding to the target posture operation comprises:
determining an instruction that is in a mapping relationship with the target posture operation; and
sending target display content corresponding to the instruction to a smart device to display the target display content.
Preferably, sending the target display content corresponding to the instruction to the smart device to display the target display content comprises:
sending a command request related to the instruction to the smart device;
receiving a target display content acquisition request sent by the smart device in response to the command request; and
sending the target display content corresponding to the instruction to the smart device to display the target display content.
According to one aspect of the present disclosure, a display content control apparatus is provided, comprising:
a coordinate receiving module configured to receive human skeleton point coordinates captured by a somatosensory camera;
a motion trajectory determining module configured to determine a human motion trajectory according to the human skeleton point coordinates;
a posture operation determining module configured to determine a target posture operation based on the human motion trajectory; and
a display content control module configured to control display content according to an instruction corresponding to the target posture operation.
Preferably, the target posture operation is a target gesture action.
Preferably, the posture operation determining module comprises:
a coordinate system establishing submodule configured to establish a three-dimensional coordinate system and set the position of the somatosensory camera as the initial coordinate point;
a movement direction determining submodule configured to determine the direction of hand movement from the human motion trajectory, with the elbow coordinates as the center; and
a gesture action determining submodule configured to determine the target gesture action based on the direction of hand movement.
Preferably, the gesture action determining submodule comprises:
a deflection angle judging unit configured to determine whether the deflection angle of the hand is greater than a preset deflection angle; and
a gesture action determining unit configured to, when the deflection angle of the hand is determined to be greater than the preset deflection angle, determine the gesture action corresponding to the direction of hand movement as the target gesture action.
Preferably, the display content control module comprises:
an instruction determining submodule configured to determine an instruction that is in a mapping relationship with the target posture operation; and
a display content sending submodule configured to send target display content corresponding to the instruction to a smart device to display the target display content.
Preferably, the display content sending submodule comprises:
a command request sending unit configured to send a command request related to the instruction to the smart device;
an acquisition request receiving unit configured to receive a target display content acquisition request sent by the smart device in response to the command request; and
a display content sending unit configured to send the target display content corresponding to the instruction to the smart device to display the target display content.
According to one aspect of the present disclosure, a display content control system is provided, comprising:
a somatosensory camera configured to capture and send human skeleton point coordinates;
a server configured to receive the human skeleton point coordinates and determine a human motion trajectory according to them, determine a target posture operation based on the human motion trajectory, and determine an instruction corresponding to the target posture operation; and
a smart device configured to control display content according to the instruction.
Preferably, the target posture operation is a target gesture action.
Preferably, when determining the target posture operation, the server is further configured to establish a three-dimensional coordinate system and set the position of the somatosensory camera as the initial coordinate point; determine the direction of hand movement from the human motion trajectory, with the elbow coordinates as the center; and determine the target gesture action based on the direction of hand movement.
Preferably, when determining the target gesture action, the server is further configured to determine whether the deflection angle of the hand is greater than a preset deflection angle and, when the deflection angle of the hand is determined to be greater than the preset deflection angle, determine the gesture action corresponding to the direction of hand movement as the target gesture action.
Preferably, the server is further configured to send target display content corresponding to the instruction to the smart device; the smart device is further configured to display the target display content.
Preferably, the server is further configured to send a command request related to the instruction to the smart device, receive a target display content acquisition request sent by the smart device in response to the command request, and send the target display content to the smart device; and
the smart device is further configured to send the target display content acquisition request to the server in response to the command request sent by the server, and to receive the target display content sent by the server in response to the target display content acquisition request.
According to one aspect of the present disclosure, a storage medium is provided, on which a computer program is stored; when executed by a processor, the computer program implements the display content control method of any of the items above.
According to one aspect of the present disclosure, an electronic device is provided, comprising:
a processor; and
a memory configured to store executable instructions for the processor;
wherein the processor is configured to perform, by executing the executable instructions, the display content control method of any of the items above.
In the technical solutions provided by some embodiments of the present disclosure, the posture operation is determined from the human motion trajectory, and the display content is controlled according to the posture operation. On the one hand, this associates human actions with display content, so that display content can be controlled through human posture operations; on the other hand, the solution shown in the present disclosure is simple and easy to implement, allowing the user to control the display content conveniently and quickly and thus intuitively obtain the desired information.
It should be understood that the foregoing general description and the following detailed description are merely exemplary and explanatory, and do not limit the present disclosure.
Brief Description of the Drawings
The accompanying drawings herein are incorporated into and constitute a part of this specification, illustrate embodiments consistent with the present disclosure, and together with the specification serve to explain the principles of the present disclosure. Obviously, the drawings in the following description are only some embodiments of the present disclosure, and a person of ordinary skill in the art may derive other drawings from them without creative effort. In the drawings:
FIG. 1 schematically shows a flowchart of a display content control method according to an exemplary embodiment of the present disclosure;
FIG. 2 schematically shows a block diagram of a display content control apparatus according to an exemplary embodiment of the present disclosure;
FIG. 3 schematically shows a block diagram of a posture operation determining module according to an exemplary embodiment of the present disclosure;
FIG. 4 schematically shows a block diagram of a gesture action determining submodule according to an exemplary embodiment of the present disclosure;
FIG. 5 schematically shows a block diagram of a display content control module according to an exemplary embodiment of the present disclosure;
FIG. 6 schematically shows a block diagram of a display content sending submodule according to an exemplary embodiment of the present disclosure;
FIG. 7 shows a schematic diagram of a display content control system according to an exemplary embodiment of the present disclosure;
FIG. 8 shows a schematic diagram of a storage medium according to an exemplary embodiment of the present disclosure; and
FIG. 9 schematically shows a block diagram of an electronic device according to an exemplary embodiment of the present disclosure.
Detailed Description
Example embodiments will now be described more fully with reference to the accompanying drawings. However, example embodiments can be implemented in a variety of forms and should not be construed as limited to the examples set forth herein; rather, these embodiments are provided so that this disclosure will be thorough and complete and will fully convey the concepts of the example embodiments to those skilled in the art. The described features, structures, or characteristics may be combined in any suitable manner in one or more embodiments. In the following description, numerous specific details are provided to give a thorough understanding of the embodiments of the present disclosure. However, those skilled in the art will appreciate that the technical solutions of the present disclosure may be practiced with one or more of the specific details omitted, or with other methods, components, apparatuses, steps, and so on. In other instances, well-known technical solutions are not shown or described in detail to avoid obscuring aspects of the present disclosure.
Furthermore, the drawings are merely schematic illustrations of the present disclosure and are not necessarily drawn to scale. The same reference numerals in the drawings denote the same or similar parts, and their repeated description will be omitted. Some of the block diagrams shown in the drawings are functional entities and do not necessarily correspond to physically or logically separate entities. These functional entities may be implemented in software, or in one or more hardware modules or integrated circuits, or in different networks and/or processor apparatuses and/or microcontroller apparatuses.
The flowcharts shown in the drawings are merely exemplary and need not include all of the steps. For example, some steps may be decomposed, while others may be combined or partially combined, so the actual order of execution may change according to the actual situation.
The display content control method of the present disclosure will be described below by taking the switching of product introduction videos in a shopping mall as an example. However, the present disclosure is not limited thereto, and the content described below can also be applied to any scenario in which display content is controlled. For example, the display content control method described in the present disclosure can be applied to a flight inquiry system at an airport so that passengers can conveniently look up flight information, and so on.
FIG. 1 schematically shows a display content control method according to an exemplary embodiment of the present disclosure. Referring to FIG. 1, the display content control method may include the following steps:
S10. Receive human skeleton point coordinates captured by a somatosensory camera.
In an exemplary embodiment of the present disclosure, the somatosensory camera can obtain skeleton point coordinate data of the human body through the SDK (Software Development Kit) it provides. In this case, when the human body moves or performs a posture action such as a gesture operation, the somatosensory camera can obtain the skeleton point coordinate data of the movement or posture action through certain functional interfaces.
According to some embodiments of the present disclosure, the somatosensory camera may be a Kinect device. However, it is not limited thereto; the somatosensory camera may also be another image capture device, for example, a Leap Motion device, a RealSense device, or the like.
In addition, to handle the situation in which the somatosensory camera may capture multiple users, developers can specify the user position that is actually captured; typically, the user directly facing the somatosensory camera can be determined as the object actually captured by the camera.
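A minimal sketch of that selection rule is shown below, assuming the sensor reports one lateral offset per tracked body. The `Body` type and its `lateral_offset` field are hypothetical simplifications for illustration; a real somatosensory SDK (e.g. Kinect) exposes full joint maps rather than a single offset.

```python
from dataclasses import dataclass
from typing import List, Optional

@dataclass
class Body:
    user_id: int
    lateral_offset: float  # metres from the camera's optical axis; 0.0 = dead centre

def pick_tracked_user(bodies: List[Body]) -> Optional[Body]:
    """Choose the body directly facing the camera: the one whose
    horizontal offset from the optical axis is smallest."""
    if not bodies:
        return None
    return min(bodies, key=lambda b: abs(b.lateral_offset))

# Two users in frame; the one nearer the centre line is tracked.
users = [Body(user_id=1, lateral_offset=-0.8), Body(user_id=2, lateral_offset=0.1)]
print(pick_tracked_user(users).user_id)  # → 2
```

In practice the criterion could also be the body closest to the screen, but the idea is the same: reduce several detected skeletons to the single one whose coordinates are forwarded to the server.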
After the somatosensory camera captures the human skeleton point coordinates, it can send the skeleton point coordinate data to a server.
S12. Determine a human motion trajectory according to the human skeleton point coordinates.
After receiving the skeleton point coordinate data sent by the somatosensory camera in real time, the server can analyze the data to determine the human motion trajectory. For example, the analyzed motion trajectory may be an arm swinging 90 degrees from left to right, the body moving from position A to position B, a hand swinging through an angle of 5 degrees, and so on.
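The kind of trajectory summary the server might derive from a stream of per-frame coordinates can be sketched as follows. The 2-D simplification and the function name are illustrative assumptions; the patent itself works with full 3-D skeleton points.

```python
import math

def trajectory_summary(points):
    """Summarise the per-frame (x, y) positions of one skeleton point:
    total path length travelled and net displacement from first to last frame."""
    path = 0.0
    for (x0, y0), (x1, y1) in zip(points, points[1:]):
        path += math.hypot(x1 - x0, y1 - y0)
    net = (points[-1][0] - points[0][0], points[-1][1] - points[0][1])
    return path, net

# A hand moving steadily to the right across three frames.
print(trajectory_summary([(0, 0), (1, 0), (2, 0)]))  # → (2.0, (2, 0))
```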
S14. Determine a target posture operation based on the human motion trajectory.
In an exemplary embodiment of the present disclosure, the target posture operation may be an action operation, defined by the developer, that satisfies certain conditions. Specifically, to make the somatosensory control process convenient for the user, the target posture operation may be a target gesture action. However, the target posture operation may also be another action operation, for example, a leg kick, a head shake, body movement, and so on.
First, the server may establish a three-dimensional coordinate system based on position information and may set the position of the somatosensory camera as the initial coordinate point. Next, with the coordinates of the user's elbow as the center, the direction of hand movement may be determined from the human motion trajectory. Then, the target gesture action may be determined based on the detected direction of hand movement. Specifically, it may be determined whether the deflection angle of the hand is greater than a preset deflection angle; when the deflection angle of the hand is determined to be greater than the preset deflection angle, the gesture action corresponding to that direction of hand movement may be determined as the target gesture action. Setting a preset deflection angle makes it easier to distinguish the direction of hand movement, thereby reducing the influence of uncertain factors on the control operation.
For example, after the server establishes the three-dimensional coordinate system, it can identify the coordinates of the user's left elbow; when the user's left hand swings from left to right, the server can obtain the coordinate position of the left hand in real time. Assuming the preset deflection angle is 60 degrees: when the left hand swings through a deflection angle of 40 degrees, although the left hand has been displaced, the server still cannot conclude that the user has performed a left-to-right wave; when the swing's deflection angle exceeds 60 degrees, the server can determine that the target gesture action is a left-to-right wave. In addition, it should be understood that, in some embodiments of the present disclosure, swings with deflection angles of 70 degrees and 100 degrees produce the same effect: both determine the target gesture action to be left-to-right.
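The elbow-centred deflection-angle check described above can be sketched as follows. The 60-degree threshold comes from the example in the text; the 2-D geometry and the function names are illustrative assumptions.

```python
import math

PRESET_DEFLECTION_DEG = 60  # threshold from the example above; tunable in practice

def hand_deflection_deg(elbow, start_hand, end_hand):
    """Angle swept by the elbow-to-hand vector between two frames."""
    v0 = (start_hand[0] - elbow[0], start_hand[1] - elbow[1])
    v1 = (end_hand[0] - elbow[0], end_hand[1] - elbow[1])
    dot = v0[0] * v1[0] + v0[1] * v1[1]
    cos_a = dot / (math.hypot(*v0) * math.hypot(*v1))
    cos_a = max(-1.0, min(1.0, cos_a))  # guard against rounding drift
    return math.degrees(math.acos(cos_a))

def detect_swipe(elbow, start_hand, end_hand):
    """Report a swipe direction only once the preset deflection angle
    is exceeded; smaller swings are ignored, as in the 40-degree example."""
    if hand_deflection_deg(elbow, start_hand, end_hand) <= PRESET_DEFLECTION_DEG:
        return None
    return "left-to-right" if end_hand[0] > start_hand[0] else "right-to-left"

# A 90-degree swing around the elbow clearly exceeds the threshold.
print(detect_swipe((0, 0), (-1, 0), (0.0, 1.0)))  # → left-to-right
```

A 70-degree and a 100-degree swing both return the same direction here, matching the statement above that any swing beyond the threshold has the same effect.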
It should be readily understood that the foregoing descriptions, such as setting the position of the somatosensory camera as the initial coordinate point and the left hand swinging from left to right, are merely exemplary illustrations of the solution described by the present disclosure, and the scope of the present disclosure is not limited thereto.
S16. Control display content according to an instruction corresponding to the target posture operation.
After the target posture operation is determined, an instruction that is in a mapping relationship with the target posture operation can be determined. That is, the display content control method described in the present disclosure may further include a step of building a mapping between posture operations and instructions. For example, where the posture operation is a gesture action, the left-to-right wave of the left hand may be defined as "L", and so on. In addition, each instruction may correspond one-to-one with display content; the display content described in the present disclosure may be one of text, pictures, or video, or a combination thereof, which is not specially limited in this exemplary embodiment. It should be understood, however, that instructions may also correspond to other display control operations, for example, fast-forward, pause, muting/unmuting, switching to adjacently stored display content, and so on.
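A minimal sketch of such a two-level mapping is given below. The text only specifies that a left-to-right wave is encoded as "L"; the "R" instruction, the gesture labels, and the content file names are hypothetical.

```python
# Gesture action -> instruction (the "L" entry follows the example in the text).
GESTURE_TO_INSTRUCTION = {
    "left-to-right": "L",
    "right-to-left": "R",  # assumed counterpart, not specified in the text
}

# Instruction -> target display content (one-to-one; names are illustrative).
INSTRUCTION_TO_CONTENT = {
    "L": "previous_video.mp4",
    "R": "next_video.mp4",
}

def resolve_content(gesture):
    """Look up the instruction mapped to a gesture, then the content
    mapped to that instruction; unknown gestures resolve to (None, None)."""
    instruction = GESTURE_TO_INSTRUCTION.get(gesture)
    return instruction, INSTRUCTION_TO_CONTENT.get(instruction)

print(resolve_content("left-to-right"))  # → ('L', 'previous_video.mp4')
```

Instructions mapped to control operations such as fast-forward or pause would simply point at an action handler instead of a content item.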
Next, the server may send the target display content corresponding to the instruction to a smart device; the smart device may be connected to a display screen for displaying the target display content, and may be a display control device with information transceiving capability.
According to some embodiments of the present disclosure, the server stores display content in the form of a message queue and may establish a connection with the smart device via a WebSocket connection. In this case, the server can actively push the target display content to the smart device, so that the smart device sends the target display content to the display screen for the user to view the corresponding information.
According to other embodiments, the server may store the target posture operations in Redis in chronological order and may send the smart device a command request related to the instruction for the target posture operation; the smart device may, in response to the command request, generate a target display content acquisition request and send it to the server, where the acquisition request may include information related to the instruction; after receiving the acquisition request, the server analyzes it to determine the corresponding display content, sends that content to the smart device as the target display content, and the smart device controls the display of the target display content.
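The two-step command-request / acquisition-request exchange can be sketched with plain in-memory objects standing in for the WebSocket/Redis transport described above; all class and field names here are illustrative assumptions, not part of the disclosure.

```python
class Server:
    """Holds display content keyed by instruction and answers fetch requests."""
    def __init__(self, content_by_instruction):
        self.content_by_instruction = content_by_instruction
        self.pending_ops = []  # target posture ops in time order (Redis in the text)

    def make_command_request(self, instruction):
        """Record the operation and build the command request pushed to the device."""
        self.pending_ops.append(instruction)
        return {"type": "command", "instruction": instruction}

    def handle_fetch(self, fetch_request):
        """Analyse the acquisition request and return the matching content."""
        return self.content_by_instruction[fetch_request["instruction"]]

class SmartDevice:
    """On a command request, asks the server for the matching content and shows it."""
    def __init__(self, server):
        self.server = server
        self.displayed = None

    def on_command_request(self, command):
        fetch_request = {"type": "fetch", "instruction": command["instruction"]}
        self.displayed = self.server.handle_fetch(fetch_request)

server = Server({"L": "camera_intro.mp4"})
device = SmartDevice(server)
device.on_command_request(server.make_command_request("L"))
print(device.displayed)  # → camera_intro.mp4
```

The indirection (command request first, content only on explicit fetch) lets the device decide when it is ready to receive, which is the flexibility point made in the next paragraph.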
The exemplary embodiments of the present disclosure thus offer two ways of sending target display content to the smart device: the server pushes it actively, or the server sends it after receiving an acquisition request. The processing method can be adjusted according to hardware conditions and the information processing state, which improves the flexibility of interaction between the smart device and the server.
The entire process of the display content control method of the present disclosure will be described below by taking the switching of product introduction videos in a shopping mall as an example.
First, when a customer finds that the product introduction video playing on a display screen is not about the product he or she wants to learn about, the customer can walk to the screen and stand directly in front of it; the somatosensory camera can be placed directly above the display screen so that it can capture the skeleton point coordinates of the customer standing in front of the screen. Next, the customer can wave an arm, and the somatosensory camera can send the skeleton point coordinate data of the arm to the server; when the deflection angle of the customer's arm satisfies the preset deflection angle requirement, the server determines the target posture operation from the arm's motion trajectory, determines the instruction corresponding to the target posture operation, and sends the target video corresponding to that instruction to the smart device. The smart device then sends the target video to the display screen, completing the switching control of the display content.
According to other embodiments, in a solution where posture operations correspond to display control operations: for example, when the customer waves an arm from left to right, the display content can switch from the current video to the previous video stored adjacent to it; when the customer waves an arm from right to left, the display content can switch from the current video to the next video stored adjacent to it.
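That adjacent-video switching rule can be sketched as follows. The wrap-around at the ends of the playlist is an assumption for illustration; the text does not say what happens at the first or last video.

```python
def switch_video(playlist, current, gesture):
    """Left-to-right swipe selects the previous adjacently stored video,
    right-to-left the next one; ends wrap around (assumption)."""
    i = playlist.index(current)
    if gesture == "left-to-right":
        i = (i - 1) % len(playlist)
    elif gesture == "right-to-left":
        i = (i + 1) % len(playlist)
    return playlist[i]

videos = ["phone.mp4", "camera.mp4", "laptop.mp4"]
print(switch_video(videos, "camera.mp4", "left-to-right"))  # → phone.mp4
```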
According to still other embodiments, posture operations may correspond to operations such as fast-forward, pause, and muting/unmuting, so as to control the display of the current video.
In the display content control method of the present disclosure, the posture operation is determined from the human motion trajectory, and the display content is controlled according to the posture operation. On the one hand, this associates human actions with display content, so that display content can be controlled through human posture operations; on the other hand, the solution shown in the present disclosure is simple and easy to implement, allowing the user to control the display content conveniently and quickly and thus intuitively obtain the desired information.
It should be noted that although the steps of the method of the present disclosure are described in a particular order in the drawings, this does not require or imply that the steps must be performed in that particular order, or that all of the illustrated steps must be performed to achieve the desired result. Additionally or alternatively, some steps may be omitted, multiple steps may be combined into one step, and/or one step may be decomposed into multiple steps, and so on.
Further, this example embodiment also provides a display content control apparatus.
FIG. 2 schematically shows a block diagram of a display content control apparatus according to an exemplary embodiment of the present disclosure. Referring to FIG. 2, a display content control apparatus 2 according to an exemplary embodiment of the present disclosure may include a coordinate receiving module 21, a motion trajectory determining module 23, a posture operation determining module 25, and a display content control module 27, wherein:
the coordinate receiving module 21 may be configured to receive human skeleton point coordinates captured by a somatosensory camera;
the motion trajectory determining module 23 may be configured to determine a human motion trajectory according to the human skeleton point coordinates;
the posture operation determining module 25 may be configured to determine a target posture operation based on the human motion trajectory; and
the display content control module 27 may be configured to control display content according to an instruction corresponding to the target posture operation.
In the display content control apparatus of the present disclosure, on the one hand, human actions are associated with display content, so that display content can be controlled through human posture operations; on the other hand, the solution shown in the present disclosure is simple and easy to implement, allowing the user to control the display content conveniently and quickly and thus intuitively obtain the desired information.
According to an exemplary embodiment of the present disclosure, the target posture operation may be a target gesture operation.
Limiting posture operations to gesture operations helps the user carry out the display content control process conveniently.
According to an exemplary embodiment of the present disclosure, referring to FIG. 3, the posture operation determining module 25 may include a coordinate system establishing submodule 301, a movement direction determining submodule 303, and a gesture action determining submodule 305, wherein:
the coordinate system establishing submodule 301 may be configured to establish a three-dimensional coordinate system and set the position of the somatosensory camera as the initial coordinate point;
the movement direction determining submodule 303 may be configured to determine the direction of hand movement from the human motion trajectory, with the elbow coordinates as the center; and
the gesture action determining submodule 305 may be configured to determine the target gesture action based on the direction of hand movement.
According to an exemplary embodiment of the present disclosure, referring to FIG. 4, the gesture action determining submodule 305 may include a deflection angle judging unit 4001 and a gesture action determining unit 4003, wherein:
the deflection angle judging unit 4001 may be configured to determine whether the deflection angle of the hand is greater than a preset deflection angle; and
the gesture action determining unit 4003 may be configured to, when the deflection angle of the hand is determined to be greater than the preset deflection angle, determine the gesture action corresponding to the direction of hand movement as the target gesture action.
Setting a preset deflection angle makes it easier to distinguish the direction of hand movement, thereby reducing the influence of uncertain factors on the control operation.
According to an exemplary embodiment of the present disclosure, referring to FIG. 5, the display content control module 27 may include an instruction determining submodule 501 and a display content sending submodule 503, wherein:
the instruction determining submodule 501 may be configured to determine an instruction that is in a mapping relationship with the target posture operation; and
the display content sending submodule 503 may be configured to send target display content corresponding to the instruction to a smart device to display the target display content.
According to an exemplary embodiment of the present disclosure, referring to FIG. 6, the display content sending submodule 503 may include a command request sending unit 6001, an acquisition request receiving unit 6003, and a display content sending unit 6005, wherein:
the command request sending unit 6001 may be configured to send a command request related to the instruction to the smart device;
the acquisition request receiving unit 6003 may be configured to receive a target display content acquisition request sent by the smart device in response to the command request; and
the display content sending unit 6005 may be configured to send the target display content corresponding to the instruction to the smart device to display the target display content.
The exemplary embodiments of the present disclosure thus offer two ways of sending target display content to the smart device: the server pushes it actively, or the server sends it after receiving an acquisition request. The processing method can be adjusted according to hardware conditions and the information processing state, which improves the flexibility of interaction between the smart device and the server.
Since the functional modules of the display content control apparatus of this embodiment of the present invention are the same as those of the method embodiment described above, they are not described again here.
Further, this example embodiment also provides a display content control system.
FIG. 7 schematically shows a block diagram of a display content control system according to an exemplary embodiment of the present disclosure. Referring to FIG. 7, the display content control system according to an exemplary embodiment of the present disclosure may include a somatosensory camera 71, a server 72, and a smart device 73, wherein:
the somatosensory camera 71 may be configured to capture and send human skeleton point coordinates;
the server 72 may be configured to receive the human skeleton point coordinates and determine a human motion trajectory according to them, determine a target posture operation based on the human motion trajectory, and determine an instruction corresponding to the target posture operation; and
the smart device 73 may be configured to control display content according to the instruction.
In addition, as shown in FIG. 7, the display content control system shown in the present disclosure may further include a display screen 74.
In the display content control system of the present disclosure, on the one hand, human actions are associated with display content, so that display content can be controlled through human posture operations; on the other hand, the solution shown in the present disclosure is simple and easy to implement, allowing the user to control the display content conveniently and quickly and thus intuitively obtain the desired information.
According to an exemplary embodiment of the present disclosure, the target posture operation is a target gesture action.
Limiting posture operations to gesture operations helps the user carry out the display content control process conveniently.
According to an exemplary embodiment of the present disclosure, when determining the target posture operation, the server 72 is further configured to establish a three-dimensional coordinate system and set the position of the somatosensory camera 71 as the initial coordinate point; determine the direction of hand movement from the human motion trajectory, with the elbow coordinates as the center; and determine the target gesture action based on the direction of hand movement.
According to an exemplary embodiment of the present disclosure, when determining the target gesture action, the server 72 is further configured to determine whether the deflection angle of the hand is greater than a preset deflection angle and, when the deflection angle of the hand is determined to be greater than the preset deflection angle, determine the gesture action corresponding to the direction of hand movement as the target gesture action.
Setting a preset deflection angle makes it easier to distinguish the direction of hand movement, thereby reducing the influence of uncertain factors on the control operation.
According to an exemplary embodiment of the present disclosure, the server 72 is further configured to send target display content corresponding to the instruction to the smart device 73; the smart device 73 is further configured to display the target display content.
According to an exemplary embodiment of the present disclosure, the server 72 is further configured to send a command request related to the instruction to the smart device 73, receive a target display content acquisition request sent by the smart device 73 in response to the command request, and send the target display content to the smart device 73; and
the smart device 73 is further configured to send the target display content acquisition request to the server 72 in response to the command request sent by the server 72, and to receive the target display content sent by the server 72 in response to the target display content acquisition request.
The exemplary embodiments of the present disclosure thus offer two ways of sending target display content to the smart device: the server pushes it actively, or the server sends it after receiving an acquisition request. The processing method can be adjusted according to hardware conditions and the information processing state, which improves the flexibility of interaction between the smart device and the server.
In an exemplary embodiment of the present disclosure, a computer-readable storage medium is also provided, on which a program product capable of implementing the above-described method of this specification is stored. In some possible implementations, various aspects of the present invention may also be implemented in the form of a program product comprising program code; when the program product runs on a terminal device, the program code causes the terminal device to perform the steps according to the various exemplary embodiments of the present invention described in the "Exemplary Method" section above of this specification.
Referring to FIG. 8, a program product 800 for implementing the above method according to an embodiment of the present invention is described; it may take the form of a portable compact disc read-only memory (CD-ROM), include program code, and run on a terminal device, for example a personal computer. However, the program product of the present invention is not limited thereto. In this document, a readable storage medium may be any tangible medium that contains or stores a program for use by or in connection with an instruction execution system, apparatus, or device.
The program product may employ any combination of one or more readable media. The readable medium may be a readable signal medium or a readable storage medium. The readable storage medium may be, for example, but is not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any combination of the above. More specific examples (a non-exhaustive list) of readable storage media include: an electrical connection with one or more wires, a portable disk, a hard disk, random access memory (RAM), read-only memory (ROM), erasable programmable read-only memory (EPROM or flash memory), optical fiber, portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing.
A computer-readable signal medium may include a data signal propagated in baseband or as part of a carrier wave, carrying readable program code. Such a propagated data signal may take a variety of forms, including but not limited to electromagnetic signals, optical signals, or any suitable combination of the foregoing. A readable signal medium may also be any readable medium other than a readable storage medium, which can send, propagate, or transport a program for use by or in connection with an instruction execution system, apparatus, or device.
The program code contained on the readable medium may be transmitted using any suitable medium, including but not limited to wireless, wired, optical cable, RF, and the like, or any suitable combination of the foregoing.
Program code for carrying out the operations of the present invention may be written in any combination of one or more programming languages, including object-oriented programming languages such as Java and C++, as well as conventional procedural programming languages such as the "C" language or similar programming languages. The program code may execute entirely on the user computing device, partly on the user device, as a stand-alone software package, partly on the user computing device and partly on a remote computing device, or entirely on the remote computing device or server. In cases involving a remote computing device, the remote computing device may be connected to the user computing device through any kind of network, including a local area network (LAN) or a wide area network (WAN), or may be connected to an external computing device (for example, through the Internet using an Internet service provider).
In an exemplary embodiment of the present disclosure, an electronic device capable of implementing the above method is also provided.
Those skilled in the art will appreciate that various aspects of the present invention may be implemented as a system, method, or program product. Accordingly, various aspects of the present invention may be embodied in the following forms: an entirely hardware implementation, an entirely software implementation (including firmware, microcode, etc.), or an implementation combining hardware and software aspects, which may collectively be referred to herein as a "circuit", "module", or "system".
An electronic device 900 according to such an embodiment of the present invention is described below with reference to FIG. 9. The electronic device 900 shown in FIG. 9 is merely an example and should not impose any limitation on the functions or scope of use of the embodiments of the present invention.
As shown in FIG. 9, the electronic device 900 is represented in the form of a general-purpose computing device. The components of the electronic device 900 may include, but are not limited to: the at least one processing unit 910, the at least one storage unit 920, a bus 930 connecting different system components (including the storage unit 920 and the processing unit 910), and a display unit 940.
The storage unit stores program code, which can be executed by the processing unit 910, such that the processing unit 910 performs the steps according to the various exemplary embodiments of the present invention described in the "Exemplary Method" section above of this specification. For example, the processing unit 910 may perform step S10 as shown in FIG. 1: receiving human skeleton point coordinates captured by a somatosensory camera; step S12: determining a human motion trajectory according to the human skeleton point coordinates; step S14: determining a target posture operation based on the human motion trajectory; and step S16: controlling display content according to an instruction corresponding to the target posture operation.
The storage unit 920 may include readable media in the form of volatile storage units, such as a random access storage unit (RAM) 9201 and/or a cache storage unit 9202, and may further include a read-only storage unit (ROM) 9203.
The storage unit 920 may also include a program/utility 9204 having a set of (at least one) program modules 9205, such program modules 9205 including but not limited to: an operating system, one or more applications, other program modules, and program data; each or some combination of these examples may include an implementation of a network environment.
The bus 930 may represent one or more of several types of bus structures, including a storage unit bus or storage unit controller, a peripheral bus, a graphics acceleration port, a processing unit, or a local bus using any of a variety of bus structures.
The electronic device 900 may also communicate with one or more external devices 1000 (e.g., a keyboard, a pointing device, a Bluetooth device, etc.), with one or more devices that enable a user to interact with the electronic device 900, and/or with any device (e.g., a router, a modem, etc.) that enables the electronic device 900 to communicate with one or more other computing devices. Such communication may take place via an input/output (I/O) interface 950. Moreover, the electronic device 900 may communicate with one or more networks (e.g., a local area network (LAN), a wide area network (WAN), and/or a public network such as the Internet) through a network adapter 960. As shown in the figure, the network adapter 960 communicates with the other modules of the electronic device 900 via the bus 930. It should be understood that, although not shown in the figures, other hardware and/or software modules may be used in conjunction with the electronic device 900, including but not limited to: microcode, device drivers, redundant processing units, external disk drive arrays, RAID systems, tape drives, data backup storage systems, and the like.
Through the description of the above embodiments, those skilled in the art will readily understand that the example embodiments described herein may be implemented in software or in software combined with the necessary hardware. Accordingly, the technical solution according to an embodiment of the present disclosure may be embodied in the form of a software product, which may be stored in a non-volatile storage medium (which may be a CD-ROM, a USB flash drive, a removable hard disk, etc.) or on a network and includes a number of instructions to cause a computing device (which may be a personal computer, a server, a terminal apparatus, a network device, etc.) to perform the method according to the embodiment of the present disclosure.
In addition, the above drawings are merely schematic illustrations of the processing included in the method according to the exemplary embodiments of the present invention and are not intended for limitation. It is readily understood that the processing shown in the above drawings does not indicate or limit the chronological order of these processes. It is also readily understood that these processes may be performed, for example, synchronously or asynchronously in multiple modules.
It should be noted that although several modules or units of the device for action execution are mentioned in the detailed description above, such division is not mandatory. Indeed, in accordance with embodiments of the present disclosure, the features and functions of two or more modules or units described above may be embodied in one module or unit. Conversely, the features and functions of one module or unit described above may be further divided into multiple modules or units.
Other embodiments of the present disclosure will readily occur to those skilled in the art upon consideration of the specification and practice of the invention disclosed herein. This application is intended to cover any variations, uses, or adaptations of the present disclosure that follow its general principles and include common knowledge or customary technical means in the art not disclosed by the present disclosure. The specification and embodiments are to be regarded as exemplary only, with the true scope and spirit of the present disclosure indicated by the claims.
It should be understood that the present disclosure is not limited to the precise structures described above and shown in the drawings, and that various modifications and changes may be made without departing from its scope. The scope of the present disclosure is limited only by the appended claims.

Claims (20)

  1. A display content control method, comprising:
    receiving human skeleton point coordinates captured by a somatosensory camera;
    determining a human motion trajectory according to the human skeleton point coordinates;
    determining a target posture operation based on the human motion trajectory; and
    controlling display content according to an instruction corresponding to the target posture operation.
  2. The display content control method according to claim 1, wherein the target posture operation is a target gesture action.
  3. The display content control method according to claim 2, wherein determining the target posture operation based on the human motion trajectory comprises:
    establishing a three-dimensional coordinate system and setting the position of the somatosensory camera as the initial coordinate point;
    determining the direction of hand movement from the human motion trajectory, with the elbow coordinates as the center; and
    determining the target gesture action based on the direction of hand movement.
  4. The display content control method according to claim 3, wherein determining the target gesture action based on the direction of hand movement comprises:
    determining whether the deflection angle of the hand is greater than a preset deflection angle; and
    when the deflection angle of the hand is determined to be greater than the preset deflection angle, determining the gesture action corresponding to the direction of hand movement as the target gesture action.
  5. The display content control method according to claim 1, wherein controlling the display content according to the instruction corresponding to the target posture operation comprises:
    determining an instruction that is in a mapping relationship with the target posture operation; and
    sending target display content corresponding to the instruction to a smart device to display the target display content.
  6. The display content control method according to claim 5, wherein sending the target display content corresponding to the instruction to the smart device to display the target display content comprises:
    sending a command request related to the instruction to the smart device;
    receiving a target display content acquisition request sent by the smart device in response to the command request; and
    sending the target display content corresponding to the instruction to the smart device to display the target display content.
  7. A display content control apparatus, comprising:
    a coordinate receiving module configured to receive human skeleton point coordinates captured by a somatosensory camera;
    a motion trajectory determining module configured to determine a human motion trajectory according to the human skeleton point coordinates;
    a posture operation determining module configured to determine a target posture operation based on the human motion trajectory; and
    a display content control module configured to control display content according to an instruction corresponding to the target posture operation.
  8. The display content control apparatus according to claim 7, wherein the target posture operation is a target gesture action.
  9. The display content control apparatus according to claim 8, wherein the posture operation determining module comprises:
    a coordinate system establishing submodule configured to establish a three-dimensional coordinate system and set the position of the somatosensory camera as the initial coordinate point;
    a movement direction determining submodule configured to determine the direction of hand movement from the human motion trajectory, with the elbow coordinates as the center; and
    a gesture action determining submodule configured to determine the target gesture action based on the direction of hand movement.
  10. The display content control apparatus according to claim 9, wherein the gesture action determining submodule comprises:
    a deflection angle judging unit configured to determine whether the deflection angle of the hand is greater than a preset deflection angle; and
    a gesture action determining unit configured to, when the deflection angle of the hand is determined to be greater than the preset deflection angle, determine the gesture action corresponding to the direction of hand movement as the target gesture action.
  11. The display content control apparatus according to claim 7, wherein the display content control module comprises:
    an instruction determining submodule configured to determine an instruction that is in a mapping relationship with the target posture operation; and
    a display content sending submodule configured to send target display content corresponding to the instruction to a smart device to display the target display content.
  12. The display content control apparatus according to claim 11, wherein the display content sending submodule comprises:
    a command request sending unit configured to send a command request related to the instruction to the smart device;
    an acquisition request receiving unit configured to receive a target display content acquisition request sent by the smart device in response to the command request; and
    a display content sending unit configured to send the target display content corresponding to the instruction to the smart device to display the target display content.
  13. A display content control system, comprising:
    a somatosensory camera configured to capture and send human skeleton point coordinates;
    a server configured to receive the human skeleton point coordinates and determine a human motion trajectory according to them, determine a target posture operation based on the human motion trajectory, and determine an instruction corresponding to the target posture operation; and a smart device configured to control display content according to the instruction.
  14. The display content control system according to claim 13, wherein the target posture operation is a target gesture action.
  15. The display content control system according to claim 14, wherein, when determining the target posture operation, the server is further configured to establish a three-dimensional coordinate system and set the position of the somatosensory camera as the initial coordinate point; determine the direction of hand movement from the human motion trajectory, with the elbow coordinates as the center; and determine the target gesture action based on the direction of hand movement.
  16. The display content control system according to claim 15, wherein, when determining the target gesture action, the server is further configured to determine whether the deflection angle of the hand is greater than a preset deflection angle and, when the deflection angle of the hand is determined to be greater than the preset deflection angle, determine the gesture action corresponding to the direction of hand movement as the target gesture action.
  17. The display content control system according to claim 13, wherein the server is further configured to send target display content corresponding to the instruction to the smart device, and the smart device is further configured to display the target display content.
  18. The display content control system according to claim 17, wherein the server is further configured to send a command request related to the instruction to the smart device, receive a target display content acquisition request sent by the smart device in response to the command request, and send the target display content to the smart device; and
    the smart device is further configured to send the target display content acquisition request to the server in response to the command request sent by the server, and to receive the target display content sent by the server in response to the target display content acquisition request.
  19. A storage medium on which a computer program is stored, wherein, when executed by a processor, the computer program implements the display content control method according to any one of claims 1 to 6.
  20. An electronic device, comprising:
    a processor; and
    a memory configured to store executable instructions for the processor;
    wherein the processor is configured to perform, by executing the executable instructions, the display content control method according to any one of claims 1 to 6.
PCT/CN2018/092232 2017-09-04 2018-06-21 Display content control method and apparatus, system, storage medium and electronic device WO2019041982A1 (zh)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN201710785976.8A CN107688791A (zh) 2017-09-04 2017-09-04 Display content control method and apparatus, system, storage medium and electronic device
CN201710785976.8 2017-09-04

Publications (1)

Publication Number Publication Date
WO2019041982A1 true WO2019041982A1 (zh) 2019-03-07

Family

ID=61156003

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2018/092232 WO2019041982A1 (zh) 2017-09-04 2018-06-21 Display content control method and apparatus, system, storage medium and electronic device

Country Status (2)

Country Link
CN (1) CN107688791A (zh)
WO (1) WO2019041982A1 (zh)

Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110543345A (zh) * 2019-08-26 2019-12-06 Oppo广东移动通信有限公司 Wallpaper generation method and apparatus, storage medium, and electronic device
CN112869676A (zh) * 2021-01-11 2021-06-01 佛山市顺德区美的洗涤电器制造有限公司 Control method, control apparatus and display apparatus for a dishwasher, and dishwasher
CN113058259A (zh) * 2021-04-22 2021-07-02 杭州当贝网络科技有限公司 Somatosensory motion recognition method, system and storage medium based on game content

Families Citing this family (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107688791A (zh) 2017-09-04 2018-02-13 北京京东金融科技控股有限公司 Display content control method and apparatus, system, storage medium, and electronic device
CN110794951A (zh) * 2018-08-01 2020-02-14 北京京东尚科信息技术有限公司 Method and apparatus for determining a shopping instruction based on user actions
CN109240494B (zh) * 2018-08-23 2023-09-12 京东方科技集团股份有限公司 Control method for an electronic display board, computer-readable storage medium, and control system
CN109200576A (zh) * 2018-09-05 2019-01-15 深圳市三宝创新智能有限公司 Somatosensory game method, apparatus, device, and storage medium for robot projection
CN112089596A (zh) * 2020-05-22 2020-12-18 未来穿戴技术有限公司 Friend-adding method for a neck massager, neck massager, and readable storage medium
CN112333511A (zh) * 2020-09-27 2021-02-05 深圳Tcl新技术有限公司 Control method, apparatus and device for a smart television, and computer-readable storage medium
CN113058261B (zh) * 2021-04-22 2024-04-19 杭州当贝网络科技有限公司 Somatosensory motion recognition method and system based on real-world and game scenes
CN113515194A (zh) * 2021-06-24 2021-10-19 北京七展国际数字科技有限公司 Display content control apparatus and method
CN116983609A (zh) * 2022-04-25 2023-11-03 漳州松霖智能家居有限公司 User posture detection method and smart mat system

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103020648A (zh) * 2013-01-09 2013-04-03 北京东方艾迪普科技发展有限公司 Action type recognition method, program broadcast method, and apparatus
CN103729096A (zh) * 2013-12-25 2014-04-16 京东方科技集团股份有限公司 Interactive recognition system and display apparatus
CN104503589A (zh) * 2015-01-05 2015-04-08 京东方科技集团股份有限公司 Somatosensory recognition system and recognition method thereof
CN106980377A (zh) * 2017-03-29 2017-07-25 京东方科技集团股份有限公司 Interactive system for three-dimensional space and operation method thereof
CN107688791A (zh) * 2017-09-04 2018-02-13 北京京东金融科技控股有限公司 Display content control method and apparatus, system, storage medium, and electronic device


Cited By (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110543345A (zh) * 2019-08-26 2019-12-06 Oppo广东移动通信有限公司 Wallpaper generation method and apparatus, storage medium, and electronic device
CN112869676A (zh) * 2021-01-11 2021-06-01 佛山市顺德区美的洗涤电器制造有限公司 Control method, control apparatus and display apparatus for a dishwasher, and dishwasher
CN113058259A (zh) * 2021-04-22 2021-07-02 杭州当贝网络科技有限公司 Somatosensory motion recognition method, system and storage medium based on game content
CN113058259B (zh) * 2021-04-22 2024-04-19 杭州当贝网络科技有限公司 Somatosensory motion recognition method, system and storage medium based on game content

Also Published As

Publication number Publication date
CN107688791A (zh) 2018-02-13

Similar Documents

Publication Publication Date Title
WO2019041982A1 (zh) Display content control method and apparatus, system, storage medium, and electronic device
JP6713034B2 (ja) Voice interactive feedback method, system, and computer program for a smart television
CN106897688B (zh) Interactive projection apparatus, method for controlling interactive projection, and readable storage medium
US11625841B2 (en) Localization and tracking method and platform, head-mounted display system, and computer-readable storage medium
CN110691196A (zh) Sound source localization method for an audio device, and audio device
JP2022031339A (ja) Display method and device
CN110297550B (zh) Annotation display method and apparatus, screen-casting device, terminal, and storage medium
US20130346858A1 (en) Remote Control of Audio Application and Associated Sub-Windows
KR20130042010A (ko) 제스처 인식을 위한 환경-의존 동적 범위 컨트롤
CN112822529B (zh) Electronic device and control method thereof
US11288871B2 (en) Web-based remote assistance system with context and content-aware 3D hand gesture visualization
WO2023279914A1 (zh) Control editing method and apparatus, device, readable storage medium, and product
US10261600B2 (en) Remotely operating target device
CN110618780A (zh) Interaction apparatus and interaction method for interacting with multiple signal sources
CN109992111B (zh) Augmented reality extension method and electronic device
US20170277614A1 (en) Intelligent test robot system
CN111645521A (zh) Control method and apparatus for a smart rearview mirror, electronic device, and storage medium
WO2022088974A1 (zh) Remote control method, electronic device, and system
WO2024066754A1 (zh) Interaction control method and apparatus, and electronic device
CN114365504A (zh) Electronic device and control method therefor
JP2015102742A (ja) Image processing apparatus and image processing method
CN110908509B (zh) Collaboration method and apparatus for multiple augmented reality devices, electronic device, and storage medium
US10545716B2 (en) Information processing device, information processing method, and program
US20180160133A1 (en) Realtime recording of gestures and/or voice to modify animations
WO2019104533A1 (zh) Video playback method and apparatus

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 18851974

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 18851974

Country of ref document: EP

Kind code of ref document: A1